I'm having an issue with KNIME these past weeks. Everything was working fine, but suddenly the KNIME platform started crashing without any warning. I re-downloaded KNIME and even formatted my PC to rule out software issues, but without success. I'm using 20GB in the config settings.
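For readers following along: the memory allocation mentioned here is normally the `-Xmx` line in the knime.ini file (which, assuming a default install, sits next to the KNIME executable). A 20GB allocation would look something like:

```ini
; knime.ini (fragment) — only the heap line is shown;
; the file also contains launcher options that should be left as-is
-Xmx20g
```

This value caps the Java heap available to KNIME, not the total RAM the machine has.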
@bawh03 welcome to the KNIME forum. Could you give us more details about what you are doing and when this problem occurs? If possible, you might create a log file in DEBUG mode to give us more insight.
What kind of data operations are you doing? How large is the data compared to the 20GB you allocated? How much RAM do you have overall, and are there other processes running?
Also, concerning performance, you might want to read this article:
I have been having a similar issue about three times per day on my main desktop PC for the last three weeks. I have an AMD Ryzen 9 5950X 16-core processor at 3.40 GHz with 128GB of RAM (and 60GB available to KNIME in the knime.ini file), so it is definitely not a resource shortage. Everything just disappears mid-execution. No error messages on crash or when I restart. The workflows seem to open back up fine as well, but any open components or metanodes need to be closed and re-opened.
I wasn't able to find anything in the log files to point me to the problem, but it definitely started after a Windows update. It typically only costs me a few minutes per day since I am so diligent about saving, so I haven't dug into it further yet…
Hi @iCFO, hopefully it will stay stable for you. I know you said before that you don't think yours is a resource issue, but I know from my own recent experience that KNIME can vanish mid-execution if it runs out of Java heap.
I had a couple of occasions about two months ago when I returned to my PC to find KNIME was no longer running, and I theorised that this could be the cause. Then, by chance, a few days later I was sat watching my workflow run, and the Java heap display on the status bar was getting very close to the max I'd allocated. As it climbed a little further, to practically the limit, KNIME performed an instantaneous vanishing trick that would have made David Copperfield proud!
I adjusted my workflow so that it didn't retain data on loop iterations where it wasn't required, and I haven't had the problem since.
It would be nice to get an error message in this situation rather than an application termination, but I guess it's the underlying Java VM crashing, and the rug just gets pulled out from underneath KNIME. It may not be the problem you were having, but I thought I'd mention that this can definitely be a cause.
I have noticed something odd that I have read about on the forum before: the visible “total” of my heap space never actually shows the max that is assigned in my ini file (-Xmx90000m).
This was the solution on the other thread, but no clue where he made this adjustment…
I found the cause: there's an entry in the Windows system/user environment variables that I did not add: _JAVA_OPTIONS set to -Xms512M.
Removing it cleared the way. Thanks for everyone's help. I'm not sure which program added that variable, or when, in the first place.
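For anyone hitting the same thing: every JVM reads the `_JAVA_OPTIONS` environment variable at startup (and prints a "Picked up _JAVA_OPTIONS: …" notice to stderr), so a stray entry there silently affects KNIME alongside whatever is in knime.ini. A minimal sketch of checking whether it is set, in Java (the class name here is just for illustration):

```java
public class CheckJavaOptions {
    public static void main(String[] args) {
        // _JAVA_OPTIONS is read by every JVM at launch and is applied in
        // addition to command-line flags such as the -Xmx set in knime.ini.
        String opts = System.getenv("_JAVA_OPTIONS");
        if (opts == null) {
            System.out.println("_JAVA_OPTIONS is not set");
        } else {
            System.out.println("_JAVA_OPTIONS = " + opts);
        }
    }
}
```

On Windows, a leftover entry like this can be removed under System Properties → Environment Variables, as described above.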
EDIT: I put a bit more stress on the system and the total rose to 13GB, so I think I just misunderstood what that “total” number represents. It does not appear to be the static Java heap max.
Yes, I've taken to putting garbage collection into some of my memory-intensive loops now. In theory Java should just handle it, but it doesn't hurt to give it a “hint” that it might be a good time to take out the trash occasionally…
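In plain Java terms, that "hint" is literally `System.gc()` — the JVM is free to honour it or ignore it. A self-contained sketch of the idea (the allocation sizes are arbitrary, just to create collectible garbage):

```java
import java.util.ArrayList;
import java.util.List;

public class GcHintDemo {
    // Heap currently in use, in megabytes.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Allocate some temporary data, then drop the references,
        // much like a loop iteration that no longer needs its tables.
        List<byte[]> junk = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            junk.add(new byte[1024 * 1024]); // ~1 MB each
        }
        junk = null;

        long before = usedMb();
        System.gc(); // only a hint: the JVM may collect now, later, or not at all
        long after = usedMb();
        System.out.println("used before hint: " + before
                + " MB, after: " + after + " MB");
    }
}
```

Inside a KNIME workflow the same idea is applied between loop iterations rather than in standalone code.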
Hi guys! Thanks for replying here and trying to help!
@iCFO I re-installed it and still got the same result. I agree with you that it doesn't seem to be a memory issue, since the workflow I was trying to run was a simple CSV file reader with a joiner, and the file was only 0.5GB.
The weird thing is that KNIME was working fine with other more complex workflows before.
I really think it was something related to a Windows update. What seemed to help was running a memory health check on my PC, but it's still acting up.
@mlauber71 I tried to create a log file in DEBUG mode, but nothing shows up.
@bawh03 is the log file empty? It still might be useful to check it.
Then I re-read your initial post. You might check whether you have workflow auto-save enabled, and maybe disable it.
Then you might indeed put Cache nodes and garbage collection in front of operations in your workflow that export data, or in front of loops.
The next step could be to force nodes to write to disk.
If the problem occurs with relatively small workflows, you could try decreasing the allocated memory. If that in turn leads to problems, the next step could be to go as high as 24 or 26GB (though maybe avoid running other large applications then).
Check your workflow for large intermediate joins or complex loops, and whether they are really necessary. If possible, you could share your workflow, or at least screenshots, to give us new ideas.