Hi
I have a workflow (WF) that runs 100 iterations, where the same process is executed at each iteration. At the end, the results are collected at the Loop End node. I have chosen to save the data to disk.
I don't know why, but as the iterations increase, the workflow's memory usage keeps growing, and after a while it freezes.
A few questions. First, does the workflow successfully complete for a small number of iterations, or does it fail no matter how many there are? Second, are you able to isolate the freeze to a particular node - that is, does it happen at the same place every time you execute? Lastly, have you tried increasing the memory available to KNIME in knime.ini?
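For reference (in case it helps others reading this thread), the maximum heap size is controlled by the -Xmx line near the end of knime.ini; the value below is just an example and should be adjusted to your machine's RAM:

```
-Xmx10g
```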
The workflow successfully completes for a small number of iterations. However, when I set the Parallel Chunk Start node to 1, it successfully completes all 100 iterations.
I can now say that the Parallel Chunk node causes the freezing; if I just reduce it to 1, it works fine.
Yes, I have set it to 10 GB.
My question was: why does it increase the memory usage?
Also, I would like to know how to reset the workflow's memory once it has finished.
That’s good to know - I didn’t realize that you were using a parallel chunk loop node. That’s from KNIME Labs, so it’s possible you’ve discovered some unexpected behavior from the node. I’ll check with some of our developers and see what I can find out.
Are you able to share the portion of the workflow that contains the parallel chunk node? I don’t see it in the screenshot above, and a workflow would help troubleshoot.
It’s no surprise that you need more memory if you are processing multiple chunks in parallel. All nodes are essentially duplicated as many times as there are parallel executions.
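As a rough back-of-the-envelope illustration (the figures here are made up, not measured from your workflow): if one iteration's branch holds on the order of 500 MB of intermediate tables in memory, then running, say, 8 chunks in parallel could keep roughly 8 × 500 MB ≈ 4 GB in memory at once, on top of KNIME's own overhead. Reducing the chunk count to 1 brings you back to the memory footprint of a single iteration, which matches the behavior you observed.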