Reset/Free WF memory

How can one reset the workflow memory after execution? If I reset the nodes, I still see in the Task Manager that KNIME's memory stays at the peak size it reached by the end of the workflow.
If I re-run the workflow, I would expect the workflow's memory to be reset and released.
To free the workflow's memory I am forced to exit KNIME and restart it.


Assuming none of the nodes in the workflow have memory leaks, then the memory will only be released when the java virtual machine garbage collector gets round to it. There is some more detail here about when that might be, but the short version is that it won’t simply coincide with resetting a workflow.
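To illustrate the point above, here is a minimal, KNIME-independent sketch using the standard `java.lang.Runtime` API. It shows that heap already claimed from the OS (`totalMemory()`) does not shrink back just because objects become unreachable, and that `System.gc()` is only a request, not a command. The class name and the 50 MB figure are illustrative, not from the original post.

```java
// Sketch: observing JVM heap behaviour with the standard Runtime API.
// Heap claimed from the OS typically stays claimed even after objects
// become unreachable; this is what Task Manager reports.
public class HeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();

        byte[] big = new byte[50 * 1024 * 1024]; // allocate ~50 MB
        long usedDuring = rt.totalMemory() - rt.freeMemory();

        big = null;   // make the array unreachable (like resetting a node)
        System.gc();  // only a *hint* to the garbage collector

        // totalMemory() -- the heap claimed from the OS -- usually does
        // not drop back immediately, even though the objects are gone.
        System.out.println("used before alloc: " + usedBefore);
        System.out.println("used after alloc:  " + usedDuring);
        System.out.println("heap claimed (OS): " + rt.totalMemory());
    }
}
```

Running this repeatedly shows the claimed heap staying high between "runs", which mirrors what the Task Manager shows for KNIME between workflow executions.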

There is an unreleased Vernalis node which can force some garbage collection to happen. If you would like to give it a try, then I will see if we can release it.


Dear @s.roughley
Unfortunately, none of this has answered my questions. I think there is a bug in how KNIME handles memory.
To clarify my question: assume I have a workflow that has now finished executing. Now I reset the nodes of the workflow one by one. If you look at the workflow's memory in the Task Manager, you will see that the memory is not freed.
Also, with a second run the workflow's memory keeps increasing. One would expect the memory to be released when you reset the nodes.
I have also tested this with a simple workflow that reads a table and performs a loop. Please see my previous post:
WF memory


So in a simple case, that's what I would expect. The garbage collector will deal with things as and when it needs to, and that almost certainly doesn't coincide with resetting nodes. Also, most of the data tables will be written to disk rather than stored in memory.

In the other post you refer to, that's a very complicated workflow, and there is also the possibility that one of those nodes is leaking memory somewhere. Without knowing what is buried in the metanodes, it's very difficult to say what or where that might be. You could try the Benchmark loop start / end nodes from the Vernalis plugins in place of the counting loop (use the 'memory monitoring' version), and have a look at what happens to both the used memory and the memory the system thinks you are using.


No, this is simply how memory management works in Java, and in general in other managed runtimes such as .NET.

Once you start the JVM (Java Virtual Machine), e.g. KNIME, you can tell it how much RAM it may use (minimum and maximum values). This is done in the knime.ini (more details). If you tell the JVM it can use, say, 4 GB of RAM, it will tend to fill up to that amount and only start cleaning up once it approaches the limit, and then only as much as really needed (in reality it is much more complex). This also means you should ensure you actually have enough free physical memory for whatever you set -Xmx to.
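For illustration, the relevant heap-size lines in knime.ini might look like the following. The values are examples only, not a recommendation; pick an -Xmx your machine can actually spare, and note that -Xms (the initial heap size) is optional:

```
-Xms512m
-Xmx4096m
```

With these settings the JVM may grow its heap up to 4 GB and will generally hold on to that memory for the lifetime of the process, regardless of workflow resets.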

But again, this is managed purely by the JVM; by default the user has no control over it, and there is really no need to control it. If you think KNIME uses too much memory, lower -Xmx, but this can impact performance or cause workflows with lots of data to fail due to lack of memory.


Hi @s.roughley,

I’d be very interested in the “unreleased Vernalis node” you talked about. I am repeatedly running a complex workflow through the Call Local Workflow extension in conjunction with the Wait-for-time node. However, memory isn’t released even when combining it with a Cache node, closing the workflow, or manually triggering garbage collection.

Only upon closing KNIME is the memory freed again. I am still trying to collect statistics about node execution and benchmarking metrics, but the workflow runs several times without any impairment. Kind of odd.



So, I just checked back, and we actually have release permission for this node. I’m aiming to release it, along with the benchmarking loop upgrades, at the end of this week.



The nodes have now been released on the nightly build, and will shortly be released on the 3.7 stable branch. See

for further details and announcements.

