BUG? Working with Java Snippet

Hello KNIME Community,

I might have found a memory leak while working with the Java Snippet node. During my workflow I have to extract thousands of files in which the information is stored in binary form. That's why I open them with a Java Snippet; sadly, after some time the system runs out of memory.

“memory is low. i have no chance to free Memory. this may cause an endless loop”

Attached is an example workflow that does nothing useful, but it shows the bug and the general topology of my real workflow. After running for some time the heap usage will increase; it may need more rows for this to become visible. Once the memory has been allocated, probably by the snippet, I have no way of freeing it again (running the garbage collector or even closing the workflow doesn't help), short of restarting KNIME.



Hi Dirk,

I've experienced very similar issues with very long-running workflows containing loops. The heap grows (albeit slowly) over days and eventually KNIME runs out of memory.

In my case, I do not even collect any rows within those loops, but just execute some action and basically discard all rows/columns after each iteration. On the other hand, I have similarly structured workflows which work just fine, and I've not been able to identify the point of failure.

I haven't had much time to investigate this so far (as these issues happen on a regular basis, it's been easier for me to just kill and restart the affected workflows). A quick look with a profiler revealed that great amounts of char[]s (i.e. Strings) are piling up.

Apologies that I can't offer more concrete help, but I wanted to add myself to this thread out of interest.


Hey Philipp,

Thank you for the reply; at least now I know that other people have similar problems, and that way the topic may get more attention.

I also found that the main part of the problem is Strings blocking the heap, but I don't think it is supposed to be like that. My problem is that I want to apply a snippet to about 200k files, extracting the information and saving it in Strings. I need to do that because each file contains a double value for the timestamp followed by an integer for the value, repeated until the file ends. I didn't find another way to do this than with my own code. While I process that amount of files, the heap is already full.
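For reference, a reader for the file layout described above (an 8-byte double followed by a 4-byte int, repeated until EOF) can be sketched in plain Java. This is a minimal sketch, not the original snippet; the class and method names are made up, and `DataInputStream` assumes big-endian encoding:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BinaryRecordReader {

    // One (timestamp, value) pair as stored in the file.
    static final class Record {
        final double timestamp;
        final int value;
        Record(double timestamp, int value) {
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    // Reads (double, int) pairs until the stream ends.
    static List<Record> readRecords(DataInputStream in) throws IOException {
        List<Record> records = new ArrayList<>();
        while (true) {
            double ts;
            try {
                ts = in.readDouble();
            } catch (EOFException eof) {
                break; // clean end of file reached before a new record
            }
            int value = in.readInt(); // throws EOFException on a truncated record
            records.add(new Record(ts, value));
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        // Write two sample records to an in-memory buffer instead of a real file.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            out.writeDouble(1.5); out.writeInt(42);
            out.writeDouble(2.5); out.writeInt(43);
        }
        List<Record> records = readRecords(
                new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));
        System.out.println(records.size());       // 2
        System.out.println(records.get(0).value); // 42
    }
}
```

Inside a Java Snippet node the same loop would run per input row, with the file path taken from an input column.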



Hi Dirk,

(unfortunately) I have to come back to this thread, as the OOM issues are becoming a really annoying problem now (I don't mind restarting the workflow every day, but I would really love to avoid having to do so during my well-deserved holidays). So, I've done some more experiments today to localize the issue.

The good thing, which was a big help in my case: we have two variants of the same workflow (each processes some input in a loop, does some extraction using the HtmlParser and XPath, and at the end writes the result to a DB using the database nodes; nothing too exotic, I would say). The difference: Workflow A retrieves the input data from the web using the REST nodes, Workflow B retrieves the input data from a local database.

The memory issues only appear with Workflow A, whereas Workflow B has a constant memory consumption for hours!

So, I was able to isolate the problem to the GET Request node. The following minimal example will eventually run out of memory (note that I'm using a Variable Loop End node on purpose, so that no data table is collected at the end of the loop):

The Table Creator just inputs a constant URL which is repeatedly retrieved by the GET Request node. Et voilà, here's the memory consumption:

Inspecting the heap dump reveals that all Strings downloaded by GET Request remain in memory. Here's the reference chain (this goes to the developers; I didn't dig any further at this point):

Although you initially assumed the issue was caused by the Java Snippet node, could you double-check whether you're also using the REST nodes?

Best regards,


Hello Philipp,

first thank you for your effort.

I am sure I didn't use anything other than the Java Snippet and the loop nodes. To be completely sure I tested it again with a small workflow that builds Strings from random data. Like you, I also didn't save any data, using the Variable Loop End. And I was able to watch the heap filling up again.
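For context, a snippet that builds Strings from random data can be of roughly this shape. This is a reconstruction for illustration, not the original code, and the method name is made up:

```java
import java.util.Random;

public class RandomStringSnippet {

    // Builds one random lowercase string of the given length,
    // roughly what the test snippet produced per row.
    static String randomString(Random rnd, int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append((char) ('a' + rnd.nextInt(26)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for a reproducible run
        String s = randomString(rnd, 16);
        System.out.println(s.length()); // 16
    }
}
```

In a leak-free setup, Strings produced this way and discarded by a Variable Loop End should be collectable after each iteration.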


Snippet output:

Sadly I wasn't able to analyze the heap, because so far I haven't figured out how that works. But nevertheless the heap bar shows that even after garbage collection the space remains occupied.

Best regards,



I would like to post here and say that there is more than one node with a memory leak. I am running a loop for only a few thousand iterations and it slowly fills up the heap space despite nothing being saved to memory. My tables are small, just around 5k rows and 100 columns, and every node is set to "write tables to disk". Yet with each iteration the heap space grows, and the garbage collector doesn't free up more than a few hundred megabytes. Not to mention, it runs very slowly.





Have you tried adding this line to the knime.ini file to use the new garbage collector?
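For readers finding this thread later: the original post attached the line as a screenshot, so it isn't reproduced here. A commonly suggested flag for switching to the (then new) G1 garbage collector would be the following; this is an assumption about what was meant, not a quote from the attachment:

```
-XX:+UseG1GC
```

JVM flags like this go below the `-vmargs` line in knime.ini.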




Hello Fabien,

thank you for your input. I just tried it, but it had no influence on the leak; the heap still fills up until nothing works anymore.


Hi All,

Did you find a solution for the memory leak in the Java Snippet node?

Thanks in advance,



Sorry for being late to the party. I just stumbled across this very forum post when reading up on this recent forum post that discusses heap space exhaustion in KNIME Analytics Platform 4.0.1.

@Dirk, to reproduce the memory leak you observed, I built a workflow similar to yours. It re-runs the Java Snippet node 100,000 times:

I then forced a full sweep garbage collection, detected current heap space consumption to be at ~119 MB, ran the workflow (in KNIME Analytics Platform 4.0.1), reset the workflow, forced another full sweep garbage collection, and detected current heap space consumption to be at ~120 MB. In other words, I wasn’t able to reproduce a memory leak in the Java Snippet node.
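The measurement procedure above can be sketched in plain Java. This is a rough sketch: `Runtime.gc()` only requests a collection rather than forcing one, and the numbers approximate what the KNIME heap status bar shows:

```java
public class HeapUsage {

    // Approximate used heap after requesting a garbage collection,
    // similar to comparing the heap status bar before and after a run.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // request (not force) a garbage collection
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        byte[] big = new byte[50 * 1024 * 1024]; // allocate ~50 MB
        long during = usedHeapBytes();           // big is still referenced here
        big = null;                              // drop the only reference
        long after = usedHeapBytes();
        System.out.printf("before=%d MB, during=%d MB, after=%d MB%n",
                before >> 20, during >> 20, after >> 20);
    }
}
```

If used heap returns to roughly its pre-run level after reset and a collection, as in the ~119 MB vs. ~120 MB comparison above, no leak is apparent.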

KNIME Analytics Platform 4.0 fixed some UI-related memory leaks and also a performance issue with Java Snippet nodes in loops. Maybe the issue has been fixed for you as well?

Another reason for increased memory allocation after execution (and reset) of some nodes could be the Console View. When I ran the aforementioned workflow after setting the Console View Log Level to DEBUG or INFO in File -> Preferences -> KNIME -> KNIME GUI, about 70 MB of additional heap space was used up by info messages shown in the Console View.

If the issue persists for you in KNIME 4.0.1 and you could provide me with a minimal workflow with which I can reproduce the issue, I am happy to look into it further.