Memory upgrade for KNIME

Hi everyone,

Thanks in advance for taking a look at this post. I seem to be hitting memory issues in every type of workflow I do: clustering large numbers of molecules, getting JSON from APIs, running recursive loops on large datasets. I've hit the ceiling, and the most memory I currently have is 30GB. The machine can be upgraded to 256GB. Will this help, and can the client use all of it? Anyone else running KNIME (Client) with larger memory allocations?


Have you increased KNIME's memory allocation by modifying your knime.ini file? By default, I think KNIME only uses half of your available memory. I have 32GB and have let KNIME use up to 26GB by modifying the ini file.
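For reference, the setting in question is the `-Xmx` line in knime.ini, which caps the Java heap size; the value below is just an example and should be adjusted to your machine:

```ini
; knime.ini (in the KNIME installation folder)
; -Xmx sets the maximum Java heap size available to KNIME
-Xmx26g
```

Leave a few GB free for the operating system and other processes, or the whole machine can become unstable.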

And generally, yes, upgrading the amount of memory will help KNIME workflows run quicker, though some of it depends on the workflow and the nodes used.

One way you could test relatively easily is to set up a virtual machine with upgraded specs and see whether your KNIME workflow runs better with more memory/cores. AWS and Azure often provide free credits, so you may be able to check for free.


Hi @Snowy Thanks for the quick reply. Yes, my ini file is maxed out, enough so that I can actually crash the computer. Re AWS or Azure, you are referring to the server, yes? I'd eventually like to get the server set up, but right now I'm just trying to get my own work done :slight_smile:


You can get a virtual machine, install KNIME desktop on it, and then run your workflow from there. KNIME Server also has an hourly rate, which could make it fairly cost-effective.


@Snowy Great to hear. I’ll take a look into it.


You might want to have a look at a collection of discussions about KNIME performance and how to tweak it. More memory will definitely help, but there seems to be a threshold relative to the overall memory of the system (KNIME might not start, or might freeze, if there is not enough memory left for other processes and the system itself).

KNIME performance

Process 900+ CSV files


@mlauber71 Thanks for the comprehensive view of the different topics. Other than the R loader, I had come across them before, but it's always good to refresh. I have no issues loading the data; it's the later processing, and I am especially having issues with a recursive loop. That makes sense, as everything is held in memory and iterated on perhaps up to 50-100K times. I've chunked where I could, but I recently had an idea: instead of using a recursive loop, I could try a regular loop (or perhaps a series of loops) and write the file after each iteration. If the file reader lists the same file 100K times, then I'm trading memory for I/O, and if the workflow dies I have a partially processed file I can pick up from.
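Outside of KNIME, the checkpointing idea above can be sketched in plain Python; the file names and the per-row `process_row` function here are hypothetical placeholders:

```python
import csv
import os

IN_PATH = "input.csv"        # hypothetical input file
OUT_PATH = "processed.csv"   # results are appended here after every row

def process_row(row):
    # Placeholder for the real per-row work (e.g. one recursive-loop iteration)
    return {"id": row["id"], "value": int(row["value"]) * 2}

def resume_point(path):
    """Count rows already written, so a crashed run can pick up where it left off."""
    if not os.path.exists(path):
        return 0
    with open(path, newline="") as f:
        return sum(1 for _ in f) - 1  # subtract the header row

def run():
    done = resume_point(OUT_PATH)
    with open(IN_PATH, newline="") as src:
        rows = list(csv.DictReader(src))
    mode = "a" if done else "w"
    with open(OUT_PATH, mode, newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["id", "value"])
        if not done:
            writer.writeheader()
        for row in rows[done:]:          # skip rows already processed
            writer.writerow(process_row(row))
            out.flush()                   # persist each result immediately
```

The `flush()` after every row is the memory-for-I/O trade described above: each iteration's result hits disk right away, so a crash leaves a partially processed file instead of losing everything.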

I’m going to see if I can’t implement that.
Thanks again!

Hi there @j_ochoada,

You should probably try a different logic if possible (without the recursive loop) and additionally optimize the workflow execution. If you share your workflow design and logic (more details and possibly an example), maybe the community will have some ideas to help you speed it up and avoid memory problems :wink:



This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.