KNIME has been holding memory for a long time while not running any job.

Please look at the screenshot and suggest a way out. The executor is not releasing memory after a job runs.
I set knime.server.executor.max_lifetime=2m

Is there any other option…

I have the server on an AWS EC2 Ubuntu instance:
32 GB memory
8 cores

By default the JVM won't release already allocated memory. Does this lead to problems in your setup?
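To see the distinction in practice, here is a minimal, self-contained sketch (not KNIME code; the class name is made up) that separates the memory the JVM has reserved from the operating system (committed heap) from the memory actually occupied by live objects:

```java
// Sketch: committed heap vs. actually used heap in the running JVM.
// Committed memory stays with the JVM even when "used" drops after a GC.
public class HeapStats {

    // Memory the JVM has currently reserved from the OS for the heap.
    public static long committedBytes() {
        return Runtime.getRuntime().totalMemory();
    }

    // Memory actually occupied by objects right now.
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        System.out.printf("committed: %d MB, used: %d MB%n",
                committedBytes() / mb, usedBytes() / mb);
    }
}
```

External monitoring (such as a CloudWatch memory graph) typically reports something close to the committed number, which is why it can stay high even when almost nothing is in use.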

Setting knime.server.executor.max_lifetime=2m won't really help here. This setting shuts down your executor every two minutes and starts a new one. If you have some long-running workflows, you might even end up with several executors up and running, which leads to even higher overall memory consumption.

com.knime.server.job.default_swap_timeout= might be more suitable here, as it specifies after how much time a job will be swapped to disk. However, this is set to 1 minute by default, and as I said earlier, even though the job gets swapped to disk, the JVM won't free already allocated memory.

If releasing unused memory is important for your setup, you might want to consider switching to a shrinking garbage-collection strategy. This is a good read explaining the different options available:
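As a sketch only (these are standard HotSpot JVM flags, not KNIME-specific settings; -XX:G1PeriodicGCInterval requires Java 12 or newer, and the comments below are for illustration, since knime.ini itself takes one flag per line), a shrinking G1 setup in the executor's knime.ini could look like:

```
-XX:+UseG1GC
# allow G1 to hand free heap back to the OS more eagerly
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=10
# trigger a periodic GC (and possible heap uncommit) after 60 s of idleness (Java 12+)
-XX:G1PeriodicGCInterval=60000
```

Verify the exact flags against your executor's Java version before applying them.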

I assumed that setting knime.server.executor.max_lifetime=2m would release memory after job execution.
So what should knime.server.executor.max_lifetime= be set to?
And what should com.knime.server.job.default_swap_timeout= be?
My problem is that once a job has executed successfully, its memory should be released
for the next job…
I have a job scheduled every 5 minutes.

If the memory is not freed by the currently executed job, the next job gets stuck and everything freezes.
As a workaround I have set up an auto-reboot via CloudWatch: if memory utilization hits 100%, the instance reboots. Anyway, I don't have any other solution.

knime.server.executor.max_lifetime= defines after how much time an executor will be shut down and a new one started. It defaults to 1 day, but with newer releases we turned off so-called executor rotation completely.

If you leave com.knime.server.job.default_swap_timeout= empty, it uses the default of 1m which should be sufficient here.
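Putting both settings together, the relevant lines in the server configuration might read as follows (a sketch; property names as used in this thread, and whether a negative max_lifetime disables rotation should be verified against your server version's admin guide):

```
# rotate the executor at most once per day (the old default);
# a negative value is commonly used to disable rotation entirely
knime.server.executor.max_lifetime=1d

# swap idle jobs to disk after one minute (the default)
com.knime.server.job.default_swap_timeout=1m
```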

I think what you are seeing in the monitoring is the reserved memory of the JVM. The only way to release reserved memory is a change in garbage-collection strategy.

I have never defined any garbage-collection strategy.

Can you please guide or help me with that?

Due to high memory utilization, the EC2 instance gets stuck (stops working, freezes), hence I set up auto-reboot for such situations.
This is happening frequently these days, with no heavy jobs.

This is likely due to the setting knime.server.executor.max_lifetime=2m. I'd recommend disabling executor rotation completely or setting it to 1d. Restarting your executor every 2 minutes is one possible explanation for why your instance is so busy.

Regarding changes to the GC strategy, I can't tell which would be best in your situation. If you want the JVM to release memory (why again would this be needed in this case?), you would need to change to a shrinking GC strategy. The link I've posted above gives some good orientation on that topic.

Have you tried a garbage collection node?

If no workflow (job) of mine is running, then…
nothing is running, still…

As Marten pointed out, Java will usually not release memory once it has claimed it from the operating system. However, this does not imply that the memory is actually being used. Java will re-use claimed memory for new objects. Therefore, unless you are seeing OutOfMemoryErrors in the executor, everything is fine.

If your system runs out of memory, try decreasing the heap space available to the executor in the knime.ini. Java usually uses ~25% more operating-system memory than the specified heap size. Therefore, for a system with 32GB memory I suggest using ~20GB heap size for the executor.

And turn off executor rotation as Marten suggested. It's only useful in very rare cases and causes other problems (such as increased memory consumption).
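For example (a sketch, assuming a standard knime.ini where JVM options follow the -vmargs marker), capping the executor heap at 20 GB on the 32 GB instance would be:

```
-vmargs
-Xmx20g
```

The 20 GB figure follows from the rule of thumb above: with ~25% JVM overhead on top of the heap, a 20 GB heap consumes roughly 25 GB of operating-system memory, leaving headroom for the OS and other processes.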
