Memory issues when running from command line

I created a workflow that runs perfectly fine on my desktop (Mac) with the memory limit set to 2048 MB. Running the same workflow on a Linux box (64-bit Ubuntu), also with the memory limit set to 2048 MB, keeps crashing with this error message:

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007328e7000, 261550080, 0) failed; error='Cannot allocate memory' (errno=12)
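
For reference, the workflow is started on the Linux box roughly like this (the workflow path is a placeholder; the -vmargs part is just how the 2048 MB limit gets passed to the JVM):

./knime -nosplash -application org.knime.product.KNIME_BATCH_APPLICATION -workflowDir="/path/to/workflow" -vmargs -Xmx2048m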

I already tried adding this to knime.ini:

-Dorg.knime.container.cellsinmemory=0
-Dknime.database.fetchsize=100

That didn't help. The database is only used to write data to.
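
For completeness, the relevant part of my knime.ini looks roughly like this; the -Xmx line is what the 2048 MB limit refers to:

-vmargs
-Xmx2048m
-Dorg.knime.container.cellsinmemory=0
-Dknime.database.fetchsize=100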

Any advice or help is appreciated.

Thanks,
Peter


The error comes directly from the operating system. It cannot provide more memory to the KNIME process. Do you have enough free memory in the Linux system?
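
You can check that on the Linux box right before starting KNIME, for example with:

free -m

This prints memory usage in megabytes; the free/available numbers show how much memory the JVM can still claim.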

That's probably causing the issue. Is there a way to reduce the amount of memory needed, or at least find out which nodes consume the most memory?

Did you try setting the memory for KNIME to a smaller value (via knime.ini)? Unless you are building large predictive models or have a very large number of columns, you usually don't need much memory.
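
For example, changing the heap line in knime.ini to something smaller, such as:

-Xmx1024m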

There is no way to find out how much memory each node needs. However, if KNIME runs out of memory, you can find out from the log file which node was executing at that time.
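
The log is usually located in the workspace (the path below assumes the default workspace location; yours may differ). Looking at its last lines after a crash shows which node was executing:

tail -n 200 ~/knime-workspace/.metadata/knime/knime.log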

I tried reducing the memory amount in the ini, which did not help at all.

I think I got it fixed now; I made sure the server has enough free memory so that Java can claim it.

Now I don't get any error messages anymore, but the workflow still fails with this:

INFO 	 main BatchExecutor	 Workflow execution done Finished in 2 mins, 24 secs (144497ms)
INFO 	 main BatchExecutor	 ========= Workflow did not execute sucessfully ============
Knime:
JVM terminated. Exit code=4

Along the way I only get warnings from nodes, the same ones I also get in the desktop environment.

Any idea what is causing this?