Execute failed: Java heap space

Hi

I have a problem running the MOE Energy Minimizer node on a table of 23,000 molecules.

ERROR     Energy Minimizer     Execute failed: Java heap space

I have already tried changing -XX:MaxPermSize=512m to 1024m, but I still get the same error.

Best regards

Nathan

Hi Nathan,

You'll need an updated binary file. Please contact CCG Support at support@chemcomp.com.

Best regards

Guido Kirsten

CCG Support

Hi Nathan,

I'm sure Guido will give you the proper fix, but in case it helps in the short term: I have found that explicitly setting the MOE nodes to write their tables to disk (under Memory Policy) gets around these sorts of issues for me.

Kind regards

James

Hi James,

Thanks for your reply; I have managed to run the MOE node.

Once again, thanks for your prompt reply.

Nathan

Hi James,

I ran into the same problem again. After calculating descriptors in MOE, I could not save the workflow; the message was "Java heap space".

I then started to develop some classification models, and they show the following errors:

ERROR     Weka Predictor     Unable to clone input data at port 0: Java heap space
ERROR     Weka Predictor     Execute failed: Java heap space
ERROR     Weka Predictor     Execute failed: GC overhead limit exceeded
ERROR     Weka Predictor     Unable to clone input data at port 0: Java heap space
ERROR     Weka Predictor     Unable to clone input data at port 0: Java heap space
ERROR     Weka Predictor     Unable to clone input data at port 0: Java heap space
ERROR     Weka Predictor     Execute failed: GC overhead limit exceeded

I am running the latest version of KNIME on Ubuntu. I look forward to your reply.

Best regards

Nathan

Just to be sure: did you increase -XX:MaxPermSize or -Xmx? Only the latter actually gives you more memory; the former just adjusts the relative sizes of Java's internal memory areas.
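For reference, a minimal sketch of the relevant part of knime.ini (4096m is just an example value; use whatever your machine can spare). The key point is that JVM options such as -Xmx must come after the -vmargs line, otherwise the launcher ignores them:

-vmargs
-Xmx4096m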

Hi,

My knime.ini file contains the following lines:

-Xmx1099m
-XX:MaxPermSize=512m

Best regards

Nathan

Hi,

I still have a memory problem. I have 26,000 rows and 500 columns, and I have already changed my knime.ini file as follows:

-Xmx1099m
-XX:MaxPermSize=512m

I get the following error message when I save or run the workflow:

WARN      AttributeSelectedClassifier 0:2:150:119:91     failed to apply settings: java.lang.OutOfMemoryError: Java heap space

Best regards

Nathan

The problem with the Weka integration is that it reads all the data into memory before starting to process it, which is due to the underlying library. In your case, you will need at least 26,000 rows x 500 columns x #bytes-per-cell of heap. Any chance you can run the KNIME feature elimination setup before running the Weka learning method?
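As a rough back-of-the-envelope estimate, assuming 8-byte double cells and ignoring Java object overhead:

26,000 rows x 500 columns x 8 bytes ≈ 104 MB per in-memory copy

The "Unable to clone input data at port 0" errors suggest that more than one copy is held at the same time, so a -Xmx1099m heap fills up quickly.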

Hi Guys,

I seem to have a "similar" issue while parsing a 400 MB JSON file via the JSON Reader node.

I have -Xmx12288m, but the heap seems to be capped at 5462 MB.

I also tried -XX:MaxPermSize=12288m.

I have a laptop with 16 GB RAM; that should be enough, right?

Installation details: knime_3.6.2\plugins/org.knime.binary.jre.win32.x86_64_1.8.0.152-01/jre/bin\server\jvm.dll

Any ideas on how to bump the heap space?

Thanks!

Herman

Hi all, I worked around it (for JSON) in the following way. I don't understand why the JSON Reader and JSONPath nodes give memory issues like this.

Anyway, here goes:

Use a Java Snippet node and configure it as follows:

a. Select the JSON bundle(s) so javax.json is available.

b. Set the output variable(s); the type needs to be explicitly "array of JsonValue", otherwise the JSONPath step that extracts the rows fails.

c. In the snippet, add the custom imports:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonReader;
import javax.json.JsonValue;

d. In the snippet, the expression body:

JsonArray json_array = null;

// Open the file referenced by the c_Location input and parse it with the
// javax.json reader; try-with-resources closes both the stream and the
// reader even if parsing fails, so nothing leaks on the error path.
try (InputStream fis = new FileInputStream(c_Location);
     JsonReader reader = Json.createReader(fis)) {

	json_array = reader.readArray(); // readArray() already returns a JsonArray

	// Copy the parsed values into the snippet's output array.
	out_json = new JsonValue[json_array.size()];
	for (int i = 0; i < json_array.size(); i++) {
		out_json[i] = json_array.get(i);
	}

} catch (IOException e) {
	// Fail the node visibly instead of silently swallowing the error.
	throw new RuntimeException("Could not read JSON file: " + c_Location, e);
}

Conclusion:

- Use the List Files node and fetch the Location as input for the Java Snippet (create chunked loops if needed).
- The input stream and JSON reader can easily absorb the JSON within the heap-space limits.
- The for loop creates an array of JsonValues, which can then be ungrouped and further processed (with JSONPath per row).
- The for loop takes a while, however, as it has to copy the data from the parsed JsonArray (a List) into the native array of JsonValues.

So your flow will sit at 99% for a while, but it will make it through.
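If even readArray() does not fit in the heap for very large files, a streaming variant is worth trying. The sketch below is an untested alternative, not part of the workaround above; it assumes the file holds a top-level JSON array of objects, reuses the c_Location input from the snippet above, and needs the JSON-P 1.1+ API for JsonParser.getObject():

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.stream.JsonParser;

try (InputStream in = new FileInputStream(c_Location);
     JsonParser parser = Json.createParser(in)) {

	while (parser.hasNext()) {
		// Only top-level objects trigger this branch: getObject() consumes
		// each object (including anything nested in it) in one call, so the
		// whole array is never materialized at once.
		if (parser.next() == JsonParser.Event.START_OBJECT) {
			JsonObject obj = parser.getObject();
			// process obj here, e.g. append it to the output
		}
	}
} catch (IOException e) {
	throw new RuntimeException("Could not read JSON file: " + c_Location, e);
}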


The same is happening to me with the JSON Reader. Windows 10 on an SSD, 8 GB RAM and an i5, latest version as of 19/12/2019; I already tried writing tables to disk and -Xmx7048m, reading a JSON file of about 1 GB.

I managed to use PowerQuery and export to XLS; it's very slow to load that file, but KNIME seems to handle it. This is suboptimal, though, and the whole point of using KNIME is precisely to not use PowerQuery…
