I get the following error message when I try to execute the Feedforward Learner (e.g. the examples from the server):
ERROR DL4J Feedforward Learner (Classification) 0:14:10 Execute failed: no jniopenblas in java.library.path
ERROR DL4J Feedforward Learner (Classification) 0:14:10 Execute failed: Could not initialize class org.nd4j.linalg.factory.Nd4j
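The `no jniopenblas in java.library.path` message means the JVM could not locate ND4J's native OpenBLAS wrapper in any of the directories on `java.library.path`. As a small diagnostic sketch (the class name is just an example), you can print the entries of that path to spot stray directories, e.g. ones added by Anaconda:

```java
import java.io.File;

// Prints every directory the JVM searches for native libraries such as
// jniopenblas.dll; a conflicting entry (e.g. from Anaconda) would show up here.
public class LibraryPathCheck {

    // Splits a path list into its individual directory entries,
    // using the platform-specific separator (';' on Windows, ':' elsewhere).
    static String[] splitPath(String path) {
        return path.split(File.pathSeparator);
    }

    public static void main(String[] args) {
        String path = System.getProperty("java.library.path", "");
        for (String dir : splitPath(path)) {
            System.out.println(dir);
        }
    }
}
```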
KNIME Version: v3.4.1.v201709070950
DL4J Version: 3.4.1.v201709070951
OS: Windows 7 64bit
Thanks for reporting the problem. Did you let KNIME install the DL4J nodes automatically when you first opened a DL4J workflow and they were not yet present? Also, could you attach the KNIME log file here? That would be very helpful.
I just let KNIME install the DL4J nodes, but that didn't help.
Attached the log file.
From the log file everything looks fine. Could you try reinstalling DL4J manually from the update site? You may have to uninstall it first under Help > Installation Details: select KNIME Deeplearning4J Integration (64bit only) and hit Uninstall.
I tried reinstalling DL4J and also KNIME. Nothing worked.
It looks like several users of the Deeplearning4j platform have had the same issue.
It was also mentioned that the issue might be connected with an Anaconda installation (I have version 2 installed on my PC).
Thank you for your research. We had this problem before in an older KNIME version, but it should have been fixed. The problem was related to conflicting DLL files (BLAS and MKL) on the system path, because Anaconda adds some of these to the PATH variable. There are several things you could try to resolve the problem:
1. The easiest: try adding the line -Djava.library.path="" to your knime.ini (JVM options belong below the -vmargs line)
2. Remove Anaconda from your PATH variable (all entries pointing to folders containing DLL files; removing the one containing something that looks like "mkl.dll" will probably be enough) and restart your PC. Note: this may break something in Anaconda (but you can always add it back to the PATH)
3. If possible: uninstall your current Anaconda and install the newest version
4. In the links you provided, someone mentioned that a space in your username may lead to strange problems. If possible, try running KNIME under a username without spaces
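For option 1, a sketch of the relevant part of a knime.ini is shown below. The -Xmx value is only an example from a typical installation; the important detail is that -D options must appear after the -vmargs line, or the launcher ignores them:

```
-vmargs
-Xmx2048m
-Djava.library.path=""
```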
Option 1 didn't help; I will try the other options shortly.
I have another question regarding the available activation functions. The site https://deeplearning4j.org/neuralnet-configuration.html lists the "maxout" function, but I can't find it in the DL4J KNIME integration. Is there a rough timeframe for when it will be available?
Same question as above for batch normalization (weight initializer) :)
We have both of these on our list; however, I can't promise any date, unfortunately.
After removing all the Anaconda PATH environment variables, this was resolved.
However, I then updated conda to 4.4, and now nothing works again:
ERROR DL4J Feedforward Learner (Classification) 0:28:22 Execute failed: ("ExceptionInInitializerError"): null
ERROR DL4J Feedforward Learner (Classification) 0:28:22 Execute failed: Could not initialize class org.nd4j.linalg.factory.Nd4j
Does anyone know what the Python preferences should now point to? Typing just Python3 doesn't work, and pointing to the knime_keras env doesn't work either. Once again, replace the .xlsx extension with .log: Deeplearninglog.xlsx (3.0 MB)
my log file is enclosed.
The cause of your problem is most likely the following line from the .log file:
java.lang.OutOfMemoryError: Physical memory usage is too high: physicalBytes = 3G > maxPhysicalBytes = 1G
On the DL4J preference page there is an option to adjust the maximum off-heap memory DL4J is allowed to use. If you increase this value, it should work again. You can read more about this under Off Heap Memory Limit.
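For context: DL4J stores its arrays off the Java heap, so this limit is separate from the -Xmx heap setting, which is why increasing -Xmx alone does not help here. Under the hood the limit corresponds to JavaCPP's maxPhysicalBytes system property; as a sketch (the values are only examples), setting it directly in knime.ini could look like:

```
-vmargs
-Xmx2048m
-Dorg.bytedeco.javacpp.maxphysicalbytes=10G
```

Adjusting it via the DL4J preference page is the supported route; editing knime.ini by hand is only an alternative if the preference page misbehaves.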
Thanks - I have now adjusted the off-heap limit to 10G in the preferences panel and 32G in the knime.ini file.
It took 3 attempts for KNIME’s preferences to set this, which is weird.
But it works now. Yippee!
I’m glad that resolved the issue.
Thanks for reporting the problem with the preference page. There seems to be some bug in there.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.