Tweaking/Improving KNIME Performance

Hello Community,

I prepared a workflow with Keras nodes. The image size is 512x384, with around 2000 pictures.
On my notebook (Intel i7, 16 GB RAM; 12 GB assigned to KNIME), the Learner node needs 13 hours for 72 epochs.
On our Mac Pro (Intel Xeon, 12 cores, 64 GB RAM; 56 GB assigned to KNIME), the Learner node needs 36 hours for 100 epochs.

Is there a way to improve the performance?
I have already checked the blog.


Hi Sven,

I can only make some general suggestions as I am also a bit puzzled as to why training is so much slower on your Mac. Maybe someone else with more experience in using deep learning in such a configuration can help you better.

Since most of the workflow execution time is probably spent on learning the Keras model (and is therefore actually spent in Keras/Python and not in KNIME itself), tweaking KNIME specifically will not help too much here. You should instead focus on the Python process that runs the actual model training.

As a first step, you could try decreasing the amount of memory reserved for KNIME (via the -Xmx setting in knime.ini, e.g. to 50 % or less of your total memory) and see whether the Python process benefits, by monitoring and comparing the memory usage of KNIME vs. Python while the workflow runs. Also check how many CPU cores the Python process is actually using; maybe Keras does not make full use of the available cores for some reason.
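One quick sanity check from inside the Python process itself (a minimal sketch; you could run it in the same Python environment that KNIME uses for deep learning):

```python
import multiprocessing
import os

# Logical CPU cores visible to this Python process -- by default,
# TensorFlow/Keras parallelizes CPU ops across all of them, so a
# surprisingly low number here would explain slow training.
cores = multiprocessing.cpu_count()
print("Logical cores available:", cores)

# The PID lets you find this exact process in top / Activity Monitor
# to compare its memory and CPU usage against the KNIME process.
print("PID of this Python process:", os.getpid())
```

Watching that PID in Activity Monitor while the Learner node runs should show whether Python is actually saturating the cores or sitting mostly idle.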

If you have access to a machine with a (reasonably powerful) GPU, you could also try doing the model training there, as deep-learning applications generally perform much better on GPUs (just make sure to install the GPU-specific versions of Keras/TensorFlow).
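A quick way to check whether the training process can actually see a GPU (a sketch assuming TensorFlow 2.x; for TensorFlow 1.x you would use `tf.test.is_gpu_available()` instead):

```python
# List the GPUs TensorFlow can see; an empty list means training will
# silently fall back to CPU. Guarded with try/except so the snippet
# also runs in environments where TensorFlow is not installed.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
except ImportError:
    gpus = None  # TensorFlow not available in this environment

print("GPUs visible to TensorFlow:", gpus)
```

If this prints an empty list on a machine that does have a GPU, the CPU-only TensorFlow package is probably installed instead of the GPU build.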


