KNIME workflows corrupted


I uploaded a few workflows to a RHEL 7 server. My usual process is to archive all files of a workflow in my local workspace into a tarball and extract it into the workspace on the RHEL 7 server. I don’t have GUI access to KNIME on the server. A few workflows would never execute because I found some nodes were missing after the upload, and a few others would eventually stop executing for the same reason. This mainly happens during upload / download of workflows and while cleaning up temp files, even though I stop all KNIME processes before cleaning up the temp directory. It still happens.
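For reference, this is roughly how I build and verify the tarball. It is a minimal sketch: the workspace path and workflow name (`MyWorkflow`) are placeholders, and the demo runs in a temporary directory so it is self-contained. Archiving from the workspace root with `-C` keeps relative paths intact, and `tar` picks up KNIME's metadata files (such as `workflow.knime`) that a shell glob like `*` could miss.

```shell
set -eu
# Demo uses a temp dir; in practice point WORKSPACE at your real KNIME workspace.
WORKSPACE="$(mktemp -d)"
WORKFLOW="MyWorkflow"                             # hypothetical workflow name
mkdir -p "${WORKSPACE}/${WORKFLOW}"
touch "${WORKSPACE}/${WORKFLOW}/workflow.knime"   # KNIME's workflow descriptor file

# Archive from the workspace root so relative paths stay intact.
tar -czf "${WORKFLOW}.tar.gz" -C "${WORKSPACE}" "${WORKFLOW}"

# Verify the archive before uploading; tar -t exits non-zero on a truncated file.
tar -tzf "${WORKFLOW}.tar.gz" | grep -q "workflow.knime" && echo "archive OK"
```

Running the same `tar -tzf` check again after copying the file to the server would catch a transfer that truncated the archive, which is one possible cause of missing nodes.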

Please let me know if there is a better way to upload and manage the workflows so the nodes stay intact.


Hi @AJA,

Are you running a KNIME Server on the RHEL 7 machine?
If so, all users on your team can use a local KNIME Analytics Platform installation to deploy workflows to the server; see the KNIME Server User Guide.

If you are instead running KNIME Analytics Platform itself on the server, that setup is indeed finicky and rather limited (which is exactly why we developed KNIME Server).

Kind regards

Hi Marvin,

Thanks for responding!
I am running KNIME Analytics Platform on RHEL 7. The workflows are fine on my local machine; they occasionally get corrupted when KNIME crashes on the server. Also, KNIME Analytics Platform uses up almost all of the memory allocated to it and then crashes, even though I am executing at most one or two workflows. Is there any other setting I need to change to optimize memory usage?
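For context, the memory allocated to KNIME Analytics Platform is controlled by the JVM heap flag in `knime.ini` (one argument per line, in the KNIME installation directory). The value below is only an illustrative example, not a recommendation for any particular machine:

```
-Xmx4g
```

Sizing `-Xmx` below the machine's physical RAM leaves headroom for the OS and off-heap memory, which may be relevant to the crashes described above.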


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.