Workflows crashed when visualizing massive data

Hi all,

We’re migrating to KNIME Business Hub and are in the phase of validating whether all the workflows work as expected (after migrating/adapting them).

We currently have a problem with some workflows that are used to visualize and modify a huge amount of data (using the Table Editor node).
These workflows worked well on KNIME Server (with -Xmx48g), but they crash on the Hub with the following error message:

upstream connect error or disconnect/reset before headers. reset reason: remote connection failure, transport failure reason: delayed connect error: 111

Nothing was found in the executor log. Inspecting the workflow from the Analytics Platform shows that all nodes executed successfully.

We searched for more information with our admin and found an OOM in the rest-interface logs. However, our execution context is already configured with 60g, and when the workflow crashed, memory usage was fairly low (barely changing): around 20g out of the ~80g limit.

Any idea how to fix this? Many thanks in advance!

Best regards,

Thanh Thanh

Hello,

Please reach out to KNIME Support directly at support@knime.com, so that we can review a support bundle/logs from the cluster and help you troubleshoot directly.

Thank you,
Nickolaus
