We have a workflow that calls several subworkflows on the server using different “Call Workflow (Table Based)” nodes in sequence. The workflow runs completely fine with smaller test sets (4 to 3,000 rows) but breaks once the test set reaches about 5,000 rows. I split the test set and tried the different splits, so I am quite sure it does not depend on some rows containing invalid data. Our current workaround is to put a Chunk Loop Start node (Rows per chunk: 3000) before the “Call Workflow” nodes; that works so far, but it makes the whole process much slower.
Does somebody have similar experience with calling workflows and row limitations, or am I missing something? I did not play much with the Base timeout option. It would be great if we could get rid of the loop workaround.
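For reference, the workaround is essentially just batching the input before each remote call: instead of pushing all ~5,000 rows across in one request, the Chunk Loop Start node sends at most 3,000 rows per call. In plain Python the idea looks roughly like the sketch below (`call_workflow` is a hypothetical placeholder for whatever actually invokes the server-side workflow, not a real KNIME API):

```python
# Minimal sketch of the chunking workaround, outside of KNIME.
# call_workflow is a hypothetical placeholder for whatever actually sends a
# batch of rows to the server-side workflow; it is not a real KNIME API.
from typing import Callable, Dict, List

Row = Dict[str, object]

def call_in_chunks(rows: List[Row],
                   call_workflow: Callable[[List[Row]], List[Row]],
                   chunk_size: int = 3000) -> List[Row]:
    """Split the input table into batches so no single remote call has to
    handle more than chunk_size rows, then concatenate the results."""
    results: List[Row] = []
    for start in range(0, len(rows), chunk_size):
        batch = rows[start:start + chunk_size]
        results.extend(call_workflow(batch))  # one server call per batch
    return results
```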
What are the error messages you get when you use the entire set of rows?
Have you played with the “Job Status Polling” settings in the Advanced Settings tab?
More info about these settings can be found in the node description.
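Conceptually, those settings control how long the calling node waits between checks on the remote job, starting from the base timeout and then backing off. The sketch below is only an illustration of that pattern with made-up parameter names, not the node's actual implementation:

```python
import time

def poll_until_finished(get_status, base_timeout_ms: int = 1200,
                        multiplier: float = 2.0, max_wait_ms: int = 60_000) -> str:
    """Illustrative polling loop: wait, ask the server for the job status,
    and increase the wait (up to a cap) until the job is done or has failed.
    get_status is a hypothetical callable returning 'RUNNING', 'DONE' or 'FAILED'."""
    wait_ms = base_timeout_ms
    while True:
        time.sleep(wait_ms / 1000.0)
        status = get_status()
        if status in ("DONE", "FAILED"):
            return status
        wait_ms = min(int(wait_ms * multiplier), max_wait_ms)  # back off, capped
```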
Thank you very much for your fast reply. I did not play much with the “Job Status Polling” in the Advanced Settings: what I tried was increasing the “Base timeout [ms]” from 1200 to 3000 ms, but files with more than 5,000 rows still crashed my workflow (unfortunately I cannot provide an example workflow). The individual workflows (subworkflows) run fine on their own without any row limitations, by the way.
I did not get a particular error message, only that the Executor is unresponsive, which makes sense because we had to restart it. As mentioned, that does not happen with smaller files or when I use the Chunk Loop Start node to reduce the input to 3,000 rows per loop.
I know with the information given there is not much you can do.
I need the KNIME Server and Executor logs to understand the issue better. As a KNIME Server admin, you can get them via the KNIME WebPortal Monitoring page (Monitoring → Logs).
**As these logs could contain sensitive information, it is better if you send them by email to support@knime.com.**