I am using the schedule option with a main workflow which calls various local workflows to create PDF files.
Main Workflow => Calls LocalWorkflow (Create Report) => Call Local Workflow (Create different pages)
Sometimes the workflow (Call Local Workflow (Create different pages)) crashes with: Execute failed: NullPointerException: null
Call Local Workflow (Create Report) => Call Local workflow (Create Pages xyz)
Input is a table with ~1200 rows.
Can someone tell me why the error occurs sporadically?
Do you happen to have log files available for the (sometimes) failing workflows?
Can you create a smaller example that still fails (e.g. with just one level of calling other workflows)?
Where exactly can I find the correct logs?
Ah apologies. In the KNIME Explorer under the connected server workflow (specifically the workflow that fails) you should find jobs that ran the workflow. Right-clicking a job should display a “Show workflow messages” option.
Nothing special in there (I think), except the NullPointerException:
Call Local Workflow (Row Based) 19:595 - ERROR: Execute failed: (“NullPointerException”): null
error.txt (5.0 KB)
Hm. Do you know which workflow is being called by the failing node? Maybe that respective job has some useful information. Otherwise we are left guessing right now.
An admin could grab the server logs from <executor-workspace>/.metadata/knime/knime.log (and send them to email@example.com).
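If an admin has shell access to the server, a quick way to check whether the log contains the relevant stack trace before sending it is to grep for the exception. This is only a sketch: the workspace path below is an assumption (substitute your actual executor workspace for `<executor-workspace>`), and the context size is arbitrary.

```shell
# Hedged sketch: scan a knime.log for NullPointerException entries.
# The default workspace path is an assumption -- adjust it to the
# <executor-workspace> used by your KNIME Server executor.

find_npe() {
  # Print matching lines with line numbers and 5 lines of trailing
  # context, so the stack trace below the ERROR line is visible.
  grep -n -A 5 "NullPointerException" "$1"
}

LOG="${KNIME_WORKSPACE:-$HOME/knime-executor-workspace}/.metadata/knime/knime.log"
[ -f "$LOG" ] && find_npe "$LOG" || echo "log not found at $LOG"
```

If the grep turns up the stack trace, including those context lines in the support request usually speeds things up, since the frames below the `NullPointerException` line show which node or plugin threw it.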
Or, as mentioned in my first reply, trying to build a smaller workflow that runs into the same problem might help narrow down possible causes.
The error is sporadic and I cannot reproduce the behavior.
Sometimes it crashes with just one row; then when I restart it, it works.
Sometimes it works fine with 1200 rows, sometimes not.
And it's not always the same Call Local Workflow node.
I will send the log file to firstname.lastname@example.org.
I’m afraid I missed your log file. Do you still have the file and could send it again? Or have you made any other observations in the meantime?
see Request No. 20549
Ah, excellent, then I was just too slow to grab it.
Our developers are still at it. Thanks for your patience, and apologies for the multiple communication channels.