I'm trying to speed up my pipeline execution using Parallel Chunk Start/End nodes. Between them I placed a FileConverter followed by FeatureFinderMetabo, among other nodes that are less relevant here. The issue is that from time to time I get the following error in the log:
"ERROR FileConverter 0:255:280:295:277 Execute failed: ("ConcurrentModificationException"): null
ERROR Parallel Chunk End 0:255:19 Execute failed: Not all chunks finished - check individual chunk branches for details."
and the execution stops. It happens in particular when I pass many files to the workflow and Parallel Chunk Start uses up to 8 chunks. Any ideas how to avoid this? I can share the problematic part of the workflow if necessary.
OK, from the information you gave us, I can see that the actual error comes from Java (i.e. KNIME itself). However, it suggests looking at the individual, expanded branches of the loop execution, because one of them seems to have produced an error.
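For context, a ConcurrentModificationException is the generic Java exception thrown when a collection is structurally modified while something else is still iterating over it. The sketch below is purely illustrative (the class name `CmeDemo` and the chunk names are made up; this is not KNIME's actual code), but it shows the kind of race the log message points at:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Illustrative only: the generic Java failure mode behind a
// ConcurrentModificationException -- a list is modified while a
// fail-fast iterator is still walking it.
public class CmeDemo {
    static boolean triggers() {
        List<String> chunks = new ArrayList<>(List.of("chunk-1", "chunk-2", "chunk-3"));
        try {
            for (String c : chunks) {     // for-each creates a fail-fast iterator
                if (c.equals("chunk-1")) {
                    chunks.remove(c);     // structural modification mid-iteration
                }
            }
        } catch (ConcurrentModificationException e) {
            return true;                  // the iterator detected the modification
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("CME thrown: " + triggers());
    }
}
```

In a single-threaded program this is deterministic, but with parallel branches touching shared state it only happens when the timing lines up, which would match the intermittent failures you describe.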
This might be an OpenMS-related problem, but you would need to dig deeper and find the branch where it goes wrong. I think you will find the additional individual branches in the meta/sub node that branches off the Parallel Loop Start.
If you find that it is an OpenMS node that produces the error, you can try right-clicking it and sending us the report from "StdErr" and/or "StdOut".
Keep in mind that the parallel execution nodes are still in KNIME Labs (i.e. beta), and in combination with OpenMS nodes there might be additional sources of error, e.g. simultaneous access to files.
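If simultaneous file access turns out to be the culprit, one general workaround is to make sure no two branches ever touch the same path, e.g. by giving each branch its own scratch directory. A minimal sketch, assuming you control where the intermediate files go (the class and method names here are invented, not a KNIME API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: avoid file-access races between parallel
// branches by giving each branch a unique scratch directory.
public class PerBranchScratch {
    static Path scratchFor(int branchIndex) throws IOException {
        // createTempDirectory guarantees a fresh, unique directory per call
        return Files.createTempDirectory("branch-" + branchIndex + "-");
    }

    public static void main(String[] args) throws IOException {
        Path a = scratchFor(0);
        Path b = scratchFor(1);
        System.out.println("distinct scratch dirs: " + !a.equals(b));
    }
}
```

Whether this is feasible depends on how the OpenMS nodes name their temporary files, so treat it as a direction to investigate rather than a confirmed fix.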
You could also try limiting the number of branches (configure the Parallel Loop Start node). The default simply maxes them out at the number of processors you have. Try a smaller number.
Sorry that it took me so long to respond, but the problem does not reproduce regularly, so I only hit it again today, seven days later. This time I got the same error from a FeatureFinderMetabo node. Both StdErr and StdOut are empty. If I rerun the branch where the error occurred, it continues working correctly.
The most important thing is that it doesn't spoil the workflow results, and the execution can be resumed. But yes, it would be nice if this concurrency problem were fixed at some point.