Unhandled exception in processFinished

Hi,

I have developed a node that processes rows independently. To do this, I have extended AbstractCellFactory in my own class.

When I enable the parallel processing flag:

this.setParallelProcessing(true);

and the input has around 70,000 rows, I get the exception [1]. It does not happen when the input is smaller (15,000 rows).
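For context, my factory is wired up roughly like this (a minimal sketch: the class name and the per-row computation are placeholders, not my actual code):

    import org.knime.core.data.DataCell;
    import org.knime.core.data.DataColumnSpec;
    import org.knime.core.data.DataRow;
    import org.knime.core.data.container.AbstractCellFactory;
    import org.knime.core.data.def.StringCell;

    public class MyCellFactory extends AbstractCellFactory {

        public MyCellFactory(final DataColumnSpec newColSpec) {
            super(newColSpec);
            // Rows are independent, so let KNIME compute them in parallel.
            setParallelProcessing(true);
        }

        @Override
        public DataCell[] getCells(final DataRow row) {
            // Per-row computation; placeholder logic only.
            return new DataCell[]{new StringCell(row.getKey().getString())};
        }
    }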

Honestly, I don't know what I'm doing wrong, or how I can extract more information about the root cause of the problem.

 

Another thing I have tested: using Chunk Loop Start + Loop End, I split the input into chunks of 15,000 rows and accumulated the results, and this works fine!

I would appreciate any comments / insights on how I can get more information about this problem.

Thanks in advance!

Oscar

[1]

2013-02-06 13:48:30,271 DEBUG KNIME-Worker-8 RearrangeColumnsTable$ConcurrentNewColCalculator : Unhandled exception in processFinished
org.knime.core.data.container.DataContainerException: Writing to table process threw "InterruptedException"
    at org.knime.core.data.container.DataContainer.checkAsyncWriteThrowable(DataContainer.java:579)
    at org.knime.core.data.container.DataContainer.addRowToTable(DataContainer.java:840)
    at org.knime.core.data.container.RearrangeColumnsTable$ConcurrentNewColCalculator.processFinished(RearrangeColumnsTable.java:728)
    at org.knime.core.util.MultiThreadWorker.callProcessFinished(MultiThreadWorker.java:319)
    at org.knime.core.util.MultiThreadWorker.access$0(MultiThreadWorker.java:301)
    at org.knime.core.util.MultiThreadWorker$ComputationTask.done(MultiThreadWorker.java:457)
    at java.util.concurrent.FutureTask$Sync.innerSet(FutureTask.java:281)
    at java.util.concurrent.FutureTask.set(FutureTask.java:141)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:339)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:124)
    at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:239)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.Exchanger.exchange(Exchanger.java:685)
    at org.knime.core.data.container.DataContainer$ASyncWriteCallable.call(DataContainer.java:1336)
    at org.knime.core.data.container.DataContainer$ASyncWriteCallable.call(DataContainer.java:1)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

The "InterruptedException" usually indicates that the user canceled the processing. I guess that's not the case for you?

Does it succeed when you "setParallelProcessing(false)"?

Also try this Java system property (as a VM argument): -Dknime.synchronous.io=true. Maybe an exception is thrown while writing your data to the temp directory and it's hidden in some other thread's logger.
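If you start KNIME from the installation, the property can go into knime.ini below the -vmargs line (keeping any existing entries), for example:

    -vmargs
    -Dknime.synchronous.io=true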

Hope that helps!

 Bernd

Hi Bernd,

thanks for your message. The VM argument:

-Dknime.synchronous.io=true

has solved my problem.

Initially, this problem only happened to me when:

    setParallelProcessing(true) was enabled
    the input was large (more than 70,000 rows)

I would like to investigate a bit more into what is happening in the node and whether I can improve the code. As you said, an exception may be masked by another thread. I suppose (please correct me if I am wrong) that this exception is raised in my own code and I do not handle it. How could I detect, or be sure, that I am not doing "dangerous things" in the node?
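For instance, would it make sense to wrap the per-row work so that any failure is logged with the offending row key before it propagates? A rough sketch (computeCells is a placeholder for my actual logic):

    import org.knime.core.data.DataCell;
    import org.knime.core.data.DataRow;
    import org.knime.core.node.NodeLogger;

    // Inside my AbstractCellFactory subclass:
    private static final NodeLogger LOGGER = NodeLogger.getLogger(MyCellFactory.class);

    @Override
    public DataCell[] getCells(final DataRow row) {
        try {
            return computeCells(row); // placeholder for the actual per-row work
        } catch (RuntimeException e) {
            // Make the failing row visible in the log, even from a worker thread.
            LOGGER.error("Computation failed for row " + row.getKey(), e);
            throw e;
        }
    }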

Oscar

PS: Let me stress that this issue didn't happen when the input was smaller (chunks of 15,000 rows). Could this be a clue to figuring out what is happening?

Thanks in advance.

It should print a stack trace of the problem to the log file (try "View" -> "Show log file", or find the path to the log file in one of the first lines of the console log view).

I guess you don't see the error with the 15k-row table because it's kept in memory. You can force it to disk in the config dialog ("Memory Policy").

- Bernd