ERROR Cancelling Statement due to User Request

I developed a workflow in KNIME 2.10.3 locally on my Windows PC, which used the new Database nodes to carry out a series of reads, selections and joins, all within the database, culminating in the overwriting of several tables, each containing a list of products made on specific lines. On execution the expected warnings about existing tables being dropped are issued, but the workflow works perfectly. This is a somewhat unusual case in that the database in question is PostgreSQL, realised via the PivotalHD HAWQ interface on top of a Hadoop distributed file system, but to all intents and purposes it looks and behaves, in SQL terms, just like a PostgreSQL database.

I then transferred this to a Linux (CentOS 6.4) machine (which is hosting the Hadoop cluster) and now find that problems occur during execution of the same workflow. The problem seems somewhat random (unfortunately), in that errors occur for some of the 'Database Connection Table Writer' nodes, resulting in failure to create the new table. The error reported for these failures is 'Error - Cancelling statement due to user request'. The output tables affected vary (although there are two nodes which never seem to cause problems), sometimes succeeding and sometimes failing.

I realise this is a bit of a long shot, but does anyone know of a possible explanation for this behaviour, perhaps relating to the 'Execution Monitor' class?

Best regards

 

Steve Thornton

 

Hi Steve,

I do not think that this error is related to the Execution Monitor or any other KNIME DB class. It seems the PostgreSQL driver throws this exception when a session timeout occurs. In order to prevent this you can increase the database session timeout in KNIME via the File->Preferences dialog, in the KNIME->Databases section.
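
Just as a sketch, in case you want to check the timeout behaviour outside of KNIME: the PostgreSQL JDBC driver also has its own socketTimeout connection parameter (in seconds, 0 disables it). The host, database and credentials below are placeholders, and I have not checked which driver property the KNIME preference maps to, so treat this only as an experiment.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class TimeoutCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder host/database/credentials; socketTimeout=600 tells the
            // driver to wait up to 10 minutes on the socket before giving up.
            String url = "jdbc:postgresql://hawq-master:5432/products?socketTimeout=600";
            try (Connection con = DriverManager.getConnection(url, "user", "password")) {
                con.createStatement().execute("SELECT 1");
            }
        }
    }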

Bye,

Tobias

Sorry to have been slow to update on this, but increasing the timeout values as suggested did not seem to improve the situation. We do seem to have 'solved' the issue, however, by changing the memory policy for the upstream nodes to 'write to disk' rather than 'keep only small tables in memory'. As stated previously, though, the workflow functioned perfectly well on a Windows machine and the problem is seen only on the Linux system (where there is a lot more memory available). Maybe there is a clue in this behaviour?

This is very likely a bug in the Postgres JDBC driver, see http://stackoverflow.com/a/14158622. We re-use connections as much as possible, therefore it may happen that a node gets an "old" connection whose status has not been completely reset after the previous node has finished. We are also getting these error messages occasionally during our nightly tests and haven't found a way to resolve them yet without completely re-writing the connection handling. This is on the list for one of the next releases, though.
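
For anyone who wants to reproduce the effect outside of KNIME, here is a minimal sketch of the race described in the linked answer (placeholder connection details, and not our actual connection handling): a cancel request sent for one statement can reach the server only after that statement has already finished, and then aborts whatever statement happens to run next on the same re-used connection.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CancelRaceSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, for illustration only.
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", "user", "password");

            Statement first = con.createStatement();

            // Simulate a query timeout or a node reset cancelling the first
            // statement from another thread while (or shortly after) it runs.
            Thread canceller = new Thread(() -> {
                try {
                    Thread.sleep(100);
                    first.cancel(); // out-of-band cancel request to the server
                } catch (SQLException | InterruptedException e) {
                    e.printStackTrace();
                }
            });
            canceller.start();

            try {
                first.execute("SELECT pg_sleep(0.05)"); // may finish before the cancel arrives
            } catch (SQLException expected) {
                // fine: the cancel hit the statement it was meant for
            }
            canceller.join();

            // If the cancel arrives too late, it can instead abort the next statement
            // on the same (re-used) connection, which then fails with
            // "ERROR: canceling statement due to user request".
            try (Statement second = con.createStatement()) {
                second.execute("SELECT count(*) FROM pg_class");
            }
            con.close();
        }
    }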

Hello everybody,

I encountered this problem today with a Database Reading node in a workflow that makes heavy use of database reading. I'm using KNIME 3.1.1. Please, do you have any news on this front?

Alternatively, is there a way to automatically re-run the node after a certain time if it fails? Or do you have any other workaround suggestions?

Thanks in advance for any help!

Hi again,

I found a temporary workaround for this. It consists of placing a Workflow Control Wait node in front of the failing Database Reading node. I set the node to wait 1 second and, from what I have seen so far, that is enough to bypass the problem. I don't know exactly why it works, but it does.

Anyway, I hope the Postgres JDBC driver bug will be solved soon.

Cheers