Hi. I am using KNIME Server and have created a lot of workflows for batch processing. Every workflow has a DB Writer node or a DB SQL Executor node. Unfortunately, a few of the DB SQL Executor or DB Writer nodes never finish executing. Like this:
One has been running for more than 6 hours. The volume of data is not that big, but I have many other workflows scheduled at times adjacent to this one. What could the problem be, and what is the solution? Thank you, forum!
Hi @wjsqorwns93, my thoughts on this: if you have multiple flows writing to the database at the same time, are they inserting into or updating the same tables, and if so, are they committing?
Possibly this flow is waiting for the release of a lock on data it is trying to access or write to?
Does this workflow work if it is the only thing running against the database?
This is something I have seen, at least with some DB connections in KNIME.
If you use the same session in multiple workflows, one workflow might wait for the others to commit/finish. Depending on the connection, this does not have to be a locked table; as far as I know, it can also just be a different commit running in general.
Do your workflows share the same temporary/volatile tables? If so, forcing different workflows to use different sessions might work (or changing specific JDBC connection parameters).
This could also happen if the table is currently locked by another process/workflow, so the same as what @takbb already said.
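To make the commit/locking point concrete, here is a minimal sketch of how an uncommitted write transaction in one session blocks a second session until the first one commits. It uses Python's sqlite3 purely as a stand-in for whatever database your workflows write to (the table name `t` and the short timeout are made up for illustration); the exact lock behavior of your real database will differ, but the pattern is the same:

```python
import sqlite3
import tempfile
import os

# Two connections simulate two workflows sharing one database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.2, isolation_level=None)
b = sqlite3.connect(path, timeout=0.2, isolation_level=None)

a.execute("CREATE TABLE t (x INTEGER)")

# Session A starts a write transaction but does NOT commit yet.
a.execute("BEGIN IMMEDIATE")
a.execute("INSERT INTO t VALUES (1)")

# Session B now tries to write: it waits for the lock, then gives up
# with "database is locked". In KNIME, instead of failing fast, a node
# can simply sit there "executing" while it waits.
try:
    b.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:
    blocked = True
print(blocked)  # True: B was stuck behind A's open transaction

# Once A commits, B can proceed normally.
a.execute("COMMIT")
b.execute("BEGIN IMMEDIATE")
b.execute("INSERT INTO t VALUES (2)")
b.execute("COMMIT")
```

This is why checking for uncommitted transactions (or separating sessions) is usually the first thing to try.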
Thanks for the quick reply!
First, answering the first question: mostly they are not sharing tables. Each workflow has its own table to update.
Every morning I delete this job (because the cancel button won't work) and restart it, and then it works. I have changed the schedules several times, but at least one of the workflows scheduled in the morning always causes the same problem.
Hi @wjsqorwns93, in most cases this happens because of what @takbb and @AnotherFraudUser have already said - waiting on a table lock.
A few questions for you:
- What is the frequency of running this workflow? Though the tables are not shared by other workflows like you said, could it be that the workflow itself is re-executing before the previous instance completed?
- You said “mostly” they are not sharing the table, does this mean that some do?
- And finally, is there any reason why you are closing and re-opening connections? You can chain DB SQL Executor nodes one after the other if they are using the same DB server. You can also execute multiple queries in the same DB SQL Executor node. You just need to define a query terminator (usually the semi-colon, as you would in an SQL client) and make sure you check “Support multiple SQL statements”, like this:
Of course, this does not mean that you should always run everything in the same node; it's OK, and in some cases even better, to separate them. I just want to make sure you know that you can run multiple queries in the same node, and also run multiple nodes with the same connection.
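As an aside, the "multiple statements, one terminator" idea described above is the same pattern most database APIs offer for running a semicolon-separated script in one call. Here is a minimal sketch using Python's sqlite3 (the `staging` table and its values are made up for illustration; in KNIME the node handles this for you, this just shows what the option does conceptually):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Several statements in one string, separated by the ';' terminator --
# analogous to one DB SQL Executor node with
# "Support multiple SQL statements" checked.
conn.executescript("""
    CREATE TABLE staging (id INTEGER, val TEXT);
    INSERT INTO staging VALUES (1, 'a');
    INSERT INTO staging VALUES (2, 'b');
    DELETE FROM staging WHERE id = 1;
""")

print(conn.execute("SELECT id, val FROM staging").fetchall())  # [(2, 'b')]
```

All four statements run sequentially in one call, which mirrors running them in a single node instead of chaining four separate ones.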
A post was split to a new topic: DB SQL Executor Error