Hello, I operate a KNIME workflow that extracts data from a large Oracle table with the “DB Query Reader” node and, after a few transformations, writes the extracted data into another table in the same Oracle database.
It runs in a loop with more than 7000 iterations (one per key field); each iteration executes an adapted SQL statement like:
select … from … where key_field = 12345678
(A single SQL statement with e.g. 1000 key fields in the WHERE clause does not work, because the source table is too big.)
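For illustration, the per-iteration query generation corresponds roughly to the sketch below. The table and column names (`source_table`, `key_field`) are placeholders, not the real ones. As a side note, a bind-variable version (`where key_field = ?` with a `PreparedStatement`) would let Oracle reuse one parsed statement across all 7000+ iterations instead of hard-parsing 7000+ distinct SQL texts:

```java
public class KeyQueryBuilder {
    // Builds the per-iteration SQL, mirroring the loop described above.
    // "source_table" and "key_field" are stand-in names.
    public static String queryFor(long keyField) {
        return "select * from source_table where key_field = " + keyField;
    }

    public static void main(String[] args) {
        long[] keys = {12345678L, 12345679L};  // stand-in for the >7000 key fields
        for (long k : keys) {
            System.out.println(queryFor(k));
        }
    }
}
```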
This workflow has been operating daily for more than two years, runs for approximately 4 hours, and has worked fine (very stable).
Unfortunately, the execution has recently started to hang for no apparent reason. For example, the 1000th or the 6000th iteration (it is completely arbitrary) hangs in the “DB Query Reader” node or in the “DB Writer” node, and there is no error or warning message, nor any special info entry, in knime.log.
In the advanced settings of the “Oracle Connector” node I increased the “retrieve in configure timeout”, with no real success. I also enabled the JDBC logger there.
But there are only informational JDBC entries, and the last entry before the unwanted endless execution of the “DB Query Reader” node is, as always:
“…Query Reader 2:722 : : jdbc_audit : DB Query Reader : 2:722 : 79554. Statement.executeQuery(sql=): select 1.489008875E9 zaehler, TO_DATE(to_char…”
But after that, the usual follow-up entry does not appear:
“…DB Query Reader : 2:722 : 79544. The connection has been closed (relinquished)…”
Instead, the execution runs for hours and never ends unless I manually cancel it.
I would be very grateful for any idea how I could solve this problem. Are there JDBC parameters in the advanced settings of the “Oracle Connector” node that could help?
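One angle I have seen suggested for silent JDBC hangs is bounding how long a call may block at the socket level. The Oracle thin driver documents the connection properties `oracle.net.CONNECT_TIMEOUT` and `oracle.jdbc.ReadTimeout` (both in milliseconds); with a read timeout set, a stalled `executeQuery` should eventually fail with an exception instead of hanging forever. In KNIME these would go into the JDBC parameters table of the “Oracle Connector” node rather than Java code; the sketch below only shows the property names and units, and the concrete values are illustrative assumptions:

```java
import java.util.Properties;

public class OracleTimeoutProps {
    // Driver properties that bound blocking JDBC calls.
    // Property names are documented Oracle thin-driver properties;
    // the values here are illustrative, not recommendations.
    public static Properties timeoutProps() {
        Properties p = new Properties();
        p.setProperty("oracle.net.CONNECT_TIMEOUT", "10000"); // 10 s to establish the connection
        p.setProperty("oracle.jdbc.ReadTimeout", "600000");   // 10 min max wait on any socket read
        return p;
    }

    public static void main(String[] args) {
        Properties p = timeoutProps();
        System.out.println("oracle.jdbc.ReadTimeout=" + p.getProperty("oracle.jdbc.ReadTimeout"));
    }
}
```

In plain JDBC one would pass these properties to `DriverManager.getConnection(url, props)`; `java.sql.Statement.setQueryTimeout(seconds)` is another standard knob that turns a hanging statement into an error, though I do not know whether KNIME exposes it.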