For the Update / writer nodes, increase the JDBC batch size in the node's Advanced configuration to 100. (IMHO this should be the default, as no one will ever use a batch size of 1 in a real-world application because it's so slow.)
Hi Ivan,
if you enable the JDBC logger via the Advanced tab of the DB Connector node, you can find all SQL statements that are sent to the database, including the insert/update statements, in the KNIME log. The nodes themselves do not output the SQL statements.
Regarding the second question, why do you want to update a column with its own value?
I guess you have a high-resolution display. Currently the font size of the SQL editor is fixed, but I will create a feature request to make it adjustable. I guess you will have the same problem with other nodes that have an editor, such as the R/Python nodes, the Java Snippet node or the String Manipulation node.
The new DB framework comes with the new DB Merge node, which is what you are looking for. If the database provides a special command for this, the operation is performed with a single statement, e.g. the MERGE statement in Oracle.
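For reference, an Oracle MERGE combines the insert and update into one statement. A minimal sketch (table and column names here are placeholders, not from the original post):

```sql
-- Rows from "source" that match an existing "target" row on id are
-- updated; rows with no match are inserted, all in a single statement.
MERGE INTO target t
USING source s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (s.id, s.val);
```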
Hi Ivan,
which database are you working with where you experience the slow update performance? Most of the time an update is very slow if the columns used in the WHERE clause of the update are not indexed. Creating an index on these columns in the database usually increases the update performance dramatically.
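As an illustration (table and column names are placeholders, assuming PostgreSQL syntax), without an index every update has to scan the whole table to find the matching rows:

```sql
-- Each statement like this scans the full table if column1 is not indexed:
UPDATE my_table SET column2 = 'new' WHERE column1 = 'old';

-- An index on the WHERE-clause column lets the database find the
-- matching rows directly instead of scanning:
CREATE INDEX idx_my_table_column1 ON my_table (column1);
```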
Bye
Tobias
I’m using the PostgreSQL Connector and not the DB Connector, but I guess there is no difference, so I can switch and try it.
I’m not trying to update it with its own value but based on its value. I’m trying something like this: UPDATE table SET column1 = value1 WHERE column1 = value2;
I don’t have the problem with the Java Snippet or String Manipulation node. Here is a picture:
Have you also tried the new DB SQL Executor node which is part of the new DB framework? I would be interested to know if this editor also has the problem with the small font size.
The new DB Writer/Delete/Update/Merge nodes also come with a default batch size of 100. Optimizing the default batch size based on the available memory or the number of cells in memory could be complex, since a single cell could already contain a lot of data, e.g. a complete document that should be written into a BLOB column in the database. That is why, for now, we have decided on a fixed default batch size of 100, which should be OK for most cases.
Tobias
Hi Ivan,
the new batch size is only the default and can be changed in the node dialog of the DB manipulation nodes, e.g. Writer, Update, Delete and Merge.
Bye
Tobias
Hey there, I had the same issue and was able to solve it really simply. KNIME has a knime.ini file, which contains the parameters KNIME uses when it starts up.
The real issue is that the JDBC fetch size is set to 10. By default, when the Oracle JDBC driver runs a query, it retrieves a result set of 10 rows at a time from the database cursor. This is the default Oracle row fetch size, so whenever you read from the database you will be stuck waiting a long time to retrieve all the rows.
The fix is simple: go to the folder where KNIME is installed, look for the knime.ini file, open it and add the following lines at the bottom. This overrides the default JDBC fetch size, and then you will get the data in literally seconds.
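The original post does not quote the exact lines here, but the setting commonly used for this purpose is the Oracle JDBC defaultRowPrefetch system property, passed as a -D JVM argument at the end of knime.ini (the value 10000 below is just an example, not from the original post):

```ini
-Doracle.jdbc.defaultRowPrefetch=10000
```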