I am connecting to a database with JDBC. I noticed some differences between the Database Reader node and the legacy Database Reader node:
Database Reader (legacy): I am limited to downloading 11 million rows. However, using SQL I can refine the rows of interest well and, up to that 11 million row limit, define exactly which data to download.
Database Reader: I don't have the 11 million row limitation; I could already download a dataset with 42 million rows. However, when I use SQL to define which rows I am interested in, I am limited to 1.1 million rows.
I don't know whether these limitations come from the database, but since they differ between the two nodes, I think they come from KNIME. Is there a way to have the best of both worlds, ideally without any limitations?
When it comes to executing the query and fetching the raw data, there is no big difference between the legacy and the new database framework: both use the standard JDBC methods.
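As an aside, the execute-then-fetch pattern described above is the same in essentially every SQL client API, not just JDBC. A minimal sketch using Python's standard-library sqlite3 module (an in-memory database standing in for the real one; the table and column names are made up for illustration) shows the two-step idea: the statement is executed once, and rows are then fetched incrementally, so neither step imposes a hard row limit by itself.

```python
import sqlite3

# In-memory database standing in for the real JDBC data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [(i, i * 0.5) for i in range(100_000)],
)

# Execute once, then fetch in batches -- the same two-step pattern
# JDBC uses (executeQuery, then ResultSet.next with a fetch size).
cur = conn.execute("SELECT id, value FROM measurements WHERE id % 2 = 0")
total = 0
while True:
    batch = cur.fetchmany(10_000)  # batch size, akin to JDBC setFetchSize
    if not batch:
        break
    total += len(batch)

print(total)  # 50000 rows: only the even ids pass the SQL filter
```

Any row cap therefore tends to come from the driver or the database configuration, not from the fetch loop itself.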
Can you please check the following:
Do both nodes execute the same statement? (See the outport view of the predecessor node.)
Do both nodes use the same driver and version?
If so, please let us know which database and driver (name and version) you are using, and ideally the SQL statement, if it doesn't contain any sensitive data.
Thanks for your reply. You are right: I also changed the driver during the update from the legacy node (old driver) to the new DB Connector (new driver). So the limitations most probably come from the driver?
With the newer driver (which has no import limitations) I simply imported the complete dataset and then used the standard KNIME nodes to filter the rows and reduce the columns. Not ideal, since importing several million rows takes longer, but it works.
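This workaround trades transfer volume for reliability. As a sketch of why pushing the filter into SQL is normally preferable when the driver allows it, here is the comparison again with Python's sqlite3 module (synthetic data, made-up table and column names): both approaches yield identical rows, but the SQL filter only transfers the matching subset instead of the whole table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, float(i % 100)) for i in range(10_000)],
)

# Approach 1: filter on the database side (refinement in SQL).
pushed = conn.execute(
    "SELECT id, amount FROM orders WHERE amount > 90"
).fetchall()

# Approach 2: import everything, then filter client-side
# (the workaround described above).
everything = conn.execute("SELECT id, amount FROM orders").fetchall()
filtered = [row for row in everything if row[1] > 90]

assert pushed == filtered             # same result either way
print(len(everything), len(pushed))   # 10000 rows transferred vs 900
```

With tens of millions of rows the difference is mostly import time and memory, so once the driver-side limit is resolved, moving the filter back into the SQL statement should pay off.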