You could also try deactivating the “retrieve in configuration” setting in the Connector node. When this is active, the node verifies that every setting and column is present before you can access the database, which is useful but can slow things down or get in the way with really large databases. The downside is that without those additional checks you have to know what you are doing.
You might also check whether further tweaks to the connector settings help, especially the timeouts (increase the time KNIME has to finish the operation).
And then you might want to make sure the maximum amount of Java heap space is available:
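The heap size is set via the `-Xmx` option in the `knime.ini` file in the KNIME installation folder. A minimal sketch (the value of 8 GB is just an example; pick something that fits your machine's RAM):

```
# knime.ini (excerpt) – raise the maximum Java heap available to KNIME
-Xmx8g
```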
@Awiener the hint from @NurSenaAlici to COMPACT the database is the right approach. Also make sure you do not hit any restrictions MS Access might have (such as its 2 GB file size limit).
Further down in this thread there are options for handling larger local databases if you do not have to use MS Access.
Creating the connection takes ages, so I’ve enlarged -Xmx. That helped. However, performance was still bad, so I’ve “converted” the .mdb to .sqlite with the help of this workflow:
Now we work with the .sqlite file, which responds tremendously faster. As it is just a minor data source, that works fine for the workflow.
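Outside of KNIME, the same conversion idea can be sketched in plain Python: export the Access tables (e.g. to CSV) and load them into a SQLite file with the standard library. The file and table names below are made up for illustration; the linked workflow does this conversion natively in KNIME.

```python
import csv
import sqlite3


def csv_to_sqlite(csv_path: str, db_path: str, table: str) -> None:
    """Load a CSV export of an Access table into a SQLite database file."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    cols = ", ".join(f'"{c}"' for c in header)     # quoted column names
    marks = ", ".join("?" for _ in header)         # one placeholder per column

    con = sqlite3.connect(db_path)
    with con:  # commits on success, rolls back on error
        con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        con.executemany(f'INSERT INTO "{table}" VALUES ({marks})', data)
    con.close()
```

The resulting .sqlite file can then be used with KNIME's SQLite Connector node instead of the slow Access connection.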
@Awiener glad you found a solution, and SQLite is a good choice. If you need an alternative for very large datasets or different type handling, H2 is also a nice local option that can split a very large database across several local files if necessary.
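As far as I know, the splitting is done via H2's split file system, enabled with a prefix in the JDBC URL; something along these lines (the path is a placeholder, and the exact syntax should be checked against the H2 documentation):

```
jdbc:h2:split:~/data/mydb
```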