DB Writer to Access DB Hangs at 99%

I have a basic workflow where I am attempting to write 3.5M rows into a freshly created DB table in Microsoft Access.

The nodes are: [screenshot of the workflow]

The config at the Access Connector Node:
[screenshot]

The config at the DB Table Creator Node:
[screenshot]

The config at the DB Writer Node:

In File Explorer, the file looks like it’s being worked on:
[screenshot]

I’ve already tried the UCanAccess route, but ran into the same problem there, so I’m back to using the driver shown in the config snapshot above.
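For reference, and only as a sanity check outside of KNIME, here is a minimal sketch of what the UCanAccess route looks like over plain JDBC. The file path, table, and column names are invented for the example, and it assumes the UCanAccess jar plus its dependencies (Jackcess, HSQLDB, etc.) are on the classpath; this is just an illustration of the route, not the DB Writer node’s actual code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AccessWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to the target .accdb file
        String url = "jdbc:ucanaccess://C:/data/target.accdb";

        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO my_table (id, txt) VALUES (?, ?)")) {
            con.setAutoCommit(false);              // commit in batches, not per row
            for (int i = 0; i < 100_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if ((i + 1) % 10_000 == 0) {       // flush every 10k rows
                    ps.executeBatch();
                }
            }
            ps.executeBatch();                     // flush the remainder
            con.commit();
        }
    }
}
```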

I frequently work with SQLite databases in KNIME at similar data sizes, but I never see anything get stuck like this at 99%.

In Debug mode, I see the following messages being looped over:
DEBUG MemoryAlertSystem 0/1 listeners have been removed, 1 are remaining.
DEBUG MemoryAlertSystem 0/2 listeners have been removed, 2 are remaining.
DEBUG MemoryAlertSystem Memory collection threshold of 88% exceeded after GC
DEBUG MemoryAlertSystem Memory collection threshold of 67% exceeded after GC
DEBUG MemoryAlertSystem 2 listeners will be notified.
DEBUG MemoryAlertSystem 1 listeners will be notified.
DEBUG MemoryAlertSystem 0/1 listeners have been removed, 1 are remaining.
DEBUG MemoryAlertSystem 0/2 listeners have been removed, 2 are remaining.

Hello @qdmt,

perhaps you are hitting a Microsoft Access DB limitation: the maximum size of an Access database file is 2 GB.

Br,
Ivan


I think you’re right. The file gets stuck at 2 GB.

Is it expected that a 100 MB source CSV would blow up to 2 GB in Access? There are no transformations taking place in the workflow, just a straight transfer.
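As a rough back-of-the-envelope using the numbers in this thread (approximate, since the actual per-row overhead in Access varies with column types and indexes):

$$
\frac{100\ \text{MB}}{3.5\times 10^{6}\ \text{rows}} \approx 30\ \text{bytes/row (CSV)}
\qquad
\frac{2\ \text{GB}}{3.5\times 10^{6}\ \text{rows}} \approx 600\ \text{bytes/row (Access)}
$$

That is roughly a 20x blow-up per row, which Unicode text storage plus page and index overhead can plausibly account for.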

From further reading: yes, it seems this is expected, given that Access is just an inefficient database format overall :). Cheers


@qdmt one idea could be to test an H2 SQL database:

http://h2database.com/html/advanced.html#limits_limitations


You could create a new database connection with a current H2 driver (from Maven) and add the “split” parameter to the connection string:

jdbc:h2:split:[location=file?<file>][location=in-memory?mem:<database>]

You will have to select this new “h2_split” database driver in the H2 Connector instead of the built-in one: [screenshot]

The data would then be stored in a local H2 SQL database in chunks of 1 GB, so you have more flexibility with the size. You can later compact the database (which may take some time).
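To make the idea concrete outside of the KNIME connector, here is a minimal JDBC sketch of the same setup; it is a sketch only, assuming the current H2 jar from Maven is on the classpath, and the file path and table name are invented for the example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2SplitSketch {
    public static void main(String[] args) throws Exception {
        // "split:" makes H2 store the database in chunks of at most 1 GB per file,
        // so no single database file grows very large. Path and table are hypothetical.
        String url = "jdbc:h2:split:./bigdata/mydb";

        try (Connection con = DriverManager.getConnection(url, "sa", "");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS my_table(id INT, txt VARCHAR)");
            // ... bulk-load the rows here (batched INSERTs, or H2's CSVREAD) ...

            // Compact and close the database; this can take a while for large files.
            st.execute("SHUTDOWN COMPACT");
        }
    }
}
```

Selecting the custom “h2_split” driver with that URL template in the H2 Connector should give the same on-disk layout from within KNIME.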

kn_example_db_h2_large_files_split.knwf (57.2 KB)


Thanks very much @mlauber71. In this particular instance I had an external constraint to use Access, but I just tested the H2 connector and it worked well.

