Memory problems with DB Writer

I am using the KNIME Database Writer node to generate a large table in a MySQL database. The data is processed in chunks of e.g. 1000 rows at a time and then written to the DB. The database table itself has already grown to many thousands of rows, and it looks as if the amount of memory the writer node requires is somehow affected by the size of the table it appends to. Is this possible? I am running into heap-space issues even when I reduce the chunk size within KNIME to a single row. Is there a way to solve this? I would expect JDBC to be able to append data to a table without caching the whole table in JVM memory (although I might be wrong about that).
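
To illustrate what I mean: this is a minimal sketch of how I would expect an append to work over plain JDBC, with memory usage proportional to one chunk rather than to the table (connection URL, credentials, table and column names are all made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchAppend {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; adjust URL, user, password.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "pass")) {
                con.setAutoCommit(false); // commit one chunk at a time
                String sql = "INSERT INTO big_table (id, value) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 1000; i++) { // one 1000-row chunk
                        ps.setInt(1, i);
                        ps.setString(2, "row-" + i);
                        ps.addBatch();
                    }
                    // Sends the accumulated inserts; nothing about the
                    // existing table contents is read back into the JVM.
                    ps.executeBatch();
                }
                con.commit();
            }
        }
    }

Nothing in this pattern depends on how large the target table already is, which is why the growing memory footprint surprises me.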

I remember the MySQL JDBC driver not honoring the chunk-size setting, but I'm not completely sure. Thomas can comment on this, but he is currently on vacation.
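
If memory serves, the related quirk on the read side is that MySQL Connector/J silently ignores setFetchSize() and buffers entire result sets in the heap by default; streaming has to be requested explicitly. A rough sketch of the Connector/J-specific workaround (not necessarily what the node does internally):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamingRead {
        // With MySQL Connector/J, rows are streamed one at a time only for
        // a forward-only, read-only statement whose fetch size is
        // Integer.MIN_VALUE; any other fetch size is silently ignored and
        // the whole result set is buffered in the JVM heap.
        static void streamRows(Connection con) throws Exception {
            try (Statement st = con.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                st.setFetchSize(Integer.MIN_VALUE);
                try (ResultSet rs = st.executeQuery("SELECT id FROM big_table")) {
                    while (rs.next()) {
                        // process each row here without caching the table
                    }
                }
            }
        }
    }

Whether something similar affects the writer path is exactly the part Thomas would have to confirm.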