XLS Reader causing memory problems with large files

I'm looking for a workaround for memory problems when reading large spreadsheet files with the XLS Reader.

Is it possible to have the XLS Reader read only a certain number of rows per table at a time, save just those rows to a database, and loop over the table until all rows are processed and the complete table is stored in the database?

Hi marcv, 

Unfortunately, I don't think that will help, as I believe we would still need to parse the whole file.  One quick option that may fix the problem is to change the -Xmx argument in the knime.ini file in your installation directory to allow KNIME to use more memory.  For example, if you want KNIME to use up to 4 gigabytes of memory, you would change it to -Xmx4g.
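
The memory setting sits below the -vmargs line of knime.ini; the surrounding lines vary by installation, but the relevant part looks roughly like this:

    -vmargs
    -Xmx4g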

Another option might be to export the data in a different format, such as CSV, which is much more memory-friendly to read.
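
For what it's worth, once the data is in CSV form it can also be processed in chunks outside of KNIME. A minimal sketch in Python with pandas (the file name, chunk size, and SQLite table name are made up purely for illustration):

    import sqlite3
    import pandas as pd

    # Hypothetical file and table names, just for illustration.
    conn = sqlite3.connect("rows.db")
    for chunk in pd.read_csv("export.csv", chunksize=50_000):
        # Each chunk is a DataFrame of at most 50,000 rows, so memory
        # usage stays bounded regardless of the total file size.
        chunk.to_sql("rows", conn, if_exists="append", index=False)
    conn.close()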


Hello Aaron,

I tried slicing the sheets with the 'Group Loop Start/End' nodes. That doesn't work: each sheet is still read in as a whole.

My largest sheet has approximately 700,000 rows. I tried setting a bigger heap space, with no luck.

To prevent memory problems, maybe the XLS Reader should be able to read rows in chunks instead of reading all rows in one go.
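
What I have in mind is the kind of chunked read a streaming parser does. A rough sketch in Python, assuming the file is in .xlsx format with three columns (the file, sheet, and table names are made up; a legacy .xls file would need a different library):

    import sqlite3
    from itertools import islice
    from openpyxl import load_workbook

    # read_only=True streams rows instead of loading the whole sheet.
    wb = load_workbook("large.xlsx", read_only=True)
    ws = wb["Sheet1"]

    conn = sqlite3.connect("rows.db")
    conn.execute("CREATE TABLE IF NOT EXISTS rows (a TEXT, b TEXT, c TEXT)")

    rows = ws.iter_rows(values_only=True)
    while True:
        batch = list(islice(rows, 10_000))   # at most 10,000 rows in memory
        if not batch:
            break
        conn.executemany("INSERT INTO rows VALUES (?, ?, ?)", batch)
        conn.commit()

    wb.close()
    conn.close()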

Regards,

Marc