I’m working with large data sets, but after 100,000 rows no more data is produced and passed on down the line for processing. I’ve tried modifying the knime.ini file to increase the number of rows kept in memory (the cellsinmemory option), a tip from the blog, but 100,000 rows is still the maximum amount of data I get. I estimate I need 5 to 6 times that amount for my work. The only other option I see is chunking, but I regard that as a last resort.
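For reference, this kind of setting goes into knime.ini as a JVM system property after the -vmargs line. The sketch below is illustrative only; the property name is assumed from the cellsinmemory option discussed in this thread, and the values are made up to match the 5–6× estimate:

```ini
# knime.ini (lines below must come after -vmargs)
# Assumed property name; keep up to 600,000 rows per table in memory
# before KNIME buffers table data to disk (value illustrative).
-Dorg.knime.container.cellsinmemory=600000
# Raise the JVM heap so larger in-memory tables actually fit (illustrative).
-Xmx8g
```

Note that, as mentioned below, this option only controls when tables are swapped to disk; it should not cap how many rows a node outputs.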
Hi there @Benjamin_B,
What does it mean that no more data is produced? All data should be processed regardless of the cellsinmemory configuration option.
Regarding the blog post, some things have changed with KNIME version 4, so check this document for more info.
I am extracting data from a server, but only 100,000 rows are extracted (see image).
It appears that rows numbered above 100,000 are overwritten, as only the most recent values are displayed (and passed on down the workflow).
You are retrieving data from the server using the Generic Web Services Client, I guess? Do you get all the expected data from that node?
When the amount of data is not as large, yes. Using another SQL query program I can retrieve all the data, but that program is external to KNIME and I can’t automate the extraction with it.
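If chunking does end up being necessary, the retrieval can still be automated rather than done by hand in an external tool. Below is a minimal sketch of paginated extraction with LIMIT/OFFSET; it is not the KNIME node’s behavior, just an illustration of the chunking idea, using an in-memory SQLite database and made-up table/column names in place of the real server:

```python
import sqlite3

def fetch_in_chunks(conn, table, chunk_size=100_000):
    """Yield all rows from `table`, fetching chunk_size rows at a time."""
    offset = 0
    while True:
        rows = conn.execute(
            f"SELECT * FROM {table} LIMIT ? OFFSET ?", (chunk_size, offset)
        ).fetchall()
        if not rows:
            break
        yield from rows
        offset += chunk_size

# Demo: an in-memory database standing in for the real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    ((i, i * 0.5) for i in range(250_000)),
)
total = sum(1 for _ in fetch_in_chunks(conn, "measurements"))
print(total)  # all 250,000 rows arrive, well past the 100,000 mark
```

For a real server query you would add an ORDER BY on a stable key column, since LIMIT/OFFSET without an ordering does not guarantee consistent pages.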