Hello, I was searching for a workaround to export a huge MS SQL table (around 150 million records) to a CSV file.
The problem is that to export the table to CSV I first need to fetch all the data with the Database Connection Table Reader node. Since the table is so huge, the node cannot fetch all the records at once and just keeps loading. A plain SELECT * is simply not feasible.
Is there a way to export directly from the database to CSV without the fetching part?
I have also tried an OFFSET flow variable within a loop, but with no good results.
Or maybe there is a way to execute the export successfully?
Thanks in advance.
Have you thought about doing it in chunks and then appending the results to a CSV file? The example attached uses ROWIDs from SQLite. You could check whether you already have ROWIDs present, or you could create one yourself. You can choose the size of the chunk/batch yourself (I use integers; with very large numbers you might need big integers, and even a workaround using strings might be possible).
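To make the chunking idea concrete, here is a minimal sketch using Python's built-in sqlite3 module (mirroring the SQLite ROWIDs the attached example uses). The table name `big_table` and the chunk size are assumptions; for MS SQL you would swap in a driver such as pyodbc and page on an indexed key column instead of SQLite's implicit rowid.

```python
import csv
import sqlite3

def export_in_chunks(conn, table, out_path, chunk_size=50_000):
    """Export a table to CSV in ROWID-keyed chunks, appending as we go.

    Keyset pagination (rowid > last seen) stays fast even on huge tables,
    unlike OFFSET, which re-scans skipped rows on every iteration.
    """
    last_rowid = 0
    with open(out_path, "w", newline="") as f:
        writer = None
        while True:
            rows = conn.execute(
                f"SELECT rowid, * FROM {table} WHERE rowid > ? "
                f"ORDER BY rowid LIMIT ?",
                (last_rowid, chunk_size),
            ).fetchall()
            if not rows:
                break
            if writer is None:
                # Write the header once, from the cursor description.
                cols = [d[0] for d in conn.execute(
                    f"SELECT rowid, * FROM {table} LIMIT 0").description]
                writer = csv.writer(f)
                writer.writerow(cols[1:])  # drop the rowid helper column
            writer.writerows(r[1:] for r in rows)
            last_rowid = rows[-1][0]
```

Only one chunk is ever held in memory, so the export size is bounded by `chunk_size`, not by the table.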
The only other idea would be to write a CSV file directly on a Big Data (Hive) cluster, but that would not work with an MS SQL table.
kn_example_huge_db_to_csv.knar (433.4 KB)
Do you perform any modifications/transformations on the data from the database? If not, I think there are other ways to export data from a database to a CSV that might be a bit easier and faster.
Hi @ipazin , like what?
It’s just SELECT * without transformations.
I guess you have SQL Server Management Studio, or how do you connect to your database?
Here is a link to check whether my assumption is correct:
You can use KNIME for this task as well, but KNIME is better suited if you would like to parameterize, automate, transform/modify and/or analyze your data…
Hi @ipazin, thank you! Your suggestion actually solved my problem.
I just used the bcp command-line utility from MS SQL.
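For reference, a bcp invocation of this kind streams rows straight from SQL Server to a file without buffering the whole table. The server, database, table, and output path below are placeholders; the flags are real bcp options (-c for character mode, -t, for a comma field terminator, -T for Windows authentication). Sketched here as a Python helper so the argument list is explicit:

```python
import subprocess

def build_bcp_command(table, out_path, server):
    """Assemble a bcp 'queryout' command that streams a table to CSV.

    Note: bcp does not write a column header row; add one separately
    if the downstream consumer needs it.
    """
    return [
        "bcp",
        f"SELECT * FROM {table}",
        "queryout", out_path,
        "-c",    # character (text) mode
        "-t,",   # comma as field terminator
        "-S", server,
        "-T",    # trusted (Windows) authentication
    ]

# Hypothetical usage; requires the bcp utility on PATH:
# subprocess.run(
#     build_bcp_command("MyDb.dbo.BigTable", r"C:\export\big.csv", "MYSERVER"),
#     check=True,
# )
```

For SQL Server authentication you would replace -T with -U user -P password.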
This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.