TF Performance improvement

Hi, how is it possible to speed up the TF process?

Situation: calculate TF on a single 150 MB text file.

Flat File Document Parser, then Number Filter, then Punctuation Erasure, then Stop Word Filter, then Bag Of Words Creator: this part is very fast and takes only 2-3 seconds, but then the TF node takes over 10 hours.
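For context, term frequency by itself is a cheap computation, so such a long runtime usually points at overhead elsewhere (memory pressure, per-document data handling). A minimal sketch of relative TF in plain Python, purely as an illustration of what the node computes (the function name here is made up, not KNIME's API):

```python
from collections import Counter

def term_frequency(tokens):
    """Relative term frequency: count of each term divided by total token count."""
    counts = Counter(tokens)
    total = len(tokens)
    return {term: count / total for term, count in counts.items()}

# Example: "the" occurs 2 times out of 6 tokens, so its TF is 2/6.
tokens = "the cat sat on the mat".split()
tf = term_frequency(tokens)
```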

The machine used is a gaming computer.

What is a good way to speed up this execution?

Thank you!

Hey @Mink,

Assuming you run Windows as your OS, what does the Task Manager show under “Performance”?
Is the CPU maxed out, or the disk I/O, or the network?
Depending on that info, the community could narrow down some optimization alternatives.

Kind regards,


Thanks for your answer. I use Ubuntu 16.04. The core usage is at 100%. In general, what is a fast way to convert a big txt file (several GB) to the KNIME Document type? This takes very long too. The nodes between the Flat File Document Parser and TF are fast. Are there any settings that could solve this?

Hi Mink,

Please have a look at this blog post to get some pointers for performance tweaks:

If you haven’t increased the default memory allocated to KNIME, that should be your first step. In addition, you could try to narrow down your feature space even further. E.g., you could use a POS Tagger node first, and then a Tag Filter to only keep nouns.
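The tag-filtering idea can be sketched outside KNIME as well. Assuming the tagger emits Penn-Treebank-style tags (where noun tags start with "NN"), keeping only nouns is a simple filter over (token, tag) pairs; the function name and sample tags below are illustrative, not KNIME's internals:

```python
def keep_nouns(tagged_tokens):
    """Keep only tokens whose POS tag marks a noun (Penn Treebank 'NN*' tags)."""
    return [token for token, tag in tagged_tokens if tag.startswith("NN")]

# Filtering out verbs and adjectives shrinks the bag-of-words feature space.
tagged = [("KNIME", "NNP"), ("processes", "VBZ"),
          ("large", "JJ"), ("documents", "NNS")]
nouns = keep_nouns(tagged)  # ["KNIME", "documents"]
```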



Hi Roland. Thanks for your answer. I set the allocated memory to a higher size, but it’s still slow. I did find a solution for faster execution: I split the 150 MB file into 3200 files, and then the TF node takes 32 min to execute. This works for my task. Thanks.
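The post doesn’t say how the file was split. One way to do it outside KNIME, assuming a line-oriented text file and invented names throughout, is to divide the file into roughly equal parts by line count:

```python
import os

def split_file(path, n_parts, out_dir):
    """Split a line-oriented text file into roughly n_parts files of equal line count."""
    os.makedirs(out_dir, exist_ok=True)
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    chunk = max(1, -(-len(lines) // n_parts))  # ceiling division
    part_paths = []
    for i in range(0, len(lines), chunk):
        out_path = os.path.join(out_dir, f"part_{i // chunk:04d}.txt")
        with open(out_path, "w", encoding="utf-8") as out:
            out.writelines(lines[i:i + chunk])
        part_paths.append(out_path)
    return part_paths
```

Each resulting part can then be fed through the same parser-to-TF chain; whether the 32 min figure came from running the parts sequentially or in parallel is asked below.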

Hey @Mink,

some quick questions so others might be able to re-use your approach:
Did you split the documents using node(s) and if so, could you post this workflow?
Did you run the documents through the TF node sequentially or in parallel? If in parallel, how many TF nodes did you run at the same time?

Kind regards,


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.