"Pythonic" KNIME, i.e. increasing efficiency

I want to restructure my workflow and make it the KNIME equivalent of "pythonic." It's a big data project (>2M rows per table) where I begin with 23 tables, execute hundreds of queries to transform them (each query represented by a node in KNIME), and the output is a pared-down list of tables, some for visualization and some going to trained models for prediction. It's extremely expensive in time and computing resources. I know I can make the queries more efficient and I'm currently working on that, but are there other ways to leverage KNIME features to speed things up? Make me a smarter KNIME user!

Any thoughts or suggestions, let's have them. Thank you, all!

My previous post.

Hi wronag,

You mentioned that you are working on a big data project. Are you using Big Data nodes?

There are different ways to optimize workflows, but it really depends on the type of operations you perform. Are there specific parts of the workflow that need more work than others?

Anyway, I would suggest you take a look at this blog post: https://www.knime.com/blog/optimizing-knime-workflows-for-performance.

It gives some hints about how to optimize KNIME workflows for performance.
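One theme worth highlighting for a database-heavy workflow like yours: rather than materializing every transformation step as its own node output, you can often push the whole chain into the database so it executes in a single pass. Here is a minimal sketch of that idea outside KNIME, using plain Python with sqlite3 (the table, columns, and thresholds are made up for illustration):

```python
import sqlite3

# In-memory database standing in for your real DB connection.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('north', 120.0), ('north', 80.0), ('south', 50.0);
""")

# One query with chained CTEs replaces several intermediate tables/nodes:
# the filter and the aggregation run inside the database in a single pass,
# and only the final (small) result is pulled into the client.
query = """
WITH filtered AS (
    SELECT region, amount FROM sales WHERE amount > 60
),
aggregated AS (
    SELECT region, SUM(amount) AS total FROM filtered GROUP BY region
)
SELECT region, total FROM aggregated ORDER BY region;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('north', 200.0)]
```

With hundreds of queries over >2M-row tables, avoiding intermediate materialization between steps is often where the biggest savings are, independent of how each individual query is tuned.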

Hope that helps!



Thank you, Vincenzo. I found the linked article useful.