How are you joining your data?
Each dataset with a cross join?
Something you could try is to reset the RowID after each join, so you do not end up with stacked Row1_Row2_X identifiers from the joins.
How many columns do you have? Maybe you could remove columns first and do a lookup after completing the large joins.
Did you enable "keep in memory" in the Joiner configuration?
But I think Iris's suggestion is a good start.
Regarding the question about your specs - it depends on what you are doing. In most cases memory and CPU will be the relevant resources, though some learning nodes might use the GPU as well. Why not simply test it with your use case? Start your heavy workload and check, e.g. in the Task Manager, which resource is used the most.
@Alkaline could you tell us more about the nature of the joins? Do they involve single IDs, and are these IDs strings or numbers (or a combination)?
Is there an error message? Can you check whether you run into problems with Java heap space?
Then you could tell KNIME to write everything to disk (in case you are experiencing memory problems). Or you could do one join per workflow and save each intermediate result in a table.
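The "one join at a time, persist each intermediate result" idea can be sketched outside of KNIME too. A minimal Python illustration (the table contents and the `id` key column are invented for the example; in KNIME the persisting step would be a Table Writer node between Joiners):

```python
def join_on_key(left_rows, right_rows, key="id"):
    """Inner-join two lists of dicts on a shared key column."""
    index = {r[key]: r for r in right_rows}  # build lookup on the right table
    out = []
    for row in left_rows:
        match = index.get(row[key])
        if match is not None:
            merged = dict(row)   # copy so the input tables stay untouched
            merged.update(match)
            out.append(merged)
    return out

# Three small tables sharing a string identifier
a = [{"id": "A#1", "x": "1"}, {"id": "B#2", "x": "2"}]
b = [{"id": "A#1", "y": "10"}]
c = [{"id": "A#1", "z": "100"}]

# Join one pair at a time; after each step the intermediate result
# could be written to disk (e.g. as CSV) before the next join, so
# only one join's worth of data needs to sit in memory.
step1 = join_on_key(a, b)
result = join_on_key(step1, c)
print(result)  # [{'id': 'A#1', 'x': '1', 'y': '10', 'z': '100'}]
```

The point is only the shape of the process: each join is a separate, checkpointed step, so a failure or stall is isolated to one join instead of one giant chained operation.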
Further hints about KNIME and performance can be found here.
Another thing you could try is installing a local database like PostgreSQL or MariaDB, checking whether it can handle the join, and using KNIME as a front end.
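To make the database idea concrete, here is a small sketch of pushing the join into a database engine instead of doing it in application memory. SQLite stands in for PostgreSQL/MariaDB so the example is self-contained; the table and column names are invented:

```python
import sqlite3

# ":memory:" keeps the demo self-contained; pass a file path instead
# to let the database keep the tables (and do the join) on disk.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (id TEXT PRIMARY KEY, a TEXT);
    CREATE TABLE t2 (id TEXT PRIMARY KEY, b TEXT);
    INSERT INTO t1 VALUES ('K-01', 'foo'), ('K-02', 'bar');
    INSERT INTO t2 VALUES ('K-01', 'baz');
""")

# The join runs inside the database engine, which can spill to disk
# and use indexes, rather than in the client's memory.
rows = con.execute("""
    SELECT t1.id, t1.a, t2.b
    FROM t1
    JOIN t2 ON t1.id = t2.id
""").fetchall()
print(rows)  # [('K-01', 'foo', 'baz')]
```

In KNIME the equivalent would be the database connector/query nodes, so only the final joined result is pulled back into the workflow.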
Thank you all for the good suggestions. I will try the Joiner (Labs).
I have these tables, and the common identifier is a string (letters, numbers & symbols). The tables do not have many rows, 4-5 each. I want to bring all this information together into one table. After the second or third join there is no more progress; the percentage does not increase any longer. I will try writing everything to disk, as I suspect it might be a memory problem (just a feeling). And I will also look into the meta collection :).