I have to do ~200 row filterings within a loop, and they take most of the loop's time. The total filtering currently takes about 10 seconds, i.e. 0.05 s per filtering. That may not seem like much, but it adds up and is too much for a live operation.
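For illustration, outside of KNIME the ~200 repeated row filters can be replaced by a single grouping pass that partitions the table once. A minimal sketch in pandas (the column names `group` and `value` are hypothetical stand-ins for the real data):

```python
import pandas as pd

# Hypothetical stand-in for the real table: rows tagged with a group key.
df = pd.DataFrame({
    "group": ["a", "b", "a", "c", "b"],
    "value": [1, 2, 3, 4, 5],
})

# Repeated row filtering scans the whole table once per group
# (roughly O(groups * rows) in total):
filtered = {g: df[df["group"] == g] for g in df["group"].unique()}

# A single groupby partitions the table in one pass instead:
partitioned = {g: sub for g, sub in df.groupby("group")}
```

The two dictionaries hold the same per-group sub-tables; the groupby variant just avoids re-scanning the full table for every group.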
I have already done this:
- parallelized the loop, so I now have 8 parallel loops (the number of CPU cores)
- put a "Cache" node before the loops; this improves performance by a factor of 6-12!
I do interpolation (via the Missing Value node), and the loops are for separating the different signals within the same data table. If I interpolate without the loops, I get interpolation errors at the "borders" between the signals.
I made some improvements based on your hints.
I have 500,000 rows in 220 groups and do linear interpolation for each group. I already tried to move the sorter out of the loop, but it brought no performance improvement.
Grouping with the Rank node brings no improvement once I account for the Rank node's own runtime, and the Cache node now also has no advantage anymore.
This takes 10-12 seconds.
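For comparison, the per-group linear interpolation described above can also be expressed without an explicit loop, e.g. in pandas. A sketch under the assumption that the table has a `group` column separating the signals and a `signal` column with gaps (both names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical example: two signals stacked in one table, with missing
# values that should be filled by linear interpolation per signal.
df = pd.DataFrame({
    "group":  ["a", "a", "a", "b", "b", "b"],
    "signal": [1.0, np.nan, 3.0, 10.0, np.nan, 30.0],
})

# Interpolating the whole column at once would bleed values across the
# border between signal "a" and signal "b". Grouping first avoids that:
df["signal"] = df.groupby("group")["signal"].transform(
    lambda s: s.interpolate(method="linear")
)
print(df["signal"].tolist())  # [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]
```

This does one grouped pass over all rows instead of filtering the table once per group, which is essentially what the loop-free variant would need to do to stay correct at the signal borders.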