I would like to process an increasing partition of some data table. I assume this is possible using KNIME's loop functionality. What I want to do is get the first row from the table and run it through some workflow, then get the first and second rows and run them through the workflow, then the first, second and third ... and so on.

Is it possible to achieve this task with the current nodes, or do I need to implement my own start node? If so, are there examples on how to implement loop start nodes?

Quite straightforward, this one. Simply use the Chunk Loop Start node and, at the end of the loop, a Loop End node.
In the Chunk Loop Start node you can define how many rows to loop over at a time; by default this is one.

Use the Interval Loop Start node, configured to start from 1 in increments of 1, as an integer. After this node add a Row Filter node, choose "include rows by number", then go to its Flow Variables tab and select the loop value variable for RowRangeEnd. Now complete your loop with a Variable Condition Loop End node: choose the loop value as the variable and, for the finish condition, select "=" and the number of the last row. You can enter that number manually for now, or ultimately you can automate it all by using a GroupBy node or Statistics node earlier in the workflow to calculate the number of rows, then a Table Row to Variable node to convert that row count into a variable which you can feed in here.

Apologies, I was doing this from the top of my head.

Just tried it out: you only need a normal Loop End node, not a Variable Condition Loop End.

The end iteration number is specified in the Interval Loop Start, so if you have 20 rows, choose 1 as the start and 20 as the finish in the interval loop start, and 1 as the increment. And remember to choose Integer as the type.
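If it helps to see the logic outside of KNIME, here is a minimal plain-Python sketch of what the Interval Loop Start plus Row Filter combination does. The row data and the per-iteration "workflow" are just stand-ins, not anything from the actual setup:

```python
# Stand-in for the data table (4 rows instead of 20 for brevity).
rows = ["r1", "r2", "r3", "r4"]

results = []
for i in range(1, len(rows) + 1):   # Interval Loop Start: from 1 to N, step 1
    window = rows[:i]               # Row Filter: include rows 1..i this iteration
    results.append(len(window))     # stand-in for the inner workflow's output

print(results)  # [1, 2, 3, 4]
```

Each iteration sees a window that grows by one row, which is exactly the increasing partition described in the question.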

Wow. Thanks for your help. That works like a charm. I also tried the proposal for counting the values within the fields, but neither the Statistics nor the GroupBy node provided me with a row count. So I used a Java Snippet to count the rows, then a Column Filter to get the column with the row counts, and a Java Snippet Row Filter to get the last row.

Now everything works nicely. Thanks for your help again.

Glad it worked out.
You can do a row count with the GroupBy node by not choosing any column to group by, then aggregating on any column with Count as the aggregation type. I am somewhat averse to the snippet nodes, so that's how I do it!
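As a rough illustration of why this trick works, here is a hedged Python sketch (the table and column names are invented): grouping with no key collapses every row into a single group, so a count aggregation on any column of that group returns the total row count.

```python
# Toy stand-in for a KNIME table: a list of rows (dicts).
table = [
    {"id": 1, "value": 10},
    {"id": 2, "value": 20},
    {"id": 3, "value": 30},
]

# "Group by nothing": every row falls into one group under the empty key,
# mirroring a GroupBy node with no group columns selected.
groups = {}
for row in table:
    groups.setdefault((), []).append(row["value"])

# A Count aggregation on any column of that single group is the row count.
row_count = len(groups[()])
print(row_count)  # 3
```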
Simon.

I have an issue that is similar to this forum topic.

I'm trying to create a loop within a loop for partitioning purposes. The outer loop cycles through 100 different random seeds that are passed to the partitioning node. The inner loop cycles x number of times so that the partitioning of the data is always different. However, with my current workflow, the partitioned data is always the same and does not vary.

If I remove the outer loop of cycling through random seeds, the inner loop works as intended. I attached a snapshot of my workflow.

Are you setting the seed in the partition node based on the variable from the outer loop?

If so, it will always produce the same partition in the inner loop during that outer-loop iteration.

For example, outer loop iterates over the following values: 1, 2, 3, 4

The counting loop is set to 10. It will run 10 times for the value 1, 10 times for the value 2 etc.

So you will make a partition 10 times with the seed 1, 10 times with the seed 2, etc. Therefore you will get 4 sets of 10 identical partitions.

What do you need the inner loop for? Or rather, what is it you are trying to achieve? If you just want to investigate the stability of the model under different partitionings, you can use the counting loop alone, set no static seed (it will choose a new random seed each run) and collect the results.
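The seed behaviour is easy to reproduce outside KNIME. This Python sketch uses `random.Random` as a stand-in for the Partitioning node's seed option (an assumption for illustration, not the node's actual implementation): a fixed seed yields the same split on every inner iteration, while omitting the static seed varies the split.

```python
import random

def partition(rows, seed=None, train_frac=0.8):
    # Stand-in for the Partitioning node: shuffle, then split.
    rng = random.Random(seed)   # a fixed seed makes the shuffle deterministic
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(20))

# Inner loop under one fixed outer-loop seed: every split is identical.
fixed = [partition(rows, seed=1) for _ in range(10)]
assert all(split == fixed[0] for split in fixed)

# No static seed: each run draws a fresh seed, so the splits vary.
varied = [partition(rows) for _ in range(10)]
assert any(split != varied[0] for split in varied)
```

This is why the outer loop over seeds produces sets of identical inner-loop partitions, and why dropping the static seed is enough for a stability check.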

Swebb, I see what you're saying about the outer loop and yes, I was passing the seed into the partition.

I'm trying to measure the stability of a model by looping over various partitions generated from the same seed. And once the best model with that particular seed is found, I'd like to test that model using that seed on new data.