Get the current iteration number and select the current file read

Hi guys, 

I'm a new user of KNIME and I have some problems with my workflow. Can you help me, please? 

I'm reading a number of .xlsx files from a folder using

List Files node -> Table Row to Variable Loop Start -> XLS Reader -> do something with another external table -> Loop End.

My task is to move, at each iteration, the file just read to another folder, so that the data that have not yet been used stay separate from the rest. I've tried the Copy/Move Files node, which is linked to the List Files node through a String to URI node, but obviously it moves every file away at the first iteration.

So my question is: is it possible to get the current iteration number and then move the corresponding file?

Thank you so much for the help



Hi Annalisa,

yes, it is possible.

The Table Row to Variable Loop Start node generates a flow variable, called URL, referencing the specific file being processed in each iteration.

You can take the URL variable from the flow variable output port, send it to a Variable to Table Row node, then to a String to URI node, and finally to the Copy/Move Files node to move that file to a different folder.

One aspect to take care of is to make sure you move a file only when the central part of your workflow is done with it. To do that, connect the flow variable output port of the last node before the Loop End to the flow variable input port of the first node of the section that moves the files. This way, the file-moving portion will wait for the previous one to complete before doing its job.
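Conceptually, the loop above does the equivalent of the following Python sketch: list the files, process one file per iteration, and only then move that single file. The folder names here are placeholders, not anything from your actual workflow.

```python
import shutil
from pathlib import Path

src = Path("input_folder")        # folder scanned by the List Files node
done = Path("processed_folder")   # destination for files already read
done.mkdir(exist_ok=True)

# Each iteration handles exactly one file -- mirroring the per-iteration
# URL flow variable feeding String to URI -> Copy/Move Files -- and the
# move happens only after the processing step for that file is finished.
for xlsx in sorted(src.glob("*.xlsx")):
    # ... do something with the data and the external table ...
    shutil.move(str(xlsx), done / xlsx.name)
```

The key point is the same as in the node workflow: the move is inside the loop body, after the processing step, so each file is relocated only once it has been used.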

Hope this helps, otherwise please feel free to ask again.


I have just started using KNIME for web-crawling workflows. I have been able to use the Palladian HTTP Retriever and Parser, a recursive loop, and XPath successfully.

I have tried to build a similar workflow with the Selenium nodes WebDriver Factory, WebDriver, and Find Elements. What I have not been able to accomplish is using a loop to iterate through each set of URLs I extract.

Does anyone have any suggestions for this problem?

Thank you in advance.