@USCHUKN1ME I tried to build a system you might be able to expand and use.
The idea is to have a workflow that iterates a number of times (v_number_of_intervals) with a set interval of seconds (v_seconds_interval) between iterations. The workflow looks for a trigger_file.csv; if this file has changed since the last time the loop ran, a component is executed. If the file has not changed, nothing happens.
Admittedly not the most elegant approach, but you could build a small automatic execution machine with this, as long as your workflow keeps running.
You can monitor the effect: every time the trigger file changes, the component gets executed. Here there are 10 iterations. At the second iteration the component executes, then nothing happens, and at iterations 7, 8 and 9 the file has changed and the component executes again (the changes are simulated in the m_001 workflow; in a real-life scenario they would be caused by outside factors).
This might serve as an example of how such a thing could work. Obviously it would need some work, and one can debate whether this is the best way forward. I would recommend using the KNIME Server instead.
(please download the complete workflow group)
This job m_001 is there to prepare the setting and simulate the changing trigger_file.csv.
@USCHUKN1ME this very much depends on your business model and what you want to do with it. The KNIME Server is much more than one workflow; I have a collection about what is possible there:
I know I’m very late to the party, and given all the replies after this question my reply might not be needed anymore. Still, I think it fits this question. Regarding “timed loops”, please look at the Vernalis community extensions. Yes, they are in the cheminformatics/life-science space, but they also have some general-use nodes like timed loops:
Maybe these can be helpful for your workflow or workflows in the future.
Another interesting option could be this component by @gab1one that waits for a file to change. To bring this into a ‘looping system’, some changes might be necessary. In my example I tried to build a small such system with generic KNIME nodes.
You will have to figure that out yourself. I have only really used one of the timed nodes once. My comment basically was: I know these exist; have a look, maybe they help with your problem.
I have built a workflow using @mlauber71’s example for the WAIT node. I temporarily disconnected the WAIT node which waits for a file change (batch file / Windows Task Scheduler). The new workflow works, which means the DB query node restarts every time. Everything works so far, except for the output of the dashboard, which is generated by the component node. I used to save the dashboard graphic manually as a *.html file on a server drive after Execute & Open View, which the employees can then access. I didn’t find anything in the dashboard configuration where you can save this automatically.
@USCHUKN1ME it is more like 2,574 EUR in German notation, i.e. more like two and a half Euros per hour …
And the actual number very much depends on how powerful the AWS server would be, whether shared resources would be OK for you (not shared data, of course), or whether you would need your own dedicated super-CPU with super-fast SSD …
We are at the beginning.
And the workflow described here is almost the first where automation is needed, without having a client PC (mine) constantly switched on.
I am not aware of a detailed explanation, but I use the KNIME Server in an on-site setting. Maybe @Iris or @ScottF can weigh in and give some hints.
And I think @armingrudd has some experience with KNIME and AWS.
From what I can see there is a free trial period, so you might just set it up.
@knime team
I would be interested in detailed instructions for setting up KNIME Server in Azure. So if there is any video, … showing how to do this, please let me know.
thanks and best