I have constructed a demo that uses a (permanent) loop and executes a job at a certain time. You could also have a version that would do something when a change is detected. The question is whether this would be a stable production system; it was meant to demonstrate what can be done.
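The pattern the demo workflow implements can be sketched in plain Python (a hypothetical stand-in for the KNIME loop nodes, not the actual workflow): loop "forever", sleep between checks, and fire the job once the scheduled time is reached. A bounded loop stands in for the permanent one so the sketch terminates.

```python
# Hypothetical sketch of a permanent polling loop that runs a job
# at a scheduled time. The for-range stands in for "while True".
import time
from datetime import datetime, timedelta

def run_job(log):
    log.append(datetime.now())  # placeholder for the real work

def polling_loop(interval, iterations):
    log = []
    next_run = datetime.now()
    for _ in range(iterations):
        now = datetime.now()
        if now >= next_run:
            run_job(log)
            # schedule the next execution `interval` seconds out
            next_run = now + timedelta(seconds=interval)
        time.sleep(0.001)  # a production loop would sleep much longer
    return log

runs = polling_loop(interval=0, iterations=5)
print(len(runs))  # with interval=0 the job fires on every check
```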
You will need some sort of indicator. That might be a timestamp or an ID (or both) that you would store, and then make sure you only transfer the newest entries. One possibility is to use unique primary keys (if that is what your database is built around).
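A minimal sketch of that indicator idea, using an in-memory SQLite table (the table and column names are made up for illustration): remember the highest ID already transferred, and on each run fetch only rows beyond it.

```python
# Incremental transfer using a stored ID as the "newest seen" indicator.
# Hypothetical table/column names; the same idea works with a timestamp.
import sqlite3

def transfer_new_rows(conn, last_seen_id):
    """Fetch only rows with an ID greater than the stored indicator."""
    rows = conn.execute(
        "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    if rows:
        last_seen_id = rows[-1][0]  # persist this for the next run
    return rows, last_seen_id

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO source_table VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

new_rows, marker = transfer_new_rows(conn, last_seen_id=1)
print(new_rows)   # only the rows after ID 1 are transferred
print(marker)     # the new indicator to store for the next run
```

Storing `marker` between runs (in a file, a table, or a workflow variable) is what makes each run pick up only the new data.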
Hi @HY_Z @mlauber71 , while the permanent (infinite) loop can be an option, it's not my preferred one. I would rather have the workflow run every 30 secs. This can be achieved with a KNIME Server as per @mlauber71 's suggestion, and if you don't have access to a KNIME Server, as is @HY_Z 's case, you can run the workflow from the command line on your station and schedule the job to be executed via your OS's scheduler (Windows Task Scheduler, UNIX/Linux/macOS cron).
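As a sketch of that command-line approach (the install path and workflow path are placeholders you'd adapt): KNIME ships a batch application that executes a workflow headlessly, which you can then wire into cron. Note that cron's granularity is one minute, so a true 30-second cadence needs two staggered entries with a `sleep`.

```shell
# Run a workflow headlessly via KNIME's batch application
# (paths are hypothetical -- adjust to your install and workspace).
/opt/knime/knime -nosplash -reset \
  -application org.knime.product.KNIME_BATCH_APPLICATION \
  -workflowDir="/home/user/knime-workspace/my_transfer_workflow"

# crontab entries approximating a 30-second schedule:
# * * * * * /home/user/run_workflow.sh
# * * * * * sleep 30 && /home/user/run_workflow.sh
```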
The problems I have with running the job via an infinite loop are:
o Your KNIME Analytics Platform has to stay open for it to run
o It can be memory-intensive, as KNIME keeps track of the data from each iteration
o Should there be any problem, such as the system running low on resources, and the workflow crashes, you need to restart it manually. Imagine this happening in the middle of the night. With a scheduler, the next scheduled run starts automatically