I'm working heavily with SQLite. Sometimes I get the message "Execute failed: database is locked" and the workflow stops.
There is no concurrent access to the SQLite DB! After restarting the node, the workflow continues...
Is there a way to retry automatically, so that the workflow won't be interrupted?
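For what it's worth, the kind of automated retry being asked for can be sketched outside of KNIME with Python's sqlite3 module. This is only an illustration of the idea, not how the Database Connector nodes work internally; the function name and retry parameters are made up for the example.

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=5, delay=0.5):
    """Retry a statement when SQLite reports 'database is locked'."""
    for attempt in range(retries):
        try:
            with conn:  # commits on success, rolls back on error
                return conn.execute(sql, params)
        except sqlite3.OperationalError as e:
            # Re-raise anything that isn't a lock, or if we're out of retries
            if "database is locked" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # simple linear backoff
```

Note that sqlite3's own `timeout` argument to `connect()` already makes the driver wait for locks to clear, so in plain Python the two mechanisms can be combined: `sqlite3.connect("mydb.sqlite", timeout=10)`.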
Thanks in advance,
Sorry for the long silence. This is a great idea. I have opened a feature request to add a retry option to all Database Connector nodes, which would allow the user to specify a certain number of retries.
Hi MBoesing, in the meanwhile... :)
I had similar trouble with a Model Reader node that occasionally failed, so I put a Try/Catch around it (see the appendix): if the first branch fails, the second branch runs after some delay, so I always get two tries.

For your specific situation my example has two downsides. First, the Wait node on the alternative path is executed every time, and the catch block waits for it to complete. That is no problem in my case, because the whole workflow takes nearly an hour and a few extra seconds don't matter. Second, you may not want to execute the same action twice, for example if there is no unique constraint that would fire within the alternative branch and enforce only one successful write attempt.

You might consider playing a bit with the different Try/Catch nodes, though. I could imagine creating a workflow variable that indicates whether the Try branch was successful: it stays false until the DB node has executed fine. In the alternative path, which needs to depend on the Try path, you check whether the variable is already set to true and, if so, skip the second DB node.
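The pattern described above (one guarded retry after a delay, with a success flag preventing a double write) can be sketched in plain Python for clarity. This is only an analogy to the Try/Catch node setup, not KNIME code; the function name and the 2-second delay are illustrative assumptions.

```python
import sqlite3
import time

def write_once_with_fallback(conn, sql, params=(), delay=2.0):
    """Mirror of the Try/Catch workaround: one retry after a delay,
    guarded by a success flag so the write never runs twice."""
    succeeded = False  # stays False until the first attempt completes
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(sql, params)
        succeeded = True
    except sqlite3.OperationalError:
        pass  # fall through to the alternative branch
    if not succeeded:
        # Alternative branch: wait, then attempt the write exactly once more
        time.sleep(delay)
        with conn:
            conn.execute(sql, params)
```

The flag check is what the workflow variable achieves in KNIME: the second DB node only runs when the first attempt did not succeed.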
Thanks for your suggestion, it's a good workaround until Tobias' feature request is implemented and available!