KAP Execution use case best practice

I'm looking for the current “best practice” approach, given that there are currently many ways to achieve this, some buggier than others, and some apparently no longer intended or supported (batch execution was moved out of core functionality):

I want to execute

  • within KAP (no Hub, no remote executor etc.; same machine only, to keep it simple)
  • other KNIME workflows, called from one primary workflow
  • sequential but also parallel execution of different(!) workflows
  • I need to pass at least one variable (e.g. a local file path/string to load information from that file, or a database name to load information stored in a DB)
  • I need to be able to ensure that the temp directory is on a certain drive, because some workflows will be fine with 200 MB of RAM while others will need to swap to disk
  • the executed workflows should be saved and available for inspection afterwards
  • I need to be able to load custom (JDBC) database drivers
  • I should be able to interrupt the execution of the caller workflow if certain callees fail to complete

The last point does not necessarily need to be handled by KNIME itself; I can work with a job log/queue that the called jobs mark as completed themselves.
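To make the job log/queue idea concrete, here is a minimal sketch of the caller side, assuming a convention (not a KNIME feature) where each callee writes a small `<job>.json` status file into a shared directory when it finishes. All names and the file format are hypothetical:

```python
import json
import time
from pathlib import Path


def wait_for_jobs(status_dir, job_ids, timeout_s=3600, poll_s=5):
    """Poll a shared status directory until every job has written a marker.

    Convention (assumed): each callee writes <job_id>.json containing
    {"status": "completed"} or {"status": "failed"} as its last step.
    Raises on failure so the caller can abort the remaining work.
    """
    deadline = time.time() + timeout_s
    pending = set(job_ids)
    while pending:
        for job in list(pending):
            marker = Path(status_dir) / f"{job}.json"
            if marker.exists():
                state = json.loads(marker.read_text())["status"]
                if state == "failed":
                    raise RuntimeError(f"job {job} failed; aborting caller")
                if state == "completed":
                    pending.discard(job)
        if pending and time.time() > deadline:
            raise TimeoutError(f"jobs still pending: {sorted(pending)}")
        if pending:
            time.sleep(poll_s)
```

The same polling loop works whether the callees run sequentially or in parallel, since each one only appends its own marker file.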

So far, I am only aware that batch execution, especially when passing exported .knwf files as input, properly allows saving the executed workflows back.

Obviously, it is not ideal to spawn another process/shell from within KNIME just to trigger a batch execution. Hence my question.
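For reference, a sketch of how such a batch invocation could be assembled, based on the flags documented for the KNIME batch executor (`-workflowFile`, `-destFile`, `-workflow.variable`, and `-vmargs` for the temp directory). The paths and variable names are placeholders, and the flags should be verified against the installed KAP version:

```python
import subprocess


def knime_batch_cmd(knime_exe, workflow_file, dest_file, tmp_dir, variables):
    """Build a KNIME batch-executor command line.

    Flags follow the documented batch application; check them against
    your KAP version. `variables` is a list of (name, value, type)
    tuples, where type is "String", "int" or "double".
    """
    cmd = [
        knime_exe,
        "-nosplash",
        "-reset",
        "-application", "org.knime.product.KNIME_BATCH_APPLICATION",
        f"-workflowFile={workflow_file}",
        # save the executed copy so it can be inspected afterwards
        f"-destFile={dest_file}",
    ]
    for name, value, vtype in variables:
        cmd.append(f"-workflow.variable={name},{value},{vtype}")
    # -vmargs must come last; redirect the Java temp dir to a chosen drive
    cmd += ["-vmargs", f"-Djava.io.tmpdir={tmp_dir}"]
    return cmd


cmd = knime_batch_cmd(
    "knime", "job.knwf", "job_done.knwf", "D:/knime_tmp",
    [("input_path", "D:/data/in.csv", "String")],
)
# subprocess.run(cmd, check=True)  # uncomment to actually execute
```

Running several of these via `subprocess.Popen` would give parallel execution of different workflows, but it is exactly the "shell out of KNIME" pattern I would prefer to avoid.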