We have a scorer model deployed as a service on the EDU Hub, and each request takes about a second to turn around (which is very slow) when the model is invoked from an external program via REST. Also, KNIME seems to stop accepting connections after ~2000 calls, throwing the error "The account job limit has been reached. No further jobs can be created."
We want to explore whether there is a way to deploy the scorer model locally instead of on the Hub and establish communication between the KNIME model and the external program. This should solve the network latency issues we are seeing when connecting to the EDU Hub.
Is that really for each request, or just the initial one? If it's only the first, then it might be the execution context that needs to start up. The startup behavior can be adjusted in the advanced settings of each execution context. Read more here:
If the response is indeed slow for each request, I suggest you download and assess the execution context logs:
And/or debug the runtimes using the benchmark nodes from the awesome @Vernalis extension:
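To tell the two cases apart before digging into logs, you can time the REST calls from the external program's side and compare the first call against the steady-state average. Here's a minimal sketch in Python; the commented-out endpoint URL and payload are placeholders, not your actual deployment details:

```python
import time

def measure_latency(call, n=10):
    """Time n invocations of `call` and compare the first to the rest.

    A large gap between `first` and `rest_avg` suggests a one-time
    startup cost (e.g. the execution context spinning up); uniformly
    slow calls point at per-request overhead such as network latency.
    """
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        timings.append(time.perf_counter() - start)
    first, rest = timings[0], timings[1:]
    return {"first": first, "rest_avg": sum(rest) / len(rest)}

# Usage against a hypothetical Hub REST endpoint (URL and payload are
# placeholders -- substitute your deployment's execution URL and input):
# import requests
# stats = measure_latency(
#     lambda: requests.post("https://<your-hub>/<deployment>:execution",
#                           json={"rows": []}),
#     n=20)
# print(stats)
```

If `first` dominates and `rest_avg` is small, tuning the execution context's startup settings should help; if both are around a second, the overhead is per-request and a local deployment is worth pursuing.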