Model Monitoring workflow node

Hi KNIME community,

I was going through a wonderful webinar, “Integrated Deployment: How to Move Data Science into Production,” on KNIME TV, available on YouTube.
In it, Michael Berthold (the presenter) uses a workflow for the demonstration, which he mentions is available on the KNIME Hub, but somehow I am not able to find it.
I am currently working on a similar kind of problem (especially the monitoring and retraining of the model, and deploying the chosen model). The above-mentioned workflow would be very useful to me.

Can someone share the workflow? It would be very helpful.

P.S.: Sorry, I am not sure this forum is suitable for this kind of question; please do let me know. I will refrain from posting such questions in future, if required.

Wizard Dk


Hi @Wizard_dk -

You may want to check the ongoing Integrated Deployment blog series by our data science team:

Right now there are three different posts that go into detail on how everything works, with links to several example workflows.

With respect to the particular component highlighted in Michael’s video - the one about monitoring and re-training - I believe that may not be published yet. Let me double check internally and see what I can find out.


Thanks @ScottF for the prompt reply.
I am going through the given blog series :slight_smile:
Yes, please check if the monitoring node and other nodes are published.

Wizard Dk

Hi @Wizard_dk,
at the moment we do not have a Component available for the monitoring part yet.
We are working on it, but I cannot guarantee any deadline.
Sorry about that. If you use Integrated Deployment, you should be able to re-execute pieces of workflows on demand. Something like:

  1. Train your model on the training set (generic Learner node)

  2. Score the model on the test set (generic Predictor node)

  3. Capture the scoring (generic Predictor node) with Integrated Deployment (nodes 1 and 2)

  4. Deploy the scoring as a REST API on KNIME Server via Integrated Deployment (Deploy node)

  5. Capture everything from point 1 to point 4 (Learner node + captured Predictor node + Deploy node) with Integrated Deployment (nodes 1 and 2)

  6. On a separate workflow, query for new data for which you have ground truth (maybe from a frequently updated database with some timestamp column)

  7. Call the deployed scoring model (point 4) via the Call Workflow node

  8. Measure performance (and optionally plot it in a line plot)

  9. Check whether performance is below a threshold you decide on

  10. Execute the previously captured workflow (point 5) to retrain the model and redeploy it

The cool part is that, thanks to point 10, calling this main workflow sets off a chain reaction repeating points 1 to 10, so the whole process restarts.
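To make the monitoring half of the loop (points 6 to 10) concrete, here is a minimal Python sketch. This is not the KNIME API: the three callables are hypothetical stand-ins for querying new labeled data, the deployed scoring workflow invoked via the Call Workflow node, and the captured retrain-and-redeploy workflow.

```python
# Hypothetical sketch of the monitor-and-retrain loop (points 6-10).
# The three callables stand in for KNIME pieces; none of these names
# exist in KNIME itself.

ACCURACY_THRESHOLD = 0.85  # point 9: the threshold you decide on


def monitor_once(fetch_labeled_data, call_deployed_model, retrain_and_redeploy):
    """One pass of the loop: score fresh data, measure, retrain if needed."""
    records, ground_truth = fetch_labeled_data()     # point 6
    predictions = call_deployed_model(records)       # point 7
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)           # point 8
    if accuracy < ACCURACY_THRESHOLD:                # point 9
        retrain_and_redeploy()                       # point 10
    return accuracy
```

Running `monitor_once` on a schedule mirrors the chain reaction above: each pass measures the deployed model on fresh ground truth and, only when performance drops below the threshold, triggers the captured retrain-and-redeploy workflow.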

It might be a bit tricky to wrap your head around, but it should do the job.
I took for granted that you have access to a KNIME Server. If you don't, it should still work locally:
just replace the Deploy node with a Workflow Writer node.
Let me know if you have any questions.



Thanks Paolo
I am following your Integrated Deployment blog series.
I am doing something similar to what is suggested here :slight_smile: like comparing the model performance metric with a defined threshold and plotting the metric on a line plot.
Things are smooth for now; I will let you know if some help is required.

Cheers !!
Wizard Dk


While you wait for the model monitoring component (on its way),
have you checked out the new XAI View?




Hi @paolotamag,
Good to hear that the model monitor is on its way!! :slight_smile:
I was also able to implement something similar to your suggestion: scoring from the captured scoring pipeline, then monitoring over it, then showing a graph of model performance over time … retraining the model at the user's wish from the captured training pipeline and deploying the new model (model selection) as the user wishes :slight_smile:

XAI looks cool. I have tried these things in Python; it is good to have them in KNIME. I will definitely be using them in the future!!



Can you share your flow?


Hi Daniel

Sorry, somehow I missed your post.
That workflow was developed on office premises, and I guess we cannot share it.

I just followed this series
and Paolo's comment.

If you have difficulty with some parts, please do share it here.
I will be happy to help.

Wizard Dk



Hello there, we just published new Verified Components for monitoring a classification model. They work for both binary and multiclass classification, but the model needs to be captured within the production workflow via Integrated Deployment.

Until we can load the external deployment workflow as an Integrated Deployment connection, you have to recapture the deployment workflow within the same workflow using the components. If you used the AutoML component, that should be super easy. More information is in the component descriptions and the example workflow. A blog post should also be out soon.


1 Like

Hello there,
a blog post on the topic was published today: