SHAP/KNIME Machine Learning Interpretability Extension

Does anyone have an example workflow that incorporates SHAP nodes and any other workflow examples that use nodes in KNIME Machine Learning Interpretability Extension, aside from those published here:

I’d love to get a better idea of how to use, visualize, and interpret SHAP output.

Thanks very much in advance!


Hi PaulD,
check this out

It is a preview; the workflow still has to be finished and moved to the example server.

Video of the interactive composite view:

The interactivity should go as follows:

  1. Select explanations in the bubble chart
  2. See the distributions of Shapley values in the violin plot
  3. See the ICE curves of those predictions in the PDP/ICE view
  4. Explore where the surrogate decision tree positions those predictions.
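The Shapley values shown in step 2 are computed for you by the KNIME nodes, but it can help interpretation to see the definition spelled out. Below is a minimal pure-Python sketch (not the extension's implementation) that computes exact Shapley values for a toy model by enumerating all feature coalitions; features outside a coalition are replaced by a baseline value. The linear `model` and the all-zero `baseline` are hypothetical stand-ins.

```python
from itertools import combinations
from math import factorial

# Hypothetical stand-in for a trained model (any callable over a feature vector works).
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all coalitions of features.

    Features not in the coalition are set to their baseline value, so
    model(coalition) means "evaluate with only these features 'switched on'".
    """
    n = len(x)
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                values[i] += weight * (model(with_i) - model(without_i))
    return values

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# The Shapley values always sum to model(x) - model(baseline),
# which is what makes them additive "explanations" of one prediction.
```

This brute force is exponential in the number of features; the SHAP loopy nodes approximate the same quantity by sampling, which is why they scale to real datasets.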

The color in the bubble chart reflects the class predicted by the model.
You can also change which feature is used in both the PDP and the bubble chart.
It is a bit advanced, but this way you can visualize Shapley values for groups of instances
and see how those instances are described by the tree and the PDP plot.
This is still a preview, so if you have any suggestions, please go ahead.
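For readers new to the PDP/ICE view in step 3: an ICE curve shows how one instance's prediction changes as a single feature is varied over a grid, and the PDP is just the average of those curves. A small sketch under assumed toy inputs (the linear `model` is a hypothetical stand-in for whatever model the workflow trains):

```python
# Hypothetical stand-in for the trained model (any callable works).
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

def ice_curves(model, X, feature, grid):
    """One ICE curve per row: vary `feature` over `grid`, hold the rest fixed."""
    curves = []
    for row in X:
        modified = list(row)
        curve = []
        for v in grid:
            modified[feature] = v
            curve.append(model(modified))
        curves.append(curve)
    return curves

def pdp(curves):
    """The PDP is the pointwise average of the ICE curves."""
    n = len(curves)
    return [sum(c[k] for c in curves) / n for k in range(len(curves[0]))]

X = [[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]]
curves = ice_curves(model, X, feature=0, grid=[0.0, 1.0])
average = pdp(curves)
```

Selecting explanations in the bubble chart effectively filters which rows of `X` contribute their ICE curves to the view.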

To use the interactivity you need to open the in-view dialog of the Violin Plot (top right corner) and select “Show selected rows only”. This should be fixed in 4.0.1.

I will update the link once the workflow is finished and on the example server.



Hello Paolo,

Thanks a lot for pointing to this workflow showcasing an MLI composite view. But how can I adapt this workflow to a regression problem? By min-max normalizing and binning the ground truth and the related prediction in advance?
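For concreteness, the normalize-and-bin preprocessing the question proposes could look like the sketch below. This is only one possible adaptation, not a confirmed recipe for the workflow; the helper names `min_max_normalize` and `to_bins` are hypothetical.

```python
def min_max_normalize(values):
    """Scale values linearly to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def to_bins(values, n_bins):
    """Assign each value in [0, 1] to one of n_bins equal-width bins."""
    return [min(int(v * n_bins), n_bins - 1) for v in values]

# Example: turn a continuous target into discrete labels so the
# class-oriented composite view (bubble chart colors, etc.) can use them.
y = [3.0, 7.5, 12.0, 9.0]
labels = to_bins(min_max_normalize(y), 3)
```

The same transformation would have to be applied consistently to both the ground truth and the model's predictions so that the binned "classes" remain comparable.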

Thanks a lot in advance!