Interactive MLI Composite View

This workflow shows how to use the interactive views of JavaScript nodes to visualize, in a single composite view, a number of Machine Learning Interpretability (MLI) techniques:

- Shapley Values
- Partial Dependence
- Individual Conditional Expectation (ICE) curves
- Surrogate Decision Tree

Computing SHAP explanations takes time. Use the component dialog panel to define how many predictions should be explained and which class is of interest.

To open the component view: right click > "Execute and Open View". To enter the component: right click > "Component" > "Open".
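In KNIME the Shapley values are produced by dedicated loop nodes, but as a rough illustration of what such a loop computes, here is a minimal Monte Carlo sketch in plain Python. The model `f`, the instance `x`, and the baseline `background` are all hypothetical placeholders, not part of the workflow:

```python
import numpy as np

def shapley_values(f, x, background, n_samples=200, seed=None):
    """Monte Carlo approximation of Shapley values for one prediction.

    f          -- model: maps a 1-D feature vector to a scalar prediction
    x          -- instance to explain (1-D array)
    background -- baseline vector used for "absent" features
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    phi = np.zeros(n)
    for _ in range(n_samples):
        perm = rng.permutation(n)   # random feature ordering
        z = background.copy()
        prev = f(z)
        for j in perm:
            z[j] = x[j]             # switch feature j to its real value
            cur = f(z)
            phi[j] += cur - prev    # marginal contribution of feature j
            prev = cur
    return phi / n_samples

# Toy linear model: here the Shapley values are exactly w * (x - background)
w = np.array([2.0, -1.0, 0.5])
f = lambda v: float(v @ w)
x = np.array([1.0, 2.0, 3.0])
bg = np.zeros(3)
phi = shapley_values(f, x, bg, n_samples=100, seed=0)
```

By construction the values sum to `f(x) - f(background)`, which is the "efficiency" property that also holds for the explanations shown in the composite view.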

This is a companion discussion topic for the original entry at

This component generates an interactive composite view as shown in this tweet:

I got this email from a KNIME user:

Hi Paolo,

I’ve been attending the spring KNIME Summit. I’m a pretty new user and I have a specific question

for you. In one of your workflows that can be downloaded from the site, you plot SHAP values.

I can’t find how to do that. I’m also looking for a way to plot the summary plot of all SHAP values (see attached file). Is it possible to share these widgets if they exist, or to give me some information on how to do this? I asked this question in the Q&A session today, but there was no time for an answer.

Best regards,


I answer here below:

Hi Stephane,

I would try to use violin plots and a bubble chart together.

Each violin would show the Shapley values for a different feature.

Each bubble is instead mapped to a different explanation.

Then select points on the bubble chart to see how the violin plots change in shape and height.

We do not yet have exactly the same charts you sent, but this is quite close to what you are looking for.

There is an example here with a Random Forest predictor (current workflow example of this thread):

And a tweet about it (already mentioned in this thread):

Then there is another example here using a PMML Predictor (last component):

Those composite views also show more charts, like the partial dependence plot and the surrogate decision tree.

If you scroll down on those same pages you should be able to find forum threads where you can ask more questions.

When KNIME 4.2 comes out this summer, you will be able to use Shapley Values loops and the other KNIME Machine Learning Interpretability (MLI) frameworks on any predictor node from the same workflow, using a new node called Workflow Executor.
This is part of the new Integrated Deployment extension, where you can package and reuse a piece of workflow wherever you want.

This is extremely useful for MLI because this way you won’t need to change the predictor node or the feature engineering nodes every time you want to compute explanations for a new model.

With 4.2 I will be able to share components that will work with any model, not just the predictor I decided to use in the example. All you need to do is package your model using the new purple nodes, so that raw data goes in and predictions come out. You can already try this in the nightly build:



Thank you very much, Paolo, for your exhaustive answer.

I’m a little bit confused about the inputs of the widget (“explain prediction”).
How can we plug this widget directly into a simple Random Forest classification workflow (without using table reader widgets)?