Shapley Values Interpretation

Hi all,

I used the KNIME Shapley Values Loop to interpret the importance of the input features of a random forest model. We want to use this technique in KNIME to determine how much each feature contributes to outcome 0 or outcome 1. Please suggest some visualization nodes to interpret the output.

Please refer to the workflow details in the attachment.
Shap_loop.docx (1.3 MB)

I appreciate your support,
Thank you

Hello @Nishchay ,

welcome to the Forum!

Do you think these examples from the Hub could be useful for you?

Let us know if it worked!
Regards,
Dora

Hi Dora,

Thank you for answering, but I am not able to find the visualization nodes (“Dependence Plot” node, “Post-Processing” node, and “Visually Compare Explanations”) in my KNIME installation. Can you please suggest a way to get these nodes?

Regards

Why do you want to use SHAP for a model that provides feature importance by default? Also, you might want to have a look at that.

br

@Daniel_Weikert The feature importance (FI) generated by a random forest is computed on the training data, while SHAP and similar methods generate importance from the test set. There are cases where these two differ.
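
As a rough illustration of that difference outside KNIME (a minimal Python sketch, not the KNIME nodes themselves; the dataset and names below are just placeholders):

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder binary-classification data standing in for the real table.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1) Built-in importance: impurity-based, measured on the training data.
train_fi = pd.Series(model.feature_importances_, index=X.columns, name="impurity_train")

# 2) SHAP importance: mean |SHAP value| per feature, measured on the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
# Depending on the shap version this is a list (one array per class) or a 3-D array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
test_fi = pd.Series(np.abs(vals).mean(axis=0), index=X.columns, name="shap_test")

# The two rankings can disagree, which is exactly why both are worth a look.
print(pd.concat([train_fi, test_fi], axis=1).sort_values("shap_test", ascending=False).head(10))
```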

@Nishchay The “nodes” you name are actually components (user-created node collections). You can tell because they have a gray border. You can copy and paste those components into your own workflow, but you cannot search for them in the Node Repository because they aren’t technically nodes.
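
If you have KNIME's Python integration set up, another option while you track down those components is to plot the Shapley output yourself in a Python Script node. A minimal sketch with the shap package (the data and model below are placeholders; in your workflow you would reuse your trained forest and the loop's output table):

```python
import shap
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data and model standing in for the real workflow inputs.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

shap_values = shap.TreeExplainer(model).shap_values(X_test)
# Depending on the shap version this is a list (one array per class) or a 3-D array;
# pick the values for class 1, i.e. the contributions toward outcome 1.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Beeswarm-style summary: per-feature distribution of contributions toward class 1.
shap.summary_plot(vals, X_test, show=False)
plt.tight_layout()
plt.show()
```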


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.