Hello,
I don't understand how this model can be explained.
There is only a single row to explain. If you change the row, the SHAP/Shapley values change.
Have I missed something?
Thanks
Hi @Brain,
In the workflow below, the SHAP, Shapley and LIME techniques are used for local explanations of the model, i.e. to explain individual instances (the contribution of each feature to a particular prediction).
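Outside of KNIME, the same idea can be sketched in a few lines of Python with the `shap` package. This is only a minimal illustration; the model, dataset and feature names below are hypothetical and not part of the workflow:

```python
# Minimal sketch of a *local* SHAP explanation (hypothetical model and data).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of SHAP values per instance

# Local explanation: contribution of each feature to the prediction for row 0.
# Picking a different row gives different values -- that is expected for a
# local technique.
row = 0
for feature, contribution in zip(X.columns, shap_values[row]):
    print(f"{feature}: {contribution:+.3f}")
```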
The Shapley values can also be combined into a global explanation by calculating the SHAP values for every instance and aggregating them across the whole dataset; the SHAP dependence plot is one such global technique.
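Continuing the hypothetical sketch above, one simple way to aggregate the local values into a global view is the mean absolute SHAP value per feature (shap also ships a built-in summary plot for this):

```python
# Global view (hypothetical example continued): average the absolute SHAP
# values over all instances to rank features for the whole dataset.
import numpy as np

global_importance = np.abs(shap_values).mean(axis=0)
for feature, importance in sorted(zip(X.columns, global_importance),
                                  key=lambda item: -item[1]):
    print(f"{feature}: {importance:.3f}")

# Built-in equivalent: bar chart of mean |SHAP| per feature.
shap.summary_plot(shap_values, X, plot_type="bar")
```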
Here is an example of a SHAP Dependence Plot built with KNIME. I would suggest going through the KNIME explainable-ai booklet or the interpretable-ml-book by Christoph Molnar to understand the concepts in more detail.
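For comparison, a dependence plot can also be produced from the hypothetical sketch above; the `"bmi"` feature name is just taken from that example dataset:

```python
# SHAP dependence plot for a single feature (hypothetical example continued):
# the SHAP value of "bmi" plotted against the feature's value for every instance.
shap.dependence_plot("bmi", shap_values, X)
```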
Also, check out the XAI Space to find more examples.
Best,
Keerthan