This is an example of computing explanations using LIME.
An XGBoost model was picked, but any model with its corresponding Learner and Predictor nodes can be used.
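For readers who want to see the same idea outside KNIME, here is a minimal Python sketch, assuming the `lime` and `xgboost` packages and scikit-learn's `load_wine` as a stand-in for the wine data used in the workflow. The actual workflow computes this with KNIME nodes; the snippet only illustrates the mechanics.

```python
# Hedged sketch: LIME explanations for an XGBoost classifier.
# Assumes recent lime / xgboost / scikit-learn versions.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = load_wine(return_X_y=True)
feature_names = load_wine().feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any classifier exposing predict_proba would work in place of XGBoost.
model = XGBClassifier(n_estimators=100, eval_metric="mlogloss")
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="classification"
)
# Explain one test row: local feature weights around that prediction.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, weight), ...]
```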
Hey everyone,
I realized that understanding what the view does before downloading the workflow can be a bit tricky.
That is why I am referencing here the video I shared on Twitter:
This view is designed to browse LIME explanations as tiny bar charts (small multiples).
While browsing them, you can also keep track of the distribution of confusion matrix classes or of a feature of interest (here, the alcohol and sulphites of the wine).
The view is hardcoded to the wine dataset, but it makes a good example of visualizing explanations interactively.
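The workflow's view itself is a KNIME interactive view, but the small-multiples idea can be sketched with matplotlib, continuing from the snippet above (it reuses `model`, `explainer`, and `X_test`): one tiny bar chart of LIME weights per explained row.

```python
# Hedged sketch of small multiples: one mini bar chart per explanation.
import matplotlib.pyplot as plt

rows_to_explain = X_test[:6]  # a handful of rows, just for illustration
fig, axes = plt.subplots(2, 3, figsize=(10, 5), sharex=True)
for ax, row in zip(axes.ravel(), rows_to_explain):
    exp = explainer.explain_instance(row, model.predict_proba, num_features=5)
    names, weights = zip(*exp.as_list())
    ax.barh(range(len(weights)), weights)  # signed local feature weights
    ax.set_yticks(range(len(names)))
    ax.set_yticklabels(names, fontsize=6)
fig.tight_layout()
plt.show()
```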