SHAP values make no sense

Hi,
I was building a workflow to train an H2O Gradient Boosting Learner and compute SHAP values with it.
I don’t know what I changed, but now some of the values my workflow generates are out of tolerance: some fall below -1.0 or above +1.0, which of course doesn’t make sense.

Any suggestions on what I could do to fix it? What information do you need to help me?
The SHAP loop node itself receives 100 samples to explain, 100 samples to permute from, and its explanation set size was also set to 100.

Hi @uie63112 -

Any chance you can upload your workflow, along with some data, so that someone else can try to reproduce the problem? (Dummy data is fine if your actual data is confidential.)

How far outside the expected range are your SHAP values, and how are you computing them? There are some nice examples on the KNIME Hub that you could cross-reference.
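One thing worth noting: SHAP values are additive attributions that sum to the difference between the model's prediction and a baseline, so they are bounded by the model's output range, not by ±1. For a regression target (or a raw margin rather than a probability), values outside [-1, 1] can be entirely legitimate. As a rough illustration (a brute-force Shapley sketch on a hypothetical toy model, not how the KNIME node computes them), consider:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, against a baseline.
    Missing features in a coalition are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                def v(coalition):
                    z = [x[j] if j in coalition else baseline[j] for j in range(n)]
                    return f(z)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy model: a simple linear function (purely illustrative)
f = lambda z: 3.0 * z[0] + 2.0 * z[1]
x, base = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(f, x, base)
# phi == [3.0, 2.0]: both outside [-1, 1], yet perfectly valid,
# and phi sums to f(x) - f(base) = 5.0 as Shapley additivity requires.
```

So before treating values outside ±1 as a bug, it may help to check what quantity the Learner is outputting (probability vs. raw score) and whether the values still sum to prediction minus base value for each row.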

