What I want to achieve
- Evaluate model performance using per-class and overall accuracy metrics.
What’s happening
- Confusion matrix shows 0 false positives and 0 false negatives.
- Per-class metrics (Recall, Precision, F‑measure, Specificity, Accuracy) are all 1.0.
- However, in the “Overall” row, Accuracy and Cohen’s Kappa appear as red question marks (missing values) and are not calculated. (See the sketch after this list for one common cause.)
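For context on why Kappa in particular can fail even when every per-class metric is 1.0: Cohen’s Kappa is (p_o − p_e) / (1 − p_e), and the denominator becomes zero when the scored partition contains only one class on both axes. Below is a minimal sketch in plain Python (not KNIME’s actual implementation; the confusion matrices are hypothetical) showing both the normal and the degenerate case:

```python
# Minimal sketch (plain Python, not KNIME's implementation) of the two
# "Overall" statistics. The confusion matrices below are hypothetical.

def overall_metrics(cm):
    """cm[i][j]: rows = actual classes, columns = predicted classes."""
    n = sum(sum(row) for row in cm)
    # Observed agreement = overall accuracy (diagonal over total).
    p_o = sum(cm[i][i] for i in range(len(cm))) / n
    # Expected agreement from the row and column marginals.
    p_e = sum(
        (sum(cm[i]) / n) * (sum(row[i] for row in cm) / n)
        for i in range(len(cm))
    )
    if p_e == 1.0:
        # Degenerate case: only one class present on both axes, so
        # kappa = (p_o - p_e) / (1 - p_e) divides by zero and is
        # undefined, which a scorer would show as a missing value.
        return p_o, None
    return p_o, (p_o - p_e) / (1.0 - p_e)

# Perfect predictions with BOTH classes present: Kappa is defined.
print(overall_metrics([[50, 0], [0, 50]]))    # (1.0, 1.0)

# Perfect predictions but only ONE class in the partition: Kappa is not.
print(overall_metrics([[100, 0], [0, 0]]))    # (1.0, None)
```

If this is the cause, a first thing to verify would be how many distinct classes actually reach the Scorer’s input. Overall accuracy itself is just diagonal over total, so if it also shows a missing value the partition reaching the node may be empty or filtered; that is only a guess without seeing the workflow.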
I’ve verified that class names match exactly in both columns:
- Label (Ground Truth): HEALTHY, FAIL_SOON
- Prediction: HEALTHY, FAIL_SOON
Troubleshooting steps I’ve taken (a programmatic version of these checks is sketched after this list):
- Checked for typos or mismatches in class labels.
- Ensured there are no missing values in either column.
- Tried manually setting class names in the Scorer dialog.
- Validated data types and used the Table Validator node to confirm they are correct.
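Outside KNIME, the same checks can be run in a few lines of pandas. This is only a sketch; the file name and the column names Label (Ground Truth) and Prediction are assumptions based on the post:

```python
# Hypothetical sketch of the same checks outside KNIME, using pandas.
# "scored_output.csv" and the column names are assumptions; adjust them
# to match the actual table exported from the workflow.
import pandas as pd

df = pd.read_csv("scored_output.csv")
truth, pred = df["Label (Ground Truth)"], df["Prediction"]

# 1) Typos or mismatches: both columns should contain the same label set.
print("Truth labels:     ", sorted(truth.unique()))
print("Prediction labels:", sorted(pred.unique()))

# 2) Missing values in either column.
print("Missing (truth/pred):", truth.isna().sum(), pred.isna().sum())

# 3) Hidden whitespace or case differences that look identical on screen.
print("Labels needing trimming:", (truth != truth.str.strip()).sum())
print("Case-only differences:  ",
      sorted(set(truth.str.upper()) ^ set(pred.str.upper())))
```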
Question
- Why is the “Overall” row failing to calculate even though per-class metrics are perfect?
- Is this due to the custom class names (HEALTHY, FAIL_SOON)?
Thanks
Snehal
Welcome to the KNIME Forum, Snehal_Balghare!
Great to have you here — looking forward to your contributions and discussions.
The model shows perfect scores across all metrics, which is quite rare with real-world data. It might be worth checking for possible data leakage, train/test split issues, or label contamination. If you could share the dataset and your KNIME workflow, we’d be happy to take a closer look together.
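One quick way to test for train/test overlap outside KNIME is to look for identical rows in both partitions. A minimal pandas sketch (the file names are placeholders for your actual exported partitions):

```python
# Minimal leakage check: do any identical rows appear in both partitions?
# File names are placeholders; replace with exports of your actual
# train and test tables.
import pandas as pd

train = pd.read_csv("train_partition.csv")
test = pd.read_csv("test_partition.csv")

# An inner merge on all shared columns keeps only rows present in both.
overlap = pd.merge(train.drop_duplicates(), test.drop_duplicates(),
                   how="inner")
print(f"{len(overlap)} rows appear in both train and test")
```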
Best,
Alpay
Hi Alpay,
Thank you for the warm welcome and your helpful suggestion!
You’re absolutely right — such perfect scores raised a red flag for me too. I’ll definitely double-check for any possible data leakage, train/test split overlap, or label contamination as you mentioned.
In the meantime, I’ll prepare a simplified version of the dataset along with the relevant portion of my KNIME workflow and share it here for review. I really appreciate your offer to take a closer look — that would be a huge help!
Thanks again,
Snehal
train_FD001.txt (3.4 MB)