Cutoff threshold for binary classifier models

Hi,

I am trying to optimize a binary classifier tree ensemble model. It predicts one class correctly, giving me many true negatives, but performs poorly on the other class, giving me a lot of false negatives. I believe a threshold of 0.5 was assumed for classifying a row as positive, since the ROC Curve node shows confusion matrices for all thresholds. Does anyone know of a way to change this threshold to minimize those false negatives?

Hi @tamaraa21 -

Have you tried the Binary Classification Inspector node? It lets you adjust the threshold and see how the confusion matrix and other metrics change on the fly.
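If it helps to see what adjusting the cutoff actually does, here is a minimal Python sketch (e.g. in a Python Script node or outside KNIME) of applying a custom cutoff to the predicted positive-class probabilities and recomputing the confusion matrix. The probabilities and labels below are made up for illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical positive-class probabilities from a tree ensemble and the true labels.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])
p_pos  = np.array([0.1, 0.3, 0.2, 0.45, 0.6, 0.05, 0.35, 0.4, 0.55, 0.48])

def confusion_at(threshold):
    # Classify as positive whenever the predicted probability reaches the cutoff.
    y_pred = (p_pos >= threshold).astype(int)
    return confusion_matrix(y_true, y_pred, labels=[0, 1])

print(confusion_at(0.5))   # default cutoff: several positives are missed
print(confusion_at(0.35))  # lower cutoff: fewer false negatives, one more false positive
```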


In this article I share a sample workflow with a Metanode that tries to find an optimal cut-off point using two metrics. The H2O nodes will also determine their own best cutoff.
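I don't reproduce the Metanode's exact metrics here, but as a rough illustration of the idea, you can scan the candidate cutoffs returned by the ROC computation and keep the one that maximizes some statistic, for example Youden's J (sensitivity + specificity − 1); the data below is hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical probabilities and labels.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])
p_pos  = np.array([0.1, 0.3, 0.2, 0.45, 0.6, 0.05, 0.35, 0.4, 0.55, 0.48])

# roc_curve returns one (fpr, tpr) pair per candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, p_pos)

# Youden's J = TPR - FPR; pick the cutoff that maximizes it.
j = tpr - fpr
best = thresholds[np.argmax(j)]
print(f"best cutoff by Youden's J: {best:.2f}")
```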

In general, the cut-off very much depends on your business question and often on the costs associated with making a wrong prediction.
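For example, if missing a positive case is considerably more expensive than raising a false alarm, you can pick the cutoff that minimizes the expected cost instead of the one that looks best on accuracy. A small sketch with made-up cost figures:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical probabilities and labels, as in the examples above.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])
p_pos  = np.array([0.1, 0.3, 0.2, 0.45, 0.6, 0.05, 0.35, 0.4, 0.55, 0.48])

COST_FN = 5.0  # hypothetical: a missed positive is expensive
COST_FP = 1.0  # hypothetical: a false alarm is comparatively cheap

def total_cost(threshold):
    y_pred = (p_pos >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return COST_FN * fn + COST_FP * fp

cutoffs = np.linspace(0.05, 0.95, 19)
best = min(cutoffs, key=total_cost)
print(f"cost-minimizing cutoff: {best:.2f}")
```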