After training a classification model, I can use a scorer to get the confusion matrix and associated metrics. The scorer assumes a threshold of 0.5 for classifying a prediction as positive.
What if I want to experiment with different thresholds so I can tweak the predictions to minimize false positives or false negatives, for instance?
Since the ROC curve is built from confusion matrices computed at all thresholds, I'd assume this is possible, but I couldn't figure out how to easily output the confusion matrix corresponding to a given threshold.
Has anyone else tried this already?
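To make the question concrete, here is a sketch of the kind of thing I'm after, assuming scikit-learn (my choice for illustration; the model, data, and threshold value are placeholders): threshold the predicted probabilities manually instead of relying on the default 0.5, then build the confusion matrix from the binarized predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Toy data and model, standing in for any trained binary classifier
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of the positive class for each test sample
proba = model.predict_proba(X_test)[:, 1]

# Apply a custom threshold instead of the implicit 0.5
threshold = 0.3
y_pred = (proba >= threshold).astype(int)

# Confusion matrix at that threshold
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"threshold={threshold}: TN={tn} FP={fp} FN={fn} TP={tp}")
```

Lowering the threshold trades false negatives for false positives; sweeping `threshold` over the values returned by `sklearn.metrics.roc_curve` would give the matrix at every operating point on the curve. Is there a built-in way to get this, or is manual thresholding like the above the expected approach?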