ROC curve of different models

Hi,

I am creating different models to predict a field; for example, I am using a decision tree and a neural network. I have seen somewhere that the predictions of different models were compared with an ROC curve. Can anyone explain how to compare the predictions of two different models using an ROC curve?

Hi Shannon,

The ROC curve plots the false positives vs. the true positives on a test set. The test set is sorted in descending order by P(class=1). So a given point on the curve shows how many false positives you have accepted in order to reach a certain number of true positives. You want very few false positives while still correctly classifying your positive examples, which means the closer the curve is to the upper left, the better the corresponding model is.
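
To make this concrete, here is a minimal sketch outside of KNIME, using scikit-learn on made-up labels and scores (the arrays below are purely illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical true classes and predicted P(class=1) from one model
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_pos  = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1])

# roc_curve effectively sorts the rows by descending score and, at each
# threshold, reports the false positive rate and true positive rate
fpr, tpr, thresholds = roc_curve(y_true, p_pos)
print(fpr)
print(tpr)
```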

The AUC (area under the curve) is a measure of how much a curve has this desired property. It is shown on the lower right side of the plot produced by the "ROC Curve" node, or given as one of the options when you right-click the node. The AUC can be interpreted as the probability that a randomly chosen positive example is ranked above a randomly chosen negative example by the model; the higher, the better.
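
If it helps, that ranking interpretation can be checked with a small scikit-learn sketch (again on made-up data; the arrays are just illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_pos  = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1])

auc = roc_auc_score(y_true, p_pos)

# The same number estimated as the probability that a random positive
# example gets a higher score than a random negative example
pos = p_pos[y_true == 1]
neg = p_pos[y_true == 0]
pairs = [(p, n) for p in pos for n in neg]
rank_prob = np.mean([1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs])

print(auc, rank_prob)  # the two values agree
```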

Hi Iperez,
Thank you for your explanation.

I have seen ROC curves shown for several different models in one plot. For example, I would like to create an ROC curve like the one in the attached picture. Is that possible?


Sure, you can select more than one probability column in the ROC Curve node's dialog. Each chosen column will result in one line in the plot.
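
For reference, the same kind of overlay can be reproduced outside KNIME with a short scikit-learn/matplotlib sketch; the model names and score columns below are invented purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = {
    "Decision Tree":  np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1]),
    "Neural Network": np.array([0.95, 0.2, 0.85, 0.7, 0.5, 0.35, 0.6, 0.15]),
}

# One ROC curve per model, all drawn on the same axes
for name, p_pos in scores.items():
    fpr, tpr, _ = roc_curve(y_true, p_pos)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", label="Random guess")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```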

Hi,

Can you please explain the process of comparing different models and getting the ROC curves for them? What node is used to join the class probability columns from the different predictors into one table? I understand the process, but I cannot find the node required to do it.