model stability

Hi, I’m just wondering what you think is a good way to test the stability of a model. I am currently averaging the results I get from 10 random partitions into training, test, and validation sets. Is there a better way to test the stability of the model?


I think 10-fold cross-validation is already a good starting point. With that you test whether your model is independent of the data partitioning, and whether it works well under small variations in the data set. You can also check whether the error rate is similar across all “runs” or widely spread, which would indicate that even small changes in the data set have a large impact on the model outcome. There is also a cross validation metanode in the node repository that you can check out.
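Outside of KNIME, the same idea can be sketched in a few lines of Python with scikit-learn: run 10-fold cross-validation and inspect the spread of the per-fold scores. The dataset and model below are just placeholders for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data and model for illustration only.
X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(random_state=0)

# 10 shuffled folds, one accuracy score per fold.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)

print(f"mean accuracy: {scores.mean():.3f}")
# A large standard deviation across folds suggests the model is
# sensitive to the partitioning, i.e. less stable.
print(f"std deviation: {scores.std():.3f}")
```

A tight cluster of fold scores around the mean is the reassuring case; a wide spread is the warning sign described above.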

Hope that helps!


