When I evaluate my prediction model with cross-validation, there is quite a lot of variance in the results of the scorer (for example, in accuracy).
Is there an easy way to create a loop that calculates the standard deviation of the accuracy when I perform the cross-validation several times?
Hi @elstef -
What about taking the results from the bottom port of the X-Aggregator node, which contains the error rates across all folds, and feeding that to a Math Formula node? Then you could use the COL_STDDEV() function to calculate the statistics for all the results.
EDIT: Wait, I think I misunderstood your question. Let me play with an example workflow for a bit and revise.
OK, here’s an example I came up with. It implements a loop to run cross-validation an arbitrary number of times, then collects the accuracy for each execution and calculates the standard deviation. This is based on the /04_Analytics/11_Optimization/01_Cross_Validation_with_SVM workflow from the EXAMPLES server; I just added in the loop and the extra calculations.
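For anyone who prefers to see the idea outside of KNIME, here's a rough Python sketch of the same pattern using scikit-learn (not the attached workflow itself, just an analogue): an outer loop re-runs k-fold cross-validation with a different shuffle each time, collects the mean accuracy per run, and computes the standard deviation across runs. The dataset, SVM model, and fold counts here are placeholders.

```python
# Hypothetical Python analogue of the KNIME workflow: repeat k-fold
# cross-validation several times and compute the standard deviation
# of the per-run accuracy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
n_repeats = 10  # number of cross-validation runs (the outer loop)

accuracies = []
for seed in range(n_repeats):
    # A new shuffle seed per run gives different fold assignments,
    # mimicking re-executing the cross-validation loop node in KNIME.
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(SVC(), X, y, cv=folds, scoring="accuracy")
    accuracies.append(scores.mean())  # mean accuracy across the 10 folds

print(f"mean accuracy over runs: {np.mean(accuracies):.4f}")
print(f"std dev across runs:     {np.std(accuracies, ddof=1):.4f}")
```

The per-run mean plays the role of the Scorer's accuracy in the workflow, and the final standard deviation corresponds to what the Math Formula / statistics step computes over the collected loop results.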
Let me know if this is on the right track.
Example_Cross_Validation_with_SVM_and_Looping.knwf (66.9 KB)