Hi. I want to use the X-Validation loop with a neural network.
I found that the mean squared error is calculated in the X-Aggregator node as follows (directly copied from the code):
for (DataRow row : in) {
    RowKey key = row.getKey();
    DoubleValue target = (DoubleValue)row.getCell(targetColIndex);
    DoubleValue predict = (DoubleValue)row.getCell(predictColIndex);
    double d = (target.getDoubleValue() - predict.getDoubleValue());
    errorSum += d * d;
    r++;
    m_predictionTable.addRowToTable(row);
    subExec.setProgress(r / (double)rowCount, "Calculating output "
            + r + "/" + rowCount + " (\"" + key + "\")");
    subExec.checkCanceled();
}
errorSum = Math.sqrt(errorSum);
As I see it, this is wrong. Isn't the mean squared error supposed to be:
mean( (y(expected) - y(predicted)) * (y(expected) - y(predicted)) )
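Just to be explicit about what I mean, this is how I would compute the MSE myself (my own toy code with made-up numbers, not taken from the KNIME sources):

public class MseSketch {
    public static void main(String[] args) {
        // hypothetical target values and network predictions
        double[] expected  = {1.0, 2.0, 3.0};
        double[] predicted = {1.1, 1.9, 3.2};

        double sumSquared = 0.0;
        for (int i = 0; i < expected.length; i++) {
            double d = expected[i] - predicted[i];
            sumSquared += d * d;               // squared error per row
        }
        // mean of the squared errors: divide by the row count, no square root
        double mse = sumSquared / expected.length;
        System.out.println("MSE = " + mse);
    }
}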
Here they calculate:
sqrt( sum over all rows of (y(expected) - y(predicted)) * (y(expected) - y(predicted)) ) / number of rows
Or is there a reason behind this? At first I thought it was the root mean squared error.
But for the root mean squared error the root is taken after the mean, i.e. after dividing by the number of rows.
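To make the difference clear, here is a small toy example (again my own code, not the X-Aggregator source); with these made-up numbers the two formulas give different results:

public class RmseComparison {
    public static void main(String[] args) {
        // made-up targets and predictions, each off by 0.5
        double[] expected  = {1.0, 2.0, 3.0, 4.0};
        double[] predicted = {1.5, 2.5, 2.5, 4.5};

        double sumSquared = 0.0;
        for (int i = 0; i < expected.length; i++) {
            double d = expected[i] - predicted[i];
            sumSquared += d * d;
        }
        int n = expected.length;

        // root mean squared error: divide first, then take the root
        double rmse = Math.sqrt(sumSquared / n);             // 0.5
        // the formula I described above: root of the sum, then divided by the row count
        double rootThenDivide = Math.sqrt(sumSquared) / n;   // 0.25
        System.out.println("sqrt(sum / n) = " + rmse);
        System.out.println("sqrt(sum) / n = " + rootThenDivide);
    }
}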