I’ve built a simple neural network with 2 hidden layers. The activation functions are ReLU for the hidden layers and Sigmoid for the output layer. I’ve normalized the data with Z-score normalization, and there are 14 inputs in total. When I execute the Learner node the loss looks pretty good, but I’m not quite sure about the accuracy.
In general I thought that with numeric predictions the accuracy would always be 0, but my accuracy moves between 0.25 and 0.3. Could this be due to the number of 0’s that are being predicted?
My dataset has a lot of 0’s in the target column (sales).
But now to my main question:
Why are the predictions only numbers between 0 and 1, even though I ran them through the Denormalizer node?
Hi @Vincentsoy,
Watching accuracy on a numeric prediction network is funky, I agree.
I just tested a model I trained a while ago and also get an appreciable accuracy. This comes down to how Keras calculates accuracy. Even when training classification models, the outputs are still numeric values that get converted to class representations, so accuracy in Keras is defined as a tolerance around the target value rather than a strict equality check. That’s why you see a non-zero accuracy instead of the zero you might expect if it were truly an `==` comparison.
The reason your accuracy is “high” is probably, as you suggest, because you have a lot of zeroes in your dataset that your model is good at predicting.
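To make that concrete, here’s a small NumPy sketch (not KNIME or actual Keras source; exactly which accuracy metric Keras picks depends on your loss, and this assumes the common thresholded binary-accuracy behavior for a single sigmoid output, where predictions are cut at 0.5 before comparing). The data is made up, but it shows how a zero-heavy target column can produce a nonzero accuracy even on a regression-style task:

```python
import numpy as np

# Hypothetical sales target: mostly zeros, a few real values.
y_true = np.array([0., 0., 0., 0., 0., 0., 0., 120., 340., 95.])
# Sigmoid outputs are always in (0, 1).
y_pred = np.array([0.02, 0.10, 0.01, 0.30, 0.05, 0.20, 0.40, 0.80, 0.90, 0.70])

# Thresholded "binary accuracy": round predictions at 0.5, then compare.
binary_acc = np.mean((y_pred > 0.5).astype(float) == y_true)
print(binary_acc)  # → 0.7: the 7 zero rows "match", the 3 real sales values never can
```

So the metric rewards the model for every row where it predicts something below 0.5 and the true value is 0, which lines up with the 0.25–0.3 you’re seeing.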
Here’s what my accuracy plot looks like on a dataset with a fairly well distributed target: