Deep Learning Network Accuracy Drop


I created a simple neural network and got this kind of accuracy graph in the Keras Learner node's view.
Is there a reason why the accuracy can suddenly drop this sharply? There are about 12,000 images in the dataset and the batch size is 32.

Hi @june,

quick question - have you activated the checkbox “shuffle training data before each epoch” in the learner node?

Otherwise I might have an idea :slight_smile:
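For context on why that checkbox matters: if the training rows happen to be sorted by class and are never reshuffled, each batch of 32 can contain only a single class, so the network keeps getting pushed toward whichever class it saw last, which can show up as sudden drops in the accuracy curve. A minimal NumPy sketch (the sorted labels and the three-class split are assumptions for illustration, not your actual data):

```python
# Sketch (assumed setup, not the KNIME node internals): what
# "shuffle training data before each epoch" protects against.
import numpy as np

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 4000)   # 12,000 rows, sorted by class
batch_size = 32

# Without shuffling, the first batch is all one class.
first_batch_unshuffled = labels[:batch_size]
print(set(first_batch_unshuffled))    # {0}

# Reshuffling before each epoch mixes the classes in every batch.
order = rng.permutation(len(labels))
first_batch_shuffled = labels[order][:batch_size]
print(set(first_batch_shuffled))      # typically a mix of classes
```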



Yes @kathrin, I have activated that option. I’m not sure what else I can check, perhaps the neural network itself. Do you have any ideas?

@june I would try to use a larger batch size, e.g. 128, and see whether that helps.

In addition can you please share a screenshot of the loss tab of the view? Do you see the same pattern there?
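Some intuition for the batch-size suggestion: the minibatch gradient is an average over the batch, so its noise shrinks roughly like 1/sqrt(batch size), and going from 32 to 128 cuts it about in half. A quick simulation (the Gaussian per-sample "gradients" are a stand-in, not anything measured from the real network):

```python
# Sketch: why a larger batch smooths the training curve.
# We simulate per-sample gradients as noisy draws around a true value
# and compare the spread of the batch-averaged estimate for B=32 vs B=128.
import numpy as np

rng = np.random.default_rng(0)
true_grad = 1.0
per_sample = true_grad + rng.normal(0.0, 1.0, size=12_000)  # ~dataset size

def batch_grad_std(batch_size, n_batches=1000):
    """Std. dev. of the minibatch gradient estimate across many batches."""
    estimates = [rng.choice(per_sample, size=batch_size).mean()
                 for _ in range(n_batches)]
    return np.std(estimates)

std_32 = batch_grad_std(32)
std_128 = batch_grad_std(128)
print(std_32, std_128)
# The batch-128 estimate is roughly half as noisy (sqrt(128/32) = 2),
# which is why the curve stops jumping around.
```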

Hi @Kathrin

This is with the bigger batch size of 128. It seems to improve things a bit, but I’m aiming for validation accuracies of 90%+.

@June good to see that the bigger batch size solved the problem with the dropping accuracy. To reach a higher accuracy you probably need to optimize the network structure.

What kind of problem are you trying to solve?
What kind of network structure are you using currently?
Are you using a pretrained network as a starting point?
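In case it helps, a common route to 90%+ on a dataset of ~12,000 images is transfer learning from a pretrained base rather than training from scratch. This is only a sketch: MobileNetV2, the 160×160 input, and the three-class head are assumptions, not taken from the thread, and in practice you would set `weights="imagenet"` to actually reuse the pretrained features:

```python
# Sketch only: transfer learning from a pretrained convolutional base.
# MobileNetV2, the input size, and the 3-class head are illustrative
# assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,   # drop the ImageNet classification head
    weights=None,        # use "imagenet" in practice; None keeps this sketch offline
)
base.trainable = False   # freeze the base and train only the new head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # assumed 3 classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```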

