Fine-tune VGG16

In this workflow we fine-tune a ResNet50. First, we download the network from Keras using the DL Python Network Creator node. To implement the fine-tuning approach, we freeze the parameters of all network layers except for the batch normalization layers; this allows the normalization statistics to adapt to the new data. Finally, we add a new trainable network head that outputs cat and dog probabilities (the original network head is already removed when loading the model from Keras).
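The setup described above can be sketched in plain Keras code. This is a minimal illustration, not the actual KNIME workflow: the input shape, head architecture, and optimizer are assumptions.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# Load ResNet50 without its original classification head
# (assumed input shape; adjust to your data).
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))

# Freeze every layer except batch normalization, so the
# normalization can adapt to the new data while the learned
# convolutional weights stay fixed.
for layer in base.layers:
    layer.trainable = isinstance(layer, layers.BatchNormalization)

# Add a new trainable head that outputs cat/dog probabilities
# (hypothetical head; the workflow's exact layers may differ).
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(2, activation="softmax")(x)

model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

With this setup, only the batch normalization layers and the new head receive gradient updates during training.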

This is a companion discussion topic for the original entry at

Hey there,
I just noticed that the workflow title as shown on the Hub does not match the model that is actually trained (it is a ResNet50, not the VGG16 named in the title on the Hub). However, the workflow file itself has the correct name.