I have adapted the “Cats and Dogs” example workflow for image classification with transfer learning from a pretrained VGG16, so I am using similar pre-processing for my images (downscaling and 0-1 intensity scaling), and it seems to work just fine so far.
However, I have noticed that the Keras library has a preprocessing function specific to the VGG16 model, `keras.applications.vgg16.preprocess_input`, which seems to do additional steps like subtracting the average RGB channel values of the original ImageNet training set.
I was wondering if that could be beneficial to the training.
I could add a Python node to call this method, but wouldn’t that slow down the workflow due to the image conversion?
Any suggestions?
PS: I am using the Keras integration, so I have almost no code in my workflow.
Hi @l.thomas,
I don’t think you’d see any significant improvement from that Keras function. In your use case you’re using the pre-trained convolution layers to generate your image features, which are then fed into the final classification layers, and it’s only those final layers that are trained.
The approximate pre-processing you’re already doing is probably enough to utilize VGG16 in that way. I imagine that Keras function would matter more if you wanted to use the entire original VGG16 model for its original classification purpose, without any further training.
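For reference, here’s a rough sketch in plain NumPy of what that function does in its default (“caffe”) mode, as I understand it: it flips channels from RGB to BGR and zero-centers each channel using ImageNet means, with no 0-1 scaling. This is an approximation for illustration, not the actual Keras implementation:

```python
import numpy as np

# ImageNet per-channel means in BGR order, as used by the original VGG16
IMAGENET_BGR_MEANS = np.array([103.939, 116.779, 123.68])

def vgg16_preprocess(images):
    """Rough re-implementation of keras.applications.vgg16.preprocess_input.

    Expects float arrays of shape (..., H, W, 3) with RGB values in
    [0, 255]. Converts RGB -> BGR and subtracts the ImageNet channel
    means. Note there is no scaling to [0, 1].
    """
    images = np.asarray(images, dtype=np.float64)
    bgr = images[..., ::-1]            # flip channel order RGB -> BGR
    return bgr - IMAGENET_BGR_MEANS    # zero-center each channel

# Example: a single mid-grey pixel, shape (1, 1, 3)
pixel = np.array([[[128.0, 128.0, 128.0]]])
out = vgg16_preprocess(pixel)
```

So the main difference from your 0-1 scaling is the value range the network sees, not any per-image computation; either way the extra cost per image would be tiny compared to the forward pass through the convolution layers.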
I’d be really curious to hear how it goes if you decide to try it out though. Or if any others on the forum have used that function before.