Issues replicating a Keras Transfer Learning workflow

Hi everyone,

I have a question regarding a specific workflow created by the KNIME Community, which can be found and downloaded from the following link:

https://hub.knime.com/knime/spaces/Examples/latest/50_Applications/31_Histopathology_Blog_Post/_legacy_version/

The workflow uses Keras transfer learning to predict cancer type from histopathology slide images; the target variable is categorized into three different cancer type classes. A brief description of how the workflow works can be found here:

https://www.knime.com/blog/using-the-new-knime-deep-learning-keras-integration-to-predict-cancer-type-from-histopathology

I really appreciate the potential of this workflow, so I am trying to replicate it on a new dataset I found on the web with a similar predictive purpose.
The main difference is that my target variable is binary, since it only has two possible values: the presence or absence of a pneumothorax in each X-ray image of the dataset.

Because of this, in order to replicate the workflow on my dataset I should first change the number of neurons in the output layer from three to two, and then in theory I could run the pre-trained model on my data. But here’s the problem: I have tried to reconfigure the Keras Dense Layer node (shown in the uploaded capture below) by reducing the Units from 3 to 2 and by changing the activation function to a Sigmoid, which is more suitable for a binary target variable.
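If I understand the Python version of these nodes correctly, my change should correspond to something like the following (just my own rough sketch, not the actual node internals):

```python
from tensorflow.keras.layers import Dense

# My reconfigured output layer in Keras terms: two units with a sigmoid
# activation instead of the original three units with a softmax.
output_layer = Dense(2, activation="sigmoid")
```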


Then, when I run the Keras Network Learner, I get the following log error:

ERROR Keras Network Learner 0:233:232 Execute failed: Node training data size for network input/target ‘input_2:0’ does not match the expected size. Neuron count is 12288, batch size is 64. Thus, expected training data size is 786432. However, node training data size is 262144. Please check the column selection for this input/target and validate the node’s training data.

Lastly, I found one more interesting workflow description page which breaks each of the previous nodes down into Python code. Unfortunately I am not very good at coding, so I could not solve the issue by myself:

https://www.knime.com/blog/transfer-learning-made-easy-with-deep-learning-keras-integration

I hope you can help me with this even though I know it is really complicated; you would save my project!

Thank you all in advance,

Nicola

I have tried to upload my workflow, but the file is too big (107 MB). If you can suggest any alternative way to share it, I think that would make it much easier for you to help!

Thank you again,

Hi @nikesamma,

It seems like the input image patches do not fit the input shape of the model. What are the dimensions of your patches and what is the input shape of the model? You can find out the input shape of the model by viewing the port content of the deep learning model port, or in the configuration dialog of the “Keras Network Learner”.
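If you have the network available in Python, Keras can also report the shape directly; here is a minimal sketch, assuming the VGG16 base from the blog post with 64x64 RGB patches:

```python
from tensorflow.keras.applications import VGG16

# weights=None just avoids the download; the shape check works either way.
model = VGG16(weights=None, include_top=False, input_shape=(64, 64, 3))
print(model.input_shape)  # (None, 64, 64, 3) -> batch, height, width, channels
```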

Your workflow is probably so big because it contains the images. Try resetting it before saving, and avoid storing the images in the workflow directory (if that is what you did).

Best
Benjamin


Hi @nikesamma, are your images in a single color channel, like greyscale?
This would line up with the error you’re getting, since the input layer is expecting three times more inputs than it is receiving.
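The numbers in your error message line up with that exactly:

```python
# Expected: RGB patches. Received: one third of that, i.e. a single channel.
expected = 64 * 64 * 3    # 12288 neurons per image, as in the error message
actual = 262144 // 64     # 4096 per image at batch size 64, i.e. 64 * 64 * 1
print(expected / actual)  # 3.0
```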

Glad you’ve found the example applicable to your use case!

Hi @Corey, Hi @bwilhelm,

Thank you for your quick replies!
I think you are both right about the issue in the workflow: the input shape of the model [64, 64, 3] does not match the shape of the input image patches, since they are all black-and-white images.

Now, do you have any idea how I could adapt the model shape to that of the patches? Maybe through some additional Keras nodes, or directly through a Python script?

Anyway, I have been able to upload my workflow, so you can directly see the trouble I am encountering. Before opening it, you should install all the Keras libraries in order to view it correctly.

Upload.knwf (2.6 MB)

Thanks a lot again, I hope to hear from you soon

Nicola

While reconfiguring the first layer(s) to function with a single color channel is possible, I wouldn’t recommend it because you’d lose the benefit of the pre-trained weights in the VGG16 model.

Instead, I’d convert your grayscale images to RGB images, with each channel equal, in a preprocessing step.
The easiest way to do that in KNIME (that I know of) is to use 2 Image Calculator nodes followed by a Merger node to create 3 copies of the image and then combine them into a 3-channel version.

[Screenshot: two Image Calculator nodes followed by a Merger node]
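If you ever move this step into a Python Script node instead, the same trick is a couple of lines in NumPy (the random array below is just a placeholder for a real patch):

```python
import numpy as np

# Stack the greyscale plane three times along a new last axis to get an
# image with identical R, G and B channels, mirroring the Merger output.
grey = np.random.rand(64, 64).astype(np.float32)  # placeholder 64x64 patch
rgb = np.stack([grey, grey, grey], axis=-1)
print(rgb.shape)  # (64, 64, 3)
```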

As more of a long-term goal, you could retrain a base model on a greyscale version of the ImageNet database that VGG16 was trained on, and then transfer that model to your use case, like in this paper:


It worked!!! Thanks a lot, you really saved me @Corey!!

Just one last thing: I made some changes to the Keras Dense Layer configuration. Since I am targeting a binary variable, I reduced the Units in the node from 2 to 1 and used a Sigmoid activation function instead of a Softmax. Consequently, I only included the “Pneumo” variable among the Training Targets in the Keras Network Learner (and no longer “No Pneumo”, since it is redundant with a single sigmoid output).
Moreover, I changed the Categorical cross entropy option to Binary cross entropy with the same reasoning.
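If I have translated it into Python correctly, the new configuration should correspond to something like this sketch (the input size of 512 is just a placeholder, not taken from the workflow):

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# One sigmoid unit with binary cross entropy replaces the two-unit
# softmax with categorical cross entropy.
model = Sequential([Dense(1, activation="sigmoid", input_shape=(512,))])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```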

Do you have any other suggestions for adjustments I could make before running the model on the whole dataset, which will surely take ages?

Nicola


I think that all sounds reasonable.
I like to check out the Keras Network Monitor when I start training a model like this, just to make sure the loss function is responding as expected.
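If you ever train from a Python node instead, the history returned by fit() gives you the same picture as the monitor; here is a rough sketch with placeholder data:

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Random stand-in data just to illustrate the check.
x = np.random.rand(256, 512).astype(np.float32)
y = np.random.randint(0, 2, size=(256, 1))

model = Sequential([Dense(1, activation="sigmoid", input_shape=(512,))])
model.compile(optimizer="adam", loss="binary_crossentropy")
history = model.fit(x, y, validation_split=0.2, epochs=5, verbose=0)
print(history.history["loss"])      # training loss should trend downward
print(history.history["val_loss"])  # diverging validation loss hints at overfitting
```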

Other than that, best of luck with your project!

