I’m currently working on classifying X-ray data using the Cat/Dog workflow example as a reference, and I’ve encountered an issue that I could use some assistance with.
I’ve adhered to the default settings and used 4000 images for both the training and testing datasets, following the configuration used for the Cat/Dog CNN workflow. However, I consistently encounter the following error message:
“Execute failed: Node training data size for network input/target ‘input_1_0:0’ does not match the expected size. Neuron count is 67500, batch size is 128. Thus, expected training data size is 8640000. However, the node training data size is 3150000. Please check the column selection for this input/target and validate the node’s training data.”
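For anyone debugging this kind of message, the arithmetic in the error is straightforward: the expected size is simply neuron count × batch size, and the neuron count should itself factor into height × width × channels of the network's input layer. Here is a minimal sketch (not KNIME code; the 150×150×3 factorization of 67500 is an assumption, since it matches a common RGB input size):

```python
# Sketch: reproduce the size check the Keras Network Learner performs.
# The function name and the 150x150x3 input-layer assumption are
# illustrative, not taken from the KNIME API.

def expected_data_size(height, width, channels, batch_size):
    """Return (neuron count, expected training data size per batch)."""
    neuron_count = height * width * channels
    return neuron_count, neuron_count * batch_size

# Assuming the input layer is 150x150 with 3 channels:
neurons, expected = expected_data_size(150, 150, 3, 128)
print(neurons)   # 67500, the neuron count in the error message
print(expected)  # 8640000, the "expected training data size"
```

If the images actually fed into the node are a different resolution, or lack the channel dimension the layer expects, the node's data size comes out smaller, which is exactly the mismatch reported here.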
I would greatly appreciate your guidance in resolving this issue. Any suggestions on how to adjust the CNN network configuration or other parameters to fix the input/target configuration would be extremely helpful.
I found that you faced the same issue in the past and managed to solve it.
I’m using the Image Resize node to unify the size of the pictures I’m using to train the model, but whatever settings I use, I still get the same error message.
I appreciate your valuable feedback and insights.
“Execute failed: Node training data size for network input/target ‘input_1_0:0’ does not match the expected size. Neuron count is 150528, batch size is 128. Thus, expected training data size is 19267584. However, node training data size is 7124992. Please check the column selection for this input/target and validate the node’s training data.”
@Kathrin , if you can also share your thoughts, that will be highly appreciated.
I’ve just investigated the issue a bit, and the problem is connected to the fact that the images you are trying to classify only have X and Y dimensions. They lack a “Channel” dimension, which is needed to correctly feed the images into the Keras Input Layer and, later on, for the Learner to determine the proper number of neurons to train.
This is not just a requirement of the KNIME Keras extension but a general requirement for training DL architectures. You can read more here, in the section “Images as 3D Arrays”.
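To illustrate the missing dimension, here is a minimal NumPy sketch (the array sizes are illustrative; this is plain Python, not a KNIME node configuration):

```python
import numpy as np

# A grayscale X-ray loaded as a 2D array: only X and Y dimensions.
image_2d = np.zeros((150, 150))
print(image_2d.shape)  # (150, 150) -- no channel dimension

# Deep learning frameworks expect images as 3D arrays:
# (height, width, channels). Adding a channel axis fixes the shape,
# even for grayscale data where that axis has length 1.
image_3d = np.expand_dims(image_2d, axis=-1)
print(image_3d.shape)  # (150, 150, 1)
```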
These are the dimensions of the images you’re using:
Many thanks indeed for the feedback; it was very helpful.
At least now we know what the problem is.
I will try to find a way to convert the images from 2D to 3D. I might use Keras to do this conversion, as suggested by Sreya_22 in their post on Kaggle.
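For reference, the 2D-to-3D conversion can also be done in plain NumPy rather than through Keras utilities; the following is a sketch of the idea (the function name and batch sizes are illustrative, and the three-channel repeat assumes the network's input layer expects RGB-style input):

```python
import numpy as np

def grayscale_batch_to_rgb(batch_2d):
    """Convert a batch of 2D grayscale images (N, H, W) into
    3-channel images (N, H, W, 3) by adding a channel axis and
    repeating it three times."""
    batch_3d = batch_2d[..., np.newaxis]    # (N, H, W, 1)
    return np.repeat(batch_3d, 3, axis=-1)  # (N, H, W, 3)

batch = np.random.rand(4, 150, 150)  # 4 grayscale X-rays (illustrative)
rgb = grayscale_batch_to_rgb(batch)
print(rgb.shape)  # (4, 150, 150, 3)
```

Each of the three channels is an identical copy of the original grayscale values, so no image information is lost; the shape simply matches what a 3-channel input layer expects.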
I will update this post once I manage to find the solution.
Many thanks indeed for your and @hannan_saleem support in troubleshooting and explaining the root cause of the problem.
This experience serves as a clear, real-life example of the strength and efficacy of the KNIME community. Beyond providing a platform for collaboration, the KNIME community offers solutions and ready-made components that empower users to overcome their challenges.
Big thanks to everyone who contributed to solving this issue.