Execute failed: Node validation data size for input/target 'input_1_0:0' exceeds the expected size

Hello Community,

I’m building a garbage detection workflow (image recognition) with KNIME and Keras. As an example/tutorial, I used this workflow provided by @stelfrich and also a second workflow provided by @ScottF.

When I execute the Keras Network Learner node, I receive the following error:

WARN Keras Network Learner 0:10 C:\Users\svn.conda\envs\py3_knime_dl\lib\site-packages\keras\engine\saving.py:292:
UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
ERROR Keras Network Learner 0:10 Execute failed: Node validation data size for input/target ‘input_1_0:0’ exceeds the expected size. Neuron count of this input/target is 784, batch size is 32. Thus, expected validation data size is 25088. Please check the column selection for this input/target and validate the node’s validation data.

This is just one example; I have tried several configurations of the Keras nodes (more or fewer neurons, larger or smaller batch sizes, etc.).

Where do I need to change the configuration?

BR,
Sven

Hi @sven-abx,

What is the exact size of your input images? The error message suggests that they are not 28×28 pixels (= 784 pixels in total), which is the size the network expects. If you want to feed larger images to the network, you have to adapt the parameters of the Keras Input Layer node.
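As a rough illustration (a plain-Keras sketch with made-up layer sizes, not taken from the actual workflow), the shape of the input layer fixes how many values each image must supply, and the expected validation data size is simply neuron count × batch size:

```python
from tensorflow import keras

# Hypothetical network with a flattened 28x28 grayscale input:
# 28 * 28 = 784 values per image; with a batch size of 32 the learner
# therefore expects 784 * 32 = 25088 values per validation batch.
inputs = keras.Input(shape=(784,))
hidden = keras.layers.Dense(128, activation="relu")(inputs)
outputs = keras.layers.Dense(10, activation="softmax")(hidden)
model = keras.Model(inputs, outputs)
model.summary()  # input shape (None, 784) -- the data must match this
```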

Best,
Stefan

Hi Stefan,

The image size is 512x384, and the dataset includes 2527 rows.

ERROR Keras Network Learner 2:10 Execute failed: Node training data size for input/target ‘input_1_0:0’ exceeds the expected size. Neuron count of this input/target is 196608, batch size is 19. Thus, expected training data size is 3735552. Please check the column selection for this input/target and validate the node’s training data.
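For reference, the numbers in that message break down roughly like this (the extra factor of 3 for the RGB channels is an assumption based on the image data):

```python
# Rough breakdown of the sizes reported by the Keras Network Learner node
print(512 * 384)       # 196608  -> neuron count of the input layer (no channel dimension)
print(512 * 384 * 19)  # 3735552 -> expected training data size for batch size 19
print(512 * 384 * 3)   # 589824  -> values per RGB image actually in the data (assumption)
```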

br,
sven

Hi @sven-abx,

Can you share your workflow with some sample images so that I can take a look at the configuration of the learner node?

Thanks,
Stefan

Hi @stelfrich,

Yesterday I was able to run the workflow, including the Keras Network Learner node.
512x384x3 is the total number of values per image (width × height × RGB channels). Maybe this was the cause of the error, but I couldn’t verify it, because:

“ERROR Keras Network Learner 0:10 Execute failed: Selected Keras back end ‘Keras (TensorFlow)’ is not available anymore. Please check your local installation.
Details: Installation test for Python back end ‘org.knime.dl.keras.tensorflow.core.DLKerasTensorFlowNetwork’ timed out. Please make sure your Python environment is properly set up and consider increasing the timeout (currently 25000 ms) using the VM option ‘-Dknime.dl.installationtesttimeout=’.”

Yesterday I saved the workflow; today I loaded it again and reset some nodes to rerun it.

Increasing the timeout worked :slight_smile:
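For anyone hitting the same timeout: it can be increased by adding the VM option from the error message below the -vmargs line in knime.ini (the 60000 ms below is just an example value, in milliseconds):

```
-Dknime.dl.installationtesttimeout=60000
```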

Looks like the initial problem was solved. #fingerscrossed

I’m also going to use this thread as documentation, in case someone else runs into the same issues.

ERROR Keras Network Executor 0:32 Transmitting input data to Python failed.
ERROR Keras Network Executor 0:32 Execute failed: An error occured during execution of the deep learning network. See log for details.

Log:

2020-04-16 12:33:01,609 : ERROR : KNIME-Worker-14-Keras Network Executor 0:32 : : DLAbstractExecutorNodeModel : Keras Network Executor : 0:32 : Transmitting input data to Python failed.
2020-04-16 12:33:01,629 : ERROR : KNIME-Worker-14-Keras Network Executor 0:32 : : Node : Keras Network Executor : 0:32 : Execute failed: An error occured during execution of the deep learning network. See log for details.
java.lang.RuntimeException: An error occured during execution of the deep learning network. See log for details.
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.handleGeneralException(DLAbstractExecutorNodeModel.java:658)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.executeInternal(DLAbstractExecutorNodeModel.java:558)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.execute(DLAbstractExecutorNodeModel.java:290)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:571)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1236)
at org.knime.core.node.Node.execute(Node.java:1016)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:557)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:218)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:124)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:334)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: java.lang.RuntimeException: Transmitting input data to Python failed.
at org.knime.dl.python.core.DLPythonAbstractCommands.setNetworkInputs(DLPythonAbstractCommands.java:327)
at org.knime.dl.python.core.execution.DLPythonAbstractNetworkExecutionSession.executeInternal(DLPythonAbstractNetworkExecutionSession.java:140)
at org.knime.dl.core.execution.DLAbstractNetworkExecutionSession.run(DLAbstractNetworkExecutionSession.java:162)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.executeInternal(DLAbstractExecutorNodeModel.java:554)
… 14 more
Caused by: org.knime.python2.kernel.PythonIOException: An exception occured while running the Python kernel. See log for details.
at org.knime.python2.kernel.PythonKernel.getMostSpecificPythonKernelException(PythonKernel.java:1550)
at org.knime.python2.kernel.PythonKernel.putData(PythonKernel.java:844)
at org.knime.dl.python.core.DLPythonDefaultContext.putDataInKernel(DLPythonDefaultContext.java:194)
at org.knime.dl.python.core.DLPythonAbstractCommands.setNetworkInputs(DLPythonAbstractCommands.java:324)
… 17 more
Caused by: org.knime.python2.extensions.serializationlibrary.SerializationException: An error occurred during serialization. See log for errors.
at org.knime.python2.serde.flatbuffers.Flatbuffers.tableToBytes(Flatbuffers.java:148)
at org.knime.python2.kernel.PythonKernel.putData(PythonKernel.java:829)
… 19 more
Caused by: org.knime.python2.kernel.PythonIOException: FlatBuffers: cannot grow buffer beyond 2 gigabytes.
at org.knime.python2.util.PythonUtils$Misc.executeCancelable(PythonUtils.java:265)
at org.knime.python2.serde.flatbuffers.Flatbuffers.tableToBytes(Flatbuffers.java:145)
… 20 more
Caused by: java.lang.AssertionError: FlatBuffers: cannot grow buffer beyond 2 gigabytes.
at com.google.flatbuffers.FlatBufferBuilder.growByteBuffer(FlatBufferBuilder.java:133)
at com.google.flatbuffers.FlatBufferBuilder.prep(FlatBufferBuilder.java:179)
at com.google.flatbuffers.FlatBufferBuilder.startVector(FlatBufferBuilder.java:349)
at com.google.flatbuffers.FlatBufferBuilder.createByteVector(FlatBufferBuilder.java:471)
at org.knime.python2.serde.flatbuffers.inserters.BytesInserter.createColumn(BytesInserter.java:108)
at org.knime.python2.serde.flatbuffers.Flatbuffers.tableToBytesInternal(Flatbuffers.java:274)
at org.knime.python2.serde.flatbuffers.Flatbuffers.lambda$0(Flatbuffers.java:145)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
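For context on this error: FlatBuffers cannot serialize more than 2 GB into a single buffer, and a back-of-the-envelope estimate (assuming all 2527 images of 512x384x3 are transferred to Python as float32 values in one go) shows how that limit can be exceeded:

```python
# Rough size estimate (assumptions: 2527 RGB images of 512x384, float32 values)
n_images, height, width, channels = 2527, 384, 512, 3
total_bytes = n_images * height * width * channels * 4  # 4 bytes per float32
print(total_bytes / 2**30)  # ~5.6 GiB -- well above the 2 GiB FlatBuffers limit
```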

Hi, I have the same issue, with this error message: Execute failed: Node validation data size for input/target ‘input_1_0:0’ exceeds the expected size. Neuron count of this input/target is 784, batch size is 32. Thus, expected validation data size is 25088. Please check the column selection for this input/target and validate the node’s validation data.
I would be thankful if you could share how you solved it. Please take into account that I have 8000 dog/cat images of varying sizes (they are not all the same size).

Hi @stelfrich, if I have 8000 images of different sizes, how can I fit them to the network input size (64, 64, 3)?

Hello @agamehdi,

You can use the Image Resizer node to resize all your images to the same size (64, 64, 3). Once all your images have the same size, the error should disappear.
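Outside of KNIME, the equivalent preprocessing in plain Keras would look roughly like this (a sketch with a hypothetical file name):

```python
from tensorflow import keras
import numpy as np

# Load an image and resize it to 64x64 RGB so it matches the network input (64, 64, 3)
img = keras.preprocessing.image.load_img("dog_0001.jpg", target_size=(64, 64))
arr = keras.preprocessing.image.img_to_array(img)  # shape: (64, 64, 3)
batch = np.expand_dims(arr, axis=0)                # shape: (1, 64, 64, 3)
```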

greetings,
sven
