Error whilst trying to run TensorFlow Network Executor

Hi,

Windows 10.

I am trying to work my way through some of the example workflows for image classification. When I execute the TensorFlow node within the "Mixing Deep Learning with XGBoost" workflow, I get the following error:

ERROR TensorFlow Network Executor 3:17 Op type not registered ‘FusedBatchNormV3’ in binary running on MARK-PC. Make sure the Op and Kernel are registered in the binary running in this process.

I am running an Nvidia RTX 3060 Ti with the latest driver.

The deep learning environments were created using the automatic environment creation, and I can see that the TensorFlow Anaconda environment includes cudatoolkit (10.1.243) and cuDNN (7.6.5.32).
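A minimal way to double-check what that auto-created environment actually contains is to run a couple of lines of Python inside it (a sketch only; the exact environment name comes from the automatic creation, and the values printed will depend on the machine):

import tensorflow as tf

# Version of TensorFlow installed in this Python environment; this is the
# version used when networks are created or converted on the Python side.
print(tf.__version__)

# Reports whether the cudatoolkit/cuDNN packages in the environment are
# actually picked up by TensorFlow (TF 1.x API; deprecated in TF 2.x).
print(tf.test.is_gpu_available())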

Can anyone point me towards where to check for this binary error? I am not a technical person. I also tried KNIME on Linux, installed on a separate drive on the same hardware, but I get the same error message.

Thanks, Mark

@mgirdwood, maybe you can tell us which workflow it was and also provide a log file in debug mode.

Hi - sorry, I should have added the log file directly.

The workflow is "Mixing Deep Learning with XGBoost", the version by Christian Birkhold (Mixing Deep Learning with XGBoost – KNIME Community Hub).

Log info:
2023-12-31 12:41:28,780 : WARN : KNIME-Worker-29-TensorFlow Network Executor 3:17 : : TFUtil : TensorFlow Network Executor : 3:17 : The TensorFlow version of the network “1.15.0” is newer than the runtime TensorFlow version “1.13.1”.
This could lead to unexpected behaviour.
If the network has been created by the Python Network Creator or the TensorFlow Converter this could mean that your Python TensorFlow version is to new.
2023-12-31 12:41:28,911 : ERROR : KNIME-Worker-29-TensorFlow Network Executor 3:17 : : DLAbstractExecutorNodeModel : TensorFlow Network Executor : 3:17 : Op type not registered ‘FusedBatchNormV3’ in binary running on MARK-PC. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2023-12-31 12:41:28,912 : ERROR : KNIME-Worker-29-TensorFlow Network Executor 3:17 : : Node : TensorFlow Network Executor : 3:17 : Execute failed: An error occured during execution of the deep learning network. See log for details.
java.lang.RuntimeException: An error occured during execution of the deep learning network. See log for details.
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.handleGeneralException(DLAbstractExecutorNodeModel.java:680)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.executeInternal(DLAbstractExecutorNodeModel.java:562)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.execute(DLAbstractExecutorNodeModel.java:294)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:588)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1297)
at org.knime.core.node.Node.execute(Node.java:1059)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:595)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:98)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:367)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:221)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: org.tensorflow.TensorFlowException: Op type not registered ‘FusedBatchNormV3’ in binary running on MARK-PC. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
at org.tensorflow.SavedModelBundle.load(Native Method)
at org.tensorflow.SavedModelBundle.access$000(SavedModelBundle.java:27)
at org.tensorflow.SavedModelBundle$Loader.load(SavedModelBundle.java:32)
at org.knime.dl.tensorflow.savedmodel.core.execution.TFSavedModelNetworkExecutionSession.executeInternal(TFSavedModelNetworkExecutionSession.java:152)
at org.knime.dl.core.execution.DLAbstractNetworkExecutionSession.run(DLAbstractNetworkExecutionSession.java:162)
at org.knime.dl.base.nodes.executor2.DLAbstractExecutorNodeModel.executeInternal(DLAbstractExecutorNodeModel.java:557)
… 14 more

As a non-data-science person, I am sure I am missing something obvious.

Thanks again, Mark
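The warning at the top of the log points at the likely cause: the network was exported with TensorFlow 1.15.0, while the runtime used by the TensorFlow Network Executor is 1.13.1, and FusedBatchNormV3 is an op the older runtime does not know about. A minimal sketch of how this could be confirmed from Python, assuming an environment with TensorFlow 1.15 or newer and a placeholder path to the exported SavedModel:

import tensorflow as tf  # needs TF 1.15+ (or 2.x via tf.compat.v1) to load this graph

saved_model_dir = "path/to/exported_saved_model"  # placeholder, not the actual workflow path

# Load the SavedModel and collect the op types used by its graph. If
# 'FusedBatchNormV3' shows up, the graph cannot run on the 1.13.1 runtime
# reported in the log above. The "serve" tag is the common default; the
# actual tag set of the exported model may differ.
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(sess, ["serve"], saved_model_dir)
    op_types = {op.type for op in sess.graph.get_operations()}
    print("Python TF version:", tf.__version__)
    print("Graph uses FusedBatchNormV3:", "FusedBatchNormV3" in op_types)

If the op is present, the usual options would be to re-export the network with a TensorFlow version that matches the runtime (1.13.x here), or to use a KNIME TensorFlow integration whose runtime matches the network, if one is available.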
