ERROR [Loadworkflowrunnable] when I ran the Keras Network Learner 4:27 node

I am getting this error when I run the Keras Network Learner node in the following workflow:

and I have attached the KNIME log as well
Knime_log_workflow_fail.txt (130.6 KB)

@Haroon2001 from the log it is difficult to determine what the problem might be. I see two things:

  • the configuration of the Python / Deep Learning environment itself
  • the configuration / setup of the data and network if this fits the task

Maybe you could try running this example with the propagation node for Windows and Keras and see whether that works. The next step would be to check the setup of your data.

I plan on releasing an article about KNIME and deep learning setup. To familiarise yourself with the concepts you could start with this one:


@Haroon2001 the mentioned article is now online. You might want to take a look, though it might be a bit long …

Alright, so I have Python version 3.6.13 installed, and this is my conda environment:

_tflow_select 2.1.0 gpu
abseil-cpp 20210324.2 hd77b12b_0
absl-py 0.15.0 pyhd3eb1b0_0
arrow-cpp 5.0.0 py36he9238d2_8_cpu conda-forge
astor 0.8.1 py36haa95532_0
attrs 21.4.0 pyhd3eb1b0_0
aws-c-cal 0.5.11 he19cf47_0 conda-forge
aws-c-common 0.6.2 h2bbff1b_0
aws-c-event-stream 0.2.7 h70e1b0c_13 conda-forge
aws-c-io 0.10.5 h2fe331c_0 conda-forge
aws-checksums 0.1.11 h1e232aa_7 conda-forge
aws-sdk-cpp 1.8.186 hb0612c5_3 conda-forge
backcall 0.2.0 pyhd3eb1b0_0
blas 1.0 mkl
bzip2 1.0.8 he774522_0
c-ares 1.18.1 h2bbff1b_0
ca-certificates 2022.10.11 haa95532_0
cairo 1.14.12 hf171d8a_3
certifi 2021.5.30 py36haa95532_0
colorama 0.4.4 pyhd3eb1b0_0
coverage 5.5 py36h2bbff1b_2
cudatoolkit 9.0 1
cudnn 7.6.5 cuda9.0_0
cycler 0.11.0 pyhd3eb1b0_0
cython 0.29.24 py36hd77b12b_0
dataclasses 0.8 pyh4f3eec9_6
decorator 5.1.1 pyhd3eb1b0_0
freetype 2.10.4 hd328e21_0
gast 0.5.3 pyhd3eb1b0_0
gflags 2.2.2 ha925a31_0
glog 0.5.0 hd77b12b_0
grpc-cpp 1.40.0 h2431d41_1 conda-forge
grpcio 1.36.1 py36hc60d5dd_1
h5py 2.8.0 py36hf7173ca_0
hdf5 1.8.18 vc14h7a021fe_0 [vc14] anaconda
icc_rt 2022.1.0 h6049295_2
icu 58.2 vc14hc45fdbb_0 [vc14] anaconda
importlib-metadata 4.8.1 py36haa95532_0
importlib_metadata 4.8.1 hd3eb1b0_0
intel-openmp 2022.1.0 h59b6b97_3788
ipython 7.1.1 py36h39e3cac_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
jedi 0.13.3 py36_0
jpeg 9b vc14h4d7706e_1 [vc14] anaconda
jpype1 0.6.3 py36h79cbd7a_1001 conda-forge
jsonschema 3.2.0 pyhd3eb1b0_2
jupyter_core 4.8.1 py36haa95532_0
keras 2.2.4 0
keras-applications 1.0.8 py_1
keras-base 2.2.4 py36_0
keras-preprocessing 1.1.2 pyhd3eb1b0_0
kiwisolver 1.3.1 py36hd77b12b_0
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libcurl 7.82.0 h86230a5_0
libdeflate 1.8 h2bbff1b_5
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.18.0 h7755175_1 conda-forge
libssh2 1.10.0 hcd4344a_0
libthrift 0.15.0 he1d8c1a_0
libtiff 4.4.0 h8a3f274_2
libutf8proc 2.6.1 h2bbff1b_0
lz4-c 1.9.3 h2bbff1b_1
markdown 3.3.4 py36haa95532_0
matplotlib 3.0.3 py36hc8f65d3_0
mkl 2020.2 256
mkl-service 2.3.0 py36h196d8e1_0
mkl_fft 1.2.0 py36h45dec08_0
mkl_random 1.1.1 py36h47e9c7a_0
nbformat 4.4.0 py36_0
numpy 1.16.1 py36h19fb1c0_1
numpy-base 1.16.1 py36hc3f5095_1
olefile 0.46 py36_0
onnx 1.4.1 pypi_0 pypi
onnx-tf 1.2.1 pypi_0 pypi
openssl 1.1.1s h2bbff1b_0
pandas 0.23.4 py36h830ac7b_0
parquet-cpp 1.5.1 h34088ae_4
parso 0.8.3 pyhd3eb1b0_0
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 5.3.0 py36hdc69c19_0
pip 21.2.2 py36haa95532_0
pixman 0.34.0 vc14h00fde18_1 [vc14] anaconda
prompt_toolkit 2.0.10 py_0
protobuf 3.18.0 py36he2d232f_0 conda-forge
py4j 0.10.9.2 pyhd3eb1b0_0
pyarrow 5.0.0 py36h6720e24_8_cpu conda-forge
pygments 2.11.2 pyhd3eb1b0_0
pyparsing 3.0.4 pyhd3eb1b0_0
pyqt 5.9.2 py36h6538335_2
pyrsistent 0.17.3 py36he774522_0
python 3.6.13 h3758d61_0
python-dateutil 2.7.5 py36_0
python_abi 3.6 2_cp36m conda-forge
pytz 2021.3 pyhd3eb1b0_0
pywin32 228 py36hbaba5e8_1
pyyaml 5.3.1 py36he774522_0
qt 5.9.7 vc14h73c81de_0 [vc14] anaconda
re2 2021.09.01 h0e60522_0 conda-forge
scipy 1.1.0 py36h29ff71c_2
setuptools 58.0.4 py36haa95532_0
sip 4.19.8 py36h6538335_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.1.9 h6c2663c_0
sqlite 3.40.0 h2bbff1b_0
tensorboard 1.12.2 py36h33f27b4_0
tensorflow 1.12.0 gpu_py36ha5f9131_0
tensorflow-base 1.12.0 gpu_py36h6e53903_0
tensorflow-gpu 1.12.0 h0d30ee6_0
termcolor 1.1.0 py36haa95532_1
tk 8.6.12 h2bbff1b_0
tornado 6.1 py36h2bbff1b_0
traitlets 4.3.3 py36haa95532_0
typing 3.7.4.3 pypi_0 pypi
typing_extensions 4.1.1 pyh06a4308_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wcwidth 0.2.5 pyhd3eb1b0_0
werkzeug 2.0.3 pyhd3eb1b0_0
wheel 0.37.1 pyhd3eb1b0_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.6 h8cc25b3_0
yaml 0.1.7 vc14h4cb57cf_1 [vc14] anaconda
zipp 3.6.0 pyhd3eb1b0_0
zlib 1.2.11 vc14h1cdd9ab_1 [vc14] anaconda
zstd 1.5.0 h19a0ad4_1
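For reference, the key packages from the listing above could be pinned in a conda environment file along these lines (a sketch assembled from the output above; only the deep-learning-relevant packages are shown, the rest would be pulled in as dependencies):

```yaml
name: py3_knime_dl
channels:
  - defaults
dependencies:
  - python=3.6.13
  - keras=2.2.4
  - tensorflow-gpu=1.12.0
  - h5py=2.8.0
  - numpy=1.16.1
  - pandas=0.23.4
```

Recreating the environment from such a file (`conda env create -f environment.yml`) makes it easier to rule out version drift when comparing setups.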

I would appreciate it if you could help me figure out what the issue is. I have read your article, and I don’t think it’s a dependency issue. When I run the workflow I get these warnings and errors:

WARN Keras Network Learner 3:27 The number of rows of the input validation data table (99) is not a multiple of the selected validation batch size (10). Thus, the last batch of each validation phase will continue at the beginning of the validation data table after reaching its end. You can avoid that by adjusting the number of rows of the table or the batch size if desired.
WARN Keras Network Learner 3:27 C:\Users\user\anaconda3\envs\py3_knime_dl\lib\site-packages\keras\engine\saving.py:292: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
ERROR Keras Network Learner 3:27 java.lang.Exception: Failed to receive message from Python or forward received message. Cause: Connection reset
ERROR Keras Network Learner 3:27 Execute failed: An error occured during training of the Keras deep learning network. See log for details.
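The first warning above goes away if the batch size divides the row count evenly, so the last validation batch never wraps around to the start of the table. A quick sketch (a hypothetical helper, not part of KNIME) that lists the batch sizes which divide a given row count:

```python
def even_batch_sizes(n_rows, max_batch=64):
    """Return all batch sizes up to max_batch that divide n_rows
    evenly, so no batch has to wrap around the table."""
    return [b for b in range(1, max_batch + 1) if n_rows % b == 0]

# For the 99-row validation table from the warning:
print(even_batch_sizes(99))  # [1, 3, 9, 11, 33]
```

With 99 rows, a validation batch size of 33 or 11 would avoid the warning; alternatively, the table could be trimmed or padded to a multiple of 10.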

I have also attached my log:
Keras_Network_error_5_12_22.txt (310.9 KB)

@Haroon2001, two things. First, can you try to make this example work (download the whole workflow group) with the settings for Windows:

Second, can you share your example or point to it on the Hub? In the example you will note that the number of initial values is being dynamically allocated. You might read about configurations in the resources mentioned at the end of my article (e.g. the book by KNIME: Codeless Deep Learning with KNIME Book | KNIME).