AutoML - All Models Post Process Component Failing

@artem. Two more issues I have been dealing with while using AutoML for ML work (KNIME 4.7 on a Mac M1 Pro, macOS Monterey):

  1. Missing XGBoost nodes. I noticed this when running AutoML and also when running a separate XGBoost workflow. I have already uninstalled and reinstalled the XGBoost extension, but the XGBoost nodes still do not appear in the node repository and the workflow shows them as missing. Maybe the extension is installed but KNIME's node repository is unable to see the XGBoost nodes?

  2. Looking at the forum, I have seen this has been an ongoing issue, but I am having a hard time setting up Deep Learning for Keras and TensorFlow because of old Python build requirements that I cannot find :frowning_face:. Hope it gets resolved soon.

Thank you,

It looks like a bug to me. But there are a couple of things you can try:

  • Try to drag (copy) the component from the Hub again once you have the XGBoost extension installed. Maybe you saved the workflow in this state, where the node is missing;
  • Try to find the node in the Node Repository and replace the missing nodes. However, this also means that you will need to configure them yourself.

Just in case I will tag @paolotamag since it seems that he is investigating the issues with the AutoML component. Maybe he will have some ideas.

Hi all,

In general I recommend dragging and dropping the AutoML component again after installing extensions and restarting KNIME.

Regarding the KNIME XGBoost Integration…
We are currently investigating issues with even installing the extension on Mac systems. We will post here when we know more. A workaround for AutoML is executing the AutoML component even with the missing nodes inside, but with XGBoost deactivated in the component dialog. Maybe you can try the same?

Regarding Deep Learning environments, you should be able to install the environment automatically: go to “KNIME Preferences > KNIME > Python Deep Learning”, then create a new environment for either Keras or “TensorFlow 2” automatically by selecting “Conda”.

You first need to install conda though, e.g. via Miniconda (Miniconda — conda documentation) or similar. If you defined a custom path for the Miniconda installation, provide that path in “KNIME Preferences > KNIME > Conda”.
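If the automatic setup fails, the environment can also be created manually from a yml file. This is only a sketch: the file name and environment name below are made up, and the version pins are illustrative (they follow the versions requested in the env config log quoted later in this thread, not an official KNIME spec):

```shell
# Sketch only: create the Keras env from a hand-written yml instead of the
# KNIME preferences dialog. File name, env name, and pins are illustrative;
# KNIME ships its own yml files under plugins/org.knime.python2.envconfigs_*.
cat > py36_knime_keras.yml <<'EOF'
name: py36_knime_keras
channels:
  - defaults
dependencies:
  - python=3.6
  - keras=2.2.4
  - tensorflow=1.12.0
  - numpy=1.16.1
  - pandas=0.23
  - h5py=2.8
EOF
# With conda (e.g. Miniconda) on the PATH, the environment would then be
# created with:
#   conda env create -f py36_knime_keras.yml
```

The created environment then has to be selected under “KNIME Preferences > KNIME > Python Deep Learning”.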


Regarding KNIME XGBoost Integration – KNIME Community Hub

I talked to our devs and it seems that the XGBoost nodes are missing on M1 Macs in KNIME 4.7.x.
XGBoost on Mac should work in 5.1.0, but there the AutoML component is failing.

To use XGBoost in the AutoML component on M1 Mac please wait for 5.1.1 (likely later this month).

Sorry for the inconvenience.


No worries. I really appreciate the work you guys do answering questions from non-coders like me. Thank you very much!

I will watch out for 5.1.1.

I have had success using AutoML on a Mac M1 with KNIME 4.7, running the ML workflows except for XGBoost and DL (with Keras).

On deep learning, I will try the miniconda install. Thank you!

Still no luck. I followed the instructions and got the error below:

I also tried to make my own yml file following these very thorough articles: KNIME and Python — Setting up and managing Conda environments | by Markus Lauber | Low Code for Data Science | Medium
KNIME and Python — Setting up Deep Learning Environments for Keras and TensorFlow | by Markus Lauber | Low Code for Data Science | Medium
I am also getting similar error messages after the “create new environment” step. I was able to download Python 3.6 from the Python website, but I do not know how to direct the yml to fetch it locally. The other packages needed are also hard to find: the versions loaded after creating a conda environment based on the knime_py_keras_macos.yml file from the article are the newest versions, and when I force it to load older versions using “=”, environment creation fails.

I hope what I wrote makes sense. lol.


Sorry to hear.

Did you check the blog post:

scroll down to section 4, “Set Up your Python Deep Learning Environment”,

or check the documentation:

P.S. the knime website is currently under maintenance if any page does not load now please try again later :pray:


Just for reference, here are the details of my KNIME 4.7.5 installation.
I'm using the following conda environment settings for the DL approach:

a. conda version: 23.5.0
b. keras DL
-python 3.5.4
-keras 2.2.2
-tensorflow 1.10.0
-numpy 1.14.2
-pandas 0.20.3
-h5py 2.8.0

c. tensorflow 2 DL
-python 3.10.11
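The Keras DL setup above could be captured in a conda env file along these lines. This is a hypothetical sketch: the env name and channel are my assumptions, only the version pins are the ones listed above:

```yaml
# Hypothetical env file mirroring the Keras DL versions listed above;
# name and channel are illustrative.
name: py35_knime_keras
channels:
  - defaults
dependencies:
  - python=3.5.4
  - keras=2.2.2
  - tensorflow=1.10.0
  - numpy=1.14.2
  - pandas=0.20.3
  - h5py=2.8.0
```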



I also noticed the problem of installing the XGBoost extension on Mac with this workflow (Binary Classification - use Python XGBoost package and other nodes to build model and deploy that thru KNIME Python nodes – KNIME Community Hub) in June, and there is already a ticket in the KNIME system: AP-20587.

There have been similar problems in the past (AP-19894, AP-19918). I hope an underlying cause can be found and resolved.


This is very helpful. Thank you. I will definitely try to get TensorFlow 2 working, since it runs on a newer Python version.

Can I get your opinion or help (if you think it will work) on a workaround for the errors that pop up with either of the two published options for setting up Keras deep learning in KNIME? These are: (1) “setup new environment” in the KNIME DL dialog, which leads to the error in the screenshot above; and (2) writing a yml file that lists the packages. The problem with (2) is that the sites the yml fetches from only carry the newest versions of the packages, so the error I get is that it wants me to remove the old packages, or it downloads newer versions of Python, TensorFlow, etc.

I was able to find Python 3.6 on the official Python website and download it, but my problem is how to direct KNIME to see this older version of Python. For example, if I write a new yml environment file, is there a way or command-line option to direct the yml file to fetch Python from a local install (instead of from online sites like Anaconda)? Or do you know of an online site from which I can direct my yml file to fetch Python 3.6 and the older versions of tensorflow, h5py, etc. that running Keras on KNIME 4.7 depends on?
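For what it is worth, conda does not pull Python from a locally downloaded python.org installer; it resolves everything from channels. A sketch of what that could look like, assuming conda is installed (the file name is made up, the channels are the usual public ones, and whether a given old build actually exists for your platform is a separate question):

```shell
# Sketch: conda resolves packages from channels, not from a locally
# downloaded python.org installer. Channels can be listed in the yml itself
# or in a .condarc (the file name here is illustrative).
cat > condarc.example <<'EOF'
channels:
  - defaults
  - conda-forge
EOF
# Before writing the yml, older builds can be checked for availability, e.g.:
#   conda search 'python=3.6'
#   conda search 'h5py=2.8'
```

If a search turns up no build for your platform (as seems to be the case on Apple Silicon), no channel configuration will make the yml resolve.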

Thanks, Richard

@rdlc1 I think the issue is that there is currently no working combination of packages for the KNIME nodes and Deep Learning on Apple Silicon. The packages currently required are older, and for M1/M2 only newer versions are available.

I was not able to find a working combination of slightly newer packages to get all functions to work with Apple Silicon. I think KNIME would have to rework the nodes to fit to the newer versions of Keras and Tensorflow.

My impression is that this is also due to the quite dynamic development of these packages and changes in the way some basic structures are used and communicated, to which the nodes might have to be adapted.

The only solution I see is to just use a Python node and do all the coding by hand, which sort of defeats the idea of low code, unfortunately.

Also, the installation descriptions might need some updates, since there seem to be several cross-references that are not always on point. If the automatic installation works, it is OK; otherwise you have to search for yourself (hence the article).

As mentioned: you can see all the YAML files and settings by extracting them from the KNIME installation :slight_smile:. You could try to follow the versions and packages, but I do not think this is currently possible for Apple Silicon (or I was too stupid doing it).


@mlauber71 Thank you for the thorough reply. Looks like I will have to wait for updates before using DL on Apple Silicon. I am mostly using ML for now, so getting XGBoost to work on Apple M1 would suffice. :wink:

Also, I have been using the yml workaround on an Apple M1. So if I use the yml file on my Windows 10 (64-bit) laptop, then theoretically the yml will be able to fetch Python 3.6 and the other older package versions?



The easiest and preferred way to handle the ‘pop-up’ dependencies of KNIME 4.7.x, especially for Python DL, is to use a conda installation.
I'm using a Linux machine and am not familiar or expert with other operating systems. Although I can install the conda env within the KNIME dialog page, I prefer to do it manually.


The concept of a conda installation is the same on all operating systems: create an environment, activate it, and install the required packages. Once conda is installed, use the Preferences to select the required environment.

For TensorFlow 2 DL, a Python 3.10+ environment will fulfill the dependencies. However, for Keras DL, I set the environment to Python 3.5.
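The create/activate/install workflow described above, written out as a script sketch. The env name and the TensorFlow pin are my own illustrative choices, not KNIME's official ones:

```shell
# Generic conda workflow sketch: create, activate, install.
# Env name and version pins are illustrative.
cat > setup_tf2_env.sh <<'EOF'
#!/bin/sh
conda create --name py3_knime_tf2 python=3.10 -y
# "conda activate" requires the conda shell hook; inside scripts,
# "conda run -n py3_knime_tf2 ..." is an alternative.
conda activate py3_knime_tf2
pip install tensorflow==2.12
EOF
# Afterwards, point KNIME at the environment under
# "KNIME Preferences > KNIME > Python Deep Learning".
```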


It should work on Windows. I also have two metanodes with Conda Environment Propagation for Keras and TensorFlow for Windows, though I have not tested them for some time.


Thanks for the suggestions and appreciate all the help!

What finally worked for me was running v4.7 on a Windows laptop. KNIME was able to pull up the correct Python environments for Keras and TensorFlow. :slight_smile:


Hello @mlauber71 - would you happen to have a similar metanode for env propagation for GPU execution of Keras? My CPU Keras execution works just fine (testing done with the Image_Classification_MNIST_Solution workflow). When I switch to GPU Keras execution, the analysis hangs during the first second of GPU usage.
The Keras GPU Python env modules installed by KNIME are below:

cmd: C:\Users..\miniforge3\Scripts\ create --file C:\Users..\Desktop\knime_4.7.7\plugins\org.knime.python2.envconfigs_4.7.0.v202211181342\envconfigs\windows\py36_knime_dl_gpu.yml --name py3_knime_kerasv2_gpu --json

conda version: 23.3.1


update specs: [‘numpy=1.16.1’, ‘h5py=2.8’, ‘ipython=7.1’, “pyarrow[version=‘>=5.0.0’]”, ‘matplotlib=3.0’, ‘pillow=5.3’, ‘pandas=0.23’, ‘python=3.6’, ‘jedi=0.13’, ‘tensorflow-gpu=1.12.0’, ‘keras=2.2.4’, ‘pip’, ‘jpype1=0.6.3’, ‘cairo=1.14’, ‘python-dateutil=2.7’, ‘py4j’, ‘scipy=1.1’, ‘nbformat=4.4’]

and the error I am seeing in the log is:
2023-09-10 18:22:05,558 : ERROR : KNIME-Worker-46-Keras Network Learner 3:96 : : DLKerasLearnerNodeModel : Keras Network Learner : 3:96 : Blas GEMM launch failed : a.shape=(200, 100), b.shape=(100, 10), m=200, n=10, k=100
[[{{node output_1/MatMul}} = MatMul[T=DT_FLOAT, _class=[“loc:@training/Adadelta/gradients/output_1/MatMul_grad/MatMul”], transpose_a=false, transpose_b=false, _device=“/job:localhost/replica:0/task:0/device:GPU:0”](dense_1/Relu, output_1/kernel/read)]]
[[{{node loss/mul/_97}} = _Recvclient_terminated=false, recv_device=“/job:localhost/replica:0/task:0/device:CPU:0”, send_device=“/job:localhost/replica:0/task:0/device:GPU:0”, send_device_incarnation=1, tensor_name=“edge_605_loss/mul”, tensor_type=DT_FLOAT, _device=“/job:localhost/replica:0/task:0/device:CPU:0”]]
2023-09-10 18:22:05,558 : ERROR : KNIME-Worker-46-Keras Network Learner 3:96 : : Node : Keras Network Learner : 3:96 : Execute failed: An error occured during training of the Keras deep learning network. See log for details.
java.lang.RuntimeException: An error occured during training of the Keras deep learning network. See log for details.
at org.knime.dl.keras.base.nodes.learner.DLKerasLearnerNodeModel.handleGeneralException(
at org.knime.dl.keras.base.nodes.learner.DLKerasLearnerNodeModel.executeInternal(
at org.knime.dl.keras.base.nodes.learner.DLKerasLearnerNodeModel.execute(
at org.knime.core.node.NodeModel.executeModel(
at org.knime.core.node.Node.invokeFullyNodeModelExecute(
at org.knime.core.node.Node.execute(
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(
at org.knime.core.util.ThreadUtils$
at java.base/java.util.concurrent.Executors$ Source)
at java.base/ Source)
at org.knime.core.util.ThreadPool$
at org.knime.core.util.ThreadPool$
Caused by: org.knime.python2.kernel.PythonIOException: Blas GEMM launch failed : a.shape=(200, 100), b.shape=(100, 10), m=200, n=10, k=100
[[{{node output_1/MatMul}} = MatMul[T=DT_FLOAT, _class=[“loc:@training/Adadelta/gradients/output_1/MatMul_grad/MatMul”], transpose_a=false, transpose_b=false, _device=“/job:localhost/replica:0/task:0/device:GPU:0”](dense_1/Relu, output_1/kernel/read)]]
[[{{node loss/mul/_97}} = _Recvclient_terminated=false, recv_device=“/job:localhost/replica:0/task:0/device:CPU:0”, send_device=“/job:localhost/replica:0/task:0/device:GPU:0”, send_device_incarnation=1, tensor_name=“edge_605_loss/mul”, tensor_type=DT_FLOAT, _device=“/job:localhost/replica:0/task:0/device:CPU:0”]]
at org.knime.python2.kernel.messaging.AbstractTaskHandler.handleFailureMessage(
at org.knime.python2.kernel.messaging.AbstractTaskHandler.handle(
at org.knime.dl.python.core.DLPythonAbstractCommands$DLTrainingTask.runInternal(
at org.knime.core.util.ThreadUtils$CallableWithContextImpl.callWithContext(
at org.knime.core.util.ThreadUtils$
at java.base/ Source)
at java.base/java.util.concurrent.Executors$ Source)
at java.base/ Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$ Source)
at java.base/ Source)

So I was able to find a solution to get my RTX 40xx to work for Keras DL (KNIME version 4.7.7):

  1. Created a New GPU Environment.
  2. Opened the Python GPU environment on a command line.
  3. Updated tensorflow to version 1.15 using command:
    pip install tensorflow==1.15 --ignore-installed
  4. Installed tensorflow-directml using command:
    pip install tensorflow-directml
  5. Tested that the package runs correctly in a Python session, as described in TensorFlow with DirectML on Windows | Microsoft Learn
  6. Ran the GPU environment in KNIME using the test workflow (Image_Classification_MNIST_Solution)
  7. Success!!!
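The pip steps above, condensed into one script sketch. The env name is the one from the earlier log; I have not tested these exact pins on other setups:

```shell
# Steps 2-4 above as a script sketch. tensorflow-directml is Microsoft's
# DirectML-based TensorFlow 1.15 fork for Windows GPUs, which is why stock
# TF is first pinned to 1.15. Env name taken from the log earlier in the
# thread; treat it as illustrative.
cat > fix_keras_gpu.sh <<'EOF'
#!/bin/sh
conda activate py3_knime_kerasv2_gpu
pip install tensorflow==1.15 --ignore-installed
pip install tensorflow-directml
EOF
```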


The AutoML components (both classification and regression) should work now as expected.

Make sure to update your KNIME AP to the version “KNIME Analytics Platform v5.1.1.v202309110912” or later versions.

After updating, the Model to Cell nodes inside should not fail anymore.

Big thanks to @hornm for fixing the issue! :yellow_heart: