Deep Learning Python Environment Mac OS (M2)


I need to use the KNIME Deep Learning Python integration in my workflow, but creating the environment directly from KNIME raises errors.
On a Mac with an M2 chip, a TensorFlow version >= 2.0.0 is required, yet KNIME caps the version at < 2.0.0. Is there a solution or a trick for this issue?

I tried installing TensorFlow directly in the Python environment, but that version is not compatible with KNIME (4.7.1).

Thanks for your feedback.

@trj I have not yet found a suitable way to get the KNIME DL integration and the Python Deep Learning packages (Keras, TensorFlow) up and running on an Apple Silicon machine. The current KNIME DL nodes expect older versions of the Python packages, which are not available for M1 and M2 systems.

You can read about my general efforts here:

I also tried to replace several nodes that require old TF packages with 'pure' Python nodes, which is not really what a low-code approach should look like, but I did not have the energy to turn that into a consistent solution.

The best approach on an Apple Silicon machine currently might be to do everything with Python Script nodes.


Thanks a lot for your answer.
I’ll try to do it in Python directly.


@trj if you have a working example, you might share it with the community. I think there are several people out there who would like a DL solution on Apple Silicon.

Hi Knime,

I found a workaround to use Python and Deep Learning on Apple Silicon (M1, M2).
With this solution, you execute a script "remotely": it runs directly in a conda virtual environment, so all the network architectures and settings are defined in a Python script outside of KNIME. An example follows below.

Create a conda environment for Python in the KNIME preferences using the built-in configuration page (cf. screenshot below).
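For reference, such an environment can also be created from the terminal. This is only a sketch: it assumes conda (e.g. miniforge) is installed and uses Apple's `tensorflow-deps` / `tensorflow-macos` / `tensorflow-metal` packages for Apple Silicon; the environment name `knime-dl` is a placeholder.

```shell
# Create and activate a fresh environment (name is a placeholder)
conda create -n knime-dl python=3.9 -y
conda activate knime-dl

# Apple's TensorFlow dependencies and the Apple Silicon builds
conda install -c apple tensorflow-deps -y
pip install tensorflow-macos tensorflow-metal

# Packages KNIME's Python integration expects in the environment
# (check the KNIME docs for the versions pinned by your KNIME release)
pip install pandas pyarrow py4j

# Quick sanity check that TensorFlow imports
python -c "import tensorflow as tf; print(tf.__version__)"
```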

In the "Python Deep Learning" section, leave "Use configuration from the Python preference page" checked.

Configure the "Conda Environment Propagation" node to point to the desired conda environment containing the Deep Learning libraries.

In the "Python Script" node, on the "Executable Selection" page, select the previously configured Python environment coming from the Propagation node.

On the Script page, define the variables to use, then train and run inference "remotely" by executing the script with the following command:

os.system("path/to/conda/env/python path/to/ -setting_name setting")

With this command, KNIME's own Python environment executes the script using the Python interpreter of the conda environment given at the beginning of the command.
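The same call can be made more robust with `subprocess`, which captures the script's output and raises on failure instead of passing errors silently. The paths and flag names below (`dl_env`, `train_model.py`, `epochs`) are placeholders for illustration, not from the original post; a minimal sketch:

```python
import subprocess

def run_in_conda_env(python_exe, script, **settings):
    """Run `script` with the conda environment's own interpreter,
    passing each setting as a `--name value` command-line flag."""
    cmd = [python_exe, script]
    for name, value in settings.items():
        cmd += [f"--{name}", str(value)]
    # check=True raises CalledProcessError if the script fails,
    # so errors surface in the KNIME console instead of being lost
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Example with hypothetical paths:
# out = run_in_conda_env(
#     "/opt/miniconda3/envs/dl_env/bin/python",
#     "/path/to/train_model.py",
#     epochs=20, batch_size=64,
# )
```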

This is an efficient workaround, deployed in production, that does not use KNIME's Deep Learning nodes. The solution is also valid cross-platform (developed on an M2 and deployed on a Windows Server). It requires some Python knowledge, as the model must be developed directly in Python.
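On the receiving side, the external script can pick up the settings passed on the command line with `argparse`. The flag names below (`epochs`, `learning_rate`, `output_dir`) are illustrative, not from the original post; a skeleton:

```python
# train_model.py - skeleton of the externally-run training script
import argparse

def parse_settings(argv=None):
    """Parse the settings passed by the KNIME Python Script node."""
    parser = argparse.ArgumentParser(description="Train a model outside of KNIME")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--learning_rate", type=float, default=1e-3)
    parser.add_argument("--output_dir", default=".")
    return parser.parse_args(argv)

if __name__ == "__main__":
    settings = parse_settings()
    # ... build the Keras/TensorFlow model and train it here ...
    print(f"training for {settings.epochs} epochs at lr={settings.learning_rate}")
```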

Feel free to comment on my solution!



This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.