Conda Environment Propagation does not work on server




Hi everyone,
I am writing this post because, even after following the advice given to other users in the forum who asked similar questions, I cannot get the workflow to run on the server made available by the university.

Running the workflow shown in the image on a PC, with the environment installed automatically by KNIME Analytics Platform through the appropriate section in the settings, works without any problem.

We used the Conda Environment Propagation node to capture all the necessary libraries from the PC so we could transport them to the server.
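For context, a quick way to inspect locally what that environment contains (just a sketch; py3_knime is a placeholder for whatever environment name the propagation node actually points to):

conda activate py3_knime        # placeholder name for the local environment
conda env export --no-builds    # lists every package the propagation node has to recreate on the server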

Running the workflow on the server, we get three types of errors (even without adding nodes or changing the existing ones):

1) OpenSSL requested and not found
[this happened only once].
2) The pyarrow library is reported as missing, even though it is present and checked in the environment node (a quick local check is sketched after this list).
3) The server does not advance: after more than an hour stuck loading the environment node, it ends the process
[I attach images related to this error].
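Regarding error 2, a check we can run locally to confirm that pyarrow is really importable from the propagated environment (sketch only; py3_knime is again a placeholder name):

conda activate py3_knime
python -c "import pyarrow; print(pyarrow.__version__)"   # should print the installed version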

Note:
The server was provided by KNIME for the university challenge on predicting diabetes cases. I can provide the server address (obviously only to people from KNIME) so that the actual execution of the workflow can be checked.

Thank you very much in advance
Christian Persico

Hi @christianpersi ,

Thank you for your detailed explanation of what is going on. Would you mind emailing us at support@knime.com with this issue, along with the server logs, so that we can investigate further?

Thanks,
Zack

Hi Zack,

I’m sorry, but before emailing support I need to ask where I can find the server logs; I couldn’t find them.

thanks in advance
Christian

Hi Christian, I would like to help you, but (based on the task assigned by the challenge we agreed on with your university) it is clear from the screenshot that what you are doing is wrong.

We gave you access to this community KNIME Server to deploy the trained model for scoring, not to deploy the training of the model.

Please train locally.

Then export the model with a Keras Network Writer – KNIME Community Hub. Open a completely new workflow, import the model again with a Keras Network Reader – KNIME Community Hub, and drag in more nodes to prepare the raw data before applying a Keras Network Executor – KNIME Community Hub, with container nodes for your API or widget and view nodes for your data app (I am not sure what kind of deployment you are aiming for based on your challenge).

After you are done, please post the new workflow screenshot here, and if it looks good I can help you deploy it on the community server (making sure the right conda environment can be installed on first execution, or using one of the pre-installed environments already there).

Best of luck with your university challenge
Paolo


Hi Paolo, we know that what we uploaded isn’t the deployment; we just uploaded one of the workflows that uses the same libraries as the deployment, to try to solve the environment problem. Sorry for the misunderstanding.

Here is the screenshot of the work in progress new data-app:

As you suggested, we made a small change in the Conda Environment Propagation node, as shown in the picture:

These are the errors on the server:


Thank you in advance for your patience.

Hi there,
Yep, it looks like you are building a data app that only deploys the model rather than also training it. Much better now!

It looks like conda (the external technology running behind the conda node) is not able to install such a complex environment (in time or at all). In such cases the KNIME Server admin can help by accessing the command line and manually installing the environment for you. Given the many community users we are helping, this could take quite some time.

When evaluating your submission I can put in a good word for you, as we understand this is a major blocker that is not in your control. Regardless, there is one thing we could try:

On this server there is already a deep learning environment, currently used by another deep learning data app. Maybe it will work for your Keras Executor nodes too. This should be the case unless you have more Python Script nodes with custom packages in other parts of the data app…

Let’s give it a shot. I am attaching here the yml file of that environment.

py36_knime_dl_cpu.txt (4.2 KB)

I uploaded it in .txt format instead of .yml, otherwise the forum would not let me upload it. Please use this file to install that environment locally:

conda env create -f py36_knime_dl_cpu.yml
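Since the attachment is a .txt file, you will likely need to rename it back to .yml before running the command above; for example, on a Linux/macOS shell:

mv py36_knime_dl_cpu.txt py36_knime_dl_cpu.yml   # restore the original .yml extension

After running conda env create, you can confirm the environment was created with:

conda env list   # the new environment should appear in this list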

Then go into your conda node settings and try the following:

[screenshot of the Conda Environment Propagation node settings]

If the workflow executes locally like this without errors, you can save it and move it to the server, and it should find the same environment there as well.

Please don’t train deep learning models on the community server, not even for testing, as it is resource-intensive. Just deploy the data app with only Keras Network Executor nodes.

P.S. Why not dynamically change the input of the model from the data app to make things more interactive?


Problem solved! With minor changes along the workflow chain, after following the steps you mentioned with the pre-installed environment, everything works fine!

Thank you

Team 4


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.