Issues with using the requests library/module in a Python Script node

Hello and help.

I am having difficulty sending image URIs to a locally hosted TensorFlow model. I'm getting this error in the Python Script node:

No module named 'requests'
Traceback (most recent call last):
  File "", line 1, in <module>
ModuleNotFoundError: No module named 'requests'

I installed the requests package into the conda env I'm using, and under Preferences > Python (manual configuration) everything looks normal.
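One way to rule out an environment mismatch is to check, from inside the Python Script node itself, which interpreter is actually running and whether requests is importable there. A minimal sketch (the paths printed will depend on your setup):

```python
import importlib.util
import sys

# Which interpreter is the script node actually using? If this is not the
# conda env where requests was installed, that explains the import error.
print(sys.executable)

# Probe for 'requests' without raising ModuleNotFoundError.
spec = importlib.util.find_spec("requests")
print("requests importable:", spec is not None)
```

If the printed path is not the conda env you configured, the node is picking up a different interpreter than the one you installed requests into.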

Code, if that helps:

import base64
import requests

# Save string of image file path below
img_filepath = input_table_1["Path"]

# Create base64 encoded string
with open(img_filepath, "rb") as f:
    image_string = base64.b64encode(f.read()).decode("utf-8")

# Get response from POST request
response = requests.post(
    url="http://localhost:38101/v1/predict/08aed24a-5d94-40fb-b537-53d7e3d42acf",
    json={"image": image_string},
)
data = response.json()
top_prediction = data["predictions"][0]

# Print the top predicted label and its confidence
print("predicted label:\t{}\nconfidence:\t\t{}"
      .format(top_prediction["label"], top_prediction["confidence"]))
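One thing worth noting: in a KNIME Python Script node, input_table_1["Path"] is a whole column, not a single file path, so passing it straight to open() would fail once the import problem is solved. A minimal sketch of the per-row pattern, using a plain list and throwaway files as stand-ins for the real table:

```python
import base64
import os
import tempfile

# Throwaway files standing in for real images so the sketch runs anywhere.
tmpdir = tempfile.mkdtemp()
paths = []
for name in ("a.png", "b.png"):
    p = os.path.join(tmpdir, name)
    with open(p, "wb") as f:
        f.write(b"\x89PNG fake bytes")
    paths.append(p)

def encode_file(path):
    # Read the file and return its contents as a base64 UTF-8 string.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# In KNIME, input_table_1["Path"] is a column; iterate it the same way
# to build one JSON payload per image instead of one for the whole column.
payloads = [{"image": encode_file(p)} for p in paths]
```

Each payload in the list can then be POSTed individually to the prediction endpoint.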

If possible, is there an alternative for something like this? I tried using the POST Request node and that returns "missing values". I am quite out of my depth here, so any advice helps.

Thanks

Hi @duper123,

if the POST Request node only provides a missing value, you could add the Extract Missing Value Cause node (available on the KNIME Hub), which might provide additional information on what went wrong with your request. Alternatively, you could change the log level to DEBUG and check the logs for further information. In general, though, the POST Request node is a reasonable choice for this task.


Thank you!
That helped me figure out that the issue was that I need to encode the images as base64 before sending them out; now on to figuring out how to do that.
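For reference, the base64 step itself is a short round trip with the standard library. A sketch with dummy bytes standing in for a real image file:

```python
import base64

# Encode raw image bytes into a JSON-safe string, then decode again to
# verify the round trip is lossless.
raw = b"\x89PNG\r\n\x1a\n dummy image bytes"
encoded = base64.b64encode(raw).decode("utf-8")
decoded = base64.b64decode(encoded)
assert decoded == raw
```

The encoded string is what goes into the JSON body of the POST request.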

I only wish there was an easier way to run images through a newer-version TensorFlow classification model built outside of KNIME.

Would it be an option to read that TensorFlow model into KNIME and do the scoring there? Or do you rely on the externally hosted scoring service?

I built the model using Microsoft's Lobe API and have tried importing the model into KNIME, but I get this error: WARN TensorFlow Network Reader 4:61 The TensorFlow version of the network "1.15.4" is newer than the runtime TensorFlow version "1.13.1". This could lead to unexpected behaviour.

After snooping around, I learned the Lobe team plans to support exporting models as TensorFlow 2 in a future update, so I figured I would wait until then to fully integrate my workflow if I can't get the request approach to work.

Currently I use KNIME for image preprocessing, then write the files to a folder (this takes a long time), then run an external tool to run the model on the folder and produce a prediction CSV, then go back to KNIME to score and filter the data.

External tool link: GitHub - lobe/image-tools: Tools for creating image-based datasets for machine learning

If you have any other suggestions or workarounds I am all ears.

