Export model for serving with tensorflow-serving

Hello,
I am struggling with exporting a TensorFlow model so that it can be served with TensorFlow Serving.
After training the model with “Keras Network Learner”, the model is converted with “Keras to TensorFlow Network Converter” and finally written to disk using “TensorFlow Network Writer”.

The chosen name in this case is “model-tf.zip”. After unzipping the archive “model-tf.zip”, it contains a folder named “2d667448-fc0f-446b-b20e-d3bef4214ed3”. In this folder one can find the file “saved_model.pb” as well as the folder “variables”.
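Since TensorFlow Serving expects the SavedModel inside a numbered version subfolder below the model directory, the content of this folder ends up (roughly) in the following layout:

model-tf/
    1/
        saved_model.pb
        variables/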

For applying or deploying the model, the usual way is to use TensorFlow's serving option via Docker: https://github.com/tensorflow/serving

After installing and testing Docker, the TensorFlow documentation describes loading a previously trained model with the following command (in my case on an Ubuntu machine):

docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &

Adapted to the name of my model, this results in:

sudo docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/model-tf:/models/model-tf" \
    -e MODEL_NAME=model-tf \
    tensorflow/serving &

Executing the command in the terminal throws some errors:

sudo docker run -t --rm -p 8500:8500 -v "$TESTDATA/model-tf:/models/model-tf" -e MODEL_NAME=model-tf tensorflow/serving

2020-01-21 08:40:25.042162: I tensorflow_serving/model_servers/server.cc:86] Building single TensorFlow model file config:  model_name: model-tf model_base_path: /models/model-tf
2020-01-21 08:40:25.042310: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2020-01-21 08:40:25.042339: I tensorflow_serving/model_servers/server_core.cc:573]  (Re-)adding model: model-tf
2020-01-21 08:40:25.142767: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: model-tf version: 1}
2020-01-21 08:40:25.142809: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: model-tf version: 1}
2020-01-21 08:40:25.142819: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: model-tf version: 1}
2020-01-21 08:40:25.142835: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/model-tf/1
2020-01-21 08:40:25.144402: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-01-21 08:40:25.144847: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:333] SavedModel load for tags { serve }; Status: fail: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`. Took 2008 microseconds.
2020-01-21 08:40:25.144912: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: model-tf version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`

Analyzing the model with the recommended “saved_model_cli” gives the following result:

saved_model_cli show --dir /home/mat/serving/tensorflow_serving/servables/tensorflow/testdata/model-tf/1
The given SavedModel contains the following tag-sets:
knime

I guess I cannot load the model for deployment/testing because the export defines the tag-set “knime”.
How can one change the tag-set in KNIME, or do I have to change something on the TensorFlow side?

Any help appreciated.
Best regards

Some additional information on the output of “saved_model_cli”:

saved_model_cli show --dir /home/mat/serving/tensorflow_serving/servables/tensorflow/testdata/model-tf/1 --all

The result is:

MetaGraphDef with tag-set: 'knime' contains the following SignatureDefs:

signature_def['serve']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['conv2d_1_input:0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 136, 136, 3)
        name: conv2d_1_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_1/Sigmoid:0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: dense_1/Sigmoid:0
  Method name is: tensorflow/serving/predict

Hi @tmasiak,

Sorry for the trouble. The “TensorFlow Network Writer” node should have an option to provide the tags (or use the serving tag by default). I will open a ticket for that.

However, there is a hack that you can use to make it work right now. You can add a “DL Python Network Editor” node before the writer with the following code:

import tensorflow as tf
from TFModel import TFModel

# Re-wrap the network so that the writer exports it with the standard 'serve' tag
# and the default serving signature key instead of the 'knime' tag
output_network = TFModel(input_network.inputs,
                         input_network.outputs,
                         input_network.graph,
                         input_network.session,
                         tags=[tf.saved_model.tag_constants.SERVING],
                         method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME,
                         signature_key=tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY)

The writer will just use the tags, method_name and signature_key defined here. (Note that you can use another method name if you want).
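Once the model has been re-written and loads with the “serve” tag, a request against the TensorFlow Serving REST API could look roughly like the sketch below (assuming the container is started with -p 8501:8501 and MODEL_NAME=model-tf as in your first docker command, and using the input shape reported by saved_model_cli):

import numpy as np
import requests

# Dummy input matching the reported signature: shape (-1, 136, 136, 3)
instance = np.zeros((136, 136, 3), dtype=np.float32).tolist()

response = requests.post(
    "http://localhost:8501/v1/models/model-tf:predict",
    json={"instances": [instance]},
)
print(response.json())  # expected form: {"predictions": [[..., ...]]}, one pair of scores per instance

Re-running “saved_model_cli show” on the newly written model should then report the tag-set “serve” instead of “knime”.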

Best,
Benjamin


Thank you @bwilhelm. Works perfectly for me. :grin:

