I’m trying to load some saved TensorFlow models into the Keras nodes. I have the files below:
It gives me an error when I load these into the Python Network Reader or the Keras Network Reader. Is there a simple way to read this model into KNIME without re-training it?
Appreciate any help.
The TensorFlow Network Reader is the only node that can read SavedModels.
While it is easily possible to convert a Keras network into a TensorFlow model (via our Keras to TensorFlow Network Converter), the opposite is not possible.
Note that you can use TensorFlow networks in the DL Network Executor but it is currently not possible to train them.
@nemad I guessed so. My best bet was using the TensorFlow Network Reader. From the files I have, I see that assets and assets.extra are missing.
Is there a way I can generate these from my existing .pb graph file?
@mohammedayub I don’t really understand your question.
Do you mean the files displayed in your original post?
Those are indeed not in the standard TensorFlow SavedModel format so there might be some trouble when reading them.
If possible, you should use TensorFlow’s SavedModelBuilder in Python to save your models. For simple models, tf.saved_model.simple_save should work as well.
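For reference, a minimal sketch of tf.saved_model.simple_save in TF 1.x style (matching the versions discussed in this thread; on TF 2.x the same calls live under tf.compat.v1). The graph here is a hypothetical stand-in, so replace x and y with your model’s real input and output tensors:

```python
import os
import tempfile

import tensorflow as tf

# On TF 1.x this is a no-op; on TF 2.x switch to the v1 compatibility API.
if hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    if hasattr(tf, "disable_eager_execution"):
        tf.disable_eager_execution()

# simple_save requires that the export directory does not exist yet.
export_dir = os.path.join(tempfile.mkdtemp(), "my_saved_model")

# Hypothetical toy model: x and y stand in for your real tensors.
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, shape=[None, 3], name="input")
    w = tf.Variable(tf.zeros([3, 2]), name="weights")
    y = tf.matmul(x, w, name="output")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Writes saved_model.pb plus a variables/ folder in SavedModel format.
        tf.saved_model.simple_save(
            sess,
            export_dir,
            inputs={"input": x},
            outputs={"output": y},
        )
```

The inputs/outputs dicts become the default serving signature, which is what the TensorFlow Network Reader picks up.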
Regarding the assets folders: we have not yet encountered any models that use those.
From the TensorFlow documentation it sounds like the assets might be handled by TensorFlow itself, so it would be interesting to see if that is really the case.
@nemad sorry for the confusion. Yes, I meant the files displayed in my original post. To reiterate, I used this TensorFlow repo - https://github.com/tensorflow/nmt - to generate various models. The output of this repo contains only the files in my original post, which are not in the SavedModel format. Hence, I’m trying to take the existing files and re-save them as a SavedModel instance.
For simple_save to work, I had some difficulty getting the inputs and outputs of the graph from the existing graph file (saved_model.pb). Any idea how to get these from an existing .pb or .meta file?
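One way to inspect an existing .pb for candidate inputs and outputs is to parse the GraphDef directly: Placeholders are input candidates, and nodes that no other node consumes are output candidates. A sketch (this is a heuristic, and it expects a plain GraphDef .pb, not a SavedModel protobuf; the demo graph at the bottom is just there to exercise it):

```python
import os
import tempfile

import tensorflow as tf

if hasattr(tf.compat, "v1"):  # TF 2.x: use the v1 compatibility API
    tf = tf.compat.v1

def list_io_candidates(pb_path):
    """Parse a GraphDef .pb and report Placeholders as input candidates
    and never-consumed non-Const nodes as output candidates."""
    graph_def = tf.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            # Strip tensor index (":0") and control-dep marker ("^").
            consumed.add(inp.split(":")[0].lstrip("^"))
    inputs_found = [n.name for n in graph_def.node if n.op == "Placeholder"]
    outputs_found = [n.name for n in graph_def.node
                     if n.name not in consumed and n.op not in ("Const", "NoOp")]
    return inputs_found, outputs_found

# Tiny demo graph to exercise the heuristic:
pb_path = os.path.join(tempfile.mkdtemp(), "demo_graph.pb")
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, name="x")
    y = tf.add(x, 1.0, name="y")
with open(pb_path, "wb") as f:
    f.write(g.as_graph_def().SerializeToString())
inputs_found, outputs_found = list_io_candidates(pb_path)
print(inputs_found, outputs_found)
```

For a .meta checkpoint, tf.train.import_meta_graph restores the graph first, after which the same heuristic can be run on graph.as_graph_def().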
Just as an update: I was able to save the TensorFlow model with the SavedModelBuilder, although I could not figure out how to create a signature definition map for this language model.
When importing, it gives me drop-downs for selecting the inputs and outputs in the ‘Advanced’ tab.
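For reference, a signature definition map can be attached with SavedModelBuilder roughly like this. This is a sketch with hypothetical stand-in tensors (a string placeholder and an identity), not the actual seq2seq model; the real input/output tensors from the imported graph would go in their place:

```python
import os
import tempfile

import tensorflow as tf

if hasattr(tf.compat, "v1"):  # TF 2.x: use the v1 compatibility API
    tf = tf.compat.v1
    if hasattr(tf, "disable_eager_execution"):
        tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), "nmt_export")

with tf.Graph().as_default():
    # Hypothetical stand-ins for the real seq2seq input/output tensors.
    x = tf.placeholder(tf.string, shape=[None], name="src_tokens")
    v = tf.Variable(0, name="dummy_state")  # stands in for model variables
    y = tf.identity(x, name="translations")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        # A "predict" signature mapping friendly names to tensors.
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"source": x}, outputs={"target": y})
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants
                .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature,
            },
        )
        builder.save()
```

Registering the signature under the default serving key means readers that look for the standard signature should find it without any manual input/output selection.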
Here is the original TensorFlow model zip file for reference: TensorFlow Model
I need help figuring out the inputs and outputs; I’m not sure if the TensorFlow visualization would help.
I managed to load the TF model into KNIME. I could not find the “seq2seq/index_to_string_Lookup” tensor in the output drop-down. Any reason why?
My input/output signature configuration is as below:
And configured in KNIME as below:
Also, when I run the model with this configuration, it gives a version error:
2018-10-15 15:54:08,782 : WARN : KNIME-Worker-94 : TFUtil : DL Network Executor : 4:13 : The TensorFlow version of the network "1.12.0" is newer than the runtime TensorFlow version "1.8.0". This could lead to unexpected behaviour. If the network has been created by the Python Network Creator or the TensorFlow Converter this could mean that your Python TensorFlow version is to new.
2018-10-15 15:54:08,868 : DEBUG : KNIME-Worker-94 : Buffer : DL Network Executor : 4:13 : Using table format org.knime.core.data.container.DefaultTableStoreFormat
2018-10-15 15:54:08,868 : ERROR : KNIME-Worker-94 : DLExecutorNodeModel : DL Network Executor : 4:13 : Op type not registered 'GatherTree' in binary running on KASERVER. Make sure the Op and Kernel are registered in the binary running in this process.
2018-10-15 15:54:08,869 : DEBUG : KNIME-Worker-94 : Node : DL Network Executor : 4:13 : reset
Appreciate any help!
I have no idea why the seq2seq/index_to_string_Lookup tensor didn’t appear in the drop-down. If you could provide an example model where this issue happens, I could try to debug it. But it should be possible to just select the saved signature instead of using the advanced settings, now that the signature is saved in the SavedModel.
The error appears because the Java binary doesn’t contain a TensorFlow op that is part of the graph. This can happen if an op was introduced in a later TensorFlow version than the Java TensorFlow version used by KNIME (but I am not sure that this is the case for you).
You have two options to tackle this issue:
- Use the TensorFlow (Python) execution backend. This will use the TensorFlow from your Python installation, where the op is available. There will be a performance hit because the data has to be sent to a Python process, but it is quick and easy to try.
- Use another TensorFlow version (1.8) in Python to train and save the graph. If you use the missing op directly, your Python code won’t work anymore and you would need to work around it. If the op is only used by some higher-level function, the same function might be implemented differently in 1.8 (without the op) and you are fine.
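One way to sanity-check the second option is to ask a given TensorFlow runtime whether the op is registered at all. A small sketch; note that op_def_registry is a TensorFlow-internal module, so this is a debugging aid rather than a stable API, and the lookup function differs between TF versions:

```python
import tensorflow as tf
from tensorflow.python.framework import op_def_registry

def op_is_registered(name):
    """Return True if the op is compiled into this TensorFlow runtime."""
    if hasattr(op_def_registry, "get"):          # newer TF API
        return op_def_registry.get(name) is not None
    return name in op_def_registry.get_registered_ops()  # TF 1.x API

print(tf.__version__)
print("Add:", op_is_registered("Add"))                # core op, always present
print("GatherTree:", op_is_registered("GatherTree"))  # contrib op, often absent
```

If GatherTree reports False in the runtime that has to execute the graph, the graph cannot run there regardless of which TensorFlow version saved it.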
I’m glad to see that you are using the TensorFlow integration and throwing a real-world use case at it.
Attached is a link to the simple model I’m using:
In the meantime, I’m re-running the model with TensorFlow 1.8 and will try to export it again.
I tried building and exporting in an environment with TensorFlow 1.8.0.
The other package I’m using for training is OpenNMT-tf 1.10.1.
I did not get the version error, but I still got the op error for ‘GatherTree’ (KNIME log file attached).
To get the list of input and output tensors, we parse the protobuf graph definition. I realized that for the LookupTableFindV2 op (the type of the seq2seq/index_to_string_Lookup tensor), the input and output types are defined differently in the node definition, which is not yet supported by the parser. That is why it doesn’t appear in the drop-down. This is a bug and I will try to fix it soon.
Nonetheless, you can just use the saved signature, which is selected by default, and don’t need to activate the advanced settings. (The saved signature is much easier for us to parse, and it works.)
Sorry to hear that using TensorFlow 1.8 didn’t work. After some additional research, I found out why:
The GatherTree op is part of the tf.contrib package, which is not part of the stable API and therefore not distributed with the Java package. One could load it additionally via a shared library (Source), but we don’t support this in KNIME (@christian.dietz @nemad, we could consider adding this feature because it could be useful to many). Even if this were fixed, the same issue would likely occur for ops defined by OpenNMT (I think they also define custom ops).
Running the model via the “TensorFlow Python” executor is also a problem, because there are inputs of type String, which we can’t send to Python right now. @nemad, do you know the status of this?
Sorry that I couldn’t give you a solution or a helpful direction, but it seems like you are hitting the limits of our TensorFlow integration (which is a good thing for us, because we can now think about how to tackle this).
Thanks @bwilhelm for that detailed analysis. At least we know what the problem is.
I took this route because I thought building custom seq2seq models with existing repositories (like OpenNMT, TensorFlow Seq2Seq, etc.) and then importing them into KNIME to build the inference pipelines (the serving part) would be the easier way to go. There still seem to be a lot of versioning and cross-platform model-deployment issues that need to be solved, not just by you but by the NN players in general. So I don’t blame you; you have done a great job with the Keras integration already.
For now, I will just try to rebuild the network from scratch in Keras using the available options and compare the models’ performance.
A naive suggestion from my end, after struggling with this for a while: some out-of-the-box example workflows or metanode samples as part of the neural network sections on -
Appreciate your active help on this.
Some update on this: string inputs will be supported by the Python-based executors of our deep learning integration in the next dot-zero release of KNIME Analytics Platform.
We’ll also keep you posted on any progress that is related to what has been discussed in this forum thread so far. Thanks for all of your suggestions, @mohammedayub ! Especially more visualizations and “ready-to-use” NNs are certainly something we’d like to tackle with future releases.
Thanks @MarcelW for the update. Looking forward to trying them out.