Hi community, I'm fairly new to OpenVINO (OV) and the Intel Neural Compute Stick (NCS).
I'm trying to get the simplest possible custom example working on the Intel NCS (not the NCS2).
I want:
1. Create a network, then train and save the model and weights with TensorFlow (TF).
2. Convert it to an Intermediate Representation (IR) using the Model Optimizer (MO).
3. Deploy it on the NCS hardware to test the performance.
For (1) I'm following this guide: https://www.tensorflow.org/tutorials/keras/save_and_load
It saves the model as an .h5 file. For this dummy example that's fine, but in my real project I have an external .h5 file, so I will need to learn how to handle those.
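For reference, (1) boils down to a minimal sketch like this (the toy architecture here is just a placeholder, not my real network):
import tensorflow as tf

# Placeholder model; my real project uses an external .h5 file.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(x_train, y_train, epochs=1)  # train on whatever data is at hand
model.save('my_model.h5')  # writes architecture and weights to one HDF5 file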
For (2) I'm following this guide: https://docs.openvino.ai/2020.3/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
It seems that the model needs to be frozen (https://docs.openvino.ai/2020.3/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model)
but I'm not able to integrate it with the previous example (TensorFlow session, name_of_the_output_node... too many things are left unexplained).
When run without freezing, this command:
python3 mo_tf.py --input_model <INPUT_MODEL>.pb
gives this error:
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Adam/dense_11/bias/v/Read/ReadVariableOp.
Original exception message: 'ascii' codec can't decode byte 0xcc in position 1: ordinal not in range(128)
Details of my installation:
- OS Windows 10
- OpenVino toolkit version 2020.3
- Python 3.6.5
- TensorFlow 2.4.1
Any help would be appreciated.
Thanks in advance.
Greetings,
Just a reminder: as of the OpenVINO™ 2020.4 release, the Intel® Movidius™ Neural Compute Stick is no longer supported.
In addition, OpenVINO 2020.3 only supports TensorFlow up to version 2.2.0; issues are expected if a newer version is used.
You may refer to this release note for more detailed information: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes-2020.html
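If in doubt, a quick way to check the installed version and pin a supported one (assuming TensorFlow is managed with pip) is:
python -c "import tensorflow as tf; print(tf.__version__)"
pip install tensorflow==2.2.0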
There are 3 ways to store non-frozen models and load them into the Model Optimizer (MO):
- Checkpoint
- MetaGraph
- SavedModel format
You may refer to the detailed steps in this documentation: https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#loading-non-frozen-models-to-the-model-optimizer
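For illustration, the corresponding mo_tf.py invocations would look roughly like this (the file and directory names are placeholders; see the linked documentation for the exact options):
# Checkpoint: an inference graph plus a checkpoint file
python3 mo_tf.py --input_model inference_graph.pb --input_checkpoint model.ckpt
# MetaGraph: a .meta file
python3 mo_tf.py --input_meta_graph model.meta
# SavedModel: a SavedModel directory
python3 mo_tf.py --saved_model_dir ./my_saved_model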
For a custom model (especially when the network is defined in code), you have to create an inference graph file, and to use that file with the OpenVINO Model Optimizer it needs to be frozen (see the Freezing Custom Models section).
You may use this code to freeze the model and dump it to a file:
import tensorflow as tf
from tensorflow.python.framework import graph_io

# 'sess' is the active TF1-style session (tf.compat.v1.Session) holding the trained graph;
# replace "name_of_the_output_node" with the actual name of your graph's output node.
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
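Since "name_of_the_output_node" is a common stumbling block, a minimal sketch for listing candidate node names from the same session is:
for op in sess.graph.get_operations():
    print(op.name)  # the output node is usually among the last operations listed

Once inference_graph.pb has been written, it can be passed to the Model Optimizer as a frozen model:
python3 mo_tf.py --input_model inference_graph.pb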
You could also review how a supported TensorFlow model is structured, as a reference for making yours work.
This model is one of the supported ResNeXt implementations: https://github.com/taki0112/ResNeXt-Tensorflow
You can find more repositories in the Supported Topologies section of the documentation provided above.
Sincerely,
Iffa
Greetings,
Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa
