Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Intel NCS with TensorFlow example

asusi
Beginner
447 Views

Hi community, I'm fairly new to OpenVINO (OV) and the Intel Neural Compute Stick (NCS).

I'm trying to get the simplest possible custom example working on the Intel NCS (not the NCS2).

I want:

  1. Create a net, then train and save the model and weights with TensorFlow (TF).
  2. Convert it to an Intermediate Representation (IR) using the Model Optimizer (MO).
  3. Deploy it on the NCS hardware to test performance.

For (1) I'm following this guide: https://www.tensorflow.org/tutorials/keras/save_and_load

It saves the model as an .h5 file. For this dummy example the format doesn't matter much, but in my real project I start from an external .h5 file, so I will need to learn to work with those.
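For reference, here is a minimal sketch of what I'm doing for step (1), based on that guide; the layer sizes and the random training data are just placeholders:

import numpy as np
import tensorflow as tf

# A toy Keras model, fitted briefly on random data just so there are weights to save.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(32, 8), np.random.rand(32, 1), epochs=1, verbose=0)

# Save the whole model (architecture + weights) to a single HDF5 file.
model.save('my_model.h5')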

For (2) I'm following this guide: https://docs.openvino.ai/2020.3/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html

It seems that the model needs to be frozen (https://docs.openvino.ai/2020.3/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model)

but I'm not able to integrate it into the previous example (TensorFlow session, name_of_the_output_node... too many things are left unexplained).

When run without freezing, this command:

python3 mo_tf.py --input_model <INPUT_MODEL>.pb

gives this error:

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Adam/dense_11/bias/v/Read/ReadVariableOp.
Original exception message: 'ascii' codec can't decode byte 0xcc in position 1: ordinal not in range(128)

 

Details of my installation:

- OS Windows 10

- OpenVINO toolkit version 2020.3

- Python 3.6.5

- TensorFlow 2.4.1

 

Any help would be appreciated.

Thanks in advance.

2 Replies
Iffa_Intel
Moderator
417 Views

Greetings,

 

Just a reminder: as of the OpenVINO™ 2020.4 release, the Intel® Movidius™ Neural Compute Stick is no longer supported.

 

In addition, OpenVINO 2020.3 only supports TensorFlow up to version 2.2.0; issues are expected if a newer version is used.

You may refer to this release note for more detailed information: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes-2020.html

 

 

There are three ways to store non-frozen models and load them into the Model Optimizer (MO); example invocations follow the documentation link below:

  1. Checkpoint
  2. MetaGraph
  3. SavedModel format

You may refer to the detailed steps in this documentation: https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#loading-non-frozen-models-to-the-model-optimizer
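For illustration, the corresponding mo_tf.py invocations look roughly like these (the model paths are placeholders; please check the linked documentation for the exact flags in your release):

python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <CHECKPOINT_FILE>
python3 mo_tf.py --input_meta_graph <MODEL>.meta
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIR>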

 

 

For a custom model (especially when the network is defined in code), you have to create an inference graph file, and to use that file with the OpenVINO Model Optimizer it needs to be frozen (see the Freezing Custom Models section of the documentation).

 

You may use this code to freeze the model and dump it to a file:

 

import tensorflow as tf
from tensorflow.python.framework import graph_io

# 'sess' must be the TF 1.x (tf.compat.v1) session that holds the trained graph;
# replace "name_of_the_output_node" with your model's actual output node name.
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
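Once the frozen inference_graph.pb exists, it can be converted for the NCS; as an example (the MYRIAD plugin runs FP16, so --data_type FP16 is the usual choice):

python3 mo_tf.py --input_model inference_graph.pb --data_type FP16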

 

 

You could also review how a supported TensorFlow model is structured, as a reference for making yours work.

This repository is one of the supported ResNeXt implementations: https://github.com/taki0112/ResNeXt-Tensorflow

 

You can find more repositories in the Supported Topologies section of the documentation linked above.
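To cover step (3) of the original question, a minimal Inference Engine sketch for running the converted IR on the NCS could look like this (the file names are assumed to match the example above, and this uses the 2020.3 Python API):

import numpy as np
from openvino.inference_engine import IECore

# Read the IR produced by the Model Optimizer.
ie = IECore()
net = ie.read_network(model='inference_graph.xml', weights='inference_graph.bin')

# "MYRIAD" is the device name for the Neural Compute Stick.
exec_net = ie.load_network(network=net, device_name='MYRIAD')

# Run inference on a dummy input that matches the network's input shape.
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
dummy = np.random.rand(*input_shape).astype(np.float32)
print(exec_net.infer(inputs={input_name: dummy}))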

 

 

Sincerely,

Iffa

Iffa_Intel
Moderator
353 Views

Greetings,


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

