
OpenVINO support for TensorFlow Lite

DevendraDeval
Beginner

Hello Team, 

I am trying to convert a TensorFlow Lite model to the IR representation using the Model Optimizer, but I am getting the error shown below. I converted this TensorFlow 2 model to TensorFlow Lite using the TFLite converter, and I am able to run inference successfully with the original TensorFlow 2 model. Please take a look and let me know if anything else is required.
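For reference, the TFLite conversion followed the standard tf.lite.TFLiteConverter flow, roughly as sketched below (the paths are illustrative, not the exact ones used):

import tensorflow as tf

# Load the TensorFlow 2 SavedModel and convert it to TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("/home/kitetsu/model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
# (full INT8 quantization additionally requires a representative_dataset)
tflite_model = converter.convert()

# The result is a FlatBuffer, not a protobuf, so it is saved as .tflite.
with open("/home/kitetsu/Cats_and_Dogs.tflite", "wb") as f:
    f.write(tflite_model)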

(dl4cv) kitetsu@kitetsu-VirtualBox:/opt/intel/openvino/deployment_tools/model_optimizer$ python3 mo_tf.py --saved_model_dir /home/kitetsu/model --output_dir /home/kitetsu/Downloads/ --input_shape [1,128,128,3] --scale 255
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/kitetsu/Downloads/
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,128,128,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: 255.0
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version:
2021-03-04 16:33:27.137105: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2020.4.287/data_processing/dl_streamer/lib:/opt/intel/openvino_2020.4.287/data_processing/gstreamer/lib:/opt/intel/openvino_2020.4.287/opencv/lib:/opt/intel/openvino_2020.4.287/deployment_tools/ngraph/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/hddl_unite/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/lib/intel64:
2021-03-04 16:33:27.137214: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[ FRAMEWORK ERROR ] Cannot load input model: SavedModel format load failure: SavedModel file does not exist at: /home/kitetsu/model/{saved_model.pbtxt|saved_model.pb}

Peh_Intel
Moderator

Hi Deval,


Greetings to you.


I see that you are getting a SavedModel format load failure error, which means the Model Optimizer is unable to find a .pb file to load. Could you please check for the .pb file in your directory:

 

/home/kitetsu/model

 

Note: The directory can have any name, but the .pb file inside it must be named saved_model.pb.


You may refer to this README file under the section “The SavedModel Format”.
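For illustration, here is a minimal sketch of exporting a Keras model so that the directory contains a properly named saved_model.pb (the model object and path are placeholders):

import tensorflow as tf

# Placeholder model; substitute your own trained model.
model = tf.keras.applications.MobileNetV2(weights=None)

# tf.saved_model.save() writes saved_model.pb plus a variables/ subfolder,
# which is exactly the layout the Model Optimizer expects in --saved_model_dir:
#   /home/kitetsu/model/saved_model.pb
#   /home/kitetsu/model/variables/...
tf.saved_model.save(model, "/home/kitetsu/model")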



Regards,

Peh


DevendraDeval
Beginner

Hello Peh,

Thank you for the reply. As instructed, I renamed the file from Cats_and_Dogs.tflite to saved_model.pb, but now I am getting a new kind of error. I created the original model using TensorFlow 2 and then converted it to TensorFlow Lite format (INT8). I was able to create the IR files for the original TensorFlow 2 model and run inference with it, but I am unable to create the IR files for the TensorFlow Lite model using the Model Optimizer.

Does the Model Optimizer support TensorFlow Lite?

Regards,

Devendra Deval

Console output:

(dl4cv) kitetsu@kitetsu-VirtualBox:/opt/intel/openvino/deployment_tools/model_optimizer$ python3 mo_tf.py --saved_model_dir /home/kitetsu/model/ --output_dir /home/kitetsu/Downloads/ --input_shape [1,224,224,3] --scale 255
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/kitetsu/Downloads/
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,224,224,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: 255.0
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version:
2021-03-07 01:59:28.397650: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2020.4.287/data_processing/dl_streamer/lib:/opt/intel/openvino_2020.4.287/data_processing/gstreamer/lib:/opt/intel/openvino_2020.4.287/opencv/lib:/opt/intel/openvino_2020.4.287/deployment_tools/ngraph/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/hddl_unite/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2020.4.287/deployment_tools/inference_engine/lib/intel64:
2021-03-07 01:59:28.397762: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/home/kitetsu/.local/bin/.virtualenvs/dl4cv/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py:99: RuntimeWarning: Unexpected end-group tag: Not all data was converted
saved_model.ParseFromString(file_content)
[ FRAMEWORK ERROR ] Cannot load input model: SavedModel format load failure: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 0 MetaGraphs in the SavedModel with tag sets []. Pass a 'tags=' argument to load this SavedModel.

Peh_Intel
Moderator

Hi Deval,


Greetings to you.


The Model Optimizer does support a few frozen quantized topologies that are hosted on the TensorFlow Lite site.


You may refer to the section “Supported Frozen Quantized Topologies” in the link below for more information:

https://docs.openvinotoolkit.org/2021.2/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#supported_topologies


Other TensorFlow Lite models have not been validated for support by the Model Optimizer. You can still try the conversion, but there is no guarantee that it will work.


For your current error, you may need to specify the tags with the --saved_model_tags SAVED_MODEL_TAGS option when converting to IR.


You may refer to the link below for more information on this option:

https://docs.openvinotoolkit.org/2021.2/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#tensorflow_specific_conversion_params
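As a quick way to check which tag-sets your SavedModel actually contains (essentially what saved_model_cli show reports), you can parse saved_model.pb directly; a minimal sketch, with the path as a placeholder:

from tensorflow.core.protobuf import saved_model_pb2

# Read the SavedModel protobuf and list the tags of each MetaGraph.
# If the file is actually a renamed .tflite FlatBuffer, parsing will fail
# or yield zero MetaGraphs, which matches the error you are seeing.
sm = saved_model_pb2.SavedModel()
with open("/home/kitetsu/model/saved_model.pb", "rb") as f:
    sm.ParseFromString(f.read())
for meta_graph in sm.meta_graphs:
    print(list(meta_graph.meta_info_def.tags))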

 


Regards,

Peh


DevendraDeval
Beginner

Hello Peh,

Thank you for the reply. The model was created using transfer learning, fine-tuning MobileNetV2 with a custom head. As instructed, I used saved_model_cli show to get the tag information and passed it to the Model Optimizer using --saved_model_tags. Even though I am passing the correct parameter, I am still getting the error shown below. Please take a look; I have attached a screenshot of the saved_model_cli show output.

Regards,

Devendra Deval

[Screenshot attached: DevendraDeval_0-1615385391600.png]

 Error:

python3 mo_tf.py --saved_model_dir /home/kitetsu/Fire_detection_model/ --output_dir /home/kitetsu/Downloads/ --input_shape [1,224,224,3] --scale 255 --saved_model_tags 'serve'
/opt/intel/openvino_2021.2.185/deployment_tools/model_optimizer/mo/main.py:85: SyntaxWarning: "is" with a literal. Did you mean "=="?
if op is 'k':
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/kitetsu/Downloads/
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,224,224,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: 255.0
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2021.2.0-1877-176bdf51370-releases/2021/2
2021-03-10 05:55:24.332357: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
2021-03-10 05:55:24.332406: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/loader_impl.py:98: RuntimeWarning: Unexpected end-group tag: Not all data was converted
saved_model.ParseFromString(file_content)
[ FRAMEWORK ERROR ] Cannot load input model: SavedModel format load failure: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
available_tags: []

Peh_Intel
Moderator

Hi Deval,

 

I'm sorry, I overlooked that you renamed Cats_and_Dogs.tflite to saved_model.pb. I suggested the saved_model.pb name based on your initial error, but that only applies to renaming an existing .pb file; a .tflite file cannot be turned into a .pb file just by renaming it, because the two formats are different.

 

Also, to answer your previous question: the Model Optimizer does not natively support converting .tflite files to IR format. As mentioned previously, it currently supports only the few frozen quantized topologies listed in the subsection "Supported Frozen Quantized Topologies" under the section "Supported Topologies":

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#supported_topologies

 

However, a frozen model file (.pb) is required for conversion to IR format by the Model Optimizer. This is also mentioned at the link above.

 

I'm sorry to inform you that your .tflite file cannot be converted to IR format.
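If you still have the original TensorFlow 2 model, one possible route (a sketch only, not validated for your specific model; the load path is a placeholder) is to freeze it to a .pb file and convert that instead:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Load the original Keras model (placeholder path).
model = tf.keras.models.load_model("/home/kitetsu/original_model")

# Wrap the model in a concrete function and freeze its variables into constants.
full_model = tf.function(lambda x: model(x))
concrete = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)
frozen = convert_variables_to_constants_v2(concrete)

# Write the frozen graph; this .pb can then be passed to mo_tf.py
# with --input_model instead of --saved_model_dir.
tf.io.write_graph(frozen.graph, "/home/kitetsu", "frozen_model.pb", as_text=False)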

 

 

Regards,

Peh


Peh_Intel
Moderator

Hi Deval,

 

Thank you for your question. If you need any additional information from Intel, please submit a new question, as this thread is no longer being monitored.

 

 

Regards,

Peh

