Intel® Distribution of OpenVINO™ Toolkit

Conversion custom SSD MobileNet V2 FPNLite 320x320 from TensorFlow2 model zoo to OpenVINO

DarkHorse
Employee

Hello,

I am trying to convert SSD MobileNet V2 FPNLite 320x320 to run in OpenVINO:

https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1

I tried to follow the documentation here, but without success:

https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

Do you have the complete steps?

It seems that saved_model.pb is neither a binary nor a text frozen-graph file.
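For context, a TF2 SavedModel is a directory (a saved_model.pb protobuf next to a variables/ subfolder), not a single frozen-graph .pb, which is why Model Optimizer takes it via --saved_model_dir rather than --input_model. A minimal layout check (hypothetical helper, pure stdlib) might look like:

```python
import os

def looks_like_saved_model(model_dir: str) -> bool:
    """Heuristic check for the TF2 SavedModel directory layout:
    a saved_model.pb file next to a variables/ subdirectory."""
    has_pb = os.path.isfile(os.path.join(model_dir, "saved_model.pb"))
    has_vars = os.path.isdir(os.path.join(model_dir, "variables"))
    return has_pb and has_vars
```

If this returns True, pass the directory itself to Model Optimizer with --saved_model_dir; a frozen graph, by contrast, is a single .pb file passed with --input_model.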

Any ideas?

Thanks

3 Replies
DarkHorse
Employee

Hello,

I am using OpenVINO 2021.4.582 and I used the following command:

python mo.py --saved_model_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\saved_model" --transformations_config "C:\Program Files (x86)\intel\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.4.json" --tensorflow_object_detection_api_pipeline_config "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\pipeline.config" --reverse_input_channels --scale 127.5 --mean_values [127.5,127.5,127.5] --output_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\IR_Models" --model_name ssd_mobilenet_v2_fpnlite_FP32 --data_type=FP32

It seems I am getting these error messages:

C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer>python mo.py --saved_model_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\saved_model" --transformations_config "C:\Program Files (x86)\intel\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.4.json" --tensorflow_object_detection_api_pipeline_config "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\pipeline.config" --reverse_input_channels --scale 127.5 --mean_values [127.5,127.5,127.5] --output_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\IR_Models" --model_name ssd_mobilenet_v2_fpnlite_FP32 --data_type=FP32
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\IR_Models
- IR output name: ssd_mobilenet_v2_fpnlite_FP32
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: [127.5,127.5,127.5]
- Scale values: Not specified
- Scale factor: 127.5
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\pipeline.config
- Use the config file: None
- Inference Engine found in: C:\Program Files (x86)\intel\openvino_2021\python\python3.7\openvino
Inference Engine version: 2021.4.0-3839-cd81789d294-releases/2021/4
Model Optimizer version: 2021.4.0-3839-cd81789d294-releases/2021/4
2021-12-09 08:37:04.203719: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-12-09 08:37:04.203834: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
C:\Users\allensen\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-12-09 08:37:08.348186: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-12-09 08:37:08.349578: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2021-12-09 08:37:08.350523: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-12-09 08:37:08.355976: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: allensen-mobl3
2021-12-09 08:37:08.357734: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: allensen-mobl3
2021-12-09 08:37:08.358580: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-09 08:37:08.360041: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-12-09 08:37:20.513687: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-12-09 08:37:20.514136: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-12-09 08:37:20.524075: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-12-09 08:37:21.288462: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 9281 nodes (8872), 11502 edges (11086), time = 495.234ms.
function_optimizer: function_optimizer did nothing. time = 10.882ms.

[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>)":
[ ERROR ] Traceback (most recent call last):
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 276, in apply_transform
replacer.find_and_replace_pattern(graph)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\front\tf\replacement.py", line 36, in find_and_replace_pattern
self.transform_graph(graph, desc._replacement_desc['custom_attributes'])
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\extensions\front\tf\ObjectDetectionAPI.py", line 710, in transform_graph
assert len(start_nodes) >= 1
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\main.py", line 394, in main
ret_code = driver(argv)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\main.py", line 356, in driver
ret_res = emit_ir(prepare_ir(argv), argv)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\main.py", line 252, in prepare_ir
graph = unified_pipeline(argv)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\pipeline\unified.py", line 17, in unified_pipeline
class_registration.ClassType.BACK_REPLACER
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 328, in apply_replacements
apply_replacements_list(graph, replacers_order)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 318, in apply_replacements_list
num_transforms=len(replacers_order))
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\logger.py", line 111, in wrapper
function(*args, **kwargs)
File "C:\Program Files (x86)\intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 306, in apply_transform
)) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>)":

[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------

DarkHorse
Employee

Hello,

I managed to convert it using ssd_support_api_v2.0.json:

python mo_tf.py --saved_model_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\saved_model" --transformations_config="C:\Program Files (x86)\intel\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.0.json" --tensorflow_object_detection_api_pipeline_config="C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\Models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\pipeline.config" --output_dir "C:\Users\allensen\OneDrive - Intel Corporation\Documents\Intel\IR_Models" --model_name ssd_mobilenet_v2_fpnlite

 

I found the solution here:

https://github.com/openvinotoolkit/openvino/issues/8088

 

The ssd_support_api_v2.0.json file is for SSD topologies trained with the TensorFlow* Object Detection API versions 2.0 up to 2.3.X inclusive.
Meanwhile, ssd_support_api_v2.4.json is for SSD topologies trained with the TensorFlow* Object Detection API version 2.4 or higher.
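The version rule above can be sketched as a small helper (the function name is hypothetical; the JSON filenames are the ones shipped in the Model Optimizer extensions\front\tf directory):

```python
def pick_ssd_support_config(od_api_version: str) -> str:
    """Map a TensorFlow Object Detection API version string (e.g. "2.4.1")
    to the matching Model Optimizer transformations_config filename."""
    major, minor = (int(p) for p in od_api_version.split(".")[:2])
    if major != 2:
        raise ValueError("These configs cover the TF Object Detection API 2.x only")
    # API 2.0 up to 2.3.x -> ssd_support_api_v2.0.json
    if minor <= 3:
        return "ssd_support_api_v2.0.json"
    # API 2.4 or higher -> ssd_support_api_v2.4.json
    return "ssd_support_api_v2.4.json"
```

Picking the wrong file is what triggers the "assert len(start_nodes) >= 1" failure in ObjectDetectionAPIPreprocessor2Replacement seen above, since the replacer cannot find the preprocessing subgraph it expects.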


Thanks again.

Hairul_Intel
Moderator

Hi DarkHorse,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,

Hairul
