Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Error optimizing tensorflow2 model



I have trained this model with TensorFlow 2, and now I am trying to optimize it with the Model Optimizer using the command below:

 python --saved_model_dir E:\tensorflow_models\my_mobilenetv2\saved_model\ --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\my_mobilenetv2\pipeline.config --tensorflow_use_custom_operations_config ssd_v2_support.json --data_type FP32 --reverse_input_channels

but I got the error below:


Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      None
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2020.3.194\deployment_tools\model_optimizer\.
        - IR output name:       saved_model
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  True
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  E:\tensorflow_models\my_mobilenetv2\pipeline.config
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino_2020.3.194\deployment_tools\model_optimizer\ssd_v2_support.json
Model Optimizer version:
2020-09-04 18:57:47.481179: W tensorflow/stream_executor/platform/default/] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-09-04 18:57:47.481332: I tensorflow/stream_executor/cuda/] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-09-04 18:57:51.581332: W tensorflow/stream_executor/platform/default/] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-09-04 18:57:51.581465: E tensorflow/stream_executor/cuda/] failed call to cuInit: UNKNOWN ERROR (303)
2020-09-04 18:57:51.588243: I tensorflow/stream_executor/cuda/] retrieving CUDA diagnostic information for host: UECLAB1
2020-09-04 18:57:51.588455: I tensorflow/stream_executor/cuda/] hostname: UECLAB1
2020-09-04 18:57:51.589145: I tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class ''>): Unexpected exception happened during extracting attributes for node ssd_mobile_net_v2keras_feature_extractor/FeatureMaps/layer_19_2_Conv2d_5_3x3_s2_128_batchnorm/moving_variance/Read/ReadVariableOp.
Original exception message: 'ascii' codec can't decode byte 0x93 in position 183: ordinal not in range(128)



With --tensorflow_use_custom_operations_config, I also tried ssd_support_api_v1.15 instead of ssd_v2_support.json.

Previously I trained an SSD Inception V2 model with TensorFlow 1.15, and optimization worked for that model.

So, is TensorFlow 2 supported by OpenVINO?


Thanks & Regards

Amit Rawat 



OpenVINO does support TF2. You can refer here for more details:

The error 'ascii' codec can't decode byte 0x93 in position 183: ordinal not in range(128) indicates a Unicode decode error. Perhaps you can try these workarounds:
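Byte 0x93 is the Windows-1252 left curly quote ("smart quote"), which often sneaks into a text file such as pipeline.config when a value is pasted from a word processor. As a quick diagnostic sketch (not part of the original reply; the sample content is made up), you can scan the file's raw bytes for anything outside the ASCII range:

```python
def find_non_ascii(data: bytes):
    """Return (offset, byte_value) pairs for bytes outside the ASCII range."""
    return [(i, b) for i, b in enumerate(data) if b > 0x7F]

# Hypothetical pipeline.config fragment with cp1252 curly quotes (0x93/0x94)
# pasted in place of plain ASCII double quotes.
sample = b'fine_tune_checkpoint: \x93detection\x94'
print(find_non_ascii(sample))  # prints [(22, 147), (32, 148)]
```

In practice you would read the real file with `open(path, "rb")` and pass its contents to the function; the reported offsets tell you which line of the config to inspect.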




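One possible cleanup, assuming the stray bytes really are cp1252 curly quotes, is to decode the config as cp1252, replace the curly quotes with plain ASCII ones, and re-save it. This is a sketch, not an officially documented fix:

```python
def to_ascii_quotes(raw: bytes) -> str:
    """Decode cp1252 bytes and replace curly quotes with plain ASCII quotes."""
    text = raw.decode("cp1252")
    replacements = {
        "\u201c": '"',  # left double curly quote  (0x93 in cp1252)
        "\u201d": '"',  # right double curly quote (0x94 in cp1252)
        "\u2018": "'",  # left single curly quote
        "\u2019": "'",  # right single curly quote
    }
    for curly, plain in replacements.items():
        text = text.replace(curly, plain)
    return text

# Hypothetical config fragment; in practice, read pipeline.config in binary
# mode, clean it, and write it back with encoding="ascii".
print(to_ascii_quotes(b'type: \x93ssd_mobilenet_v2\x94'))  # prints type: "ssd_mobilenet_v2"
```

Writing the result back with `open(path, "w", encoding="ascii")` will raise an error if any other non-ASCII byte remains, which makes it a convenient final check before re-running the Model Optimizer.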

Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.