AR92
New Contributor I

Exception occurred during running replacer "ObjectDetectionAPIPreprocessorReplacement (<class 'exten


Hi, I trained my model with ssd_inception_v2_coco using TensorFlow 1.15.3 on a GPU machine running Ubuntu,

and I have exported my model as a frozen graph.

On my Windows 10 64-bit machine I set up the latest OpenVINO and tried running the Model Optimizer (with TensorFlow 1.15.3) using the command below:

python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_v2_support.json --input_shape [225,400]

I tried both ssd_support_api_v1.15.json and ssd_v2_support.json; the debug log is below.

In --input_shape, 225 is the height and 400 is the width.

Please suggest how I can make this work.

I have attached the error log; please have a look.


Accepted Solutions
Max_L_Intel
Moderator

Hi @AR92 

The cause might be an incorrect JSON configuration file. Could you kindly try it with ssd_support_api_v1.15.json instead of ssd_v2_support.json?

Also, please try adding the --reverse_input_channels parameter to your command.

Best regards, Max.
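For reference, the original command with both suggestions applied might look like this (a sketch only, keeping the other arguments from the original post unchanged; the JSON file ships with the Model Optimizer extensions):

```shell
# Sketch: same command as before, but with the v1.15 Object Detection API JSON config
# and --reverse_input_channels (swaps the input channel order, RGB <-> BGR) added.
python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_support_api_v1.15.json --input_shape [225,400] --reverse_input_channels
```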

 

5 Replies
AR92
New Contributor I

Hi, thanks for your reply. I got it working with the command below:

 

python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_support_api_v1.15.json --data_type FP16

 

I checked the generated .bin and .xml files with the Python sample application \openvino_2020.3.194\inference_engine\demos\python_demos\object_detection_demo_ssd_async, which is working.

I have another project which requires libcpu_extension.dll, and it gave an error. The sample projects are working, but I didn't find any libcpu_extension.dll file in my OpenVINO installation directory. Is it good to use for better performance? Please give me some guidance on how I can build it.

 

 

Thanks

 

Max_L_Intel
Moderator

Hi @AR92 

We are glad that the TensorFlow model conversion finally works for you. Thanks for reporting this back to the community!

With regards to libcpu_extension.dll: since the OpenVINO toolkit 2020.1 release, the CPU extensions library has been moved into the plugin itself (libMKLDNNPlugin.so). Please refer to the release notes for additional changes.
So you need to use this plugin file in your project. Or, if you have a hard dependency on the separate extensions library, you can try one of the previously released OpenVINO toolkit builds (e.g. 2019 R3).

Hope this helps.
Best regards, Max.
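To illustrate the difference (a sketch only; the model and input paths are placeholders, and the -l/--cpu_extension switch is the demo's optional custom-layers option):

```shell
# OpenVINO 2019 R3 and earlier: the separate CPU extensions library was passed
# to the demo explicitly via its -l option (path shown is illustrative).
python object_detection_demo_ssd_async.py -m frozen_inference_graph.xml -i input_video.mp4 -d CPU -l cpu_extension.dll

# OpenVINO 2020.1 and later: the CPU extensions live inside the MKLDNN plugin,
# so the same demo runs without any -l argument.
python object_detection_demo_ssd_async.py -m frozen_inference_graph.xml -i input_video.mp4 -d CPU
```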

 

AR92
New Contributor I
Hi, does libcpu_extension give better performance?

Thanks
Amit Rawat
Max_L_Intel
Moderator

Hi @AR92 

We don't have a direct performance comparison between CPU extensions implemented as a separate libcpu_extension library and those embedded within the plugin. However, later OpenVINO toolkit releases contain enhancements and fixes for various performance-degradation bugs compared to previous releases, so we always recommend using the latest available OpenVINO toolkit build.

For your reference, you can also take a look at the performance benchmark values for different CPU devices and models for the OpenVINO toolkit versions with CPU extensions implemented as a separate libcpu_extension (2019 R3) and embedded in the plugin (2020.3). I think you might be interested in the throughput values there.

Hope this helps.
Best regards, Max.