Hi, I trained my model with ssd_inception_v2_coco using TensorFlow 1.15.3 on a GPU machine running Ubuntu, and exported it as a frozen graph.
On my Windows 10 64-bit machine I set up the latest OpenVINO release and ran the Model Optimizer with the command below (TensorFlow 1.15.3):
python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_v2_support.json --input_shape [225,400]
In --input_shape, 225 is the height and 400 is the width.
I tried both ssd_support_api_v1.15.json and ssd_v2_support.json; the debug log is below.
Please suggest how I can make this work. I have attached the error log; please have a look.
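(For reference, a possible cause: for TensorFlow Object Detection API models the Model Optimizer generally expects the full 4-D input shape in NHWC order, not just height and width. A hedged sketch using the same paths as the command above, with the batch and channel dimensions assumed:)

```shell
:: Sketch, not a verified fix: --input_shape for a TF OD API SSD usually
:: takes the full NHWC shape [batch,height,width,channels], e.g. [1,225,400,3].
python mo_tf.py ^
  --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb ^
  --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config ^
  --tensorflow_use_custom_operations_config ssd_support_api_v1.15.json ^
  --input_shape [1,225,400,3]
```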
- Tags:
- OpenVINO
Hi, thanks for your reply. I got it working with the command below:
python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_support_api_v1.15.json --data_type FP16
I checked the generated .bin and .xml files with the Python sample application \openvino_2020.3.194\inference_engine\demos\python_demos\object_detection_demo_ssd_async, which works.
I have another project that requires libcpu_extension.dll and fails with an error. The sample projects work, but I can't find any libcpu_extension.dll file in my OpenVINO installation directory. Is it good to use for better performance? Please give some guidance on how I can build it.
Thanks
Hi @AR92
We are glad that the TensorFlow model conversion finally works for you. Thanks for reporting this back to the community!
Regarding libcpu_extension.dll: since the OpenVINO toolkit 2020.1 release, the CPU extensions library has been moved into the CPU plugin itself (libMKLDNNPlugin.so). Please refer to the release notes for additional changes.
So you need to use this plugin file in your project. Alternatively, if you have a hard dependency on the separate extensions library, you can try one of the previously released OpenVINO toolkit builds (e.g. 2019 R3).
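In practice this means the demos run without the old -l cpu_extension flag. A hedged sketch (the plugin path below is an assumption based on a default Windows install; the model file name comes from the conversion above):

```shell
:: Since 2020.1 the CPU extensions are built into the CPU plugin
:: (MKLDNNPlugin.dll on Windows, libMKLDNNPlugin.so on Linux), e.g.:
::   <install_dir>\deployment_tools\inference_engine\bin\intel64\Release\MKLDNNPlugin.dll
:: So the demo is launched without -l cpu_extension:
python object_detection_demo_ssd_async.py -m frozen_inference_graph.xml -i input.mp4 -d CPU
```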
Hope this helps.
Best regards, Max.
Thanks
Amit Rawat
Hi @AR92
We don't have a direct performance comparison between the CPU extensions implemented as a separate libcpu_extension library and those embedded in the plugin, but later OpenVINO toolkit releases include performance enhancements and fixes for performance regressions compared to earlier releases. Hence we always recommend using the latest available OpenVINO toolkit build.
For reference, you can also look at the published performance benchmark values for different CPU devices and models for the OpenVINO toolkit versions with CPU extensions implemented as a separate libcpu_extension (2019 R3) and embedded in the plugin (2020.3). The throughput values there may be of particular interest.
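If you want to measure throughput on your own model and hardware rather than rely on published numbers, the toolkit ships a benchmark tool. A sketch, assuming the IR file name from the conversion above and a default install layout:

```shell
:: benchmark_app lives under deployment_tools\tools\benchmark_tool in the
:: OpenVINO install directory; -api async reports throughput (FPS) on CPU.
python benchmark_app.py -m frozen_inference_graph.xml -d CPU -api async
```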
Hope this helps.
Best regards, Max.