Hi,
I am using OpenVINO 2020.2.120 for single-image, optimized inference from TensorFlow models of both the Faster R-CNN and Mask R-CNN architectures (on CPU). For the Mask R-CNN architecture, the inference time for a single image comes to about 4960 ms, and for the Faster R-CNN architecture it comes to around 600 ms. Can somebody kindly give an idea of the inference times they got while running OpenVINO-optimized inference for TensorFlow models of the Faster R-CNN and/or Mask R-CNN architectures? Any help is highly appreciated. Thanks in advance!
[Note: the times above cover only the actual inference portion of the code. The code I used for Mask R-CNN is the demo under /opt/intel/openvino_2020.2.120/deployment_tools/open_model_zoo/demos/mask_rcnn_demo/, which is meant for TensorFlow models. I used the same code for Faster R-CNN as well, with some minor changes, because the object_detection_demo_faster_rcnn folder under the same path contains code for the Caffe (not TensorFlow) Faster R-CNN model.]
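For reference, here is a minimal sketch of how one might time only the inference call with the 2020.2-era Inference Engine Python API. The model paths, image path, and single-input preprocessing are placeholder assumptions for illustration; the Mask R-CNN / Faster R-CNN IRs converted from TensorFlow typically take an additional image-info input, which the demo code handles.

```python
# Minimal timing sketch for the 2020.2-era Inference Engine Python API.
# Model/image paths below are hypothetical; assumes a single-input network.
import time
import cv2
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR files
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

image = cv2.imread("image.jpg")                       # placeholder test image
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)   # HWC -> CHW
blob = blob.reshape(1, c, h, w)

start = time.perf_counter()
results = exec_net.infer(inputs={input_blob: blob})   # only the inference call is timed
elapsed_ms = (time.perf_counter() - start) * 1000
print("Inference time: %.1f ms" % elapsed_ms)
```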
Hi Nandita,
The following page, 'Get a Deep Learning Model Performance Boost with Intel® Platforms', contains benchmarks demonstrating high performance gains on several public neural networks for streamlined, quick deployment on Intel® CPU, VPU, and FPGA platforms:
https://docs.openvinotoolkit.org/2020.3/_docs_performance_benchmarks.html
Regards,
Munesh
