Hi, I'm using OpenVINO 2020.2.120 on the Intel Gordon Peak board (Yocto Linux) to run object detection inference on the ssdlite_mobilenet_v2_coco_2018_05_09 COCO model.
The problem is that loading the model in C (cl_cache is present) still takes ~2.045 seconds on the GPU and ~0.951 seconds on the CPU. Both are deal breakers for me, because I am calling the C code from Java via JNI, so I take the model load hit every time I need to infer an image.
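For reference, each JNI call currently repeats the full load, roughly like this (a simplified sketch of my flow; the model path and the infer_image name are placeholders, and error handling is omitted):

```c
#include <c_api/ie_c_api.h>

/* Called from Java through JNI for every image, so the
   whole core/network setup below is repeated each time. */
int infer_image(const unsigned char *image, size_t image_size) {
    ie_core_t *core = NULL;
    ie_network_t *network = NULL;
    ie_executable_network_t *exe_network = NULL;
    ie_infer_request_t *request = NULL;
    ie_config_t config = {NULL, NULL, NULL};

    ie_core_create("", &core);
    /* NULL weights path: the .bin next to the .xml is used. */
    ie_core_read_network(core, "ssdlite_mobilenet_v2.xml", NULL, &network);
    /* This call is where the ~2.045 s (GPU) / ~0.951 s (CPU) goes. */
    ie_core_load_network(core, network, "GPU", &config, &exe_network);
    ie_exec_network_create_infer_request(exe_network, &request);

    /* ... fill the input blob from `image`, call
       ie_infer_request_infer(request), read the output blob ... */

    ie_infer_request_free(&request);
    ie_exec_network_free(&exe_network);
    ie_network_free(&network);
    ie_core_free(&core);
    return 0;
}
```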
Does anyone know how I can speed up the model load time on the GPU or CPU?
Thanks!
Mil
Hi Mil.
Please check the Optimization Guide for methods to improve various performance characteristics, especially the CPU and GPU checklist sections. If you also care about pure performance numbers such as inference frames per second, you can run your model with the Benchmark Python Tool or Benchmark C++ Tool and compare your results against other devices on the following page to check whether they are adequate: https://docs.openvinotoolkit.org/latest/_docs_performance_benchmarks.html
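For example, a GPU run of the C++ tool on your model might look something like this (the model path is a placeholder):

```
./benchmark_app -m ssdlite_mobilenet_v2.xml -d GPU
```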
Also, please try the newest OpenVINO toolkit 2020.3 build, since it fixes some minor performance degradation bugs that were present in 2020.2.
Hope this helps.
Best regards, Max.