Intel® Distribution of OpenVINO™ Toolkit

Yolo v3 performance on different datasets

Leini__Mikk

I'd like to use a Raspberry Pi 3B and an Intel NCS2 (Myriad chip) to do object detection with YOLOv3 on the Open Images dataset. I used the OpenVINO R5 Python example for YOLOv3 (object_detection_demo_yolov3.py). I downloaded the Darknet pretrained weights, converted them to a TensorFlow frozen protobuf, and then converted that to IR format with the Model Optimizer. The problem is that inference with the Open Images model takes ~3000 ms, while with the COCO model it takes ~820 ms. Note: I modified the time calculation, since the original example did not include the time spent waiting for the async result.
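The change is roughly like this (a minimal sketch, not my exact code; exec_net and input_blob are the objects from the sample, and preprocessing of in_frame is omitted):

import time

def timed_infer(exec_net, input_blob, in_frame, request_id=0):
    # Submit the request asynchronously, exactly as the demo does...
    start = time.time()
    exec_net.start_async(request_id=request_id, inputs={input_blob: in_frame})
    # ...but stop the clock only after waiting for the result, so the
    # measured time covers the whole inference, not just the submission.
    status = exec_net.requests[request_id].wait(-1)
    elapsed_ms = (time.time() - start) * 1000
    if status == 0:  # 0 == OK in the Inference Engine Python API
        return exec_net.requests[request_id].outputs, elapsed_ms
    return None, elapsed_ms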

Anyway, Open Images has 601 classes and COCO has 80, but that only affects a few layers, the ones feeding the Yolo layers. The Darknet implementation reports 148.781 BFLOPS for the Open Images model and 140.692 BFLOPS for the COCO one. That is not a big enough FLOPS difference to explain a roughly 3.5x difference in inference time (see the back-of-the-envelope check below). Is there some explanation for this?
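To put numbers on that: the only class-dependent layers in YOLOv3 are the three 1x1 detection convolutions with 3 * (classes + 5) filters each. A rough calculation (a sketch, assuming the default 608x608 input size from the darknet cfg files) shows those layers account for essentially the whole ~8 BFLOPS gap:

def head_gflops(num_classes, input_size=608,
                in_channels=(1024, 512, 256), strides=(32, 16, 8)):
    # Each detection conv is a 1x1 convolution with 3 * (classes + 5) filters.
    filters = 3 * (num_classes + 5)          # 255 for COCO, 1818 for Open Images
    total = 0.0
    for c_in, stride in zip(in_channels, strides):
        hw = (input_size // stride) ** 2     # 19x19, 38x38, 76x76 grids at 608x608
        total += 2.0 * hw * c_in * filters   # multiply-add counted as 2 FLOPs
    return total / 1e9

print(head_gflops(80))                     # ~1.3 BFLOPS (COCO heads)
print(head_gflops(601))                    # ~9.4 BFLOPS (Open Images heads)
print(head_gflops(601) - head_gflops(80))  # ~8.1 BFLOPS, matching the darknet report

So the Open Images model should only cost about 6% more FLOPS overall, nowhere near 3.5x.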

Could it be that those Yolo (RegionYolo) layers are computed on the CPU and not on the SHAVE cores, even though https://software.intel.com/en-us/articles/OpenVINO-InferEngine#inpage-nav-10-2-1 says RegionYolo is supported on Myriad?
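One way I could imagine checking this (a sketch I have not verified on the NCS2; it assumes the executable network was loaded with config={"PERF_COUNT": "YES"} and uses get_perf_counts() from the Inference Engine Python API) would be to dump the per-layer performance counters after an inference:

def dump_layer_stats(exec_net, request_id=0):
    # Requires the network to be loaded with config={"PERF_COUNT": "YES"}.
    perf = exec_net.requests[request_id].get_perf_counts()
    for name, stats in perf.items():
        # 'exec_type' and 'status' show how a layer was executed,
        # 'real_time' is the time in microseconds spent on it.
        print("{:<50} {:<15} {:<12} {:>10} us".format(
            name, stats["layer_type"], stats["status"], stats["real_time"]))

In newer releases IECore.query_network(net, "MYRIAD") should also show which layers the plugin claims to support, but I have not tried that on R5.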
