Hi,
I trained a YOLOv3 model (not much different from the official one) on my own dataset. The model was successfully converted to OpenVINO IR following the official guide. However, its inference performance is much worse than in Darknet. Has anyone successfully implemented a YOLOv3 model in OpenVINO without much accuracy loss?
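For context, the conversion followed the flow from the official guide, which at the time looked roughly like this (file names are the defaults from the guide and the mystic123/tensorflow-yolo-v3 repo, shown only as a sketch, not my literal commands):
# 1) Darknet weights -> frozen TensorFlow graph (script from tensorflow-yolo-v3)
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights
# 2) Frozen graph -> IR (yolo_v3.json ships with the Model Optimizer)
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config extensions/front/tf/yolo_v3.json --batch 1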
cheers
fucheng
Hello Fucheng,
Could we have more details, please? When you mention inference performance, I believe you mean accuracy, not inference speed.
Are you running on CPU or GPU, FP16 or FP32?
Are you using the tiny variant?
Which SDK, R4 or R5?
Which OS, Windows or Linux?
What input size: 608, 416, or 320?
I just compared the new SDK R5 accuracy against the reference and did not notice much discrepancy. How are you measuring accuracy?
Thanks,
Nikos
nikos wrote: Could we have more details please? [...] How are you measuring accuracy?
Hi Nikos,
Yeah, the inference accuracy is bad. I am running on CPU with FP32, SDK R4, input size 416, on Ubuntu 16.04 (Core i5-7200U). My model is not much different from the original YOLOv3; it just has fewer layers.
I noticed that the new SDK R5 has been released. Great work. Is SDK R5 better for Darknet models? I will try it.
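One thing I am also double-checking is preprocessing, since Darknet letterboxes the input (aspect-preserving resize with gray padding) and feeds RGB pixels scaled to [0, 1]; a mismatch here alone can cost a lot of accuracy. A minimal sketch of matching that on the OpenVINO side (names are illustrative, not from any sample):
import cv2
import numpy as np

def letterbox_blob(image_bgr, size=416):
    # Aspect-preserving resize with gray padding, as Darknet does.
    h, w = image_bgr.shape[:2]
    scale = min(size / w, size / h)
    nw, nh = int(round(w * scale)), int(round(h * scale))
    canvas = np.full((size, size, 3), 128, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = cv2.resize(image_bgr, (nw, nh))
    # BGR -> RGB, scale to [0, 1], HWC -> NCHW as the IR expects.
    blob = canvas[:, :, ::-1].astype(np.float32) / 255.0
    return blob.transpose(2, 0, 1)[np.newaxis, ...]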
> Is SDK R5 better for Darknet models? I will try it
I think R5 was slightly faster than R4 on the CPU path (possibly due to MKL-DNN optimizations). Still, most CPUs will only get you 3 to 5 fps for 608x608 YOLOv3.
Tiny YOLOv3 will run much faster (about 85 fps on my CPU), so it may be a good option if you need fast inference speeds. Tiny with FP16 will also run on an NCS2 at about 20 fps, or at around 100 fps on many GT2 GPUs.
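If you want to try other targets, switching devices in the current Python API is just a plugin choice. A minimal sketch, assuming the R4/R5-era API and an FP16 tiny IR (file names are placeholders):
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="yolov3-tiny.xml", weights="yolov3-tiny.bin")
plugin = IEPlugin(device="MYRIAD")  # or "CPU" / "GPU"; use an FP16 IR for MYRIAD/GPU
exec_net = plugin.load(network=net)
# result = exec_net.infer(inputs={next(iter(net.inputs)): blob})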
Cheers,
Nikos
nikos wrote: I think R5 was slightly faster than R4 on CPU path (could be MKLDNN optimizations). [...]
Thanks. I am still working on the accuracy loss problem. I found that the TensorFlow model converted from Darknet performs badly, so I would expect it to perform badly in OpenVINO as well. I also tested the Caffe model in OpenVINO; it works fine, without accuracy loss. There is a memory leak on GPU, but not on CPU. I tested both R4 and R5; the leak is much worse in R4. I noticed it is claimed that the bug is fixed in the newly released R5, but it is not completely fixed.
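In case anyone wants to reproduce this: a crude way to watch for a leak is to run inference in a loop and sample the process RSS (this only catches leaks visible in host memory; exec_net, input_name and blob stand for an already-loaded network and a preprocessed input):
import os
import psutil

proc = psutil.Process(os.getpid())
for i in range(10000):
    exec_net.infer(inputs={input_name: blob})  # already-loaded ExecutableNetwork
    if i % 500 == 0:
        print("iter %d: RSS %.1f MB" % (i, proc.memory_info().rss / 1e6))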
Hello Fucheng,
Thank you for the update! Sorry, I assumed that by "performance" you meant speed.
I am also working on evaluating TensorFlow YOLOv3 accuracy, and I am seeing the same issue in my applications too. BTW, I have not seen the memory leak in R5 yet; I will keep an eye on it.
Thanks,
Nikos
I measured mAP using this model (mystic123/tensorflow-yolo-v3). The measurement was performed by modifying OpenVINO's validation_app. However, the resulting value differs from the paper. Is there a problem with this model?
Please let me know if you notice anything.
mAP measurement conditions:
YOLOv3-416@IoU=0.5
OpenVINO validation_app result:
[ INFO ] InferenceEngine:
API version ............ 1.6
Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.6
Build .................. 22443
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Device: CPU
[ INFO ] Collecting VOC annotations from /home/dla/sumi/coco/annotations_pascalformat
[ INFO ] 5000 annotations collected
[ INFO ] Starting inference
Progress: [....................] 100.00% done
[ INFO ] Processing output blobs
[ INFO ] Inference report:
Network load time: 112.53ms
Model: mo/yolo_v3.xml
Model Precision: FP32
Batch size: 1
Validation dataset: /home/dla/sumi
Validation approach: Object detection network
[ INFO ] Average infer time (ms): 280.48 (3.56532655 images per second with batch size = 1)
Average precision per class table:
Class AP
1 0.329
2 0.268
3 0.173
4 0.426
5 0.618
6 0.618
7 0.812
8 0.355
9 0.213
10 0.091
11 0.453
12 0.452
13 0.347
14 0.320
15 0.210
16 0.848
17 0.632
18 0.492
19 0.301
20 0.291
21 0.582
22 0.729
23 0.543
24 0.632
25 0.117
26 0.350
27 0.099
28 0.179
29 0.297
30 0.182
31 0.197
32 0.257
33 0.091
34 0.157
35 0.258
36 0.091
37 0.356
38 0.312
39 0.312
40 0.091
41 0.165
42 0.174
43 0.305
44 0.159
45 0.130
46 0.336
47 0.191
48 0.122
49 0.310
50 0.151
51 0.212
52 0.091
53 0.216
54 0.410
55 0.166
56 0.323
57 0.211
58 0.671
59 0.215
60 0.706
61 0.460
62 0.690
63 0.504
64 0.620
65 0.165
66 0.091
67 0.575
68 0.208
69 0.403
70 0.510
71 0.036
72 0.378
73 0.565
74 0.091
75 0.264
76 0.219
77 0.429
78 0.481
79 0.051
80 0.169
Mean Average Precision (mAP): 0.3282
Dear Atsunori, Sumi and everyone,
Please try OpenVINO 2019 R3, which was just released today. YOLOv3 has been improved in R3!
Thanks,
Shubha
Dear Shubha R,
I re-created the IR in the R3 environment, recompiled validation_app, and ran the app, but the accuracy result was still the same (mAP = 0.3282). I think the cause of the accuracy loss is the model itself (mystic123).
----------------------
[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. custom_releases/2019/R3_cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 2.1
Build .................. 30677
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Device: CPU
[ INFO ] Collecting VOC annotations from /home/dla/sumi/coco/annotations_pascalformat
[ INFO ] 5000 annotations collected
[ INFO ] Starting inference
Progress: [....................] 100.00% done
[ INFO ] Processing output blobs
[ INFO ] Inference report:
Network load time: 111.64ms
Model: ./yolo_v3.xml
Model Precision: FP32
Batch size: 1
Validation dataset: /home/dla/sumi
Validation approach: Object detection network
[ INFO ] Average infer time (ms): 220.38 (4.53763972 images per second with batch size = 1)
Average precision per class table:
Class AP
1 0.329
2 0.268
3 0.173
4 0.426
5 0.618
6 0.618
7 0.812
8 0.355
9 0.214
10 0.091
11 0.453
12 0.452
13 0.347
14 0.320
15 0.210
16 0.848
17 0.632
18 0.492
19 0.301
20 0.291
21 0.582
22 0.729
23 0.543
24 0.632
25 0.117
26 0.350
27 0.099
28 0.179
29 0.297
30 0.182
31 0.197
32 0.257
33 0.091
34 0.157
35 0.258
36 0.091
37 0.356
38 0.312
39 0.312
40 0.091
41 0.165
42 0.174
43 0.305
44 0.159
45 0.130
46 0.336
47 0.191
48 0.122
49 0.310
50 0.151
51 0.212
52 0.091
53 0.216
54 0.410
55 0.166
56 0.323
57 0.211
58 0.671
59 0.215
60 0.706
61 0.460
62 0.690
63 0.504
64 0.620
65 0.165
66 0.091
67 0.575
68 0.208
69 0.403
70 0.510
71 0.036
72 0.378
73 0.565
74 0.091
75 0.264
76 0.219
77 0.429
78 0.481
79 0.051
80 0.169
Mean Average Precision (mAP): 0.3282
Dear Atsunori, Sumi,
Your API version still shows "API version ............ 2.1", which means that you are not using OpenVINO 2019 R3. Please regenerate the IR and also recompile your inference code under R3.
How does mAP look with some other tool (not OpenVINO)?
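(If it helps, pycocotools is one way to cross-check, assuming the detections are dumped to a COCO-format results JSON; paths below are placeholders:)
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
dt = gt.loadRes("yolo_v3_detections.json")       # [{image_id, category_id, bbox, score}, ...]
ev = COCOeval(gt, dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP@[0.50:0.95] and AP@0.50 for comparison with the paper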
Thanks,
Shubha
Dear Shubha R,
As for the OpenVINO (2019 R3) version, I installed it according to the manual, and there was no problem. The environment I am using should be R3:
$ ls -lt /opt/intel/openvino
lrwxrwxrwx 1 root root 30 Oct 7 20:35 /opt/intel/openvino -> /opt/intel/openvino_2019.3.334
$ cat /opt/intel/openvino/inference_engine/version.txt
Mon Sep 16 23:38:25 MSK 2019
cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
By the way, could you tell me the expected value of "API version"?
For mAP measurement, validation_app was modified to support YOLOv3.
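As a sanity check, the plugin build can also be queried from Python (IECore is available from 2019 R1 onward); judging from the logs above, it is the build number, not the API version, that changes between releases:
from openvino.inference_engine import IECore

ie = IECore()
for device, v in ie.get_versions("CPU").items():
    print(device, v.major, v.minor, v.build_number)
# On this R3 install it should report API 2.1 with build 30677.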
Shubha R. (Intel) wrote: Your API version still shows API version ............ 2.1 which means that you are not using OpenVino 2019R3. [...]
I have the same error, and I have updated to R3 too, but the info still shows:
InferenceEngine:
API version ............ 2.1
Build .................. 30677
Description ....... API
What should I do?
Atsunori, Sumi wrote: I re-created the IR in the R3 environment, compiled validation_app, and ran the app, but the accuracy result was still the same (mAP = 0.3282). [...]
Hi,
Where is the "validation_app"
I can not found it,
Can you tell me?
Thanks
I don't know why, but validation_app disappeared in 2019 R3.
I am measuring mAP by modifying the one from the 2019 R1 samples.
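(It looks like the Accuracy Checker tool from the Open Model Zoo is meant to replace it; a rough sketch of an invocation, with placeholder config and paths:)
accuracy_check -c configs/yolo_v3.yml -m ./models -s ./datasets -a ./annotations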
Thanks
Atsunori, Sumi wrote: I don't know why, validation_app disappeared from 2019R3. [...]
Yep,
I also found that it is not in R2 or R3.
Thanks for your reply.