Intel® Distribution of OpenVINO™ Toolkit

Accuracy drop when converting models with batchnorm layers using the Model Optimizer with the FP16 data type

Wig__Kai_A
Beginner

I experience an accuracy drop when using the Model Optimizer (mo.py) to convert models that use batchnorm layers, when the output data type is FP16. When the data type is FP32 there is no issue.

I first saw this with a custom YOLO model and our own test scripts, but I have also reproduced it with the mobilenet-ssd model from the model zoo, scoring on the Pascal VOC 2007 test set.

link: https://github.com/chuanqi305/ssd

In both cases a Caffe model was used as input to the Model Optimizer. The issue is seen when running on the MYRIAD device (Myriad X).
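
For reference, the converted model is run on the stick with the Inference Engine Python API, roughly like this (a sketch with placeholder paths and a dummy input, not our exact test script):

    import numpy as np
    from openvino.inference_engine import IENetwork, IECore

    # Load the FP16 IR produced by the Model Optimizer (paths are placeholders)
    net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
    ie = IECore()
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape
    dummy = np.zeros((n, c, h, w), dtype=np.float32)  # stand-in for a preprocessed frame
    result = exec_net.infer(inputs={input_blob: dummy})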

By default the Model Optimizer fuses batchnorm layers, and I suspect that, combined with the FP16 data type, something goes wrong there. When fusing is disabled with the "--disable_fusing" option the output score is fine, but then of course the latency is roughly twice as high. I test with the confidence_threshold in deploy.prototxt set to 0.01 so that the whole confidence range is scored; the difference in score between the fused and unfused models is then even bigger.
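
The conversions were done roughly as follows (file names are placeholders for the actual models):

    python mo.py --input_model MobileNetSSD_deploy.caffemodel --input_proto deploy.prototxt --data_type FP16

    # same conversion, but with batchnorm fusing disabled
    python mo.py --input_model MobileNetSSD_deploy.caffemodel --input_proto deploy.prototxt --data_type FP16 --disable_fusing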

Mobilenet SSD, Pascal VOC 2007, FP16 on MYRIAD without fusing: 74.01 mAP

Mobilenet SSD, Pascal VOC 2007, FP16 on MYRIAD with fusing: 73.38 mAP

Shubha_R_Intel
Employee

Dear Wig, Kai A,

This is a very interesting problem. Please give me some time to investigate and debug it.

I will report back on this forum.

Thanks,

Shubha

Wig__Kai_A
Beginner

Thanks for looking into this issue. Please let me know if you are not able to reproduce it.

Thanks 

-Kai

Shubha_R_Intel
Employee

Dear Wig, Kai A,

I am sorry, but I haven't had a chance to reproduce this issue yet. However, we just released OpenVINO 2019 R3. Can you kindly try again? Perhaps the problem is fixed in R3.

Let me know,

Thanks,

Shubha

Wig__Kai_A
Beginner

Hi Shubha, so far I have tested the R1, R2, and R3 releases. The issue is seen on all of them.

I can add that I also tested performing the batchnorm fusing "manually" before running the Model Optimizer. Again there is an accuracy drop when running on the compute stick. I guess that means the issue might lie in either the Myriad plugin or execution on the device?
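
For clarity, the "manual" fusing folds the batchnorm parameters into the preceding convolution in the standard way, along these lines (a sketch with illustrative names, not our actual script):

    import numpy as np

    def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
        # W: conv weights (out_ch, in_ch, kH, kW); b: conv bias (out_ch,), zeros if absent
        # gamma, beta, mean, var: per-channel BN parameters, each of shape (out_ch,)
        scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
        W_folded = W * scale[:, None, None, None]   # scale each output filter
        b_folded = (b - mean) * scale + beta        # fold the shift into the bias
        return W_folded, b_folded

In exact arithmetic this is equivalent to the unfused network, which is why I suspect the precision or execution on the device rather than the fusing transform itself.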

-Kai

Shubha_R_Intel
Employee

Dear Wig, Kai A,

Thanks for your due diligence in testing across R1, R2, and R3. I really appreciate it. 

Let me investigate this further. I will post my findings here.

Sincerely,

Shubha

Wig__Kai_A
Beginner

Hi, have you had time to investigate?

Thanks, Kai
