GPU FP16 inference bug in OpenVINO 2021.1

Hello!

https://github.com/TNTWEN/OpenVINO-YOLOV4

When I adapted this project to the latest OpenVINO version, it seemed that the GPU could not run inference correctly under FP16.

I've located the source of the error: it's the mish activation function:
activation_fn=lambda x: x * tf.math.tanh(tf.math.softplus(x))
The relu variant (yolov4-relu) works well, which points to mish.
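For reference, the same activation can be written as a standalone function (a NumPy sketch of the formula above, not the OpenVINO kernel itself):

```python
import numpy as np

def mish(x):
    # mish(x) = x * tanh(softplus(x)), the same formula as the TF lambda above
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish(np.array([-1.0, 0.0, 1.0])))
```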

TensorFlow 1.15.4 + OpenVINO 2021.1:
CPU + FP32/FP16: works well
GPU + FP32: works well
GPU + FP16: no bounding boxes!

We discuss this bug here:

TNTWEN/OpenVINO-YOLOV4#18

 

So I think it may be a bug in how the GPU executes the tanh and softplus functions in FP16.
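One possible mechanism (just my guess, illustrated with NumPy rather than the actual OpenVINO GPU kernel): a naive softplus overflows in float16, because exp(x) already exceeds the float16 maximum (65504) around x ≈ 11, so the softplus intermediate becomes infinite even where mish(x) ≈ x:

```python
import numpy as np

x = np.float16(12.0)

# Naive softplus: log1p(exp(x)). exp(12) ≈ 162755 overflows float16 to inf.
naive_softplus = np.log1p(np.exp(x))

# A common numerically stable rewrite keeps the intermediate finite:
# softplus(x) = max(x, 0) + log1p(exp(-|x|))
stable_softplus = np.maximum(x, np.float16(0)) + np.log1p(np.exp(-np.abs(x)))

print(naive_softplus, stable_softplus)
```

If an optimized kernel evaluates softplus naively in FP16, an overflow like this would also explain why the relu variant is unaffected.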

 

In OpenVINO 2020R4, yolov4 was slower than yolov3 because of the mish function.
But OpenVINO 2021.1 + TF 1.15.4 makes mish faster; now yolov4's speed is very close to yolov3's.
So I think OpenVINO 2021.1 optimizes the implementation of the mish (tanh and softplus) activation function, and I guess this optimization introduced the bug in GPU FP16.

 

Moderator

Hi Tianwen,


Thanks for reaching out to us. We are investigating the issue, and will get back to you.


Regards,

Munesh



Community Manager

Hi Tianwen,

We noticed you have reported the same issue on GitHub; we will continue to investigate and provide updates on this bug through GitHub. This thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.

 

Regards,

Aznie

 

