Hello!
https://github.com/TNTWEN/OpenVINO-YOLOV4
When I adapted this project to the latest OpenVINO version, GPU inference produced incorrect results under FP16.
I have traced the error to the mish activation function: activation_fn=lambda x: x * tf.math.tanh(tf.math.softplus(x))
yolov4-relu works well, so the rest of the network is not the problem.
TensorFlow 1.15.4 + OpenVINO 2021.1
- CPU + FP32/FP16: works well
- GPU + FP32: works well
- GPU + FP16: no bounding boxes!!
We discussed this bug here:
So I think it may be a bug in how the GPU plugin computes the tanh and softplus functions in FP16.
In OpenVINO 2020R4, yolov4 was slower than yolov3 because of the mish function.
But with OpenVINO 2021.1 + TF 1.15.4, mish is faster, and the speed of yolov4 is now very close to yolov3.
So I think OpenVINO 2021.1 optimizes the implementation of the mish (tanh and softplus) activation function,
and I guess the GPU FP16 path introduced a bug in that optimization.
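To illustrate why mish can be fragile in low precision, here is a plain NumPy sketch (my own illustration, not the OpenVINO GPU kernel; the function names and the "stable" variant are hypothetical). The naive form from the TF lambda computes exp(x) directly inside softplus, which overflows for moderately large x in half precision; a rearranged softplus avoids the overflow while computing the same value:

```python
import numpy as np

def mish_naive(x):
    # Direct translation of the TF lambda: x * tanh(softplus(x)),
    # with softplus written as log(1 + exp(x)). The intermediate
    # exp(x) can overflow in low-precision arithmetic.
    return x * np.tanh(np.log1p(np.exp(x)))

def mish_stable(x):
    # Overflow-safe softplus rearrangement:
    # softplus(x) = max(x, 0) + log(1 + exp(-|x|)),
    # so the exp argument is never positive and cannot overflow.
    sp = np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))
    return x * np.tanh(sp)

x = np.linspace(-20.0, 20.0, 9).astype(np.float32)
print(np.allclose(mish_naive(x), mish_stable(x), atol=1e-5))
```

Both variants agree in FP32; a kernel fused along the naive formulation is the kind of place an FP16-specific bug could hide, which is consistent with CPU FP16 working while GPU FP16 fails.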
Hi Tianwen,
Thanks for reaching out to us. We are investigating the issue and will get back to you.
Regards,
Munesh
Hi Tianwen,
We noticed you have reported the same issue on GitHub; we will continue to investigate and provide updates on this bug through GitHub. This thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie