When I adapted this project to the latest OpenVINO version, it seemed that the GPU could not run inference correctly under FP16.
I've located the error: it's the mish activation function:
activation_fn=lambda x: x * tf.math.tanh(tf.math.softplus(x))
I say this because:
yolov4-relu works well
CPU + FP32/FP16 works well
GPU + FP32 works well
GPU + FP16: no bounding boxes!
We talked about this bug here:
So I think it may be a bug in how the GPU plugin computes the tanh and softplus functions in FP16.
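One way to narrow this down (a minimal sketch, not part of the original report) is to compute mish on the CPU in both FP32 and FP16 with NumPy and compare the results. If the pure FP16 math stays finite and close to the FP32 reference, that supports the idea that the missing boxes come from the GPU plugin's FP16 kernels rather than from half precision itself:

```python
import numpy as np

def mish(x):
    # mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + exp(x))
    return x * np.tanh(np.log1p(np.exp(x)))

# Sample the typical pre-activation range of a conv layer
x = np.linspace(-10.0, 10.0, 9, dtype=np.float32)

ref = mish(x)                                        # FP32 reference
half = mish(x.astype(np.float16)).astype(np.float32) # same math in FP16

print(np.all(np.isfinite(half)))      # FP16 result stays finite here
print(np.max(np.abs(ref - half)))     # only a small precision gap
```

On this range the FP16 result is finite and agrees closely with FP32, which is consistent with the suspicion that the problem is in the GPU plugin's fused/optimized FP16 implementation, not in the mish formula.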
In OpenVINO 2020R4, yolov4 is slower than yolov3 because of the mish function.
But OpenVINO 2021.1 + TF 1.15.4 makes the mish function faster. Now the speed of yolov4 is very close to yolov3!
So I think OpenVINO 2021.1 optimizes the implementation of the mish (tanh and softplus) activation function.
I guess this is a bug in the GPU FP16 path that was introduced when the activation function was optimized.
We noticed you have reported the same issue on GitHub, we will continue to investigate and provide updates to this bug through GitHub. This thread will no longer be monitored. If you need any additional information from Intel, please submit a new question.