I am trying to run the YOLOv4 object detection model with the MS COCO dataset in DL Workbench. The initial FP32 runs show somewhere between 50-60 FPS once inference completes, which is a lot higher than what I get with the demo inference scripts. Is there a particular reason for this, or an obvious point that I am missing?
Hi @sovit ,
Inference results depend on many factors: the tool you are using, the load on your machine, how the OpenVINO API is used, and so on. Could you provide more details on the script you are using, or share the script itself?
DL Workbench uses the official benchmarking tool, the Benchmark Tool (https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html). The demo inference scripts are mainly for educational purposes, and their performance is not guaranteed.
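For reference, here is a rough sketch of why the two numbers can differ so much. It uses the pre-2022 openvino.inference_engine Python API; the IR file names (yolov4.xml / yolov4.bin), the CPU device, the dummy input, and the 4 requests / 100 iterations are just placeholder assumptions, not your exact setup. The Benchmark Tool keeps several inference requests in flight asynchronously, while most demo scripts run one synchronous request at a time and also spend time on decoding, preprocessing, and drawing, so their FPS is naturally lower.

from time import perf_counter

import numpy as np
from openvino.inference_engine import IECore

# Placeholder paths; point these at your converted YOLOv4 IR files.
ie = IECore()
net = ie.read_network(model="yolov4.xml", weights="yolov4.bin")
input_blob = next(iter(net.input_info))
_, c, h, w = net.input_info[input_blob].input_data.shape
dummy = np.zeros((1, c, h, w), dtype=np.float32)
niter = 100

# Synchronous loop, one request at a time -- roughly what most demo scripts do
# (minus decoding, preprocessing and drawing, which slow them down further).
exec_sync = ie.load_network(network=net, device_name="CPU")
start = perf_counter()
for _ in range(niter):
    exec_sync.infer(inputs={input_blob: dummy})
print("sync FPS:", niter / (perf_counter() - start))

# Asynchronous loop with several requests in flight -- closer to how the
# Benchmark Tool measures throughput by default.
num_requests = 4
exec_async = ie.load_network(network=net, device_name="CPU", num_requests=num_requests)
start = perf_counter()
for i in range(niter):
    slot = i % num_requests
    if i >= num_requests:
        exec_async.requests[slot].wait(-1)  # wait for the slot to finish before reusing it
    exec_async.start_async(request_id=slot, inputs={input_blob: dummy})
for req in exec_async.requests:
    req.wait(-1)
print("async FPS:", niter / (perf_counter() - start))

You can also run the Benchmark Tool directly from the command line (for example, benchmark_app -m yolov4.xml -d CPU) to reproduce the DL Workbench numbers outside of the demos.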
If you have any additional questions or proposals, please ask them at the official DL Workbench Discussion Forum: https://github.com/openvinotoolkit/workbench_feedback/discussions.
