[Sorry I did not get a notification for the reply to the previous thread "Openvino Benchmark Performance". So starting a new thread. Thanks a lot to Aznie for getting back to me in the previous thread.]
I am using OpenVINO to benchmark an edge device that my lab gave me for research purposes. It has a CPU and a VPU for accelerating the inference of AI applications.
Link to the CPU spec used in the device: https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-...
I am supposed to use the OpenVINO Python benchmark application to measure the throughput and latency of the device.
However, while running the benchmark application, I saw that the numbers I am getting are much lower than the performance figures mentioned here.
I am using model weights made available by the OpenVINO Model Zoo, to rule out any problems in the model conversion step.
Model name: resnet_v1-50
Throughput: 45.10 fps
Latency: 88.42 ms
Model name: ssd_mobilenet_v1_coco_2018_01_28
Throughput: 99.00 fps
Latency: 39.95 ms
Could you point me in the direction of improving the scores? Thanks in advance.
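For reference, my invocation was roughly along these lines (the model path, device names, and run duration below are placeholders, not my exact values):

```shell
# Sketch of a typical benchmark_app run; paths and device names are placeholders.
# Throughput-oriented run (async API) on the CPU:
python3 benchmark_app.py \
    -m /path/to/resnet_v1-50.xml \
    -d CPU \
    -api async \
    -t 60

# Latency-oriented run (sync API) on the VPU (MYRIAD plugin):
python3 benchmark_app.py \
    -m /path/to/resnet_v1-50.xml \
    -d MYRIAD \
    -api sync \
    -t 60
```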
In that case, I suggest that you research and try the Post-Training Optimization Toolkit (POT), which is designed to accelerate the inference of deep learning models by applying special methods, such as post-training quantization, without model retraining or fine-tuning.
You may refer here: https://docs.openvinotoolkit.org/latest/pot_README.html
Make sure to follow the installation guide before proceeding further: https://docs.openvinotoolkit.org/latest/pot_InstallationGuide.html
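As a starting point, a POT configuration for default 8-bit post-training quantization looks roughly like the sketch below; the model paths, model name, calibration data source, and subset size are placeholder assumptions you would need to adapt (see the POT README above for the full schema):

```json
{
    "model": {
        "model_name": "resnet_v1-50",
        "model": "/path/to/resnet_v1-50.xml",
        "weights": "/path/to/resnet_v1-50.bin"
    },
    "engine": {
        "type": "simplified",
        "data_source": "/path/to/calibration_images"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```

You would then run the quantization with `pot -c <your_config>.json` and benchmark the resulting INT8 IR files the same way as the FP32 model.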
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.