Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

using a batch for prediction

pkhan10
New Contributor I

Hello, OpenVINO supports dynamic batching.
On other platforms such as TensorFlow, when you pass a batch of images you get a batch of outputs; in OpenVINO the same behavior is achieved by enabling dynamic batching.

But dynamic batching doesn't work for all kinds of topologies.
My question is: how do I use batch prediction for models like MobileNet-SSD, Faster R-CNN, etc.?

3 Replies
Max_L_Intel
Moderator

Hello Prateek.

The OpenVINO toolkit supports dynamic batching, however only on CPU and GPU devices, and only for certain topologies made up of supported layers - please find more details here: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_DynamicBatching.html
So please make sure the model you use doesn't contain unsupported layers.
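
For illustration, here is a minimal sketch of how this is typically wired up with the Inference Engine Python API (the model paths and image data are placeholders, and API details may vary slightly between releases):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder paths

# Set the *maximum* batch size the plugin should prepare for
net.batch_size = 8

# DYN_BATCH_ENABLED turns dynamic batching on (CPU and GPU plugins only)
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"DYN_BATCH_ENABLED": "YES"},
                           num_requests=1)

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# Allocate the input at the maximum batch size and fill the first rows
data = np.zeros((n, c, h, w), dtype=np.float32)
data[:3] = np.random.rand(3, c, h, w)  # stand-in for 3 real images

# Tell this request to process only the first 3 items of the buffer
request = exec_net.requests[0]
request.set_batch(3)
request.infer({input_name: data})
```

For topologies where dynamic batching is not supported, the usual workarounds are to fix the batch size when converting the model (Model Optimizer's -b option) or to keep batch 1 and run several asynchronous infer requests in parallel for throughput.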

Best regards, Max.

pkhan10
New Contributor I

Hello,
Regarding batch prediction: in the benchmark tool we can define a batch size, and the output is the model's latency at that batch size.
How does that work? Can you please shed some light on it?

Max_L_Intel
Moderator

Hi @pkhan10 

Usually, batching improves throughput, although a high batch size comes with a latency penalty. Depending on your inference device, to achieve the best results we recommend trying different batch size values in combination with other parameters (such as -nstreams) in order to find a sweet spot - see the sketch after the links below.

Please see more details about it in the following articles:
Performance Topics - https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Intro_to_Performance.html
Optimization Guide - https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html
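
To illustrate, here is a rough sketch of the kind of sweep the benchmark tool performs - reload the network at each batch size, time synchronous inference, and compare per-iteration latency against images per second (paths are placeholders; this is an assumption of how you might reproduce the measurement, not the tool's actual code):

```python
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

for batch in (1, 2, 4, 8):
    net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder
    net.batch_size = batch  # static batch size for this run
    exec_net = ie.load_network(network=net, device_name="CPU")

    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    data = np.random.rand(*shape).astype(np.float32)  # dummy input

    exec_net.infer({input_name: data})  # warm-up run

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        exec_net.infer({input_name: data})
    latency = (time.perf_counter() - start) / runs

    # Latency per iteration grows with batch size, but images/s often
    # grows too, up to a device-dependent sweet spot
    print(f"batch={batch}: {latency * 1000:.1f} ms/iter, "
          f"{batch / latency:.1f} images/s")
```

In practice the benchmark_app sample automates this for you, e.g. benchmark_app -m model.xml -d CPU -b 8 -nstreams 2, where the -nstreams value pays off once multiple infer requests run in parallel.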

Hope this helps.
Best regards, Max.
