New Contributor I

Using a batch for prediction

Hello, OpenVINO supports dynamic batching.
Something similar exists on other platforms like TensorFlow, where when you pass a batch of images you get a batch of outputs;
in OpenVINO the same option is available by enabling dynamic batching.

But dynamic batching doesn't work for all kinds of topologies.
My question is: how can I use batch prediction for models like MobileNet-SSD, Faster RCNN, etc.?
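For context, this is roughly how I run a fixed (static) batch today with the Inference Engine Python API; the model paths, batch size, and preprocessing are placeholders:

import numpy as np
from openvino.inference_engine import IECore

BATCH = 4  # placeholder batch size

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR files

input_blob = next(iter(net.inputs))  # net.input_info on 2021+ releases
net.batch_size = BATCH  # reshape the network to a fixed batch

exec_net = ie.load_network(network=net, device_name="CPU")

n, c, h, w = net.inputs[input_blob].shape
batch = np.zeros((n, c, h, w), dtype=np.float32)
# ... fill batch[i] with each preprocessed (C, H, W) image ...

results = exec_net.infer(inputs={input_blob: batch})  # one output entry per sample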

Moderator

Hello Prateek.

The OpenVINO toolkit supports dynamic batching, but only on CPU and GPU devices, and only for certain topologies that contain the supported layers; please find more details here: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_DynamicBatching.html
So please make sure your model doesn't contain unsupported layers.
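For reference, here is a minimal sketch of enabling the feature from the Python API (file names and batch sizes are placeholders; on 2021+ releases net.inputs becomes net.input_info):

import numpy as np
from openvino.inference_engine import IECore

MAX_BATCH = 8  # upper bound; each request can use a smaller batch

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR files
net.batch_size = MAX_BATCH  # shape the network for the maximum batch

# DYN_BATCH_ENABLED turns dynamic batching on for this executable network
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"DYN_BATCH_ENABLED": "YES"})

input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
data = np.zeros((n, c, h, w), dtype=np.float32)
# ... fill the first `actual` slots with preprocessed images ...

request = exec_net.requests[0]
actual = 3                 # number of images actually present this time
request.set_batch(actual)  # only the first 3 samples are processed
request.infer({input_blob: data})
# read the outputs for the first `actual` samples from the request

If the topology contains unsupported layers, loading with this config is expected to fail.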

Best regards, Max.

New Contributor I

Hello,
Regarding batch prediction: in the benchmarking tool we can define a batch size, and the output is the model's latency at that batch size.
How does that work? Can you please shed some light on it?

Moderator

Hi @pkhan10 

Usually, batching improves throughput, although a high batch size comes with a latency penalty. Depending on your inference device, to achieve the best results we recommend trying different batch size values in combination with other parameters (such as -nstreams) in order to find a sweet spot.
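For example, a sweep with the benchmark app might look like this (the model path and the -b/-nstreams values are just a starting point to vary):

python benchmark_app.py -m model.xml -d CPU -b 4 -nstreams 2   # throughput-oriented run
python benchmark_app.py -m model.xml -d CPU -b 1 -nstreams 1   # latency-oriented baseline

Compare the latency and throughput reported by each run to pick the combination that fits your latency budget.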

Please see more details about it in the following articles:
Performance Topics - https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Intro_to_Performance.html
Optimization Guide - https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html

Hope this helps.
Best regards, Max.
