Beginner

Infer vs StartAsync

Hello...

Is there any performance (or other) difference between

....

inferRequest.Infer();

...

vs

...

inferRequest.StartAsync();

if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))

....

 

In the video course, the presenter says it is faster, but without explanation.

 

Moderator

Hi Sergey,


Thanks for reaching out to us.


The InferenceEngine::InferRequest class reference page is available at

https://docs.openvinotoolkit.org/2021.1/classInferenceEngine_1_1InferRequest.html


InferenceEngine::InferRequest::Infer infers the specified input(s) in synchronous mode.

https://docs.openvinotoolkit.org/2021.1/classInferenceEngine_1_1InferRequest.html#a3391ce30894abde73...


InferenceEngine::InferRequest::StartAsync starts inference of the specified input(s) in asynchronous mode.

https://docs.openvinotoolkit.org/2021.1/classInferenceEngine_1_1InferRequest.html#a405293e8423d82a5b...
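
Roughly speaking, StartAsync does not make a single inference run faster than Infer; it returns control to the calling thread immediately, so the application can overlap other work (or run several infer requests in parallel) with the device computation, which typically improves throughput. Here is a minimal sketch of both modes, assuming a network has already been loaded into an ExecutableNetwork (the names Run and executableNetwork are placeholders):

#include <inference_engine.hpp>

void Run(InferenceEngine::ExecutableNetwork &executableNetwork) {
    InferenceEngine::InferRequest inferRequest = executableNetwork.CreateInferRequest();

    // Synchronous mode: Infer() blocks the calling thread until the result is ready.
    inferRequest.Infer();

    // Asynchronous mode: StartAsync() returns immediately, so the host thread
    // can prepare the next input (or start more requests) while the device computes.
    inferRequest.StartAsync();
    // ... other work can overlap with the inference here ...
    if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY)) {
        // Handle the failure: Wait returned a StatusCode other than OK.
    }
}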


For more information, please refer to point 7 in the following section:

https://docs.openvinotoolkit.org/2021.1/openvino_docs_IE_DG_Integrate_with_customer_application_new_...


To estimate deep learning inference performance on supported devices, I would suggest you use the Benchmark C++ Tool. Performance can be measured in two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented). You can use the -api command-line parameter to select the inference mode.

https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
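
For example (hypothetical invocations; replace model.xml and the target device with your own):

./benchmark_app -m model.xml -d CPU -api sync
./benchmark_app -m model.xml -d CPU -api async

Comparing the reported latency and throughput of the two runs shows how much your model benefits from the asynchronous mode on your hardware.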



Regards,

Munesh

