Is there any performance (or other) difference between the synchronous inferRequest.Infer() call and the asynchronous inferRequest.StartAsync() followed by
if (InferenceEngine::OK != inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY))
In a video course, the presenter says the asynchronous version is faster, but gives no explanation.
Thanks for reaching out to us.
The InferenceEngine::InferRequest Class Reference page is available at
InferenceEngine::InferRequest::Infer infers the specified input(s) in synchronous mode, blocking until the result is ready.
InferenceEngine::InferRequest::StartAsync starts inference of the specified input(s) in asynchronous mode; the application can continue with other work and collect the result later with Wait. A single request is not computed any faster, but several asynchronous requests can be in flight at once, which improves overall throughput.
For more information, please refer to point 7 in the following section:
To estimate deep learning inference performance on supported devices, I would suggest using the Benchmark C++ Tool. Performance can be measured for two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented). You can use the -api command-line parameter to select the inference mode.
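For example, the two modes can be compared side by side with benchmark_app; the model path below is a placeholder for your own IR file:

```shell
# Latency-oriented run: requests are issued one at a time (synchronous API)
./benchmark_app -m model.xml -d CPU -api sync

# Throughput-oriented run: multiple requests kept in flight (asynchronous API)
./benchmark_app -m model.xml -d CPU -api async
```

Comparing the reported latency and throughput numbers from the two runs shows the same effect the question asks about.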
This thread will no longer be monitored since we have provided information and suggestions. If you need any additional information from Intel, please submit a new question.