Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Async inference results

Lucho
New Contributor I

Hi,

We are working with openvino_2021.4.689 and Python, and we are not able to get the same results after switching from synchronous to asynchronous inference.

 

Synchronous inference:

 

face_neural_net = ie.read_network(model=face_model_xml, weights=face_model_bin)
if face_neural_net is not None:
    face_input_blob = next(iter(face_neural_net.input_info))
    face_neural_net.batch_size = 1
    face_execution_net = ie.load_network(
        network=face_neural_net, device_name=device.upper()
    )

face_blob = cv2.dnn.blobFromImage(
    frame, size=(MODEL_FRAME_SIZE, MODEL_FRAME_SIZE), ddepth=cv2.CV_8U
)
face_results = face_execution_net.infer(inputs={face_input_blob: face_blob})

 

 

Asynchronous inference:

 

face_neural_net = ie.read_network(model=face_model_xml, weights=face_model_bin)
if face_neural_net is not None:
    face_input_blob = next(iter(face_neural_net.input_info))
    face_output_blob = next(iter(face_neural_net.outputs))
    face_neural_net.batch_size = 1
    face_execution_net = ie.load_network(
        network=face_neural_net, device_name=device.upper(), num_requests=0
    )

face_blob = cv2.dnn.blobFromImage(
    frame, size=(MODEL_FRAME_SIZE, MODEL_FRAME_SIZE), ddepth=cv2.CV_8U
)

face_execution_net.requests[0].async_infer({face_input_blob: face_blob})

while face_execution_net.requests[0].wait(0) != StatusCode.OK:
    sleep(1)

face_results = face_execution_net.requests[0].output_blobs[face_output_blob].buffer

 

 

While the sync inference returns a dictionary with the expected results, the async inference only gives us an array with the confidence level for each detection.

 

The model output:

 

[attached image: output.png]

 

Maybe it is related to the output blob configuration, but we were not able to find an example.

 

Thanks and Regards,

 

Luciano

IntelSupport
Community Manager

Hi Lucho,

 

Thanks for reaching out.

 

The face_detection.py script is not official from our developers and has never been validated on our side, so we cannot verify the expected output for the inference. Basically, when the application runs in synchronous mode, it creates one infer request and executes the infer method. If you run the application in asynchronous mode, it creates as many infer requests as specified. The asynchronous approach runs multiple inferences in a parallel pipeline, which may lead to higher throughput than the synchronous approach.
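The multi-request pipeline described above is usually driven by cycling a current-request index over the request pool, as the OpenVINO async samples do. A minimal sketch of that scheduling idea in plain Python, with stand-in names (no real `ExecutableNetwork` involved; with one you would call `exec_net.requests[cur_id].async_infer(...)` and later `wait()` on that slot):

```python
# Round-robin scheduling over a pool of infer-request slots.
# All names here are illustrative stand-ins, not OpenVINO API.

def round_robin_ids(num_requests, num_frames):
    """Yield the request-slot index to use for each incoming frame."""
    cur_id = 0
    for _ in range(num_frames):
        yield cur_id
        cur_id = (cur_id + 1) % num_requests  # cycle through the request pool

# With a pool of 4 requests and 6 frames, the slots cycle 0,1,2,3,0,1:
slots = list(round_robin_ids(4, 6))
```

While request `cur_id` is still computing, the other slots can already accept new frames, which is where the throughput gain over a single synchronous request comes from.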

 

Check out the following OpenVINO documentation for more information.

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Integrate_with_customer_application_new_API.html

 

Meanwhile, which example related to the output blob are you looking for?

 

Regards,

Azni


Lucho
New Contributor I

Hi Azni,

Thanks for the reply. face_detection.py is a script that I wrote, and I am sharing it with you to show how I'm trying to use the model. I will look into the documentation.

Regards,

Luciano

Lucho
New Contributor I

From the documentation, step 2:

for name, info in face_neural_net.outputs.items():
    print("\tname: {}".format(name))
    print("\tshape: {}".format(info.shape))
    print("\tlayout: {}".format(info.layout))
    print("\tprecision: {}\n".format(info.precision))

I'm getting:

        name: TopK_2434.0
        shape: [750]
        layout: C
        precision: FP32

        name: boxes
        shape: [750, 5]
        layout: NC
        precision: FP32

        name: labels
        shape: [750]
        layout: C
        precision: I32

 

But there is no info on how to get each output from the async infer output buffer.
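For what it's worth, `output_blobs` on an infer request is a dict mapping each output name (the `TopK_2434.0`, `boxes`, and `labels` printed above) to a Blob whose `.buffer` holds the array, so a sync-style results dict can be rebuilt by iterating over it rather than indexing a single name. A minimal sketch, using a plain stand-in class in place of real OpenVINO Blobs so it runs anywhere (all names illustrative):

```python
import numpy as np

class FakeBlob:
    """Stand-in for an OpenVINO Blob; only the .buffer attribute is used."""
    def __init__(self, array):
        self.buffer = array

def collect_outputs(output_blobs):
    """Rebuild the {name: array} dict that synchronous infer() returns
    from a request's output_blobs mapping."""
    return {name: blob.buffer for name, blob in output_blobs.items()}

# Simulated request.output_blobs with the shapes reported above:
output_blobs = {
    "TopK_2434.0": FakeBlob(np.zeros((750,), dtype=np.float32)),
    "boxes": FakeBlob(np.zeros((750, 5), dtype=np.float32)),   # per-detection coords + score
    "labels": FakeBlob(np.zeros((750,), dtype=np.int32)),
}
face_results = collect_outputs(output_blobs)
boxes = face_results["boxes"]
```

With a real request, the equivalent call would be `collect_outputs(face_execution_net.requests[0].output_blobs)` after `wait()` returns `StatusCode.OK`.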

Lucho
New Contributor I

I will need technical support for this. Is there any other documentation related to async inference that you could point me to?

IntelSupport
Community Manager

Hi Lucho,

You will need to implement the async_infer API to start asynchronous inference in your code. InferRequest.async_infer, InferRequest.wait, and Blob.buffer are the APIs for the asynchronous infer features.

 

You can refer to the Image Classification Async Python* Sample, which demonstrates how to do inference on image classification networks using the Asynchronous Inference Request API.

 

Regards,

Aznie


Wan_Intel
Moderator

Hi Lucho,


This thread will no longer be monitored since this issue has been resolved. 

If you need any additional information from Intel, please submit a new question.



Regards,

Wan

