Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[REQUEST_BUSY] error in async infer

pkhan10
New Contributor I

Hello,

As per the documentation, the right way to write async infer code is:

exec_net = plugin.load(network=net, num_requests=2)    # load the network with two infer request slots
exec_net.requests[0].async_infer({input_blob: image})  # start asynchronous inference on request 0
request_status = exec_net.requests[0].wait()           # wait for the request to complete
res = exec_net.requests[0].outputs['prob']             # read the output blob

I used the same approach, but I was sometimes getting a REQUEST_BUSY error.
So I poll in a loop, sleeping 0.05 s, while exec_net.requests[request_id].wait() != 0.

Code from the class I wrote:

    def postprocess_op(self, request_id=None):
        """
        Use the request id to fetch a prediction, but only after the
        request has completed. Return the frame, the attribute
        (if available), and the output.
        """
        if request_id is None:
            request_id = self.cursor_id
        # Poll until the request reports success (status 0 == OK).
        while self.exec_net.requests[request_id].wait() != 0:
            time.sleep(0.05)
        print(self.exec_net.requests[request_id].wait())  # debug: print the final status
        self.output = [self.exec_net.requests[request_id].outputs[node] for node in self.out_blob]
        op_frame = self.frames_buffer[request_id]
        attr = self.attrs[request_id]

        return op_frame, attr, self.output

This is the output I got:

[Attachment: Screenshot from 2020-02-08 10-35-14.png]

 

Please help me understand how I can get rid of REQUEST_BUSY, and how to check whether an infer request succeeded.
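
What I expect the check to look like is something along these lines (a minimal sketch, assuming the pre-2021 openvino.inference_engine Python API, with exec_net, input_blob, and image as above):

    # Sketch: wait(-1) blocks until the request finishes and returns a
    # status code; 0 (StatusCode.OK) means the inference succeeded.
    exec_net.requests[0].async_infer({input_blob: image})
    status = exec_net.requests[0].wait(-1)
    if status == 0:
        res = exec_net.requests[0].outputs['prob']
    else:
        print("Inference failed with status", status)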

JAIVIN_J_Intel
Employee

Hi Prateek,

Are you swapping the current and next InferRequest (request_id) on every iteration in async mode?

Please refer to this documentation, which demonstrates how the Async API works.

Also, refer to the object_detection_demo_ssd_async demo to understand the implementation of Async mode.
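
The core of that demo is roughly the following pattern (a condensed sketch, not the demo's exact code; cap, input_blob, and out_blob are placeholder names, and frame preprocessing is omitted):

    cur_request_id, next_request_id = 0, 1
    ret, frame = cap.read()
    while ret:
        ret, next_frame = cap.read()
        if not ret:
            break
        # Start inference on the *next* frame while consuming the current
        # one. (Resize/transpose preprocessing is omitted for brevity.)
        exec_net.start_async(request_id=next_request_id, inputs={input_blob: next_frame})
        # This request was started on the previous iteration, so waiting on
        # it here guarantees it is free before it is reused.
        if exec_net.requests[cur_request_id].wait(-1) == 0:
            res = exec_net.requests[cur_request_id].outputs[out_blob]
        # Swap the ids: each request slot is only reused after it completes.
        cur_request_id, next_request_id = next_request_id, cur_request_id
        frame = next_frame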

Hope this helps.

Regards,

Jaivin

0 Kudos
andreas_sheron

I followed the Python sample object_detection_demo_ssd_async on the master branch and have understood how the current and next requests are swapped, which helps increase the overall FPS. You have also mentioned swapping requests here.

In the sample, num_requests=2 is used. One of the threads on the forum suggests that we can use more than 2 requests. Also, in the ie_api.InferRequest class reference, num_requests=4 is used.


I have a few questions:

1. Is there any example that demonstrates efficient use of more than 2 requests?

2. Should we be using more than 2 requests? If so, how would we then cycle through all the requests while ensuring that we use `request.wait()` properly? (See the sketch below.)

3. I can implement a method that cycles through a list of requests, but after processing each request, do I need to use request.wait(-1)?
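
What I have in mind is something like this (a hypothetical sketch, assuming the IECore API; ie, net, frames, input_blob, and out_blob are placeholders). The key point would be waiting on each slot before reusing it, which is what avoids REQUEST_BUSY:

    # ie is an IECore instance and net a read network, set up earlier.
    n = 4
    exec_net = ie.load_network(network=net, device_name="CPU", num_requests=n)
    slot = 0
    started = [False] * n
    for frame in frames:  # hypothetical iterable of preprocessed inputs
        request = exec_net.requests[slot]
        if started[slot]:
            # Block until the previous inference on this slot finishes
            # before reusing it; status 0 means it succeeded.
            if request.wait(-1) == 0:
                result = request.outputs[out_blob]
                # ... consume result ...
        request.async_infer({input_blob: frame})
        started[slot] = True
        slot = (slot + 1) % n  # advance the round-robin cursor
    # After the loop, wait() on any still-running slots to drain them.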

andreas_sheron

Well, with the resources shared above, I was able to understand how to implement this for 'n' requests, and the program works perfectly now.

pkhan10
New Contributor I

Hey Jaivin,

I am not swapping the request id. I create the execution network with num_requests = n, first load n data points into the buffer, and then request them one by one.
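
Roughly, the flow is (a sketch with hypothetical names alongside the class above; this is the layout that raises REQUEST_BUSY when a slot is reused before its previous inference has finished):

    # Fill all n slots first...
    for i, frame in enumerate(first_n_frames):  # first_n_frames is hypothetical
        self.frames_buffer[i] = frame
        self.exec_net.requests[i].async_infer({self.input_blob: frame})

    # ...then collect the results one by one.
    for request_id in range(n):
        op_frame, attr, output = self.postprocess_op(request_id)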

JAIVIN_J_Intel
Employee

Hi Prateek,

Can you provide all the necessary files and inputs to reproduce this issue at our end? I can send you a PM so that you can privately send me a *.zip file.

Regards,

Jaivin

pkhan10
New Contributor I

Hello Jaivin,

Can we have a Skype/Hangouts/Zoom call? The code is spread across multiple classes; I can show you, but zipping it up so that you can reproduce the results will be hard.

pkhan10
New Contributor I

Hello Jaivin,

I am using both an Intel pre-trained model and an externally trained model (TensorFlow Object Detection API, ssd_inception_v2_coco). I am getting REQUEST_BUSY with the externally trained model. I am attaching the pipeline config and the model conversion command. Can you please check whether I am doing anything wrong while converting the model?

Please download the pipeline config from here.

python mo.py \
    --input_model /media/prateek/prateek_space/model_files/openvino_model/external_data_hat_person_ssd_inception_v2_coco_2018_01_28/inference_graph/frozen_inference_graph.pb \
    -o /media/prateek/prateek_space/model_files/openvino_model/external_data_hat_person_ssd_inception_v2_coco_2018_01_28/converted/iter1/ \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json \
    --tensorflow_object_detection_api_pipeline_config /media/prateek/prateek_space/model_files/openvino_model/external_data_hat_person_ssd_inception_v2_coco_2018_01_28/inference_graph/pipeline.config \
    --reverse_input_channels

 

pkhan10
New Contributor I

Hello, one more input: while converting the model I was getting this error:

    Cannot infer shapes or values for node "Postprocessor/Cast_1".

So I updated ssd_support_api_v1.14.json as suggested in one of the answers on this forum, changing "Postprocessor/Cast" to "Postprocessor/Cast_1":

            "start_points": [
                "Postprocessor/Shape",
                "Postprocessor/scale_logits",
                "Postprocessor/Tile",
                "Postprocessor/Reshape_1",
                "Postprocessor/Cast_1"
            ]


 

JAIVIN_J_Intel
Employee

I have sent you a private message. You can use it to share the required details to set up the call, as you suggested.

Regards,

Jaivin

pkhan10
New Contributor I

Hello Jaivin,

I haven't received any message.

But I was able to resolve the issue. There were two places where I needed to apply wait(): I call async_infer and request the output separately, in different functions, so when a request was still being processed, calling async_infer on it again was raising the REQUEST_BUSY error.
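
The fix looks roughly like this (a sketch with hypothetical method names; since submission and output collection live in different functions, the submit side now waits on the slot before reusing it):

    def submit(self, request_id, frame):
        request = self.exec_net.requests[request_id]
        # Calling async_infer() on a still-running request raises
        # REQUEST_BUSY, so wait for any previous inference on this
        # slot to finish first.
        request.wait(-1)
        self.frames_buffer[request_id] = frame
        request.async_infer({self.input_blob: frame})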
Thanks for your support
 

JAIVIN_J_Intel
Employee

Hi Prateek,

Thanks for letting us know that you were able to solve the issue.

Feel free to ask if you have any other questions.

Regards,

Jaivin
