Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to perform inference on a batch

Pugach__Yaroslav
Beginner
2,102 Views

Hello

I am trying to perform object detection on a batch of frames using SSD Inception v2 and Faster RCNN TensorFlow models (converted to IR). Inference works only for the first frame; for the other frames in the batch nothing is ever detected (the result is always a tensor of zeros).

from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)

# Set max batch size
net.batch_size = 2

exec_net = ie.load_network(network=net, device_name='CPU')

# batch contains two frames stacked into one (2, 3, H, W) array
input_dict = {'image_tensor': batch}

request = exec_net.start_async(request_id=0, inputs=input_dict)

infer_status = request.wait(-1)
out = request.outputs['DetectionOutput']
print('output:', out)

So I am wondering whether I am missing something. What is the correct way to set the batch size? Is it enough to set net.batch_size = N before loading the model, or is there something else that needs to be done?

Just in case it matters: when I converted the models to IR, I didn't specify the --batch parameter. I am not sure whether it is necessary.

I am also restricted to the OpenVINO 2019 R3 release.

5 Replies
SIRIGIRI_V_Intel
Employee

Hi Yaroslav,

You need to set the batch size while converting the model with the Model Optimizer (the --batch parameter). Here is the thread you can refer to.

Please refer to this documentation for dynamic batching.
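
Roughly, the dynamic batching flow in the 2019 R3 Python API is to load the network with its maximum batch size plus the DYN_BATCH_ENABLED config, and then set the actual batch on each request. A minimal sketch (the device name, batch sizes, and zero-padding below are assumptions, reusing the variable names from your snippet; note that dynamic batching has layer-level restrictions, so it may not apply to every topology):

import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)

# Load the network with the maximum batch size it will ever see
net.batch_size = 4
exec_net = ie.load_network(network=net, device_name='CPU',
                           config={'DYN_BATCH_ENABLED': 'YES'})

# Pad the actual frames (two here) up to the maximum batch size
n, c, h, w = net.inputs['image_tensor'].shape
padded = np.zeros((n, c, h, w), dtype=np.float32)
padded[:2] = batch                  # batch holds the two preprocessed frames

request = exec_net.requests[0]
request.set_batch(2)                # only the first two entries are processed
request.infer({'image_tensor': padded})
print(request.outputs['DetectionOutput'])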

Regards,

Ram prasad

Pugach__Yaroslav
Beginner

I am still confused by the meaning of IENetwork's batch_size attribute. For example, if I set the batch size to 4 via the --batch parameter of the Model Optimizer, the model expects inputs of shape [4, 3, 600, 600], yet IENetwork still reports the default batch_size of 1. If I then change it manually to 4, the expected input shape becomes [16, 3, 600, 600].

# network was loaded from the IR that was converted with --batch 4
print('input shape before changing batch_size:', network.inputs['image_tensor'].shape)
print('batch_size:', network.batch_size)

# Try setting the batch_size of IENetwork to 4
network.batch_size = 4

print('input shape after changing batch_size:', network.inputs['image_tensor'].shape)
#print('new output shape:', network.outputs[output_blob].shape)

So it looks like when the --batch parameter is given to the Model Optimizer, IENetwork's batch_size should be left intact. And if the batch size isn't specified during conversion to IR, setting IENetwork's batch_size doesn't help either. So what is the purpose of this attribute, then?
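
For reference, the same check against an IR converted without --batch (so the input starts at [1, 3, 600, 600]) would look like this; the 600x600 resolution is just carried over from the example above:

from openvino.inference_engine import IENetwork

# IR converted without the --batch parameter: input shape is [1, 3, 600, 600]
network = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
print('default shape:', network.inputs['image_tensor'].shape)         # [1, 3, 600, 600]

network.batch_size = 4
print('after batch_size = 4:', network.inputs['image_tensor'].shape)  # [4, 3, 600, 600]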

SIRIGIRI_V_Intel
Employee

Are you using the API mentioned in the documentation? If not, it is recommended that you use it.

If you are still facing the issue, could you share the necessary files with us so we can have a look?

Regards,

Ram prasad

Pugach__Yaroslav
Beginner

I am restricted to using the Python API with OpenVINO 2019 R3.

SIRIGIRI_V_Intel
Employee

Please refer to the 3d_segmentation_demo for Python.
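
In general, batched inference with the Python API comes down to setting the batch size on the network, stacking the preprocessed frames into one (N, C, H, W) array, and running a single inference. A rough sketch, not the demo's exact code (the preprocessing and the frames variable are only illustrative; the input/output names are taken from your snippet):

import cv2
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
net.batch_size = len(frames)                 # frames: list of BGR images

n, c, h, w = net.inputs['image_tensor'].shape
blob = np.zeros((n, c, h, w), dtype=np.float32)
for i, frame in enumerate(frames):
    resized = cv2.resize(frame, (w, h))
    blob[i] = resized.transpose((2, 0, 1))   # HWC -> CHW

exec_net = ie.load_network(network=net, device_name='CPU')
result = exec_net.infer({'image_tensor': blob})

# For SSD-style models the detections for every image in the batch come back
# in a single [1, 1, M, 7] blob; the first value of each row is the image index.
print(result['DetectionOutput'].shape)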

Could you please try with the latest OpenVINO version (2020.2) and let us know the results?

Hope this helps.

Regards,

Ram prasad
