Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Queuing multiple inferences in NCSDK v2

idata
Employee

Hi,

 

I'm trying to queue multiple images for inference on the NCS with the NCSDK v2 APIs. My assumption is that multiple queued images are processed as a batch and the outputs are returned together. Currently, images are sent for inference sequentially in a loop like this:

 

for img in list_img:
    graph.queue_inference_with_fifo_elem(fifoIn, fifoOut, img, 'user object')
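For context, the usual v2 pattern is to queue each image and then drain the output FIFO. The sketch below illustrates that pattern; the helper `run_inferences` is hypothetical (not part of the mvnc API), and the `graph`/`fifo_in`/`fifo_out` objects stand in for the real NCSDK v2 handles:

```python
def run_inferences(graph, fifo_in, fifo_out, images):
    """Queue every image, then drain the output FIFO.

    Note: each queued inference still runs one at a time on the
    device; queuing only overlaps host-side work with device-side
    work, it does not batch the images.
    """
    # Queue all inputs first.
    for img in images:
        graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, img, None)
    # Then read one result per queued input, in order.
    results = []
    for _ in images:
        output, user_obj = fifo_out.read_elem()
        results.append(output)
    return results
```

This keeps the device busy while the host prepares the next input, but it does not change the per-image inference time.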

 

My questions are:

  • Is there a better way to use the v2 APIs for batch processing?
  • I'm assuming that if a single image takes 500 ms to infer and we submit 5 images to the queue, the inference should still complete in ~500-600 ms (since it is batch processing). Is that correct?
 

Any comments or suggestions in this regard would be appreciated.

 

Thanks

 

Madhusudhan S
2 Replies
idata
Employee

@madhusudhan_s It's a queue, so if you send in five images, each image is processed one at a time. We don't support batch image processing at the moment. I'm not aware of any workaround with the v2 API, because one device can only process one inference at a time with the current NCSDK 2.04.00.06.
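A quick back-of-the-envelope sketch of what this means for the timing question above, assuming the ~500 ms per-image figure from the original post:

```python
# Since the device runs queued inferences sequentially (no batching),
# total latency scales roughly linearly with the number of images.
per_image_ms = 500   # assumed single-image inference time (from the question)
n_images = 5

total_ms = per_image_ms * n_images
print(total_ms)  # ~2500 ms, not the ~500-600 ms a true batch would give
```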

idata
Employee

Thanks for the clarification @Tome_at_Intel !!
