Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

OpenVINO - image batch to blob

Albi_KA
Beginner

I use the OpenVINO InferenceEngine to do an inference.

If I have one image I can use

cv::Mat image_01 = cv::imread("path/to/image_01");
InferenceEngine::Blob::Ptr imgBlob = wrapMat2Blob(image_01);

to get a blob for the image. 

But I have two images:

cv::Mat image_01 = cv::imread("path/to/image_01");
cv::Mat image_02 = cv::imread("path/to/image_02");

How can I create a batch blob? Do I need a single batched blob, or should I pass the inputs separately? There are the following two methods:

inferRequest.SetBlob(input_name, imgBlob);
inferRequest.SetInputs(...)

 

What are the parameters for SetInputs(...)?

Zulkifli_Intel
Moderator

Hello Bianca Lamm,

 

Please share with us your model, images, and code to reproduce the issue.

The parameters for SetInput() can be found here.

Regards,

Zulkifli
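For reference, in the 2021.x C++ API, InferRequest::SetInput takes an InferenceEngine::BlobMap, i.e. a std::map<std::string, Blob::Ptr> from input name to blob. A hedged sketch of its use, assuming infer_request, input_name, and imgBlob as in the question above:

```cpp
// Build a BlobMap with one entry per network input, then hand it
// to the infer request in a single call.
InferenceEngine::BlobMap inputs;
inputs[input_name] = imgBlob;
infer_request.SetInput(inputs);
```

This is equivalent to calling SetBlob once per input; SetInput is only a convenience for networks with several inputs.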


Albi_KA
Beginner

Hello Zulkifli,

 

Unfortunately, I am not allowed to share my model and the images. I looked at the documentation of SetInput(...), but I do not know how to use it. Can you give me an example of SetInput(...) with two cv::Mat variables?

Vladimir_Dudnik
Employee

I'd recommend reviewing the Using Shape Inference article in the OpenVINO online documentation to be aware of the limitations of using batches. It also refers to the Open Model Zoo smart_classroom_demo, where dynamic batching is used to process multiple previously detected faces. Basically, when batching is enabled in the model, the memory buffer of your input blob is allocated with room for the whole batch of images, and it is your responsibility to fill the input blob with the data for each image in the batch. You may take a look at the function CnnDLSDKBase::InferBatch of smart_classroom_demo, located in smart_classroom_demo/cpp/src/cnn.cpp at line 51. As you can see, in the loop over num_imgs an auxiliary function matU8ToBlob fills the input blob with data for current_batch_size images, then the batch size is set on the infer request and inference is run.

    for (size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
        const size_t current_batch_size = std::min(batch_size, num_imgs - batch_i);
        for (size_t b = 0; b < current_batch_size; b++) {
            matU8ToBlob<uint8_t>(frames[batch_i + b], input, b);
        }

        if (config_.max_batch_size != 1)
            infer_request_.SetBatch(current_batch_size);
        infer_request_.Infer();
        ...
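The heart of matU8ToBlob is a per-channel copy from OpenCV's interleaved HWC layout into the blob's planar NCHW buffer at the given batch index. A simplified, dependency-free sketch of just that copy (hypothetical helper name, plain vectors standing in for cv::Mat and the blob buffer, no resizing):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy one interleaved HWC image into a planar NCHW batch buffer at
// position batch_index. Mirrors what matU8ToBlob does, minus the
// cv::resize and Blob plumbing.
void copyImageToBatch(const std::vector<uint8_t>& hwc,  // H*W*C interleaved
                      std::vector<uint8_t>& nchw,       // N*C*H*W planar
                      std::size_t batch_index,
                      std::size_t channels, std::size_t height, std::size_t width) {
    const std::size_t image_size = channels * height * width;
    uint8_t* dst = nchw.data() + batch_index * image_size;
    for (std::size_t c = 0; c < channels; ++c)
        for (std::size_t h = 0; h < height; ++h)
            for (std::size_t w = 0; w < width; ++w)
                dst[c * height * width + h * width + w] =
                    hwc[h * width * channels + w * channels + c];
}
```

Calling this once per image with batch_index 0, 1, ... fills each batch slot of the input blob, which is exactly the role matU8ToBlob plays in the loop above.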

 

Albi_KA
Beginner

I tried to use the following:

 

...
const std::map<std::string, std::string> dyn_config = {
    { InferenceEngine::PluginConfigParams::KEY_DYN_BATCH_ENABLED,
      InferenceEngine::PluginConfigParams::YES } };
network.setBatchSize(2);
...
InferenceEngine::Blob::Ptr input = infer_request.GetBlob(input_name);
matU8ToBlob<uint8_t>(image_01, input, 0);
matU8ToBlob<uint8_t>(image_02, input, 1);
infer_request.SetBatch(2);
infer_request.Infer();

When calling SetBatch I received this exception:

Exception at 0x7ffae9803b19, code: 0xe06d7363: C++ exception, flags=0x1 (execution cannot be continued) (first chance) at C:\Program Files (x86)\Intel\openvino_2021.3.394\inference_engine\include\details\ie_exception_conversion.hpp:66
Dynamic batch is not enabled.

 

Although I created my model like this:

python mo_tf.py --saved_model_dir path/to/model/dir --input_shape [2,224,224,3]
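A likely cause of the "Dynamic batch is not enabled" exception above is that the dyn_config map is defined but never handed to the plugin: in the 2021.x API the KEY_DYN_BATCH_ENABLED setting must be passed to Core::LoadNetwork for SetBatch to work. A minimal sketch under that assumption (device name and model path are placeholders, not values from this thread):

```cpp
#include <inference_engine.hpp>
#include <map>
#include <string>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("path/to/model.xml");
    network.setBatchSize(2);  // upper bound for dynamic batching

    // The dyn-batch flag must reach the plugin via LoadNetwork's config map;
    // defining it locally is not enough.
    const std::map<std::string, std::string> dyn_config = {
        { InferenceEngine::PluginConfigParams::KEY_DYN_BATCH_ENABLED,
          InferenceEngine::PluginConfigParams::YES } };
    auto executable = core.LoadNetwork(network, "CPU", dyn_config);
    auto infer_request = executable.CreateInferRequest();

    // ... fill the input blob for each batch slot (e.g. with matU8ToBlob), then:
    infer_request.SetBatch(2);
    infer_request.Infer();
}
```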

Vladimir_Dudnik
Employee

@Albi_KA please look through the Using Dynamic Batching article in the OpenVINO online documentation for more details.

Zulkifli_Intel
Moderator

Hello Bianca Lamm,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

