Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Dynamic Batching with Face Detection Model

Selvakumar__Chandrak

Hi,

I tried executing the dynamic_batch_demo.py code with the face-detection-retail-0004 model on CPU. I get the following error.

[ INFO ] Loading network files:
        C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.xml
        C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.bin
No unsupported_layers
[ INFO ] Preparing input blobs
[ WARNING ] Image D:\FDTests\FD_Case\10mfaces\66.jpg is resized from (1080, 1920) to (300, 300)
[ INFO ] Batch size is 2
[ INFO ] Loading model to the plugin
Traceback (most recent call last):
  File "dynamic_batch_demo.py", line 146, in <module>
    sys.exit(main() or 0)
  File "dynamic_batch_demo.py", line 109, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: MKLDNNGraph::CreateGraph: such topology cannot be compiled for dynamic batch!

When the -mb option is set to 1, the code works. When -mb is set to 2, I get the above error.

If I do not add the -l option, it throws out a list of unsupported layers as an error, as documented in https://docs.openvinotoolkit.org/latest/_docs_IE_DG_DynamicBatching.html. But with the -l option that goes away and it says 'No unsupported_layers'.
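
For context, this is roughly how I understand the demo sets up dynamic batching, based on the dynamic batching documentation (a rough sketch, not the demo's exact code; the extension path below is a placeholder):

from openvino.inference_engine import IENetwork, IEPlugin

model_xml = r"face-detection-retail-0004.xml"    # IR files from the log above
model_bin = r"face-detection-retail-0004.bin"
cpu_extension = r"cpu_extension.dll"             # the library passed with -l (placeholder path)

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension(cpu_extension)
plugin.set_config({"DYN_BATCH_ENABLED": "YES"})  # enable dynamic batching
net = IENetwork(model=model_xml, weights=model_bin)
net.batch_size = 2                               # the -mb value, i.e. the maximum batch size
exec_net = plugin.load(network=net)              # this is the call that fails when -mb is 2
# The effective batch is then set per request before each infer, e.g.
# exec_net.requests[0].set_batch(1), if your Python API version exposes set_batch.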

Does this model support batching? 

Also, the documentation for dynamic_batch_demo.py says the -i option can be either a folder or a file. I don't think the code works if a path to a folder is given.

Shubha_R_Intel
Employee

Dear Selvakumar, Chandrakanth

Thank you for pointing out this problem; it's likely a bug (which I will file on your behalf):

Also, the documentation for dynamic_batch_demo.py says the -i option can be either a folder or a file. I don't think the code works if a path to a folder is given.

By setting -mb to 1 you are essentially disabling dynamic batching. Based on the errors printed when you run dynamic_batch_demo.py, it certainly seems like there are layers in the face detection model which are not supported by dynamic batching. The way to convince yourself is to inspect face-detection-retail-0004.xml (it's a text file) and compare its layers to the prohibited layers in the online doc you referenced above. My hunch is that you'll find unsupported layers.
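
If you'd like a quick way to do that comparison, a small script along these lines will dump the layer types present in the IR (just a sketch; adjust the path to your install):

import xml.etree.ElementTree as ET
from collections import Counter

# Count the layer types used in the IR, then compare them by hand against the
# dynamic batching restrictions listed in the documentation.
xml_path = r"C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.xml"
layer_types = Counter(layer.get("type") for layer in ET.parse(xml_path).getroot().iter("layer"))
for layer_type, count in sorted(layer_types.items()):
    print("{}: {}".format(layer_type, count))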

Hope it helps-

Thanks for using OpenVINO!

Shubha

Selvakumar__Chandrak

I realize that there may be unsupported layers. My confusion is that when I add the path to the CPU extension file with the -l option, it specifically prints the message 'No unsupported_layers'. This message is misleading.

Shubha_R_Intel
Employee

Dear Selvakumar, Chandrakanth,

It is a weird and misleading error message. Agreed. Let me reproduce and file a bug on your behalf.
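
For what it's worth, the 'unsupported layers' check in the Python samples is usually along the lines of the snippet below (a sketch, not the demo's exact code; the file paths are placeholders). It only asks whether the plugin, with the -l extension loaded, can execute each layer at all; it does not check whether a layer can be compiled for dynamic batch, which is why the message comes out the way it does:

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension(r"cpu_extension.dll")   # the -l library (placeholder path)
net = IENetwork(model=r"face-detection-retail-0004.xml",
                weights=r"face-detection-retail-0004.bin")

# Plugin-level support check only; it says nothing about dynamic batching.
supported_layers = plugin.get_supported_layers(net)
not_supported = [l for l in net.layers.keys() if l not in supported_layers]
if not_supported:
    print("Unsupported layers: {}".format(not_supported))
else:
    print("No unsupported_layers")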

Thanks for using OpenVINO!

Shubha

Vishnu_T
Novice

I converted my own model to Intermediate Representation (xml and bin). While converting my TensorFlow model to the Model Optimizer Intermediate Representation, I configured the input shape as [1,64,64,3].

If I don't configure the input shape as [1,64,64,3], the batch size defaults to -1 and the input shape becomes [-1,64,64,3].
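
For reference, the conversion command I ran was roughly this (the model file name is a placeholder for my own network):

python mo_tf.py --input_model my_model.pb --input_shape [1,64,64,3] --data_type FP32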

I successfully converted the TensorFlow model to Intermediate Representation and inferred the model. Now I have to use this model for dynamic batch input.

I followed the steps as per the link https://docs.openvinotoolkit.org/2019_R1/_inference_engine_ie_bridges_python_sample_dynamic_batch_demo_README.html. Still, I am facing the same issue.

In my model, I am using the below mentioned layers

  1. Conv2D
  2. Activation
  3. BatchNormalization
  4. GlobalAveragePooling2D
  5. SeparableConv2D
  6. MaxPooling2D
  7. Input

While converting from the TensorFlow model to the Model Optimizer Intermediate Representation, do we have to set any configuration parameter for dynamic batch input?
