Hi,
I tried executing the dynamic_batch_demo.py code with the face-detection-retail-0004 model on CPU and get the following error:
[ INFO ] Loading network files:
C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.xml
C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.bin
No unsupported_layers
[ INFO ] Preparing input blobs
[ WARNING ] Image D:\FDTests\FD_Case\10mfaces\66.jpg is resized from (1080, 1920) to (300, 300)
[ INFO ] Batch size is 2
[ INFO ] Loading model to the plugin
Traceback (most recent call last):
File "dynamic_batch_demo.py", line 146, in <module>
sys.exit(main() or 0)
File "dynamic_batch_demo.py", line 109, in main
exec_net = plugin.load(network=net)
File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: MKLDNNGraph::CreateGraph: such topology cannot be compiled for dynamic batch!
When the -mb option is set to 1, the code works. When -mb is set to 2, I get the above error.
If I do not add the -l option, it throws out a list of unsupported layers as an error, as documented in https://docs.openvinotoolkit.org/latest/_docs_IE_DG_DynamicBatching.html. But with the -l option that goes away and it says 'No unsupported_layers'.
Does this model support batching?
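For reference, this is the pattern I believe the demo follows to enable dynamic batching with the 2019 Python API (a rough sketch written from memory, with placeholder paths; not the demo source itself):

```python
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder paths; in my case the IR is face-detection-retail-0004 FP32
net = IENetwork(model="face-detection-retail-0004.xml",
                weights="face-detection-retail-0004.bin")

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("cpu_extension_avx2.dll")  # the library passed via -l
plugin.set_config({"DYN_BATCH_ENABLED": "YES"})     # enable dynamic batching

net.batch_size = 2  # maximum batch size, i.e. the -mb value

# This is the call that fails with the MKLDNNGraph error above
exec_net = plugin.load(network=net)
```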
Also, for dynamic_batch_demo.py, the documentation says the -i option can be either a folder or a file. I don't think the code works if a path to a folder is given.
Dear Selvakumar, Chandrakanth,
Thank you for pointing out this problem; it's likely a bug (which I will file on your behalf):
"Also, for dynamic_batch_demo.py, the documentation says the -i option can be either a folder or a file. I don't think the code works if a path to a folder is given."
By setting -mb to 1 you are essentially disabling dynamic batching. Based on the errors spewed out when you run dynamic_batch_demo.py, it certainly seems like there are layers in the Face Detection model which are not supported by Dynamic Batching. The way to convince yourself is to inspect face-detection-retail-0004.xml (it's a text file) and compare its layers to the prohibited layers in the online doc you referenced above. My hunch is that you'll find unsupported layers.
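If it helps, here is a quick way to dump the layer types present in the IR so you can compare them against that list (just a sketch using the Python standard library; adjust the path to your installation):

```python
import xml.etree.ElementTree as ET

# Path to the IR topology file (adjust as needed)
xml_path = r"C:\Intel\computer_vision_sdk\deployment_tools\intel_models\face-detection-retail-0004\FP32\face-detection-retail-0004.xml"

root = ET.parse(xml_path).getroot()
# Each layer in the IR is a <layer> element with a "type" attribute
layer_types = sorted({layer.get("type") for layer in root.iter("layer")})
print("\n".join(layer_types))
```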
Hope it helps.
Thanks for using OpenVINO!
Shubha
I realize that there may be unsupported layers. My confusion is that when I add the path to the CPU extension file with the -l option, it specifically prints the message 'No unsupported_layers'. This message is misleading.
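From what I can tell, the check behind that message is the usual sample-style check, roughly like the sketch below (my reconstruction with placeholder paths, not the exact demo code). It only asks whether the CPU plugin plus the -l extension implements each layer; it says nothing about whether those layers are compatible with dynamic batching:

```python
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="face-detection-retail-0004.xml",
                weights="face-detection-retail-0004.bin")
plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension("cpu_extension_avx2.dll")  # the -l extension

# Checks plugin support only, not dynamic-batch compatibility
supported_layers = plugin.get_supported_layers(net)
not_supported = [l for l in net.layers if l not in supported_layers]
if not not_supported:
    print("No unsupported_layers")
```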
Dear Selvakumar, Chandrakanth,
It is a weird and misleading error message. Agreed. Let me reproduce and file a bug on your behalf.
Thanks for using OpenVINO!
Shubha
I converted my own model to Intermediate Representation (xml and bin). While converting my TensorFlow model to Model Optimizer Intermediate Representation, I configured the input shape as [1,64,64,3].
If I do not configure the input shape as [1,64,64,3], the batch size defaults to -1 and the input shape becomes [-1,64,64,3].
I successfully converted the TensorFlow model to Intermediate Representation and can run inference with it. Now I have to use this model for dynamic batch input.
I followed the steps in https://docs.openvinotoolkit.org/2019_R1/_inference_engine_ie_bridges_python_sample_dynamic_batch_demo_README.html, but I am still facing the same issue.
In my model, I am using the layers listed below:
- Conv2D
- Activation
- BatchNormalization
- GlobalAveragePooling2D
- SeparableConv2D
- MaxPooling2D
- Input
When converting a TensorFlow model to Model Optimizer Intermediate Representation, do we have to set any configuration parameter for dynamic batch input?
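For reference, this is roughly what I am doing on the inference side (placeholder file names, written from memory rather than copied from my script):

```python
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder names for my converted IR files
net = IENetwork(model="my_model.xml", weights="my_model.bin")

# Shape baked in at conversion time via --input_shape [1,64,64,3]
for name, info in net.inputs.items():
    print(name, info.shape)

net.batch_size = 8  # maximum batch I want to use dynamically

plugin = IEPlugin(device="CPU")
plugin.set_config({"DYN_BATCH_ENABLED": "YES"})
exec_net = plugin.load(network=net)
```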