Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Custom IR Model throwing error for MYRIAD

Ferraro__Giampaolo

The model loads fine when using the CPU, but for MYRIAD I get the following error.
How do I get more info about what this even means? Any help is appreciated, thanks.

INFO Creating Inference Engine
INFO Loading network files:
    inference_graph.xml
    inference_graph.bin
INFO Preparing input blobs
INFO Batch size is [1, 48, 1, 5, 48]
INFO Loading model to the plugin
Traceback (most recent call last):
  File "neural.py", line 55, in <module>
    main()
  File "neural.py", line 38, in main
    exec_net = ie.load_network(network=net, device_name=device)
  File "ie_api.pyx", line 85, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 92, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: AssertionFailed: _allocatedIntermData.count(topParent) > 0
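
For context, the load path in neural.py is roughly the following (a simplified sketch rather than the exact script; only the file names and the device string come from the log above):

# Simplified sketch of the load path (assumes the 2019 R3 Python API, where
# IENetwork was the standard way to read an IR before IECore.read_network existed).
from openvino.inference_engine import IECore, IENetwork

model_xml = "inference_graph.xml"
model_bin = "inference_graph.bin"
device = "MYRIAD"

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)

try:
    exec_net = ie.load_network(network=net, device_name=device)
except RuntimeError as err:
    # The MYRIAD plugin surfaces its internal assertion text in the exception,
    # e.g. "AssertionFailed: _allocatedIntermData.count(topParent) > 0".
    print("Failed to load the network on {}: {}".format(device, err))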
 

JAIVIN_J_Intel
Employee

Hi Giampaolo,

Thanks for reaching out. It's possible that the model you are using is not supported by the Myriad Plugin.

Please provide details about the following:

  • Which version of OpenVINO are you using?
  • What framework and topology is the model based on?
  • What is the model optimizer command that you used?
  • Could you share the original model as well?

Regards,

Jaivin

Ferraro__Giampaolo

Hey, the version info is:
[ INFO ] InferenceEngine:
         API version............. 2.1.custom_releases/2019/R3_cb6cad9663aea3d282e0e8b3e0bf359df665d5d0
[ INFO ] Device info
         MYRIAD
         myriadPlugin............ version 2.1
         Build................... 30677

The model is based on a convolutional neural network. I didn't create the model, but I have recently started working with it.

The Model Optimizer command is:

mo_tf.py --input_model inference_graph.pb --input X --input_shape [1,1,5,48,48]

 

As for the original model, it is a TensorFlow model. What files would you like? The frozen .pb file? The source file is here: https://github.com/TheCacophonyProject/classifier-pipeline/blob/master/model_crnn.py#L441, with the LSTM removed.
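
In case it helps, this is roughly how I check the shapes the IR ended up with (a small sketch, assuming the 2019 R3 Python API where IENetwork exposes inputs/outputs as dicts whose values have a .shape attribute):

# Sketch: print the input and output shapes recorded in the IR
# (assumes the 2019 R3 Python API; attribute names changed in later releases).
from openvino.inference_engine import IENetwork

net = IENetwork(model="inference_graph.xml", weights="inference_graph.bin")

for name, data in net.inputs.items():
    print("input  {}: shape {}".format(name, data.shape))
for name, data in net.outputs.items():
    print("output {}: shape {}".format(name, data.shape))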

Cheers,
GP

 

JAIVIN_J_Intel
Employee

Hi Giampaolo,

The supported model format for the MYRIAD plugin is FP16. Please add --data_type FP16 to the Model Optimizer command mentioned above.
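
With the command you posted, that would look like:

mo_tf.py --input_model inference_graph.pb --input X --input_shape [1,1,5,48,48] --data_type FP16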

Also, verify that you are using a network supported by the MYRIAD plugin.

If the issue still persists, please share the frozen model (.pb) and any other files needed to reproduce the issue.

Regards,

Jaivin
