Beginner

Failed Loading net with Myriad Plugin

Hi,

I'm trying to load my TensorFlow network with the Myriad plugin. It works fine with CPU and GPU, but with Myriad it fails with the following error:

RuntimeError: AssertionFailed: _allocatedIntermData.count(topParent) > 0

While investigating, I found that the output nodes of my model are:

{'2736/Split.1': <openvino.inference_engine.ie_api.OutputInfo object at 0x7f7a03aa7990>, 'model/resize_images/ResizeBilinear': <openvino.inference_engine.ie_api.OutputInfo object at 0x7f7a03aa7a08>}

The output '2736/Split.1' is created by the Model Optimizer and is not in my graph (it is a conversion of a Slice layer).

In the XML file created by the Model Optimizer, the layer in question is:

<layer id="82" name="2736/Split" precision="FP16" type="Split">
            <data axis="1"/>
            <input>
                <port id="0">
                    <dim>1</dim>
                    <dim>8</dim>
                    <dim>128</dim>
                    <dim>256</dim>
                </port>
            </input>
            <output>
                <port id="1">
                    <dim>1</dim>
                    <dim>2</dim>
                    <dim>128</dim>
                    <dim>256</dim>
                </port>
                <port id="2">
                    <dim>1</dim>
                    <dim>6</dim>
                    <dim>128</dim>
                    <dim>256</dim>
                </port>
            </output>
        </layer>

 

The second output is generated but never consumed by any other layer, and I think this output is the cause of the error:
<port id="2">
    <dim>1</dim>
    <dim>6</dim>
    <dim>128</dim>
    <dim>256</dim>
</port>
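As a sanity check, one way to confirm that this second output really is dangling is to look for `<edge>` entries in the IR that consume it. The sketch below uses only the standard library; `dangling_ports` is a hypothetical helper (not part of OpenVINO), and the inline IR snippet is abbreviated from the layer above with a made-up consumer edge for port 1:

```python
# Sketch: find Split output ports that no <edge> in the IR consumes.
import xml.etree.ElementTree as ET

IR_SNIPPET = """
<net>
  <layers>
    <layer id="82" name="2736/Split" type="Split">
      <output>
        <port id="1"/>
        <port id="2"/>
      </output>
    </layer>
  </layers>
  <edges>
    <edge from-layer="82" from-port="1" to-layer="83" to-port="0"/>
  </edges>
</net>
"""

def dangling_ports(ir_xml):
    """Return (layer name, port id) pairs for output ports with no consumer."""
    root = ET.fromstring(ir_xml)
    # Every (layer, port) pair that feeds some other layer.
    used = {(e.get("from-layer"), e.get("from-port"))
            for e in root.iter("edge")}
    dangling = []
    for layer in root.iter("layer"):
        out = layer.find("output")
        if out is None:
            continue
        for port in out.iter("port"):
            if (layer.get("id"), port.get("id")) not in used:
                dangling.append((layer.get("name"), port.get("id")))
    return dangling

print(dangling_ports(IR_SNIPPET))
# → [('2736/Split', '2')]
```

Running this over the full generated IR would show whether port 2 of `2736/Split` has any consumer at all.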
Do you know how I can fix this error?

I attached my network.

3 Replies
Moderator

Hi Marco,

Thanks for reaching out. It's possible the model you are using is not supported by the Myriad plugin. The TensorFlow operation Slice is shown as Split in the Intermediate Representation.

Could you share additional details about the model?

  • What framework is the model based on?
  • What topology is the model based on?
  • Is it a pre-trained or custom trained model?
  • If you custom trained, what base model did you use?
  • What is the full model optimizer command that you used?
  • Could you share the model? Let me know if you would like to share it privately and I will send you a private message.

Regards,

Jesus

Beginner

Hi,

The model is a custom-trained TensorFlow depth estimation model. This is the command I used:

python3 '/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py' --input_model '/home/marco/frozen_modelsSPLIT/frozen_pydnet.pb' --model_name "IRPydnetDB" --data_type half --output 'model/resize_images/ResizeBilinear' --log_level=DEBUG

I ran some experiments to investigate the error: I used the HETERO plugin (CPU, MYRIAD) to check whether the model is too heavy or contains unsupported layers, and it isn't: all layers are supported.

The one not accepted at runtime is this output: '2736/Split.1': <openvino.inference_engine.ie_api.OutputInfo object at 0x7f7a03aa7990>

Analyzing the Model Optimizer debug output, I saw this log:

[ 2019-11-22 00:00:42,021 ] [ DEBUG ] [ eliminate:67 ]  The following nodes are seeded as output reachable:
fake_data_2736/sink_port_0

Also, using the HETERO plugin, one of the errors I got is: RuntimeError: AssertionFailed: !onlyUsedOutputs.empty()

So, I think the TensorFlow operation tf.slice, which has input [1,128,256,8] and output [1,128,256,2], has been converted by the Model Optimizer into a Split operation with input [1,8,128,256] and output [1,2,128,256]. But as you can see in the XML shared before, it also has a second output [1,6,128,256], which I believe causes the problem: it seems to be treated as a fake output that Myriad does not accept. Is there a way to get rid of this fake output?
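The shape bookkeeping behind that reasoning can be sketched in plain Python (`split_output_shapes` is a hypothetical helper, not part of OpenVINO or TensorFlow): because Split must account for all 8 input channels, slicing out 2 of them necessarily leaves a second, leftover output.

```python
# Sketch of the conversion: tf.slice takes 2 of 8 channels (NHWC),
# while the IR's Split along axis=1 (NCHW) must cover all 8 channels,
# producing both the wanted part and a leftover part.

def split_output_shapes(input_shape, axis, sizes):
    """Output shapes of a Split along `axis` into parts of the given sizes."""
    assert sum(sizes) == input_shape[axis], "parts must cover the whole axis"
    shapes = []
    for size in sizes:
        shape = list(input_shape)
        shape[axis] = size
        shapes.append(shape)
    return shapes

# NCHW input [1, 8, 128, 256], split into 2 + 6 channels:
print(split_output_shapes([1, 8, 128, 256], axis=1, sizes=[2, 6]))
# → [[1, 2, 128, 256], [1, 6, 128, 256]]
```

The first output shape matches the tf.slice result (transposed to NCHW); the second is the unused remainder that shows up as port 2 in the IR.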

I attached the protobuf of my TensorFlow model and the . I hope you can help me fix this problem.

Beginner

My situation is similar to yours. There are four outputs: 1715/Split.0, 1715/Split.2, 1725/Split.1, and cross_entropy/logits; only "cross_entropy/logits" is needed.

$ python3 /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/samples/python_samples/cross_check_tool/cross_check_tool.py  --model inference_graph.xml -ref_d CPU  -ref_m  inference_graph.xml  -d MYRIAD
[ INFO ]  No input was provided by --input/-i. Generate input from noise
Inference Engine:
          API version ............ 2.1.custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05
[ INFO ]  Cross check with one IR was enabled
[ INFO ]  MYRIAD:FP16 vs CPU:FP16
[ INFO ]  The same IR on both devices: inference_graph.xml
[ INFO ]  1 input detected: X
[ INFO ]  Statistics will be dumped for 4 layers: 1715/Split.0, 1715/Split.2, 1725/Split.1, cross_entropy/logits
