Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference Engine Runtime Error on NCS2 vs. CPU


Dear all,

I am having issues deploying a frozen graph, successfully converted into an IR model (.xml, .bin), to the NCS2.

I am running Windows 10 with Python 3.6.5, using the OpenVINO Python API. The sample code is minimal (non-relevant parts skipped):

from openvino.inference_engine import IENetwork, IEPlugin

def main():
    #######################  Device  Initialization  ########################
    #  Plugin initialization for specified device and load extensions library if specified
    plugin = IEPlugin(device="MYRIAD")
    # plugin = IEPlugin(device="GPU")
    #  Read in Graph file (IR)
    net = IENetwork(model="sample16.xml", weights="sample16.bin")

    print(net.outputs)
    print("Preparing input blobs")
    input_blob = next(iter(net.inputs))
    out_blob = next(iter(net.outputs))

    # Read and pre-process input images

    #  Load network to the plugin
    exec_net = plugin.load(network=net)
    ### END OF SAMPLE ###

When I call plugin.load with the same network description (FP16, converted successfully with the Model Optimizer), I get different responses depending on the device.

Using the GPU plugin (the CPU plugin also runs fine with the FP32 version of the same protobuf file):

GPU Plugin

{'add_2/add': <openvino.inference_engine.ie_api.OutputInfo object at 0x00000193935EDB70>, 'lambda_2/strided_slice/Split.1': <openvino.inference_engine.ie_api.OutputInfo object at 0x00000193935EDBE8>}
[1, 3, 240, 320]
[1, 1, 240, 320]

Runs successfully up to loading the network.


Using the MYRIAD plugin:

NCS2 Plugin

{'add_2/add': <openvino.inference_engine.ie_api.OutputInfo object at 0x000002850F48DB70>, 'lambda_2/strided_slice/Split.1': <openvino.inference_engine.ie_api.OutputInfo object at 0x000002850F48DBE8>}
[1, 3, 240, 320]
[1, 1, 240, 320]
Traceback (most recent call last):
  File "", line 86, in <module>
    sys.exit(main() or 0)
  File "", line 46, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: output->desc().dimsOrder() == inDesc.dimsOrder()

There seems to be something wrong with the dimensions or their ordering, but I have checked the input and output dimensions above and they are correct. That also does not explain why the network loads on CPU and GPU without problems while MYRIAD returns a runtime error. How can I debug this issue?
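Since the assertion complains about dims ordering, one sanity check before stepping into any plugin code is to dump every layer's output dimensions straight from the IR XML. A minimal stdlib-only sketch (an inline IR fragment stands in for the real sample16.xml; the layer names and dims here are illustrative, matching the shapes printed above, not taken from the actual model file):

```python
import xml.etree.ElementTree as ET

# Inline stand-in for the real IR file; in practice: ET.parse("sample16.xml").getroot()
IR_XML = """
<net name="sample" version="5">
  <layers>
    <layer id="0" name="data" precision="FP16" type="Input">
      <output>
        <port id="0"><dim>1</dim><dim>3</dim><dim>240</dim><dim>320</dim></port>
      </output>
    </layer>
    <layer id="1" name="add_2/add" precision="FP16" type="Eltwise">
      <output>
        <port id="2"><dim>1</dim><dim>1</dim><dim>240</dim><dim>320</dim></port>
      </output>
    </layer>
  </layers>
</net>
"""

def dump_output_dims(root):
    """Return {layer name: [output dims per port]} parsed from an IR <net> element."""
    info = {}
    for layer in root.iter("layer"):
        out = layer.find("output")
        if out is None:
            continue
        dims = [[int(d.text) for d in port.findall("dim")]
                for port in out.findall("port")]
        info[layer.get("name")] = dims
    return info

root = ET.fromstring(IR_XML)
for name, dims in dump_output_dims(root).items():
    print(name, dims)
```

Comparing this per-layer dump against what the plugin reports can show where an unexpected dims order first appears.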

3 Replies


I have the same problem. Were you able to solve it?


E: [xLink] [    158539] dispatcherEventSend:924    Write failed header -7 | event XLINK_WRITE_REQ

E: [xLink] [    158539] eventSchedulerRun:584    Event sending failed
E: [xLink] [    159902] dispatcherEventReceive:347    dispatcherEventReceive() Read failed -1 | event 0x7ff30fffeee0 XLINK_WRITE_REQ

E: [xLink] [    159902] eventReader:233    eventReader stopped
E: [watchdog] [    159903] sendPingMessage:132    Failed send ping message: X_LINK_ERROR
E: [xLink] [    159903] XLinkReadDataWithTimeOut:1343    Event data is invalid
E: [ncAPI] [    159903] ncGraphAllocate:1828    Can't read output tensor descriptors of the graph, rc: X_LINK_ERROR
Traceback (most recent call last):
  File "", line 293, in <module>
    sys.exit(main() or 0)
  File "", line 204, in main
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 395, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 406, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: Failed to allocate graph: NC_ERROR


Dear Vici,

First, you didn't say much about your model. Is it a custom model? Please make sure that your layers are supported:

If your layers are supported, then it could be an OpenVINO bug. Rather than have you go through all the trouble of debugging, I'd prefer that you attach a zip file with your model to this ticket and let me investigate. If you'd rather not publicize your model, let me know and I will PM you so that you can send it to me privately.
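In the meantime, a quick way to spot unsupported layers yourself is to collect the set of layer types in the IR and diff it against the supported-layers table in the documentation. A stdlib-only sketch; the SUPPORTED set and the inline IR fragment below are tiny illustrative stand-ins, not the real MYRIAD list or your model:

```python
import xml.etree.ElementTree as ET

# Illustrative subset only -- consult the official docs for the real MYRIAD list.
SUPPORTED = {"Input", "Convolution", "ReLU", "Pooling", "Eltwise", "Split"}

# Inline stand-in for the real IR; in practice: ET.parse("sample16.xml").getroot()
IR_XML = """
<net name="sample" version="5">
  <layers>
    <layer id="0" name="data" type="Input"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="custom1" type="MyCustomOp"/>
  </layers>
</net>
"""

root = ET.fromstring(IR_XML)
types_in_model = {layer.get("type") for layer in root.iter("layer")}
unsupported = sorted(types_in_model - SUPPORTED)
print("layer types:", sorted(types_in_model))
print("not in supported list:", unsupported)
```

Any type that shows up in the "not in supported list" line is the first thing to check against the device's documentation.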

You can debug it as follows:

1. Download the OpenVINO dldt open-source repository:

2. Build a debug version of the Inference Engine by following this README:

3. Build a debug version of the VPU plugin, which has also been completely open-sourced, by studying this:

Please make sure to regenerate your IR using the DLDT open source too (don't use your existing IR).

By stepping into the IE and VPU source code, you will narrow down the issue.





I have the same issue. I used the HETERO plugin to find out which layer gives me the error, and it is this one:

<layer id="84" name="2848/Split" precision="FP16" type="Split">
    <data axis="1"/>
    <port id="0"/>
    <port id="1"/>
    <port id="2"/>
</layer>

This layer is generated automatically by the Model Optimizer, and when I assign affinity (MYRIAD) to this layer it gives me this error:
RuntimeError: AssertionFailed: !onlyUsedOutputs.empty()

And if I use only the MYRIAD plugin, it gives me this error:
RuntimeError: AssertionFailed: _allocatedIntermData.count(topParent) > 0

I also checked whether the layer is supported, and it is. Can you give me some tips on how to investigate the problem?
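The !onlyUsedOutputs.empty() assertion suggests the MYRIAD plugin found a Split whose outputs are never consumed. You can check for that directly in the IR by matching each Split layer's output ports against the <edges> section. A stdlib-only sketch (the inline fragment stands in for your real IR; the layer ids and the dangling port are illustrative):

```python
import xml.etree.ElementTree as ET

# Inline stand-in: a Split (id 84) whose output port 2 feeds a consumer,
# while port 1 dangles -- the situation the assertion appears to reject.
IR_XML = """
<net name="sample" version="5">
  <layers>
    <layer id="83" name="prev" type="ReLU"/>
    <layer id="84" name="2848/Split" type="Split">
      <input><port id="0"/></input>
      <output><port id="1"/><port id="2"/></output>
    </layer>
    <layer id="85" name="next" type="ReLU"/>
  </layers>
  <edges>
    <edge from-layer="83" from-port="0" to-layer="84" to-port="0"/>
    <edge from-layer="84" from-port="2" to-layer="85" to-port="0"/>
  </edges>
</net>
"""

def unused_split_outputs(root):
    """Return {Split layer name: [output port ids with no consuming edge]}."""
    consumed = {(e.get("from-layer"), e.get("from-port"))
                for e in root.iter("edge")}
    dangling = {}
    for layer in root.iter("layer"):
        if layer.get("type") != "Split":
            continue
        out = layer.find("output")
        ports = [] if out is None else out.findall("port")
        unused = [p.get("id") for p in ports
                  if (layer.get("id"), p.get("id")) not in consumed]
        if unused:
            dangling[layer.get("name")] = unused
    return dangling

root = ET.fromstring(IR_XML)
print(unused_split_outputs(root))
```

If this reports dangling output ports for your auto-generated Split, that would be consistent with the assertion, and trimming or consuming those outputs (e.g. upstream in the original model) would be the thing to try.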
