Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference works on CPU but not on NCS 2

Polgar__Peter
Beginner

Hi,

I have a custom-trained network that was exported to .onnx format. The Model Optimizer converts that ONNX file successfully and generates .bin, .xml, and .mapping files. When I try to do inference with the .xml file using OpenVINO's built-in benchmark_app on CPU, it works, but when I try the same process on an NCS 2 device, it produces an error message. Here are the commands used and their outputs:

First, convert the ONNX file:

ppolgar@comp:/opt/intel/openvino/deployment_tools/model_optimizer$ python3 mo_onnx.py --input_model ~/example.onnx --data_type=FP16 --output_dir ~/try_onnx/
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/ppolgar/example.onnx
	- Path for generated IR: 	/home/ppolgar/try_onnx/
	- IR output name: 	example
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
ONNX specific parameters:
Model Optimizer version: 	2020.2.0-60-g0bc66e26ff

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ppolgar/try_onnx/example.xml
[ SUCCESS ] BIN file: /home/ppolgar/try_onnx/example.bin
[ SUCCESS ] Total execution time: 0.87 seconds. 
[ SUCCESS ] Memory consumed: 88 MB.

Then run inference on an NCS 2 device:

ppolgar@comp:~/inference_engine_cpp_samples_build/intel64/Release$ ./benchmark_app -m ~/try_onnx/example.xml -d MYRIAD.1.4-ma2480
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.

[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine: 
	API version ............ 2.1
	Build .................. 42025
	Description ....... API
[ INFO ] Device info: 
	MYRIAD
	myriadPlugin version ......... 2.1
	Build ........... 42025

[Step 3/11] Setting device configuration
[Step 4/11] Reading the Intermediate Representation network
[ INFO ] Loading network files
[ INFO ] Read network took 3.58 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ ERROR ] Failed to compile layer "_2__Lstm/LSTMCell_sequence": AssertionFailed: outputs.size() == 1
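
For comparison, the same command with only the device changed completes successfully on the CPU:

./benchmark_app -m ~/try_onnx/example.xml -d CPU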

It seems like a bug.

The ONNX, XML, BIN, and mapping files used are attached to this post.

Sorry for my English. Thanks in advance.

Peter

1 Reply
JAIVIN_J_Intel
Employee

Hi Peter,

It seems like this layer is not supported by the MYRIAD plugin.

Could you try using the Heterogeneous plugin to execute the unsupported layers on a fallback device such as the CPU?

For example: -d HETERO:MYRIAD,CPU
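
Applied to your command, that would look roughly like this (a sketch using the model path and sample build from your post):

./benchmark_app -m ~/try_onnx/example.xml -d HETERO:MYRIAD,CPU

You can also check up front which layers the MYRIAD plugin can execute by querying the network from the Python API. A minimal sketch, assuming the 2020.x Python API and the file names from your post:

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="example.xml", weights="example.bin")

# query_network returns a {layer_name: device} map for the layers the plugin supports
supported = ie.query_network(network=net, device_name="MYRIAD")
unsupported = [name for name in net.layers if name not in supported]
print("Layers not supported on MYRIAD:", unsupported)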

Regards,

Jaivin
