Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Custom IR TensorFlow model cannot run with MYRIAD 2

赵__明
Beginner

Hello, 

I trained an FCN model with TensorFlow 1.10.1 and froze it to inference_graph.pb, then converted it with the OpenVINO Model Optimizer (producing inference_graph.bin and inference_graph.xml). The model works on the CPU device through the Inference Engine, but when I switch to MYRIAD, loading the network fails with this error:

File "ie_api.pyx", line 85, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 92, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: AssertionFailed: _allocatedIntermData.count(topParent) > 0
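
For reference, a minimal load that reproduces this error looks roughly like the sketch below (assuming the 2019 R3 Python API; file names as above):

from openvino.inference_engine import IECore, IENetwork

ie = IECore()
# Read the IR generated by the Model Optimizer.
net = IENetwork(model="inference_graph.xml", weights="inference_graph.bin")
# Succeeds with device_name="CPU", but raises the RuntimeError above on "MYRIAD".
exec_net = ie.load_network(network=net, device_name="MYRIAD")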

I searched the forum and found that https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/824981 describes a similar problem, but there was no solution. I also tried the newest debug version (https://github.com/opencv/dldt), with the same result. So I have pasted my error messages and model here; I hope someone can kindly help me.

The frozen model (inference_graph.pb) converts to IR successfully:

$ python3 ../../model-optimizer/mo_tf.py  --input_model  inference_graph.pb --input_shape [200,3000,1,3] --reverse_input_channels
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/zm/dldt/inference-engine/build/inference_graph.pb
    - Path for generated IR:     /home/zm/dldt/inference-engine/build/.
    - IR output name:     inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [200,3000,1,3]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     unknown version
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/zm/dldt/inference-engine/build/./inference_graph.xml
[ SUCCESS ] BIN file: /home/zm/dldt/inference-engine/build/./inference_graph.bin
[ SUCCESS ] Total execution time: 5.81 seconds. 

The IR model cannot run on MYRIAD (but runs on CPU):

$ python3 /home/zm/intel/openvino_2019.3.376/deployment_tools/tools/benchmark_tool/benchmark_app.py -m inference_graph.xml -d MYRIAD
[Step 1/11] Parsing and validating input arguments
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05
[ INFO ] Device info
MYRIAD
myriadPlugin............ version 2.1
Build................... 32974

[Step 3/11] Reading the Intermediate Representation network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision: FP16
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[Info ][VPU][MyriadPlugin] Device #0 MYRIAD-X (USB protocol) allocated
[ ERROR ] AssertionFailed: _allocatedIntermData.count(topParent) > 0
Traceback (most recent call last):
File "/home/zm/intel/openvino_2019.3.376/deployment_tools/tools/benchmark_tool/benchmark_app.py", line 82, in main
exe_network = benchmark.load_network(ie_network, perf_counts, args.number_infer_requests)
File "/opt/intel/openvino_2019.3.376/python/python3.6/openvino/tools/benchmark/benchmark.py", line 127, in load_network
num_requests=number_infer_requests or 0)
File "ie_api.pyx", line 85, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 92, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: AssertionFailed: _allocatedIntermData.count(topParent) > 0

1 Solution
David_C_Intel
Employee

Hi  赵, 明,

Thank you for your patience.

Could you please try the following:

  • Turn off VPU_HW_STAGES_OPTIMIZATION and re-test.
  • Add the following right after the IECore() declaration (line 113 of your script):
if args.device == "MYRIAD":
    ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")
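
For context, a minimal sketch of where this call sits in a typical script (file names taken from this thread; read_network assumes the 2020.x Python API, while older releases use IENetwork instead):

from openvino.inference_engine import IECore

ie = IECore()
# The config must be set before the network is loaded onto the device.
ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")
net = ie.read_network(model="inference_graph.xml", weights="inference_graph.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")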

Regards, 

David


赵__明
Beginner

I fixed the model above and it now loads successfully; however, benchmark_app fails at step 10:

$ /home/zm/dldt/inference-engine/bin/intel64/Release/benchmark_app -m FP16/inference_graph.xml -d MYRIAD
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.

[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. custom_releases/2019/R3_ac8584cb714a697a12f1f30b7a3b78a5b9ac5e05
Description ....... API
[ INFO ] Device info:
MYRIAD
myriadPlugin version ......... 2.1
Build ........... 32974

[Step 3/11] Reading the Intermediate Representation network
[ INFO ] Loading network files
[ INFO ] Read network took 3.50 ms
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 200, precision: MIXED
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 1595.37 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'X' precision U8, dimensions (NCHW): 200 3 3072 1
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'X' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'X' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'X' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'X' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests, limits: 60000 ms duration)
E: [xLink] [ 663113] [EventRead00Thr] eventReader:218 eventReader thread stopped (err -4)
E: [xLink] [ 663113] [Scheduler00Thr] eventSchedulerRun:576 Dispatcher received NULL event!
E: [global] [ 663113] [benchmark_app] XLinkReadDataWithTimeOut:1494 Event data is invalid
E: [ncAPI] [ 663113] [benchmark_app] checkGraphMonitorResponse:1792 XLink error, rc: X_LINK_ERROR
E: [ncAPI] [ 663113] [benchmark_app] ncGraphQueueInference:3979 Can't get trigger response
E: [watchdog] [ 663788] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 664787] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 665786] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 666785] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 667784] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 668783] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 669782] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 670781] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 671781] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 672780] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 673779] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 674778] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 675777] [WatchdogThread] sendPingMessage:121 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 675777] [WatchdogThread] watchdog_routine:315 [0x55ee881194e0] device, not respond, removing from watchdog

I get the same error when the stick is connected directly to a Raspberry Pi 3B+.

 

David_C_Intel
Employee

Hello  赵, 明,

Thank you for reaching out.

We are going to look into it. Since your model runs on the CPU but not on the NCS2, there might be an issue in the conversion to IR format.

Could you please answer the following:

 - Could you confirm if the command you used to convert your model to IR is the one detailed above? 

 - Which OS version are you using?

 

Regards, 

David

赵__明
Beginner

Thanks for your reply. Everything is now solved on my side, but I think there are still some things you could improve:

1. The NCS2 does not seem to support very large input dimensions; based on my tests, I guess the limit is somewhere around 2048.

2. The IR conversion process could also be improved. For example, we cannot manually crop the input tensor dimensions of a layer using tf.slice, tf.keras.layers.Cropping2D, tf.image.resize_image_with_crop_or_pad, etc. I hope a future version of the Model Optimizer will support this; the sketch below illustrates the kind of ops I mean.
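
To illustrate item 2, these are the kinds of cropping ops in question (an illustrative TF 1.x sketch; the shapes are examples only):

import tensorflow as tf  # TF 1.x graph mode

x = tf.placeholder(tf.float32, [1, 2048, 1, 3], name="X")
# Crop the 2048-sample dimension down to 1024 with tf.slice...
sliced = tf.slice(x, begin=[0, 512, 0, 0], size=[1, 1024, 1, 3])
# ...or with the equivalent Keras layer (removes 512 from each end of dim 1).
cropped = tf.keras.layers.Cropping2D(cropping=((512, 512), (0, 0)))(x)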

Thanks for developing OpenVINO; it helps me a lot in my project.

 

 

David_C_Intel
Employee

Hi  赵, 明,

I am glad you could solve your issue. Thank you for your feedback, we really appreciate it. We will pass it to the development team for it to be checked.

If you have more questions, feel free to contact us again.

Best regards, 

David

赵__明
Beginner

Well, I have to come back again...

My model now runs on CPU, MYRIAD, and also the Raspberry Pi without any error message, but there is a serious problem: the CPU result is completely different from the MYRIAD result, and the MYRIAD result is wrong, not even close.

My model structure is the same as a classic 2D U-Net; the main difference is the input data. My data is not image data; it is a numpy array with shape [4,2048,1,3]. So I would like to ask: does MYRIAD only work for image data? Can we configure MYRIAD ourselves so that it behaves like the CPU (which gives correct results every time)? We really hope to deploy our model on the Raspberry Pi, where only MYRIAD is available.

I hope Intel can provide at least some instructions on how to debug this.
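
A minimal way to quantify the device discrepancy is to run the same blob on both plugins and compare, roughly as in the sketch below (assuming the 2020.x Python API; the input name 'X' comes from the benchmark log above, and the .npz file name and key are hypothetical):

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="inference_graph.xml", weights="inference_graph.bin")
data = np.load("sample.npz")["data"]  # hypothetical key; must match the network input shape
outputs = {}
for dev in ("CPU", "MYRIAD"):
    exec_net = ie.load_network(network=net, device_name=dev)
    res = exec_net.infer({"X": data})
    outputs[dev] = next(iter(res.values()))  # single-output model assumed
# Report the worst per-element deviation between the two devices.
print(np.max(np.abs(outputs["CPU"] - outputs["MYRIAD"])))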

David_C_Intel
Employee

Hi  赵, 明,

Thanks for reaching out again.

It is normal for the results to differ somewhat from one device to the other, as a CPU has more processing capacity than a VPU. However, they should not differ that much.

Could you please answer the following:

  1. Which OS are you working on?
  2. Have you updated the OpenVINO™ toolkit to the latest version available (2020.1)?
  3. Please provide us with the latest frozen model you are using and the Model Optimizer command used to convert it to IR format.
  4. Please provide us with a sample numpy array and its expected output.
  5. Could you share the results you get from CPU vs MYRIAD?

 

Regards,

David

赵__明
Beginner

1. I am using Ubuntu 18.04.3 LTS.

2. Yes, I updated to the 2020 release and the same problem exists.

3. The command is (with the attached model):

python3 /opt/intel/openvino_2020.1.023/deployment_tools/model_optimizer/mo_tf.py --input_model inference_graph.pb --input_shape [1,2048,1,3] --reverse_input_channels --data_type FP16 --output_dir FP16

4 and 5. Please see the attached file. Note that the output (resXXX) can be plotted; then you can see the clear difference.

Thanks for the help.

 

David_C_Intel
Employee

Hello  赵, 明,

Thanks for the information given.

Could you please send us the sample code and command used to run inference on the Intel® Neural Compute Stick 2 for us to test it on our end?

 

Best Regards,

David

赵__明
Beginner

This is the sample code to test the npz data.

The command is:

python3 classification_sample_new_by_station_and_network3.py -i npztestdata -m  inference_graph.xml -d CPU -dt npz -o output

You need to put the .npz file in the npztestdata directory.

Best regards

 

mz

David_C_Intel
Employee

Hello  赵, 明,

Thanks for your reply.

Could you please tell us which base model you used for training?

It is possible that the model you used is not supported by Myriad, which would explain the wrong results you see.

Regards,

David

赵__明
Beginner

I am using U-Net; the only difference is that our data is one-dimensional, so the width of the U-Net is set to 1.

 

David_C_Intel
Employee

Hi  赵, 明,

Thank you for your patience.

Could you please try the following:

  • Turn off VPU_HW_STAGES_OPTIMIZATION and re-test.
  • Add the following right after the IECore() declaration (line 113 of your script):
if args.device == "MYRIAD":
    ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")

Regards, 

David

Jay2
Novice

Hi David,

I encountered the same problem: the results using CPU and MYRIAD are different (CPU is correct but MYRIAD is not). I tried setting

ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")

but it did not work. I also used ie.get_config to check the status, and it showed that the config had been set. Any suggestions? Thanks.
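
Such a check might look roughly like this (a sketch assuming the IECore Python API):

from openvino.inference_engine import IECore

ie = IECore()
ie.set_config({'VPU_HW_STAGES_OPTIMIZATION': 'NO'}, "MYRIAD")
# Read the value back to confirm the plugin accepted it (expected: 'NO').
print(ie.get_config("MYRIAD", "VPU_HW_STAGES_OPTIMIZATION"))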

By the way, the results with VPU_HW_STAGES_OPTIMIZATION set to YES and NO are exactly the same.

Jay2

赵__明
Beginner

Thank you very much!!

That solves the problem, although the CPU (161 ms) seems to be much faster than the MYRIAD (2005 ms) in this case.

RAHUL
Beginner

Can anyone help me set the correct --input_shape? I am getting these errors:

[ ERROR ]  Cannot infer shapes or values for node "dense_1/MatMul".
[ ERROR ]  'bool' object is not iterable
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_matmul_infer at 0x7fd1c1480ae8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  'bool' object is not iterable

Stopped shape/value propagation at "dense_1/MatMul" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 'bool' object is not iterable
Stopped shape/value propagation at "dense_1/MatMul" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
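
A general hint rather than a confirmed fix: this error usually means the shape reaching dense_1/MatMul could not be inferred, so explicitly naming the model's input node and its full shape sometimes helps. The node name and shape below are purely illustrative:

$ python3 mo_tf.py --input_model frozen_model.pb --input input_1 --input_shape [1,224,224,3] --log_level=DEBUG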
