Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

RuntimeError: AssertionFailed: isOrdersCompatible(_dimsOrder, dimsOrder)

Hyodo__Katsuya
Innovator
Hello everyone. When I convert a TensorFlow model with FP16 precision, warnings are displayed but the conversion finishes normally. However, when my test program loads the generated .bin and .xml, an error is raised at plugin.load(). I converted the model to FP32 and tried the same thing; no warnings were displayed and the test program worked normally. It seems that an overflow of numpy.float16 has occurred; is there a way to avoid it? Just to be sure, I attach the generated FP16 model file. (FP16.zip)

https://github.com/PINTO0309/Keras-OneClassAnomalyDetection.git

- Convert script

$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
  --input_model models/tensorflow/weights.pb \
  --output_dir irmodels/tensorflow/FP16 \
  --input input_1 \
  --output global_average_pooling2d_1/Mean \
  --data_type FP16 \
  --batch 1 \
  --log_level WARNING

Model Optimizer arguments:
Common parameters:
  - Path to the Input Model: /home/xxxx/git/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb
  - Path for generated IR: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16
  - IR output name: weights
  - Log level: WARNING
  - Batch: 1
  - Input layers: input_1
  - Output layers: global_average_pooling2d_1/Mean
  - Input shapes: Not specified, inherited from the model
  - Mean values: Not specified
  - Scale values: Not specified
  - Scale factor: Not specified
  - Precision of IR: FP16
  - Enable fusing: True
  - Enable grouped convolutions fusing: True
  - Move mean values to preprocess section: False
  - Reverse input channels: False
TensorFlow specific parameters:
  - Input model in text protobuf format: False
  - Offload unsupported operations: False
  - Path to model dump for TensorBoard: None
  - List of shared libraries with TensorFlow custom layers implementation: None
  - Update the configuration file with input/output node names: None
  - Use configuration file used to generate the model with Object Detection API: None
  - Operations to offload: None
  - Patterns to offload: None
  - Use the config file: None
Model Optimizer version: 1.5.12.49d067a0
[ WARNING ] Incorrect pattern attributes: not all nodes from edges are in nodes. Please, mention all nodes you need in pattern in nodes attribute.
[ WARNING ] Incorrect pattern attributes: not all nodes from edges are in nodes. Please, mention all nodes you need in pattern in nodes attribute.
[ WARNING ] 189 elements of 432 were clipped to zero while converting a blob for node [['Conv1/convolution']]. For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #77.
[ WARNING ] 48 elements of 128 were clipped to zero while converting a blob for node [['expanded_conv_project/convolution']]. For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #77.
[ WARNING ] 8 elements of 384 were clipped to zero while converting a blob for node [['block_1_expand/convolution']]. For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #77.
[ WARNING ] 16 elements of 768 were clipped to zero while converting a blob for node [['block_1_project/convolution']]. For more information please refer to Model Optimizer FAQ (/deployment_tools/documentation/docs/MO_FAQ.html), question #77.
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.xml
[ SUCCESS ] BIN file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.bin
[ SUCCESS ] Total execution time: 5.43 seconds.

- Test program

from openvino.inference_engine import IENetwork, IEPlugin

model_xml = "weights.xml"
model_bin = "weights.bin"
net = IENetwork(model=model_xml, weights=model_bin)
plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)  # ---- Error occurrence place

- Error Message

Traceback (most recent call last):
  File "_openvino_modelload_test.py", line 6, in <module>
    exec_net = plugin.load(network=net)
  File "ie_api.pyx", line 389, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 400, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: AssertionFailed: isOrdersCompatible(_dimsOrder, dimsOrder)
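For reference, the clipping warnings appear to mean that some FP32 weights are too small in magnitude to survive the cast to float16 (anything much below the smallest float16 subnormal, about 5.96e-8, rounds to zero). A minimal numpy sketch of that effect, with made-up blob values:

    import numpy as np

    # Made-up stand-in for a weight blob; the real values come from
    # nodes like Conv1/convolution in the converted model.
    weights = np.random.uniform(-1e-7, 1e-7, size=432).astype(np.float32)

    # Casting to float16 flushes very small magnitudes to zero,
    # which is what the Model Optimizer warning counts.
    fp16 = weights.astype(np.float16)
    clipped = np.count_nonzero((weights != 0) & (fp16 == 0))
    print("%d elements of %d were clipped to zero" % (clipped, weights.size))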
6 Replies
nikos1
Valued Contributor I

Hello Katsuya-san,

Nice project, very inspiring.

I also tried the attached pb and generated FP32/FP16 IR. I did not try Python, but in my limited C++ tests I could not see any issues with FP32 on CPU. In addition, FP16 on GPU worked fine too, in a limited test use case. When I tried on NCS and NCS2, I got ncAPI errors. It seems to me that the issue may be specific to MYRIAD rather than to FP32 vs. FP16.

Do you have a Core CPU system with an Intel HD GPU to try your FP16 IR? If your FP16 IR works with -d GPU, then it could be that some operation is unsupported on MYRIAD NCS(2). The GPU test may help us narrow down the issue.
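For example, keeping your test program as-is and only swapping the device string (a minimal sketch based on the snippet in your post, using the same IEPlugin API):

    from openvino.inference_engine import IENetwork, IEPlugin

    # Same FP16 IR as before, loaded on the integrated GPU instead of
    # MYRIAD, to check whether the failure is device-specific.
    net = IENetwork(model="weights.xml", weights="weights.bin")
    plugin = IEPlugin(device="GPU")
    exec_net = plugin.load(network=net)
    print("FP16 IR loaded on GPU without errors")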

Best regards,

nikos

Hyodo__Katsuya
Innovator
Thank you as always, Nikos.

> Do you have a Core CPU system with an Intel HD GPU to try your FP16 IR?

Yes, I own one.

> If your FP16 IR works with -d GPU, then it could be that some operation is unsupported on MYRIAD NCS(2). The GPU test may help us narrow down the issue.

Certainly, you are right. I forgot to verify with the GPU. I will give it a try as soon as I get home today. If it is a problem in the NCAPI, that would be very sad...
Hyodo__Katsuya
Innovator
@nikos

- Environment
LattePanda Alpha
Ubuntu 16.04
Intel HD Graphics 615
FP16/FP32
Python

"FP16 + GPU" and "FP32 + GPU" worked normally. It is very regrettable and sad... If an unsupported layer is the cause, it is hard to tell unless an "Unsupported layer" error is displayed. I will review the structure of the model or the output layer.
Lee__Sangyun
Novice

Hi, Katsuya-san,

Did you solve the above issue?

I have encountered the same issue, but I can't find any solution.

My network is very simple (just MobileNet for classification)...

So I don't think it comes from an unsupported layer on MYRIAD...

Thank you.

Lee__Sangyun
Novice

Hi, Katsuya-san,

The problem might be from the last layer (probably the reduce_mean layer...).

I added an argument (keepdims=True) to the layer.

Although this requires additional code for squeezing (like np.squeeze), the model seems to work anyway...
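A minimal sketch of the change, in TF 1.x terms (the shapes here are hypothetical; the node in this thread is global_average_pooling2d_1/Mean):

    import tensorflow as tf

    # Hypothetical feature map feeding the global average pooling.
    x = tf.placeholder(tf.float32, [1, 7, 7, 1280])

    # Before: collapses the spatial axes, giving shape (1, 1280).
    # y = tf.reduce_mean(x, axis=[1, 2])

    # After: keeps the reduced axes, so the output stays 4-D
    # (1, 1, 1, 1280), which the MYRIAD plugin seems to accept.
    y = tf.reduce_mean(x, axis=[1, 2], keepdims=True)

After inference, the singleton dimensions can be squeezed out on the host side, e.g. np.squeeze(result).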

Thank you.

Hyodo__Katsuya
Innovator
@Lee, Sangyun

Since the behavior of NCS2 is obviously wrong, I changed my policy and implemented it in TensorFlow Lite instead. Even without using NCS2, sufficient performance can be obtained with the ARM CPU alone. I tuned TensorFlow Lite myself for speed.

RaspberryPi3 + MobileNetV2 + LOC + CPU only (14 FPS, NCS2/NCS unused.)
https://github.com/PINTO0309/Keras-OneClassAnomalyDetection#13-6-keras---tensorflow---tensorflow-lite

Fast-tuned TensorFlow Lite
https://github.com/PINTO0309/Tensorflow-bin.git

My article (Japanese)
https://qiita.com/PINTO/items/0a52062cb6ebe9ef5051
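For reference, CPU-only inference with the converted model looks roughly like this (a minimal sketch assuming a TF 1.13-style tf.lite.Interpreter API and a hypothetical file name weights.tflite):

    import numpy as np
    import tensorflow as tf

    # Load the converted TensorFlow Lite model on the CPU.
    interpreter = tf.lite.Interpreter(model_path="weights.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Placeholder input with the model's expected shape.
    image = np.zeros(input_details[0]["shape"], dtype=np.float32)
    interpreter.set_tensor(input_details[0]["index"], image)
    interpreter.invoke()
    features = interpreter.get_tensor(output_details[0]["index"])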