Intel® Distribution of OpenVINO™ Toolkit

Error FusedBatchNormV3 for Model Optimizer

Yasunori__Aoki
Beginner

Hello

I am using Colaboratory to convert a model to IR for use on the NCS2; the model was transfer-trained from a COCO-pretrained base.

Running the Model Optimizer results in the following error:

Can you tell me a workaround?

Thank you!!

 

[version]

TensorFlow version: 1.15.0

l_openvino_toolkit_dev_ubuntu18_p_2019.3.376.tgz

 

[mo_tf cmd]

!python mo_tf.py --input_model /content/output_inference_graph/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config /content/output_inference_graph/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config /content/output_inference_graph/pipeline.config \
--data_type FP16 \
--reverse_input_channels \
--output_dir /content/output/origin/ssd_mobilenet_v2_coco_2018_03_29

 

[error code]

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /content/output_inference_graph/frozen_inference_graph.pb
- Path for generated IR: /content/output/origin/ssd_mobilenet_v2_coco_2018_03_29
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /content/output_inference_graph/pipeline.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /content/output_inference_graph/ssd_v2_support.json
Model Optimizer version: 2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:

[ ERROR ] FusedBatchNormV3 (35)

[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_9_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_9_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_12_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_12_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_depthwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_pointwise/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_2_1x1_256/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_3_1x1_128/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_3_3x3_s2_256/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_4_1x1_128/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_5_1x1_64/BatchNorm/FusedBatchNormV3
[ ERROR ] FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/FusedBatchNormV3
[ ERROR ] Part of the nodes was not converted to IR. Stopped.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24.

 

David_C_Intel
Employee

Hello Aoki,

Thanks for reaching out.

This seems to be a known issue that should be fixed in an upcoming release.

Could you please send us your model so we can test it on our end?

 

Regards,

David

Yasunori__Aoki
Beginner

Hello David

Thank you for replying.

Yes, I can send you the model.

Is there a release planned for January?

In some cases, converting a transfer-trained model has succeeded.
Can I work around this issue by downgrading the OpenVINO package?

Regards,

Aoki

David_C_Intel
Employee

Hello Aoki,

Thank you for your patience.

Could you please answer the following:

 - Did you make any modifications to the configuration file (json)?

 - Which version of Python & TensorFlow did you use to train the model?

As for your other question, we cannot comment on future software release dates. Previous releases are likely to show the same behavior, so we recommend using the latest release.


Best regards,

David

Yasunori__Aoki
Beginner

Hi David

Thank you for replying.

Sorry for the late reply.

>Did you make any modifications to the configuration file (json)?

 Yes, I created it based on ssd_v2_support.json (the relevant edit is sketched after my answers below).

 The JSON file is attached with a .c extension.

>Which version of Python & TensorFlow did you use to train the model?

 TensorFlow 1.15.0

 Python 3.6.9
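
For reference, the edit follows the commonly suggested change for models frozen with TensorFlow 1.15: in the "ObjectDetectionAPISSDPostprocessorReplacement" section, the start point "Postprocessor/Cast" is replaced with "Postprocessor/Cast_1". A sketch of the relevant fragment (based on the stock ssd_v2_support.json; the attached file is authoritative):

"instances": {
    "start_points": [
        "Postprocessor/Shape",
        "Postprocessor/scale_logits",
        "Postprocessor/Tile",
        "Postprocessor/Reshape_1",
        "Postprocessor/Cast_1"
    ],
    ...
}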

Regards,

Aoki

 

David_C_Intel
Employee

Hi Aoki,

 

Thank you for providing the .json file.

We tested it using the latest OpenVINO™ toolkit version (2020.1) and it ran successfully with this command:

python3 /opt/intel/openvino_2020.1.023/deployment_tools/model_optimizer/mo_tf.py \
--input_model content/output_inference_graph/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config content/output_inference_graph/ssd_v2_support_cast1.json \
--tensorflow_object_detection_api_pipeline_config content/output_inference_graph/pipeline.config \
--data_type FP16 \
--reverse_input_channels \
--output_dir content/output/origin/ssd_mobilenet_v2_coco_2018_03_29

 

Feel free to reach out again if you have additional questions.

Best regards,

David

 

 

Yasunori__Aoki
Beginner

Hi David,

Thanks for the very good news!

What I need is the OpenVINO™ toolkit for Linux*, but I don't know where to download the latest OpenVINO™ toolkit version (2020.1).

I downloaded l_openvino_toolkit_runtime_raspbian_p_2020.1.023.tgz from the following site, but it does not include the Model Optimizer:
https://download.01.org/opencv/2020/openvinotoolkit/2020.1/

The following thread confirms that the runtime_raspbian package does not include the Model Optimizer:
https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/849209

Best regards,

Aoki

David_C_Intel
Employee

Hi Aoki,

You are right, as stated in the thread you referred to, the OpenVINO™ toolkit for Raspbian OS package does not include the Model Optimizer.

If you want to run inference on the Raspberry Pi, you will need to convert the model to IR format with the Model Optimizer on a Windows, Linux, or macOS system, then move the resulting IR files to the Raspberry Pi.
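
For example (a minimal sketch; the host paths and the Pi address below are only placeholders):

python3 mo_tf.py --input_model frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config ssd_v2_support_cast1.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config \
--data_type FP16 --reverse_input_channels --output_dir ./ir_output
scp ./ir_output/frozen_inference_graph.xml ./ir_output/frozen_inference_graph.bin pi@raspberrypi.local:~/models/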

You can download the latest OpenVINO™ toolkit release here.

Best regards,

David

Yasunori__Aoki
Beginner

 

Hi David,

With your support, the errors in the IR model conversion have been resolved.

Thank you very much.

 

However, when I run the following code on the Raspberry Pi using "l_openvino_toolkit_runtime_raspbian_p_2020.1.023", an error occurs. Does this error indicate that the IR model was not converted correctly?

 

net = cv2.dnn.readNet('frozen_inference_graph.xml','frozen_inference_graph.bin')

 

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Check 'axis < static_cast<size_t>(input_rank)' failed at /teamcity/work/scoring_engine_build/releases_2020_1/ngraph/src/ngraph/op/gather.cpp:140:
While validating node 'Gather[Gather_1718](patternLabel_1714: float{10,20,30}, patternLabel_1715: int64_t{5}, patternLabel_1717: int64_t{1}) -> (??)':
The axis must => 0 and <= input_rank (axis: 4294967295).

Backend terminated (returncode: -6)
Fatal Python error: Aborted

Current thread 0x76fa3ce0 (most recent call first):

 

Best regards,

Aoki

David_C_Intel
Employee

Hi Aoki,

Thank you for your reply.

This is an issue with the latest OpenVINO™ toolkit version (2020.1). The "axis" error is caused by the new IR v10 format. 

You can generate the model with the following parameter and test it: 

--generate_deprecated_IR_V7
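
For example, appended to the Colab command from your first post (using your modified ssd_v2_support_cast1.json):

!python mo_tf.py --input_model /content/output_inference_graph/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config /content/output_inference_graph/ssd_v2_support_cast1.json \
--tensorflow_object_detection_api_pipeline_config /content/output_inference_graph/pipeline.config \
--data_type FP16 \
--reverse_input_channels \
--generate_deprecated_IR_V7 \
--output_dir /content/output/origin/ssd_mobilenet_v2_coco_2018_03_29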

If it does not work, we recommend installing the previous OpenVINO™ toolkit version (2019 R3.1) on a Windows, Linux, or macOS system and converting the model there. That generates the IR v7 format, which should work with the Intel® Neural Compute Stick 2 on the Raspberry Pi.
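
Once the model loads, a typical cv2.dnn inference flow on the Raspberry Pi with the NCS2 looks roughly like this (a minimal sketch; the image path and the 300x300 input size are illustrative and should match your pipeline.config):

import cv2

# Load the IR files produced by the Model Optimizer.
net = cv2.dnn.readNet('frozen_inference_graph.xml', 'frozen_inference_graph.bin')

# Route inference through the Inference Engine to the NCS2 (MYRIAD device).
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

# Preprocess one frame and run a forward pass.
frame = cv2.imread('test.jpg')
blob = cv2.dnn.blobFromImage(frame, size=(300, 300))
net.setInput(blob)
detections = net.forward()
print(detections.shape)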

Best regards,

David

Yasunori__Aoki
Beginner

Hi David,

Thank you for the advice about the option.
Setting it made the conversion work.

Best regards,

Aoki
