Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to add support for FusedBatchNormV3 (TensorFlow) in the Model Optimizer?

Alves_Tasca__Arthur

I am trying to understand how to add support for the TensorFlow layer FusedBatchNormV3 in the OpenVINO Model Optimizer. I am running Ubuntu 18.04 and using TensorFlow 1.15.

My goal is to run several tests with standard pre-trained networks on the Neural Compute Stick 2, and for now I am working with ResNet50. I downloaded the network as follows:

import tensorflow as tf
keras = tf.keras

input_shape = (200,200,3)
model = keras.applications.resnet50.ResNet50(input_shape=input_shape,
                                              include_top=False, 
                                              weights='imagenet')

Afterwards, I froze `model` as described in this post.
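For completeness, the freezing step looked roughly like this. It is only a sketch of the approach from that post (not the exact script), assuming the `model` object created above and TensorFlow 1.15 in graph mode:

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Grab the session that Keras is using (TF 1.x graph mode).
sess = tf.compat.v1.keras.backend.get_session()

# Fold all variables into constants; output node names are taken from the Keras model.
output_names = [out.op.name for out in model.outputs]
frozen_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)

# Serialize the frozen graph for the Model Optimizer.
tf.io.write_graph(frozen_graph_def, '.', 'model.pb', as_text=False)

(Setting the Keras learning phase to 0 before building the model might avoid the cond/ branches visible in the node names below, but I would expect the FusedBatchNormV3 nodes themselves to remain.)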

I am running the model optimizer with the command:

sudo python3 mo.py \
--input_model <PATH_TO_MODEL>/model.pb \
--output_dir <PATH_TO_MODEL>/ \
--data_type FP16 \
-b 1

But I am getting this error message:

[ ERROR ]  1 elements of 64 were clipped to infinity while converting a blob for node [['conv1_bn_1/cond/FusedBatchNormV3_1/ReadVariableOp_1/Output_0/Data__const']] to <class 'numpy.float16'>. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #76. 
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      FusedBatchNormV3 (53)
[ ERROR ]          conv1_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]          conv2_block1_0_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]          conv2_block1_1_bn_2/cond/FusedBatchNormV3_1
...
[ ERROR ]          conv5_block3_3_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.
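(As far as I understand, the very first [ ERROR ] line is only a precision warning rather than the blocker: float16 cannot represent magnitudes above roughly 65504, so one of the 64 values in that constant saturates to infinity when the blob is converted to FP16. A quick NumPy check, just to illustrate the range limit:

import numpy as np

# float16 saturates around 65504; anything larger becomes inf
print(np.finfo(np.float16).max)                            # 65504.0
print(np.array(1e5, dtype=np.float32).astype(np.float16))  # inf

The actual showstopper seems to be the list of unsupported FusedBatchNormV3 operations above.)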

I found this forum post suggesting to downgrade TensorFlow to version 1.13, but after doing so I ran into another error with the same layer (presumably because FusedBatchNormV3 simply does not exist in 1.13):

[ ERROR ]  Cannot infer shapes or values for node "conv1_bn_1/cond/FusedBatchNormV3_1".
[ ERROR ]  Op type not registered 'FusedBatchNormV3' in binary running on <USER>. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

My current idea is to add support for FusedBatchNormV3 through the sub-graph replacement mechanism of the Model Optimizer (described on this page). I would like to replace FusedBatchNormV3 with the ScaleShift operation, since FusedBatchNorm is said there to be mapped to it, but I do not know how to find this ScaleShift object. Can someone please help me?
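In the meantime, the only workaround I can think of is to rewrite the op type directly in the frozen GraphDef before running mo.py, since FusedBatchNormV3 appears to take the same five inputs (x, scale, offset, mean, variance) as the older FusedBatchNorm and only adds an extra output plus the 'U' attribute. This is a rough, unvalidated sketch; it assumes an inference-only graph in which only the first output of each batch-norm node is consumed:

import tensorflow as tf

# Load the frozen graph produced earlier (the path is just an example).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Downgrade FusedBatchNormV3 nodes to plain FusedBatchNorm and drop the
# 'U' attribute, which only exists on the V2/V3 variants.
for node in graph_def.node:
    if node.op == 'FusedBatchNormV3':
        node.op = 'FusedBatchNorm'
        if 'U' in node.attr:
            del node.attr['U']

with tf.io.gfile.GFile('model_fbn.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())

But this feels fragile, so I would still prefer to learn the proper way to extend the Model Optimizer.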

David_C_Intel
Employee

Hello Arthur,

Thank you for reaching out.

Could you please send us your frozen model and pipeline.config files for us to test on our end?

 

Regards,

David

Alves_Tasca__Arthur

Dear David,

The frozen model is available at this link, since it is too big to attach here as a normal file.

I have not specified any pipeline.config. If you want me to send it anyway, please let me know where I can find the default one so that I can send it here. For now, here is the log printed when I launch the Model Optimizer:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/arthur/Desktop/master_thesis/models/ResNet50/tf_model.pb
	- Path for generated IR: 	/home/arthur/Desktop/master_thesis/models/ResNet50/
	- IR output name: 	tf_model
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2019.3.0-375-g332562022

B.r.,

Arthur.

David_C_Intel
Employee

Hi Arthur,

Thank you for your reply.

Regarding this issue with FusedBatchNormV3, a fix should be implemented in an upcoming release.

If you have more questions, feel free to contact us again.

Best regards,

David

Bone__Joshua
Beginner

Hi David,

Do you have an estimate of when to expect this fix? I am also having this issue.

Thank you

DavidC (Intel) wrote:

Regarding this issue with FusedBatchNormV3, a fix should be implemented in an upcoming release.

David_C_Intel
Employee

Hello Joshua,

Thanks for reaching out! 

Intel does not comment on future software release dates. We will make an announcement once it is available for download, so please keep an eye on the forum.

Regards, 

David 

Alves_Tasca__Arthur

Dear David,

I am glad to hear that the issue will be solved in the next release of your SDK, but that still does not really answer my question: how can I add support for a specific custom layer? Most importantly, I need to understand how to find which layer in your model representation back end matches the layer I have at hand, so that I can follow the custom-layer instructions on your web page.

If you (Intel) could help me add support for this layer through something similar to what I have proposed, it would also help other people add support for custom layers on their own in the future.

Best regards,

Arthur.

David_C_Intel
Employee

Hello Arthur,

Thanks for your clarification, I now understand your request better and we will try our best to help you with it. I found this resource online; although it has not been validated by Intel, it might be of help:

https://github.com/david-drew/OpenVINO-Custom-Layers

I don't have any other suggestions on how to do this at the moment, but we will get back to you as soon as we have something else to share.

 

Regards,

David

David_C_Intel
Employee

Hi Arthur,

Could you please try converting your model using the latest OpenVINO™ toolkit release, 2020.1.023?

Let us know if you have further questions.


Best regards,

David

 

 
