Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

OpenVINO - UNet - Model Optimizer error

Pedroska
Beginner

Hello,

 

I want to convert my customized UNet model to IR with the latest OpenVINO toolkit Model Optimizer. I initially got errors about input sizes and was able to resolve them by naming the inputs and passing the corresponding flags. Currently I am stuck on the following error, which again complains about the input shape in batch normalization, something I cannot explicitly define myself. Can anybody help?

TensorFlow 1.15.0

Command:

sudo python3 mo_tf.py --input_model /home/svmi/freez/tf_model/customize1.15/output_graph_15.pb --input_shape [1,128,128,1] --input input_X 

Model Optimizer arguments: 

Common parameters: 

    - Path to the Input Model:     /home/svmi/freez/tf_model/customize1.15/output_graph_15.pb 

    - Path for generated IR:     /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/. 

    - IR output name:     output_graph_15 

    - Log level:     ERROR 

    - Batch:     Not specified, inherited from the model 

    - Input layers:     input_X 

    - Output layers:     Not specified, inherited from the model 

    - Input shapes:     [1,128,128,1] 

    - Mean values:     Not specified 

    - Scale values:     Not specified 

    - Scale factor:     Not specified 

    - Precision of IR:     FP32 

    - Enable fusing:     True 

    - Enable grouped convolutions fusing:     True 

    - Move mean values to preprocess section:     False 

    - Reverse input channels:     False 

TensorFlow specific parameters: 

    - Input model in text protobuf format:     False 

    - Path to model dump for TensorBoard:     None 

    - List of shared libraries with TensorFlow custom layers implementation:     None 

    - Update the configuration file with input/output node names:     None 

    - Use configuration file used to generate the model with Object Detection API:     None 

    - Use the config file:     None 

Model Optimizer version:      

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint8 = np.dtype([("qint8", np.int8, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint16 = np.dtype([("qint16", np.int16, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint32 = np.dtype([("qint32", np.int32, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  np_resource = np.dtype([("resource", np.ubyte, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint8 = np.dtype([("qint8", np.int8, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint16 = np.dtype([("qint16", np.int16, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  _np_qint32 = np.dtype([("qint32", np.int32, 1)]) 

/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 

  np_resource = np.dtype([("resource", np.ubyte, 1)]) 

[ ERROR ]  Exception occurred during running replacer "fusing" (<class 'extensions.middle.fusings.Fusing'>): After partial shape inference were found shape collision for node batch_normalization/FusedBatchNormV3/beta (old shape: [  1 128 128  64], new shape: [  1 128 128  -1]) 
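For readers hitting the same message: the collision means a later pass of partial shape inference produced a less specific shape ([1,128,128,-1], unknown channel count) for a tensor whose shape was already known ([1,128,128,64]). A minimal, hypothetical Python sketch of that consistency rule (illustration only, not OpenVINO's actual implementation):

```python
# Hypothetical sketch (not Model Optimizer's actual code) of the kind of
# consistency check behind the "shape collision" message above: a second
# round of shape inference must not disagree with, or lose, a dimension
# that was already known (-1 denotes an unknown dimension).

def shapes_consistent(old, new):
    """Return True if the re-inferred shape agrees with the old one."""
    if len(old) != len(new):
        return False  # rank mismatch is always a collision
    return all(o == n for o, n in zip(old, new))

# The shapes from the error message: the channel dimension was known (64)
# but re-inferred as unknown (-1), which is flagged as a collision.
print(shapes_consistent([1, 128, 128, 64], [1, 128, 128, -1]))  # False
print(shapes_consistent([1, 128, 128, 64], [1, 128, 128, 64]))  # True
```

In practice this usually means a batch-normalization input lost its channel dimension somewhere in the frozen graph, so fully defining the placeholder shape before freezing (or converting with a newer OpenVINO release that handles FusedBatchNormV3) often resolves it.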

 

Please let me know if you need more information.

 

2 Replies
Munesh_Intel
Moderator

Hi Pedroska,

 

The OpenVINO toolkit supports the UNet topology from the following repository:

https://github.com/kkweon/UNet-in-Tensorflow

 

More information is available at the following page:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#supported_topologies

 

Are you using a model from the same repository? If not, you can try the workaround provided by a community member in the following thread:

https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/MobileNetV3-SSD-FusedBatchNormV3-layer-not-supported-OpenVINO/m-p/1131856

 

Regards,

Munesh

 

owaisali
Novice

Hello sir,

I am using UNet for biomedical image segmentation and used the original code from the UNet paper. Is it possible to deploy this network on a Movidius NCS 1?
