Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Keras/TensorFlow MNIST CNN model conversion to IR failed

Hsin__Ming
Beginner

I am trying to convert my CNN model, trained on the MNIST dataset using Keras with the TensorFlow backend, to IR format using mo.py in OpenVINO release 2019.1.133, but the conversion fails.

I am using the following command to create the IR, but I get an error:

mo.py --input_model trans_model/inference_graph.pb --input_shape [1,28,28,1]

The Keras model summary and the mo.py messages are below:

Model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 28, 28, 16)        416       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 32)        12832     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 32)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 7, 7, 32)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1568)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               200832    
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 215,370
Trainable params: 215,370
Non-trainable params: 0
_________________________________________________________________

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/user/data/ML_SVN/Machine/mnist/trans_model/inference_graph.pb
        - Path for generated IR:        /home/user/data/ML_SVN/Machine/mnist/.
        - IR output name:       inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,28,28,1]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.1.0-341-gc9b66a2
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.middle.ConvertGroupedStridedSlice.ConvertGroupedStridedSlice'>)": list index out of range
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/extensions/middle/ConvertGroupedStridedSlice.py", line 111, in find_and_replace_pattern
    prev_sd = sorted_split_dims[0]
IndexError: list index out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 190, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.middle.ConvertGroupedStridedSlice.ConvertGroupedStridedSlice'>)": list index out of range

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------


I have attached the debug log to this post.

Any help in debugging this error is much appreciated.

Thanks!

4 Replies
Shubha_R_Intel
Employee

Dear Hsin, Ming, 

Is your model a frozen model? You must freeze your TensorFlow model before feeding it into mo_tf.py. To freeze it, you must know the hierarchical output layer name(s).

Below is the documentation on freezing a TensorFlow model:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model
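As a sketch of what "knowing the output layer name(s)" means in practice: with a Keras model on the TF backend, each output tensor is backed by a TF graph op, and that op's name is what the freezing utilities (and Model Optimizer) expect. A minimal illustrative helper, assuming an already-built Keras `model` object:

```python
def output_node_names(model):
    """Return the TF graph node names backing a Keras model's outputs.

    These are the names to pass to the freezing utilities; Model
    Optimizer treats them as the graph outputs.
    """
    # Each Keras output tensor wraps a TF op; `op.name` is the
    # hierarchical node name, e.g. "dense_2/Softmax".
    return [out.op.name for out in model.outputs]
```

For the model summary above, this would typically report a name ending in the final Dense layer (the exact name depends on how the graph was built).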

Thanks,

Shubha

Hsin__Ming
Beginner

Dear Shubha R. (Intel):

Thanks for your reply. It has been fixed simply by changing the model-freezing method, following https://stackoverflow.com/questions/45466020/how-to-export-keras-h5-to-tensorflow-pb.

My old method froze only 8 variables, which caused the conversion error; the new method freezes all 37 variables.
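For readers hitting the same issue, the freezing approach from that Stack Overflow answer can be sketched roughly as follows. This assumes the TF 1.x API (`tf.graph_util.convert_variables_to_constants`) and is illustrative rather than the exact code I used:

```python
def freeze_session(session, keep_var_names=None, output_names=None):
    """Convert all variables in a TF 1.x session's graph to constants.

    Returns a self-contained GraphDef suitable for saving as a .pb
    that mo.py can consume. Sketch based on the Stack Overflow answer
    linked above; TF 1.x API assumed.
    """
    import tensorflow as tf  # TF 1.x assumed (tf.graph_util)

    graph = session.graph
    with graph.as_default():
        keep = set(keep_var_names or [])
        # Fold every global variable (except those in `keep`) into
        # Const nodes so the exported graph carries its own weights --
        # this is what freezes all 37 variables instead of 8.
        freeze_var_names = [v.op.name for v in tf.global_variables()
                            if v.op.name not in keep]
        return tf.graph_util.convert_variables_to_constants(
            session, graph.as_graph_def(), output_names or [],
            freeze_var_names)
```

Typical use with a Keras model would be `frozen = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs])`, then `tf.train.write_graph(frozen, 'trans_model', 'inference_graph.pb', as_text=False)`.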

The output is attached to this comment.

Thanks again,

Hsin.

川李000
Beginner

Dear Hsin, Ming:

Can you share your Keras-to-TensorFlow Python code? I have encountered the same problem.

Thanks and best regards,

lichuan

Hsin__Ming
Beginner

Dear lichuan:

You can follow my model conversion code to resolve the issue:

https://github.com/HsinM/OpenVINO-NCS/tree/master/win_linux_code

Good luck.

Hsin
