Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model Optimizer unable to convert custom-trained model

Hern__Yee
Novice

Hi, 

In my project, I am using the "faster_rcnn_resnet101_coco" model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md.

I downloaded the model and retrained it on my own custom dataset with 9 classes.

After retraining, I tested the model with the TensorFlow Object Detection API, and inference ran successfully with all 9 object classes detected.

Then I passed my custom-trained faster_rcnn_resnet101 to the Model Optimizer to convert it to OpenVINO IR.

 

After some trial and error, I found an interesting issue.

1) "faster_rcnn_resnet101_coco" model i download from tensorflow model zoo, the pipiline.config set "num_classes: 90 " - SUCCESS convert to IR model

2) Custom trained faster_rcnn_resnet101 model, the pipiline.config set "num_classes: 9 " - FAILED  to convert 

3) Custom trained faster_rcnn_resnet101 model, the pipiline.config set "num_classes: 1 " - SUCCESS convert to IR model

** All conversions were run with the same command; only "num_classes:" in pipeline.config differs.

Somehow, "num_classes" in the pipeline configuration has a significant impact on the model conversion.
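
For reference, num_classes lives in the model section of the TF Object Detection API pipeline.config; a minimal excerpt (illustrative values; the resizer dimensions match those seen in the conversion logs below) looks like this:

model {
  faster_rcnn {
    num_classes: 9
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
  }
}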

After conversion, I am able to run inference with the custom-trained faster_rcnn_resnet101 model, but it detects only 1 class of item.

Any idea why this happens?

 

-Yee Hern

7 Replies
JAIVIN_J_Intel
Employee

Hi,

Could you please share the command used to convert the model to IR and the full log observed when the issue happens?

You may also refer to the steps for Converting TensorFlow* Object Detection API Models to learn about the required parameters.

Regards,

Jaivin

Hern__Yee
Novice

With the exact same command below, if I change num_classes in pipeline.config to 1, the conversion succeeds.

The command:

python mo_tf.py --input_model=frozen_inference_graphFasterRCNN.pb --tensorflow_use_custom_operations_config=faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config=pipeline.config --reverse_input_channels

 

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      E:\AI\Model Optimizer\model_optimizer 2019-R3\frozen_inference_graphFasterRCNN.pb
        - Path for generated IR:        E:\AI\Model Optimizer\model_optimizer 2019-R3\.
        - IR output name:       frozen_inference_graphFasterRCNN
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  E:\AI\Model Optimizer\model_optimizer 2019-R3\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  E:\AI\Model Optimizer\model_optimizer 2019-R3\faster_rcnn_support.json
Model Optimizer version:        2019.3.0-408-gac8584cb7
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ ERROR ]  Cannot infer shapes or values for node "do_reshape_conf".
[ ERROR ]  Number of elements in input [100  14] and output [1, 1000] of reshape node do_reshape_conf mismatch
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Reshape.infer at 0x0000023BD552EBF8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Number of elements in input [100  14] and output [1, 1000] of reshape node do_reshape_conf mismatch
Stopped shape/value propagation at "do_reshape_conf" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Number of elements in input [100  14] and output [1, 1000] of reshape node do_reshape_conf mismatch
Stopped shape/value propagation at "do_reshape_conf" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

JAIVIN_J_Intel
Employee

It seems like you are using an older version of OpenVINO. Please try converting the model after installing the latest version of OpenVINO (2020.2) and let us know the results.

Hern__Yee
Novice

Hi, I tried using OpenVINO 2020 R2.

The behaviour is still the same: changing num_classes in pipeline.config to 1 makes the conversion work.

 

[ WARNING ]  Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      E:\AI\Model Optimizer\model_optimizer 2020-R2\frozen_inference_graphFasterRCNN.pb
        - Path for generated IR:        E:\AI\Model Optimizer\model_optimizer 2020-R2\.
        - IR output name:       frozen_inference_graphFasterRCNN
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  E:\AI\Model Optimizer\model_optimizer 2020-R2\pipeline.config
        - Use the config file:  E:\AI\Model Optimizer\model_optimizer 2020-R2\faster_rcnn_support.json
Model Optimizer version:        2020.2.0-60-g0bc66e26ff
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\Dafei\AppData\Local\Programs\Python\Python36\Lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ ERROR ]  Cannot infer shapes or values for node "do_reshape_conf".
[ ERROR ]  Number of elements in input [100  14] and output [1, 1000] of reshape node do_reshape_conf mismatch
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Reshape.infer at 0x0000015D8885F9D8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ]  Your model looks like TensorFlow Object Detection API Model.
Check if all parameters are specified:
        --tensorflow_use_custom_operations_config
        --tensorflow_object_detection_api_pipeline_config
        --input_shape (optional)
        --reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "do_reshape_conf" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

 

Hern__Yee
Novice

Here is the FasterRCNN model and pipeline.config

 

JAIVIN_J_Intel
Employee

Hi Yee Hern,

Apologies for the late reply.

I have reproduced the issue with the model you shared. I'm investigating it.

Meanwhile, please refer to this thread, in which a user faced a similar issue due to a conflict in Python versions.

Which version of TensorFlow did you use for retraining?

Regards,

Jaivin

SerkanUygungelen
Employee

Hi Yee Hern,

Regarding the parameters in the pipeline file: this configuration file describes the hyper-parameters and structure of the TensorFlow model. When generating an IR, the Model Optimizer makes use of these configuration parameters, which is why they must align with the trained model. For instance, if you change num_classes of the "faster_rcnn_resnet101_coco" model from the TensorFlow model zoo, you will also get an error due to a mismatch in the reshape node do_reshape_conf (the last reshape layer before the output).

Secondly, I had a look at the parameters in the IR file of your model with num_classes=1. The sizes of the nodes do not match the expected behaviour of the original model; somehow the model does not match the parameters defined in the pipeline configuration. The expected output of the original model (i.e. the output of the last DetectionOutput layer) is 1x1x100x7, where 100 is the number of detected bounding boxes. Besides, the do_reshape_conf layer (applied just before the output) should have an output size of 1x(Nx100). However, in the IR file of the model you sent, the output layer has a size of 1x1x700x7 and the output of do_reshape_conf has a size of 7x200. As you can see, although the model is generated, the sizes do not match the expected values. Also, the DetectionOutput in the generated IR supports only one class, which is why your IR file detects only one class of items.
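
As a side note for anyone parsing this output: here is a minimal Python sketch of reading the 1x1x100x7 DetectionOutput blob (assuming `res` holds the raw output array; the per-row layout is the standard DetectionOutput format):

import numpy as np

# res: raw output of the DetectionOutput layer, shape 1x1x100x7.
# Each row is [image_id, class_label, confidence, xmin, ymin, xmax, ymax],
# with box coordinates normalized to [0, 1].
detections = np.asarray(res).reshape(-1, 7)
for image_id, label, conf, xmin, ymin, xmax, ymax in detections:
    if conf > 0.5:  # illustrative confidence threshold
        print("class %d, conf %.2f, box (%.2f, %.2f, %.2f, %.2f)"
              % (int(label), conf, xmin, ymin, xmax, ymax))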

The generated model somehow supports num_classes=13 (according to the parameters I can see in the IR), and when I set num_classes in the pipeline.config file to 13, I can generate an IR whose last DetectionOutput layer supports 13 classes with the expected output of 1x1x100x7. Could it be that your custom-trained model actually supports 13 classes instead of 9? Could you also check whether your model can detect more classes with this setting?
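
A quick sanity check (my own back-of-the-envelope arithmetic, assuming the standard Faster R-CNN head of num_classes + 1 background scores per box proposal) ties this to the original error message:

# Reshape input from the error log: [100 14] = 100 proposals x 14 scores,
# i.e. 1400 elements (the 7x200 shape above is the same 1400 elements).
proposals = 100
elements_in = proposals * 14

for num_classes in (9, 13):
    expected = proposals * (num_classes + 1)  # classes + 1 background
    print("num_classes=%2d -> expects %4d elements (match: %s)"
          % (num_classes, expected, expected == elements_in))

# num_classes=9  -> 1000 elements: mismatch, conversion fails
# num_classes=13 -> 1400 elements: matches the trained model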

As a final note, I have used the following command to generate the IR:

--input_model frozen_inference_graphFasterRCNN.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --transformations_config faster_rcnn_support.json --input_shape [1,600,1024,3] --input image_tensor --output detection_scores,detection_boxes,num_detections
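
Note that --transformations_config is the current name for the deprecated --tensorflow_use_custom_operations_config option seen earlier in this thread, and --input_shape [1,600,1024,3] pins the input size because the keep-aspect-ratio preprocessor is removed during conversion (see the warnings in the logs above). Combined with the mo_tf.py entry point used earlier, the full invocation would presumably be:

python mo_tf.py --input_model frozen_inference_graphFasterRCNN.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --transformations_config faster_rcnn_support.json --input_shape [1,600,1024,3] --input image_tensor --output detection_scores,detection_boxes,num_detections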

 
