Karmeo
Novice
583 Views

Custom Mask R-CNN model (TensorFlow 2 Object Detection API) fails to convert with Model Optimizer

Hi. According to the previous tips, I reinstalled the new version of model optimizer and retrained the maskrcnn model, following the example from this article:

https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api

After training the model, I froze it using a script in the object_detection API directory:

python exporter_main_v2.py \
  --trained_checkpoint_dir training \
  --output_directory inference_graph \
  --pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config

This script produces the saved model and pipeline files, which should later be used with OpenVINO. The following error occurs when passing the resulting files to Model Optimizer:

Model Optimizer version:
2020-08-20 11:37:05.425293: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file. The file should contain one of the following TensorFlow graphs:

  1. frozen graph in text or binary format
  2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
  3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message. For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.

The model starts and runs on GPU, but I need the converted model for OpenVINO.
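The error above happens because saved_model.pb is part of a SavedModel directory, not a standalone frozen graph, so Model Optimizer cannot parse it as one. A minimal stdlib sketch of this distinction (the function name and heuristic are illustrative, not from this thread; the file-layout conventions are TensorFlow's):

```python
import os

def classify_tf_model(path):
    """Guess which Model Optimizer input kind a path is.

    Heuristic: a SavedModel is a *directory* containing saved_model.pb
    (usually alongside a variables/ subfolder), while a frozen graph or
    metagraph is a single file.
    """
    if os.path.isdir(path):
        if os.path.isfile(os.path.join(path, "saved_model.pb")):
            return "saved_model_dir"   # convert with --saved_model_dir
        return "unknown"
    if path.endswith(".meta"):
        return "meta_graph"            # convert with --input_meta_graph
    if path.endswith(".pb"):
        return "frozen_graph"          # convert with --input_model
    return "unknown"
```

Note that passing saved_model.pb itself as a file looks like a frozen graph by extension, which is exactly the mistake that triggers the parse error here.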

6 Replies
Iffa_Intel
Moderator
560 Views

Greetings,


As I'm sure you have noticed, there are three ways to convert models into IR:

  1. Checkpoint: python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
  2. Metagraph: python3 mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta
  3. SavedModel: python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>


Based on your details, I believe you are trying option number 3, since you are using saved_model.pb. Instead of --input_model you should be using --saved_model_dir, as mentioned above.
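The flag-per-mode mapping can be sketched as a tiny helper; flag names are taken from the three commands listed above, while the function name and structure are illustrative assumptions:

```python
def build_mo_args(model_path, kind):
    """Assemble mo_tf.py arguments for one of the three conversion modes."""
    flag = {
        "saved_model_dir": "--saved_model_dir",   # SavedModel directory
        "meta_graph": "--input_meta_graph",       # .meta file
        "frozen_graph": "--input_model",          # frozen .pb file
    }[kind]
    return ["python", "mo_tf.py", flag, model_path]
```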


You can refer here for further information: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...



Sincerely,

Iffa


Karmeo
Novice
549 Views

Hello! Following your advice, I executed the command:
python .\mo_tf.py --saved_model_dir "C:\inference_graph (1)\inference_graph\saved_model"


The following errors occurred while running the script:

[ ERROR ] Shape [ 1 -1 -1 3] is not fully defined for output 0 of "input_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "input_tensor".
[ ERROR ] Not all output shapes were inferred or fully defined for node "input_tensor".
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Parameter.infer at 0x0000022830154048>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "input_tensor" node.

Karmeo
Novice
543 Views

I decided to add the parameter --input_shape [1,500,500,3], because the custom pipeline file obtained after training contains the following settings:


model {
  faster_rcnn {
    number_of_stages: 3
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 500
        width: 500
      }

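The fixed_shape_resizer values can also be pulled out of pipeline.config programmatically to build the --input_shape argument. A regex-based stdlib sketch, assuming the height:/width: lines appear as in the excerpt (the function name is illustrative):

```python
import re

def input_shape_from_pipeline(config_text, batch=1, channels=3):
    """Derive a Model Optimizer --input_shape value from a
    fixed_shape_resizer block in a TF Object Detection API pipeline config."""
    height = int(re.search(r"height:\s*(\d+)", config_text).group(1))
    width = int(re.search(r"width:\s*(\d+)", config_text).group(1))
    return "[{},{},{},{}]".format(batch, height, width, channels)
```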
Errors:

[ ERROR ] Cannot infer shapes or values for node "StatefulPartitionedCall/Preprocessor/unstack/Squeeze_".
[ ERROR ] Trying to squeeze dimension not equal to 1 for node "StatefulPartitionedCall/Preprocessor/unstack/Squeeze_"
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Squeeze.infer at 0x00000232A84AF1E0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/Preprocessor/unstack/Squeeze_" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Iffa_Intel
Moderator
508 Views

Greetings,


There are several possible reasons why you are getting those errors, and I believe you have noticed them in the output:

  1. a bug in a custom shape infer function
  2. the node inputs have incorrect values/shapes
  3. the input shapes are incorrect (embedded in the model or passed via --input_shape)

I suggest that you take a look at this tutorial for detailed information:

https://www.youtube.com/watch?v=cbdS3BjjbaQ


Instead of MobileNet, proceed with one of the Mask R-CNN models that are available: python download.py --name mask_rcnn_inception_v2_coco


(python download.py --print_all : to see what is available)


These models are guaranteed to be compatible with OpenVINO.



Sincerely,

Iffa


_Mai_
Novice
451 Views

I tried following that tutorial with an SSD MobileNet V2, using the same command and also this one:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_shape [1,300,300,3] \
--saved_model_dir /home/mai/ssd_ckpt/frozen_models/ssd_mobilenet_v2_320x320_coco17_tpu-8freeze_fext_5kbshuffle_lrwrmp.5_50stps/saved_model/ \
--model_name Vehicle_ssd_fp5k16 --data_type FP16 \
--tensorflow_object_detection_api_pipeline_config /home/mai/ssd_ckpt/frozen_models/ssd_mobilenet_v2_320x320_coco17_tpu-8freeze_fext_5kbshuffle_lrwrmp.5_50stps/pipeline.config \
--tensorflow_use_custom_operations_config /opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json 

and I get this error in both cases:

[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIPreprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Shape is not defined for output 1 of "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_4/NonMaxSuppressionV5".
[ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_4/NonMaxSuppressionV5".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_4/NonMaxSuppressionV5". 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function NonMaxSuppression.infer at 0x7fc739873b70>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_4/NonMaxSuppressionV5" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 
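The 'Failed to match nodes from custom replacement description' errors suggest the ssd_v2_support.json transformation config does not match this model's node names; a TF2 Object Detection API export generally needs a different support config than a TF1 one. A hedged stdlib sketch for listing the transformation configs shipped under the Model Optimizer extensions directory (the "*support*.json" filename pattern is an assumption, and exact names vary between OpenVINO releases):

```python
import glob
import os

def list_support_configs(mo_extensions_dir):
    """List the *support*.json transformation configs under
    extensions/front/tf, so the one matching the model can be picked."""
    pattern = os.path.join(mo_extensions_dir, "front", "tf", "*support*.json")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```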
Iffa_Intel
Moderator
480 Views

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa

