Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error during conversion of custom TensorFlow Object Detection model

Fraccaroli__Michele

Hi everyone, I have a problem converting an ssd_mobilenet_v2_coco model, retrained on my personal dataset, into .xml and .bin files.

I am using OpenVINO 2019 R3.1.

If I download this model and convert it without retraining with this command:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config --data_type FP16

everything works fine.

 

But if I retrain the network with the TensorFlow Object Detection API using this configuration file:

model {
  ssd {
    num_classes: 90
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 200
        width: 200
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v2'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          variance_scaling_initializer {
            factor: 1.0
            uniform: true
            mode: FAN_AVG
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 3
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}
train_config {
  batch_size: 128
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_vertical_flip {
    }
  }
  data_augmentation_options {
    random_black_patches {
    }
  }
  data_augmentation_options {
    random_adjust_contrast {
    }
  }
  data_augmentation_options {
    random_adjust_hue {
    }
  }
  data_augmentation_options {
    random_adjust_saturation {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  data_augmentation_options {
    random_rotation90 {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/galileo/home/userexternal/mfraccar/VirtualPython/ami_project/Object_detection/research/object_detection/training/ssd_mobilenet_v2_coco/model.ckpt"
  num_steps: 200000
  from_detection_checkpoint: true
  fine_tune_checkpoint_type: "detection"
}
train_input_reader {
  label_map_path: "/galileo/home/userexternal/mfraccar/VirtualPython/ami_project/Object_detection/research/object_detection/test_data/Dataset/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/galileo/home/userexternal/mfraccar/VirtualPython/ami_project/Object_detection/research/object_detection/test_data/Dataset/tfr_data/train.record"
  }
}
eval_config {
  num_examples: 91
  num_visualizations: 30
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "/galileo/home/userexternal/mfraccar/VirtualPython/ami_project/Object_detection/research/object_detection/test_data/Dataset/label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "/galileo/home/userexternal/mfraccar/VirtualPython/ami_project/Object_detection/research/object_detection/test_data/Dataset/tfr_data/test.record"
  }
}
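
Training with this config goes through the TensorFlow Object Detection API's model_main.py; a typical invocation (the script location and paths here are illustrative) looks like:

python3 model_main.py --pipeline_config_path=training/ssd_mobilenet_v2_coco/pipeline.config --model_dir=training/ssd_mobilenet_v2_coco --num_train_steps=200000 --alsologtostderr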

After training, I export the inference graph with this command:

python3 export_inference_graph.py --input_type tf_example --pipeline_config_path training/ssd_mobilenet_v2_coco/pipeline.config --trained_checkpoint_prefix training/ssd_mobilenet_v2_coco/model.ckpt --output_directory training/Exported_model
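
I then run the Model Optimizer on the exported frozen_inference_graph.pb with the same options as before (the paths below are illustrative; the log that follows reflects these options):

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config --data_type FP16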

I obtain this output:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: <path to frozen_graph>/frozen_inference_graph.pb
- Path for generated IR: <path to frozen_graph>.
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: <path to frozen_graph>/pipeline.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: <path to frozen_graph>/../../intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ] 0
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Cast.infer at 0x7f04e7384c80>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

 

I have tested training and exporting the inference graph with both TensorFlow 1.13 and 1.15, and I get the same error.

Is there a way to fix this?

I need to obtain the .xml and .bin files to run inference on a Neural Compute Stick 2.

Please help me!!!

Sahira_Intel
Moderator

Hi Michele,

Please use the ssd_support_api_v1.14.json config file instead, and edit it to change Postprocessor/Cast to Postprocessor/Cast_1.
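
For example, you can make the change on a local copy of the file with something like this (the install path shown is the default location; this assumes the node name appears in the file as the plain string Postprocessor/Cast):

cp /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json ~/ssd_support_api_v1.14_cast1.json
sed -i 's|Postprocessor/Cast|Postprocessor/Cast_1|g' ~/ssd_support_api_v1.14_cast1.json

Then pass the edited copy to --tensorflow_use_custom_operations_config (instead of ssd_v2_support.json) when you run mo_tf.py.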

Please let me know if this information is helpful.

Best Regards,

Sahira 

Fraccaroli__Michele

Hi, I did this, but now the error comes from the FusedBatchNormV3 layer.

I have worked around it by using the old 2017 version of ssd_mobilenet_v1, but with every 2018 version I get an error on the FusedBatchNormV3 layer.

Sahira_Intel
Moderator

Hi Michele,

Are you getting the following error?

 

[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ] FusedBatchNormV3 (76)

Does the error also occur with both TF v1.13 and v1.15?

You might need to offload this unsupported op to TensorFlow for computation. Take a look at the documentation for how to do this here. Here is an example written by an OpenVINO community member that might be helpful to you. If these workarounds still give errors, can you please provide your retrained model so I can run it on my end? (Please let me know if you'd like to send me your model over a PM.)

I hope this information is helpful.

Best Regards,
Sahira