Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model performance of Faster R-CNN

zhong__songhui
Beginner
1,910 Views

I trained a detection model with the Faster R-CNN architecture; the model I used for fine-tuning is inception_resnetv2. The resize method I used is the same as the one used in SSD, because I know that when converting a model to IR, the input size should be fixed.

Here is the resize method I used:

image_resizer {
      fixed_shape_resizer {
        height: 600
        width: 800
      }
    }

After getting the fine-tuned model, which was trained on my own dataset, I converted it to IR successfully. But when I use this converted IR to do inference, the model performance is very bad, even on the training data. When I use the pb file to do inference, the results are terrific. Why is that? FYI, I converted an SSD model to IR and used that IR for inference successfully. But I cannot figure out why the performance is so bad with the Faster R-CNN IR.

The OpenVINO version used is Model Optimizer 2019.2.0-436-gf5827d4, the latest one.
7 Replies
Shubha_R_Intel
Employee

Dear zhong, songhui,

So if I understand you correctly, the accuracy is fine but the performance of OpenVINO compared to TensorFlow is terrible? Correct? What device are you testing on? You can also use the benchmark_app to experiment with various performance attributes.

Let me know more,

Thanks,

Shubha
zhong__songhui
Beginner

Dear Shubha,

Thanks for replying, I really appreciate it.

The model performance I am referring to is accuracy. The accuracy is very bad when using the converted IR model: it cannot even detect objects in the training images. How weird is that? The accuracy is good when doing inference with the pb model in TensorFlow, and I cannot figure out why.

Hope for your help.

Thanks,

songhui zhong
Shubha_R_Intel
Employee

Dear zhong, songhui,

OK, thanks for clarifying. So what you really mean is that the accuracy of Inception ResNet v2 on OpenVINO is horrible while on TensorFlow it is good. Please take a look at the Model Optimizer TensorFlow document. My guess is that you didn't tell Model Optimizer about the pre-processing. Note that the --mean_values and --scale columns have these values: [127.5,127.5,127.5] and 127.5. You should pass these into Model Optimizer. There is no way for Model Optimizer to know about the image processing done before training unless you tell it.

You can also look at C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\tools\model_downloader\list_topologies.yml and see this:

  file: inception_resnet_v2_2018_04_27.tgz
  model_optimizer_args:
    - --framework=tf
    - --data_type=FP32
    - --reverse_input_channels
    - --input_shape=[1,299,299,3]
    - --input=input
    - --mean_values=input[127.5,127.5,127.5]
    - --scale_values=input[127.50000414375013]
    - --output=InceptionResnetV2/AuxLogits/Logits/BiasAdd
    - --input_model=$dl_dir/inception_resnet_v2.pb
  framework: "tf"

Basically, many TensorFlow (and even other-framework) models out in the wild undergo pre-processing before being trained. Also, you can't always assume what the output layer will be; Model Optimizer will cut the model off at the correct output layer via --output. The input shape is important too: if the model was trained with [1,299,299,3], then it's important that this same shape is fed into Model Optimizer as well (for best results).

Usually, failing to feed the correct pre-processing values into Model Optimizer produces abysmally bad accuracy. If you think about it, there is no way for Model Optimizer to really "guess" this stuff; it's a graph compiler. Sophisticated code, for sure, but it needs the model designer to provide the proper inputs.
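To make the effect of those two flags concrete, here is a minimal sketch (my own illustration, not OpenVINO source code) of the normalization that --mean_values and --scale_values tell Model Optimizer to bake into the IR:

```python
def normalize(pixel, mean=127.5, scale=127.5):
    """(pixel - mean) / scale -- the pre-processing the flags describe,
    so the IR can be fed raw pixel values at inference time."""
    return (pixel - mean) / scale

# Raw [0, 255] pixel values map to roughly [-1.0, 1.0], the range
# this family of models was trained on:
print(normalize(0.0), normalize(127.5), normalize(255.0))  # -1.0 0.0 1.0
```

If the IR is built without these flags but the network was trained on [-1, 1] inputs, feeding raw [0, 255] pixels produces exactly the kind of accuracy collapse described above.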

Hope it helps,

thanks,

Shubha
zhong__songhui
Beginner

Dear Shubha,

Thanks for your reply.

The model I used is not a classification model; it is a detection model, and the backbone of the detection model is inception_resnetv2. Sorry that I did not describe this clearly. The problem is that after converting a pretrained Faster R-CNN pb model to IR and using this IR to do inference, the accuracy is bad, but the accuracy is good when doing inference with the pb model in TensorFlow.

And the resize method I used is:

 image_resizer {
      fixed_shape_resizer {
        height: 600
        width: 800
      }
    }


But the resize method normally used in Faster R-CNN is:

image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }

And the reason I did not choose to use keep_aspect_ratio_resizer is that when converting a model to IR, we have to fix the input size. Right?

I am afraid that if keep_aspect_ratio_resizer is used in the training process, the accuracy would drop a lot after converting the pb model to an IR model.

So I decided to use fixed_shape_resizer in Faster R-CNN to avoid that drop in accuracy. But even though I trained the Faster R-CNN model with this resize method, the accuracy of the IR model is bad compared to the pb model.

I also tried to convert my fine-tuned SSD model to IR; when I use that SSD IR for inference, the accuracy is the same as the pb model's. But things go weird when the fine-tuned model is Faster R-CNN with different backbones: the accuracy of the IR model is very bad compared to the pb model.

Hope for your help

Thanks, 

songhui zhong
Shubha_R_Intel
Employee

Dear zhong, songhui,

Thanks for the information. Can you kindly point me to a link where you got the model? Also, what about pre-processing? As I mentioned earlier, the pre-processing switches to Model Optimizer make a difference. Did you use any of those switches as described above?

What Model Optimizer command did you use?

You asked: "And the reason I did not choose to use keep_aspect_ratio_resizer is that when converting a model to IR, we have to fix the input size. Right?"

From the Model Optimizer keep-aspect-ratio documentation, under "Keep Aspect Ratio Resizer Replacement", it says:

If the --input_shape command line parameter is not specified, the Model Optimizer generates an input layer with both height and width equal to the value of parameter min_dimension in the keep_aspect_ratio_resizer.

If the --input_shape [1, H, W, 3] command line parameter is specified, the Model Optimizer scales the specified input image height H and width W to satisfy the min_dimension and max_dimension constraints defined in the keep_aspect_ratio_resizer. The following function calculates the input layer height and width:
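The function body itself is not reproduced in the quote above; a rough sketch of the documented behavior (my own reconstruction, not the actual Model Optimizer code) could look like this:

```python
def keep_aspect_ratio_shape(height, width, min_dimension=600, max_dimension=1024):
    """Approximate the input-layer shape derived from --input_shape [1, H, W, 3]
    under the min/max constraints of a keep_aspect_ratio_resizer (a sketch of
    the documented behavior, not the real implementation)."""
    # Scale so the shorter side reaches min_dimension, but clamp the scale
    # so the longer side never exceeds max_dimension.
    scale = min(min_dimension / min(height, width),
                max_dimension / max(height, width))
    return int(round(height * scale)), int(round(width * scale))

print(keep_aspect_ratio_shape(600, 1024))  # (600, 1024)
print(keep_aspect_ratio_shape(480, 640))   # (600, 800)
```
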

The key takeaway here is that if --input_shape is not specified on the Model Optimizer command line, it will assume a square image of min_dimension size, taken from the pipeline.config.

Looking at pipeline.config for faster_rcnn_resnet50_coco_2018_01_28, I see this:

faster_rcnn {
    num_classes: 90
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }

This model expects a keep_aspect_ratio_resizer. If you don't want Model Optimizer to assume 600x600, then you must pass in something else via --input_shape.

Does this answer your question ?

Thanks,

Shubha
zhong__songhui
Beginner

Dear Shubha,

Thanks for replying. 

The model I used for fine-tuning was downloaded from the official TensorFlow Object Detection site:

http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz

And the Model Optimizer command is:

mo_tf.py --input_model=/frozen_inference/path/to/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.13.json --tensorflow_object_detection_api_pipeline_config /pipline/path/to/pipeline.config --reverse_input_channels --output_dir /output/path/to/openvino

I think the pre-processing method is defined in the pb file, so there is no need to add mean_values and scale_values, right?

The following line is printed when using this Model Optimizer command:

[The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept]

####

Here is another thing I tried:

I downloaded the pretrained detection model faster_rcnn_resnet50 from:

http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_coco_2018_01_28.tar.gz

And the Model Optimizer command:

mo_tf.py --input_model=/path/to/faster_rcnn_resnet50_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config /path/to/faster_rcnn_resnet50_coco_2018_01_28/pipeline.config --reverse_input_channels --output_dir /path/to/faster_rcnn_resnet50_coco_2018_01_28/openvino --input_shape [1,600,1024,3]

And when I use this IR model for inference, the performance is also bad. The images used are from the VOC training dataset. Why is that? I did not fine-tune the model, I just used an official pretrained one, so why is the accuracy so bad compared to the pb model?

The other thing is about the keep_aspect_ratio_resizer. I think this resize method only guarantees that the shorter side of the image is at least 600 and the longer side is at most 1024; it does not guarantee that the input size is always [600,1024,3]. But if we use --input_shape [1,600,1024,3] in the Model Optimizer command, the IR model we get will have a fixed input size of [600,1024], and when we use this IR model we have to resize images to [600,1024,3], which is different from what the keep-aspect-ratio resizer does. I am not sure about this.
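This concern can be checked with a quick sketch (my own illustration, using the standard keep-aspect-ratio formula): a typical VOC-sized image resized by a keep_aspect_ratio_resizer does not come out at the fixed [600, 1024] shape the IR expects.

```python
def keep_aspect_ratio(h, w, min_dim=600, max_dim=1024):
    # Shorter side scaled toward min_dim, longer side capped at max_dim.
    scale = min(min_dim / min(h, w), max_dim / max(h, w))
    return round(h * scale), round(w * scale)

h, w = 480, 640                 # a typical VOC-sized image
print(keep_aspect_ratio(h, w))  # (600, 800) -- what TF training would see
print((600, 1024))              # what a fixed [1,600,1024,3] IR expects
```

So the shapes seen during TensorFlow inference and the shape forced by the fixed IR input genuinely differ for most aspect ratios.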

Thanks,

songhui zhong
Shubha_R_Intel
Employee

Dear zhong, songhui,

To answer your question:

"I think the pre-processing method is defined in the pb file, so there is no need to add mean_values and scale_values, right?"

Nope, this is wrong. How will Model Optimizer know about the pre-processing for that *.pb unless you tell it? Model Optimizer cannot magically guess the pre-processing by parsing the *.pb.

You must actually tell Model Optimizer explicitly what the pre-processing is (with --mean_values, --scale_values, etc.); in fact, that's why those values are provided for you for some TensorFlow models in the Model Optimizer Convert Model from TensorFlow doc.

My recommendation is that you use our samples to test your MO-converted model. Please run your model with object_detection_sample_ssd or object_detection_demo_faster_rcnn. For the TensorFlow Object Detection API, there is a lot of pre-processing info built into the pipeline.config itself, which comes with the TensorFlow model.

Also please read this:

http://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#important_notes_about_feeding_input_images_to_the_samples

Try the OpenVINO samples first. Do they work with your models? You may need to resize your images to work with the samples; please review the "feeding_input_images..." doc.

Let me know,

Thanks,

Shubha