Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

mask_rcnn model converted with OpenVINO gives detection results inconsistent with the original TF model

pig__dudu
Beginner
866 Views

Hi,

I converted a frozen model from the TensorFlow Object Detection API with the following command. model url

python3 /opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/mo.py \
        --framework=tf \
        --input_model /root/tf_frozen_models/mask_rcnn_resnet50/frozen_inference_graph.pb \
        --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json \
        --tensorflow_object_detection_api_pipeline_config /root/tf_frozen_models/mask_rcnn_resnet50/pipeline.config \
        --data_type FP32 \
        --reverse_input_channels \
        --input_shape=[1,800,800,3] \
        --input=image_tensor

The output of the command is:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /root/tf_frozen_models/mask_rcnn_resnet50/frozen_inference_graph.pb
        - Path for generated IR:        /root/.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         image_tensor
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,800,800,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  /root/tf_frozen_models/mask_rcnn_resnet50/pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  /opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json
Model Optimizer version:        2019.1.0-341-gc9b66a2
[ WARNING ]  
Detected not satisfied dependencies:
        test-generator: installed: 0.1.2, required: 0.1.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2019.1.133/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
The predicted masks are produced by the "masks" layer for each bounding box generated with a "detection_output" layer.
 Refer to IR catalogue in the documentation for information about the DetectionOutput layer and Inference Engine documentation about output data interpretation.
The topology can be inferred using dedicated demo "mask_rcnn_demo".

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /root/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /root/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 63.54 seconds. 
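As the Model Optimizer log notes, the converted IR replaces the four TF output tensors with a single DetectionOutput blob. Its layout is [1, 1, N, 7], where each row is (image_id, class_id, confidence, x_min, y_min, x_max, y_max) with coordinates normalized to [0, 1]. A minimal, framework-free sketch of decoding such a blob (the blob values below are synthetic, loosely mimicking the demo output further down, not real model output):

```python
# Decode a DetectionOutput-style blob of shape [1, 1, N, 7], flattened to N x 7.
# Each row: (image_id, class_id, confidence, x_min, y_min, x_max, y_max),
# with box coordinates normalized to [0, 1].

def decode_detections(blob, img_w, img_h, conf_threshold=0.5):
    """Return (class_id, confidence, pixel box) for rows above the threshold."""
    results = []
    for image_id, class_id, conf, x0, y0, x1, y1 in blob:
        if image_id < 0:          # a negative image_id marks the end of valid rows
            break
        if conf < conf_threshold:
            continue
        box = (x0 * img_w, y0 * img_h, x1 * img_w, y1 * img_h)
        results.append((int(class_id), conf, box))
    return results

# Synthetic blob for illustration only.
blob = [
    [0, 1, 0.996633, 0.558, 0.292, 0.731, 0.684],
    [0, 9, 0.351955, 0.039, 0.071, 0.907, 0.610],
    [-1, 0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
for cls, conf, (x0, y0, x1, y1) in decode_detections(blob, 1920, 1080):
    print(f"class {cls} conf {conf:.3f} box ({x0:.0f}, {y0:.0f}) - ({x1:.0f}, {y1:.0f})")
```

The mask_rcnn_demo additionally reads the "masks" blob, indexed by the same detection rows, to draw per-instance masks.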

 

Then I load the IR model into mask_rcnn_demo with the following command:

/opt/intel/openvino_2019.1.133/deployment_tools/inference_engine/samples/build/intel64/Release/mask_rcnn_demo -i /root/000002.jpg -m /root/mask_rcnn_openvino/frozen_inference_graph.xml -d CPU

The output of the command is:
 

InferenceEngine: 
        API version ............ 1.6
        Build .................. custom_releases/2019/R1_ebce9728578ef3131f2f282b3fbc3232109c598e
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /root/000001.jpg
[ INFO ] Loading plugin

        API version ............ 1.6
        Build .................. 23224
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Network batch size is 1
[ INFO ] Prepare image /root/000001.jpg
[ WARNING ] Image is resized from (1920, 1080) to (800, 800)
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Setting input data to the blobs
[ INFO ] Start inference (1 iterations)

Average running time of one iteration: 4759.61 ms

[ INFO ] Processing output blobs
[ INFO ] Detected class 1 with probability 0.996633 from batch 0: [1071.66, 315.281], [1402.84, 738.225]
[ INFO ] Detected class 9 with probability 0.351955 from batch 0: [75.8102, 77.0395], [1740.76, 658.419]
[ INFO ] Detected class 15 with probability 0.358935 from batch 0: [157.744, 608.682], [429.489, 790.67]
[ INFO ] Image out0.png created!
[ INFO ] Execution successful
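The warning in the log means the demo stretched the 1920x1080 frame to 800x800 without preserving aspect ratio (the Model Optimizer removed the TF graph's aspect-ratio-keeping Preprocessor block). The resulting geometric distortion can be quantified with simple arithmetic (a quick sketch using the dimensions from the log; whether this affects the detections here is an assumption, not something the thread establishes):

```python
# Non-aspect-preserving resize (what the demo does once MO strips the
# Preprocessor block) vs. the aspect-preserving resize in the original TF graph.
src_w, src_h = 1920, 1080   # input image size from the log
dst_w, dst_h = 800, 800     # fixed IR input size

scale_x = dst_w / src_w     # horizontal shrink factor
scale_y = dst_h / src_h     # vertical shrink factor

# The aspect ratio changes from 16:9 to 1:1, so objects appear squashed
# horizontally by scale_x / scale_y relative to their true proportions.
distortion = scale_x / scale_y
print(f"x scale {scale_x:.3f}, y scale {scale_y:.3f}, distortion {distortion:.4f}")
```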

The detection result is the following:

openvino.png

and the result of the TensorFlow Object Detection API is:

tf.jpg

The IR model finds one person, but the TF model finds two people. I do not know what happened.
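To compare the two models' outputs more systematically than counting people by eye, one can match boxes by intersection-over-union (IoU) and list the reference detections the other model missed. A minimal sketch (the box values below are illustrative, not the actual model outputs):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def unmatched(ref_boxes, test_boxes, thr=0.5):
    """Boxes in ref_boxes with no IoU >= thr match anywhere in test_boxes."""
    return [r for r in ref_boxes
            if all(iou(r, t) < thr for t in test_boxes)]

# Illustrative boxes: pretend TF found two people, the IR run only one.
tf_boxes = [(1071, 315, 1402, 738), (300, 200, 500, 700)]
ir_boxes = [(1071, 315, 1402, 738)]
print(unmatched(tf_boxes, ir_boxes))   # the detection missing from the IR run
```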

 

5 Replies
Shubha_R_Intel
Employee

Dear pig, dudu 

openvino_2019.1.133 is an old version of OpenVINO. We are now on *.144 (or *.148 if you are on Windows). It looks like you are trying to run the mask_rcnn_demo on FP32, which is good because it is broken on MYRIAD in OpenVINO 2019 R1.1.

Can you download the latest OpenVINO release and try again?

Thanks,

Shubha

 

pig__dudu
Beginner

Hi,

I have tried it on *.144, but there is no difference.

Shubha_R_Intel
Employee

Dearest pig, dudu,

Thank you for your patience. Can you attach the original image here so I can try it myself? Most likely I will file a bug, because it really should work.

Sincerely,

Shubha

pig__dudu
Beginner

ok

Shubha_R_Intel
Employee

Dearest pig, dudu,

I tried it in a future version, OpenVINO 2019 R2 (which has not been released yet), and it works fine. Please see the attached image.

Here is the model optimizer command I used:

python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.177\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.177\deployment_tools\model_optimizer\extensions\front\tf\mask_rcnn_support.json" --tensorflow_object_detection_api_pipeline_config pipeline.config

I used the mask_rcnn_resnet101_atrous_coco_2018_01_28 model, available from Model Optimizer Tensorflow Models.

If you are using the same model and the same commands in OpenVINO 2019 R1.1, it must be a bug that has been fixed in the upcoming release. Please see the attached output JPG image.

Hope it helps,

Thanks,

Shubha

out0-small.png
