I downloaded the newest OpenVINO SDK and found the "mask_rcnn_demo" sample in the "inference_engine\samples" folder, but where can I download the trained model for this sample project?
Hi,
You may have to use a pretrained model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md and generate your own IR with a Model Optimizer command similar to:
python3 mo.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config --data_type=FP32
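Once the IR (.xml/.bin pair) is generated, you can point the sample at it. A minimal sketch, assuming the standard -m/-i/-d options of the Inference Engine samples (the image path is a placeholder):
./mask_rcnn_demo -m frozen_inference_graph.xml -i <path_to_image> -d CPU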
nikos
nikos wrote: Hi,
You may have to use a pretrained model from https://github.com/tensorflow/models/blob/master/research/object_detecti... and generate your own IR using the model optimizer command similar to
python3 mo.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config --data_type=FP32
nikos
Thanks, nikos!
nikos wrote: Hi,
You may have to use a pretrained model from https://github.com/tensorflow/models/blob/master/research/object_detecti... and generate your own IR using the model optimizer command similar to
python3 mo.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config --data_type=FP32
nikos
Hi nikos,
I downloaded some Mask R-CNN models and tested them, but why is the speed so slow? I tested the smallest model, "mask_rcnn_inception_v2" (converted to FP16), with a 600x800 image on the GPU device, and it takes about 800 ms per inference, which is too long. Is there any optimization to reduce the inference time?
The computer I tested on is an HP ENVY 13 notebook with a UHD 620 GPU.
Hello,
Yes, that is typical speed for the UHD 620 with this kind of network. It depends on your application, but a few ways to address this are:
- Try a fast CPU (-d CPU) with FP32.
- Try a faster GPU (the UHD 620 does not have many EUs; other Intel HD GPUs run much faster).
- Try a different inference device (not CPU or GPU).
- Try async inference on more than one device.
- Try the -pc argument to study the ms per layer, then edit your network and possibly retrain (see the example after this list).
- Try a different, less deep network architecture.
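For example, a rough sketch of the -pc run, assuming the demo accepts the common sample flags (paths are placeholders):
./mask_rcnn_demo -m frozen_inference_graph.xml -i <path_to_image> -d GPU -pc
The per-layer report should show which layers dominate the ~800 ms and where pruning or retraining would help most.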
With kind regards,
nikos
Hi, I'm facing a problem converting the Mask R-CNN TensorFlow model. I use mask_rcnn_inception_v2 too, but when I ran:
python3 mo.py --input_model /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support_api_v1.7.json --tensorflow_object_detection_api_pipeline_config /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config --data_type=FP32 --output_dir ~
I encountered an error:
Common parameters:
- Path to the Input Model: /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
- Path for generated IR: /home/gpuserver
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support_api_v1.7.json
Model Optimizer version: 1.5.12.49d067a0
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation
file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (800, 800).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): The matched sub-graph contains network input node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #75.
I used TensorFlow 1.10.
Can someone help me with this problem? Thank you so much in advance.
Regards,
Hoa
Sorry, I solved my problem. It was because I used the wrong version of mask_rcnn_support.json. Thank you!
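For anyone hitting the same error: I believe the fix was swapping in the plain mask_rcnn_support.json that nikos used above instead of mask_rcnn_support_api_v1.7.json (same paths as my failing command, so treat this as a sketch rather than a verified recipe):
python3 mo.py --input_model /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config /home/gpuserver/mask_rcnn_inception_v2_coco_2018_01_28/pipeline.config --data_type=FP32 --output_dir ~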
