Nguyen__Viet
Beginner

Mask R-CNN on VPU

Hi, 

We have trained a Mask R-CNN model on an NVIDIA GPU for object instance segmentation and tested it on some images with sufficient performance. Now we are looking into deploying the trained model on a Neural Compute Stick 2. I'm just getting started with the OpenVINO toolkit, and here is what I have done: 

- I downloaded mask_rcnn_inception_v2_coco.tar.gz from the TensorFlow detection model zoo and decompressed it. 

- I used the Model Optimizer as follows to get an Intermediate Representation:

python3 mo_tf.py --input_model ./frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config ./pipeline.config --data_type FP16

(I used FP16 because the default FP32 is not supported on the VPU.)

- Then I ran the Inference Engine mask_rcnn_demo as follows: 

./mask_rcnn_demo -m ./frozen_graph.xml -i ./image.jpg -d MYRIAD

However, I got the following error: 

[ ERROR ] [VPU] Softmax input or output SecondStageBoxPredictor/ClassPredictor/BiasAdd/softmax has invalid batch 

Could someone please point me to the source of this error? 

I understand from the documentation that Mask R-CNN is currently supported only on CPU and GPU, but I would like to know whether there is anything I can do to get it running on the VPU (such as writing custom layers for layers not supported by the Model Optimizer?). I haven't found any explanation in the documentation yet of why Mask R-CNN is not supported on VPU. 
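As a first step toward writing custom layers, it can help to list which layer types the converted IR actually contains and compare them against the layers the MYRIAD plugin supports. A minimal sketch using only the Python standard library; the `<layer type="...">` attribute layout assumed here matches this era's IR format, but check it against your own generated .xml:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def layer_type_counts(xml_text):
    """Count <layer> elements by their 'type' attribute in an IR document."""
    root = ET.fromstring(xml_text)
    return Counter(layer.get("type") for layer in root.iter("layer"))

# Tiny stand-in for an IR file (a real IR has many more layers):
ir = """<net name="demo" version="4">
  <layers>
    <layer id="0" name="image_tensor" type="Input" precision="FP16"/>
    <layer id="1" name="conv1" type="Convolution" precision="FP16"/>
    <layer id="2" name="softmax" type="SoftMax" precision="FP16"/>
  </layers>
</net>"""
print(layer_type_counts(ir))  # one Input, one Convolution, one SoftMax
```

Any type that shows up here but is not in the VPU plugin's supported-layer list is a candidate for a custom layer.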

Thanks, 

Viet. 

Nguyen__Viet
Beginner

I'm continuing to debug and track down the problem. 

- I added some simple logging code to the mask_rcnn_demo source (main.cpp) and recompiled the samples to find out which function produces that error. It seems the error comes from this line: 

auto executable_network = plugin.LoadNetwork(network, {});

I looked up the API and see that this function belongs to the IInferencePlugin class. I don't have access to its implementation, so I'm stuck here. 

- I generated two versions of the IR, one FP32 and one FP16, from the Model Optimizer and compared the two frozen_inference_graph.xml files. They look identical for the layer "SecondStageBoxPredictor/ClassPredictor/BiasAdd/softmax", except for the precision (FP32 vs FP16). The FP32 IR works fine with Mask R-CNN on the CPU, but the FP16 IR produces the above error on the VPU. It's strange... 
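The comparison of the two .xml files can be scripted rather than done by eye. A hedged sketch, assuming the `<layer>` attribute layout of this era's IR format, that reports per-layer attribute differences between the two IRs:

```python
import xml.etree.ElementTree as ET

def layer_attrs(xml_text):
    """Map layer name -> attribute dict for every <layer> in an IR document."""
    root = ET.fromstring(xml_text)
    return {layer.get("name"): dict(layer.attrib) for layer in root.iter("layer")}

def diff_layers(xml_a, xml_b):
    """Return {name: (attrs_a, attrs_b)} for layers present in both IRs
    whose attributes differ."""
    a, b = layer_attrs(xml_a), layer_attrs(xml_b)
    return {name: (a[name], b[name]) for name in a.keys() & b.keys()
            if a[name] != b[name]}

# Tiny stand-ins for the two generated IRs:
fp32_ir = '<net><layer id="2" name="softmax" type="SoftMax" precision="FP32"/></net>'
fp16_ir = '<net><layer id="2" name="softmax" type="SoftMax" precision="FP16"/></net>'
diff = diff_layers(fp32_ir, fp16_ir)
# diff["softmax"] shows only the 'precision' attribute changed
```

If the only difference really is the precision attribute, that supports the idea that the graph itself is fine and the failure is in the MYRIAD plugin's handling of that Softmax, not in the converted model.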

Please point me to any source of this error. 

Thanks, 

Viet. 

huang__zeyu
Beginner

I am having the same problem, and I've found no way forward!

nikos1
Valued Contributor I

It seems that Mask R-CNNs are supported on CPU only and with batch size 1. 

https://software.intel.com/en-us/articles/OpenVINO-RelNotes

 

huang__zeyu
Beginner

nikos wrote:

It seems that Mask R-CNNs are supported on CPU only and with batch size 1. 

https://software.intel.com/en-us/articles/OpenVINO-RelNotes

 

 

If so, Faster R-CNN cannot be used either. That's sad!

nikos1
Valued Contributor I

> If so, faster-rcnn can not be used as well. 

I have not tried it yet, but it seems this would be the case for Faster R-CNN too, based on:

Faster R-CNNs and Mask R-CNNs are supported on CPU only and with batch size 1. 

Truong__Dien_Hoa
New Contributor II

Hi, any news on this topic? I'm new here, and I found in this https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-2-1 that Mask R-CNN is supported not only on CPU but also on GPU, right?

NOTE: Faster and Mask R-CNN models are supported on CPU and GPU only with batch size equal to 1.

P.S. Has anyone made it work with Python? I'm working in Python but find that OpenVINO's Python API is quite limited and not yet in the stable release. Thank you in advance.
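For the Python question, here is a hedged sketch of what NCS2 inference looks like with the 2018/2019-era Python API (IENetwork/IEPlugin). Those class names, the `image_tensor` input name, and the NCHW requirement are assumptions from that release line's docs, not verified here; only the small layout helper is plain, runnable Python:

```python
def hwc_to_chw(image):
    """Reorder an H x W x C nested-list image into C x H x W planes
    (Inference Engine expects NCHW input, while most image loaders
    produce HWC)."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

# Assumed usage -- needs the openvino package and a MYRIAD device,
# so it is left as comments here:
# from openvino.inference_engine import IENetwork, IEPlugin
# net = IENetwork(model="frozen_inference_graph.xml",
#                 weights="frozen_inference_graph.bin")
# plugin = IEPlugin(device="MYRIAD")
# exec_net = plugin.load(network=net)   # same step that raises the
#                                       # '[VPU] Softmax ...' error in C++
# out = exec_net.infer({"image_tensor": [hwc_to_chw(image)]})
```

Note that the `plugin.load(...)` call compiles the graph for the device, so a model the MYRIAD plugin rejects will fail there in Python exactly as `plugin.LoadNetwork(...)` does in the C++ demo.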