Sharp__Ben
Beginner

OpenVINO mask rcnn demo, using the deeplab model?

Hi, I am trying to run the mask-rcnn demo application from the samples of OpenVINO.

I have downloaded the models using the model_downloader.py file.

I then take the:

"models\OpenVino\semantic_segmentation\deeplab\v3\deeplabv3.frozen.pb" file, and run the model optimiser, using:

'mo_tf.py --input_model frozen_inference_graph.pb --output_dir deeplab --input_shape "(1,513,513,3)"'

This creates the xml and bin files, and I run the application, with:

'mask_rcnn_demo.exe -i street.jpg -m frozen_inference_graph.xml'

 

This gives me the error:

InferenceEngine:
        API version ............ 1.6
        Build .................. 22443
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     street.jpg
[ INFO ] Loading plugin

        API version ............ 1.6
        Build .................. 22443
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ ERROR ] Layer detection_output not found in network

 

Why does this model not work? Is the input shape wrong? If so, how can I know what the shape values should be? 

What models can I use with the mask rcnn application?

Thanks!

 

Shubha_R_Intel
Employee

Dearest Sharp, Ben,

I reproduced your issue. This is definitely a bug - sorry for the inconvenience it has caused! mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28 definitely works - I just tested it. I know it's not deeplab, but it does the job.

Please select the mask_rcnn_inception... model from the below list of supported and validated models:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_T...

I will file a bug on your behalf Ben.

Thanks,

Shubha

 

Sharp__Ben
Beginner

Hi, thank you for getting back to me. Do I need to pass `--input_shape` to this model? What is the correct command for mo.py here?

 

thanks,

 

Ben.

Shubha_R_Intel
Employee

Dearest Sharp, Ben,

I found out that I was likely wrong about what I said earlier (that it's a bug). If you pass -h to mask_rcnn_demo.exe you will see this option:

 -detection_output_name "<string>" Optional. The name of detection output layer. Default value is "detection_output"

For deeplab you need to pass the detection output layer name via -detection_output_name. The error

Layer detection_output not found in network

means that detection_output is the output layer name for a mask_rcnn model (the default for mask_rcnn_demo.exe), but for deeplab the output layer has a different name.

The next question is: how do I know what the output layer name is for deeplab? You can:

1) dump the frozen model to a text version (this is very easy to do)

2) build the summarize_graph tool from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms (easy, but compiling TensorFlow from scratch to get summarize_graph.exe takes a very long time)

3) TensorBoard - unless deeplab has TensorBoard API calls built into the model, this will be more difficult.
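
The core idea behind summarize_graph can be sketched without TensorFlow at all: an output node is simply a node that no other node lists as an input. A toy illustration of that idea (the node names below are hypothetical stand-ins, not taken from the real deeplab graph):

```python
# Toy graph: each node maps to the list of nodes it consumes.
# Node names are hypothetical stand-ins, not the real deeplab ops.
nodes = {
    "ImageTensor": [],
    "ResizeBilinear": ["ImageTensor"],
    "ArgMax": ["ResizeBilinear"],
    "SemanticPredictions": ["ArgMax"],
}

# Any node that never appears as another node's input is a graph output.
consumed = {inp for inputs in nodes.values() for inp in inputs}
outputs = [name for name in nodes if name not in consumed]
print(outputs)
```

summarize_graph applies the same reasoning to the real GraphDef protobuf, which is why it can report the output node without running the model.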

Thanks and I apologize for misleading you earlier -

Shubha

 

 

Sharp__Ben
Beginner

Ah, I see. Thank you for your reply. Just so I am 100% clear, can you please let me know the exact commands I need to run to get the mask rcnn demo working?

 

What do I pass to mo.py? Which model should I use? What should I pass to the exe?

 

Thanks!

Shubha_R_Intel
Employee

Dear Ben,

Your commands above were fine - your MO command was correct and you created the IR successfully, so there is no problem there. The issue is that you must run mask_rcnn_demo.exe with the -detection_output_name switch. For that, you must find out the output layer name for deeplab (I don't know the answer either). You may find the name by googling deeplab - perhaps someone has figured it out - but the fool-proof way is to write a little code that dumps the frozen .pb into text format, which you can then read to learn the detection output layer name.

There are plenty of examples on the web for dumping a frozen .pb to text, but here's a little code snippet (this is TensorFlow code; it has nothing to do with OpenVINO):

    import tensorflow as tf


    def load_graph(frozen_graph_filename):
        # We load the protobuf file from disk and parse it to retrieve the
        # unserialized graph_def
        with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        # Then, we import the graph_def into a new Graph and return it
        with tf.Graph().as_default() as graph:
            # The name argument will prefix every op/node in the graph
            tf.import_graph_def(graph_def, name="prefix")
        return graph


    if __name__ == '__main__':
        mygraph = load_graph("C:\\Users\\booboo\\PycharmProjects\\ForumStuff\\saved_model.pb")
        # Write the graph out as human-readable text
        tf.train.write_graph(mygraph, "./", "saved_model.txt")

 

As I mentioned, though, the above is not the only way. You can also use the TensorFlow summarize_graph tool or TensorBoard to get the same info.

Thanks,

Shubha

Matveichev, Viacheslav

Hi Ben and Shubha,

You are right - the Model Optimizer tool has successfully generated the IR. But before running the demo, please open the generated IR (it's just an .xml file) and check the name attribute of the Detection Output layer. Then, as Shubha has already suggested, use exactly that name with the -detection_output_name CLI option of the demo. It should work.
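
If you prefer to script that lookup, the IR is plain XML, so a few lines of Python can list every layer name and type. A sketch (the inline fragment below is only illustrative; a real deeplab IR will be much larger, and in practice you would use ET.parse on your generated .xml file):

```python
import xml.etree.ElementTree as ET

# Illustrative IR fragment; in practice: root = ET.parse("your_model.xml").getroot()
ir_xml = """<net name="example" version="5">
  <layers>
    <layer id="0" name="ImageTensor" type="Input"/>
    <layer id="1" name="ArgMax/Squeeze" type="Squeeze"/>
  </layers>
</net>"""

root = ET.fromstring(ir_xml)
layer_names = [(layer.get("name"), layer.get("type")) for layer in root.iter("layer")]
for name, kind in layer_names:
    print(name, kind)
```

Whatever name you find for the output layer is the value to pass after -detection_output_name on the demo command line.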

Thanks.

Shubha_R_Intel
Employee

Oops Dear Ben,

I am silly. You don't have to go through the laborious exercises I gave you, since you have already generated the IR (call it temporary insanity). As Matveichev, Viacheslav pointed out, just read your IR to get the value for -detection_output_name.

Shubha

Shubha_R_Intel
Employee

Dear Ben,

Hi again. One more thing: for deeplab you should be using the segmentation_demo (either the C++ or Python version), not the mask_rcnn_demo. Unfortunately, there seems to be a bug in the segmentation_demo with regard to the deeplab IR. It works fine on semantic-segmentation-adas-0001, however.

Thanks,

Shubha

Sharp__Ben
Beginner

Hi guys, I have attached the xml (made from the OpenVINO deeplab .pb), and I do not see anything about a Detection Output layer. Could you please take a look?

 

Thanks!

Matveichev, Viacheslav

Hi Ben,

We have discussed this again and found the root cause. Let me recap here:

  1. As we have already discussed, this deeplab model is not a MaskRCNN model and does not include a Detection Output layer. So this model must be verified with the usual segmentation_demo, not with mask_rcnn_demo.
  2. segmentation_demo handles this model successfully (inference completes). But we have found that there is a crash during the postprocessing phase, because the demo is not yet adjusted to handle an ArgMax/Squeeze output layer with 3 dimensions (as in the deeplab model); it was initially designed to handle only the usual 4-dimensional ArgMax layer (e.g. you can download semantic-segmentation-adas or another pre-trained model with the model downloader tool and check).
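
The dimension handling described in point 2 can be sketched in a few lines of Python (a toy mirror of the demo's postprocessing logic, not actual demo code):

```python
def unpack_dims(dims):
    # Derive N, C, H, W from an output blob's dimension list.
    n = dims[0]
    if len(dims) == 3:
        # deeplab-style ArgMax/Squeeze output: [N, H, W], no channel axis
        c, h, w = 1, dims[1], dims[2]
    else:
        # usual ArgMax output: [N, C, H, W]
        c, h, w = dims[1], dims[2], dims[3]
    return n, c, h, w

print(unpack_dims([1, 513, 513]))     # 3-dimensional deeplab-style output
print(unpack_dims([1, 1, 513, 513]))  # usual 4-dimensional output
```

A demo that assumes the 4-dimensional layout and blindly reads dims[3] will read past the end of a 3-dimensional output, which is consistent with the crash described above.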

So, you can try to remove the last 2 layers from your generated IR and run the usual segmentation_demo with it to check that it works. Or, if needed, we will send a patch to be applied manually to the C++/Python segmentation_demos so you can infer the initial model (until the 2019 R2 release is available, which will include the fix).

Thanks.

Shubha_R_Intel
Employee

Dearest Ben, 

As Slava explained, yes, we did find a bug in the segmentation_demo. But here is a quick patch you can make to main.cpp within C:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\inference_engine\samples\segmentation_demo\main.cpp. After making these changes, please recompile by running build_samples_msvc2017.bat. If you are using Linux, use the equivalent Linux build script.

 

main.cpp:240

size_t N = output_blob->getTensorDesc().getDims().at(0);
size_t C, H, W;
if (output_blob->getTensorDesc().getDims().size() == 3) {
    // 3-dimensional output (e.g. deeplab's ArgMax/Squeeze): [N, H, W], no channel axis
    C = 1;
    H = output_blob->getTensorDesc().getDims().at(1);
    W = output_blob->getTensorDesc().getDims().at(2);
} else {
    // Usual 4-dimensional ArgMax output: [N, C, H, W]
    C = output_blob->getTensorDesc().getDims().at(1);
    H = output_blob->getTensorDesc().getDims().at(2);
    W = output_blob->getTensorDesc().getDims().at(3);
}

 

This should work for you, Ben. Sorry for the inconvenience!

Shubha