Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error when trying to use IR model after converting a TensorFlow pb file using faster_rcnn_inception_v2_pets

Menzies__Luke
Beginner

Hi, 

I am having a problem trying to run the optimized inference model after converting a TensorFlow inference graph for a faster_rcnn_inception_v2_pets model. I have tested this model using TensorFlow and it works fine; unfortunately it is slow, and I was hoping to increase the speed.

I produced the xml and bin files using the following command:

"

python C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo.py --input_model BinDetectionModel\BinDetectionModel.pb --tensorflow_object_detection_api_pipeline_config faster_rcnn_inception_v2_pets.config --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.7.json

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Users\luke.menzies\Documents\TruckVideoData\BinDetectionModel\BinDetectionModel.pb
        - Path for generated IR:        C:\Users\luke.menzies\Documents\TruckVideoData\.
        - IR output name:       BinDetectionModel
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Offload unsupported operations:       False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Users\luke.menzies\Documents\TruckVideoData\faster_rcnn_inception_v2_pets.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.7.json
Model Optimizer version:        1.5.12.49d067a0
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Users\luke.menzies\Documents\TruckVideoData\.\BinDetectionModel.xml
[ SUCCESS ] BIN file: C:\Users\luke.menzies\Documents\TruckVideoData\.\BinDetectionModel.bin
[ SUCCESS ] Total execution time: 45.49 seconds.

"

Once the conversion was done, I tried to test the model, following an example script I found. Running the snippet shown below, I got the following error:

"

import sys
from PIL import Image
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Create a plugin for the CPU device
plugin = IEPlugin("CPU", plugin_dirs=None)

# Read IR
model_xml = r'C:\Users\luke.menzies\Documents\TruckVideoData\BinDetectionModel.xml'
model_bin = r'C:\Users\luke.menzies\Documents\TruckVideoData\BinDetectionModel.bin'
net = IENetwork.from_ir(model=model_xml, weights=model_bin)
assert len(net.inputs.keys()) == 1
assert len(net.outputs) == 1
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Load network to the plugin
exec_net = plugin.load(network=net)

....

"

     44 out_blob = next(iter(net.outputs))
     45 # Load network to the plugin
---> 46 exec_net = plugin.load(network=net)
     47 del net
     48 # Run inference

ie_api.pyx in openvino.inference_engine.ie_api.IEPlugin.load()

ie_api.pyx in openvino.inference_engine.ie_api.IEPlugin.load()

RuntimeError: Unsupported primitive of type: Proposal name: proposals

 

From looking at the forum, there is supposed to be a page in the documentation explaining TensorFlow Faster R-CNN models:

./deployment_tools/documentation/docs/TensorFlowObjectDetectionFasterRCNN.html

However, I cannot find it in the version I have. If anyone can help or point me in the right direction, I would be most grateful.

 

Thanks

 

Luke

Shubha_R_Intel
Employee

Hello Luke. That particular document you are referring to is deprecated. Please look at this document instead:

https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow

From there, use the sidebar to navigate to Convert TensorFlow* Object Detection API Models.

I noticed that you passed in --tensorflow_object_detection_api_pipeline_config. Are you re-training this model or using the pre-trained model as is? I assume you got your config from here?

https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/faster_rcnn_inception_v2_pets.config

I found a similar issue to yours here. Specifically, the instruction is to pass --cpu_extension pointing to the location of cpu_extension.dll [for Windows]. You need to build your Inference Engine samples to create that shared library. Remember: build the Release version!

https://software.intel.com/en-us/forums/computer-vision/topic/780773
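If you are loading the network from your own Python script (as in your snippet) rather than running one of the samples, the equivalent of --cpu_extension is IEPlugin.add_cpu_extension. A minimal sketch, assuming the DLL path below matches your install and build output:

"

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin("CPU", plugin_dirs=None)
# Example path - point this at the library produced by your Release build
plugin.add_cpu_extension(r"C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\bin\intel64\Release\cpu_extension_avx2.dll")

net = IENetwork.from_ir(model="BinDetectionModel.xml", weights="BinDetectionModel.bin")
exec_net = plugin.load(network=net)  # the Proposal primitive is now provided by the extension

"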

Thanks for your patience, Luke, and let me know if passing --cpu_extension on your Python script's command line helps.

Shubha

Menzies__Luke
Beginner

Thanks Shubha, 

I used the Release build of the cpu_extension library:

'C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\bin\intel64\Release\cpu_extension_avx2.dll'

And that seemed to work. I am now onto the second part: getting the results to display using the cv2 library.

To answer your question, I have re-trained the model on my own custom data.
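For reference, this is roughly the sketch I am working from for the display step (exec_net, input_blob and out_blob come from the snippet in my first post; the image path, input size and confidence threshold are placeholders, and the DetectionOutput rows are [image_id, class_id, confidence, x_min, y_min, x_max, y_max] with normalized coordinates):

"

import cv2
import numpy as np

frame = cv2.imread('test.jpg')  # placeholder image path
h, w = frame.shape[:2]

# Resize to the network's fixed input and reorder HWC -> NCHW
blob = cv2.resize(frame, (600, 600)).transpose((2, 0, 1))[np.newaxis, ...]

res = exec_net.infer({input_blob: blob})[out_blob]

# Each detection row: [image_id, class_id, confidence, x_min, y_min, x_max, y_max]
for det in res[0][0]:
    if det[2] > 0.5:  # placeholder confidence threshold
        x1, y1 = int(det[3] * w), int(det[4] * h)
        x2, y2 = int(det[5] * w), int(det[6] * h)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow('detections', frame)
cv2.waitKey(0)

"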

Thanks again

Luke
