Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Tensorflow ssd_mobilenet_v1 converted but doesn't work

OM__Balaji
Beginner
1,212 Views

I tried to optimize a custom-trained ssd_mobilenet_v1 model, and the conversion completed. When I run this command:

python mo_tf.py --input_model ~/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP32

I get output like this:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/desktop-obs-58/projects/models/vest_v5_model/frozen_inference_graph.pb
	- Path for generated IR: 	/home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	detection_boxes,detection_scores,num_detections
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json
Model Optimizer version: 	1.2.110.59f62983
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
/home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/mo/front/common/partial_infer/slice.py:88: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  value = value[slice_idx]

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 8.55 seconds.

And if I run the model, I get this error:

python object_detection_demo_ssd_async.py --model ../../../model_optimizer/tf_converted_models/vest_v5/frozen_inference_graph.xml --cpu_extension /home/desktop-obs-58/intel/computer_vision_sdk_2018.2.319/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so --input ~/Downloads/inf.mp4

Initializing plugin for CPU device...
Reading IR...
Loading IR to the plugin...
Traceback (most recent call last):
  File "object_detection_demo_ssd_async.py", line 164, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo_ssd_async.py", line 66, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 167, in inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 179, in inference_engine.ie_api.IEPlugin.load
RuntimeError: Dimentions of input layers are not equal for FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/moments/SquaredDifference/add_
/teamcity/work/scoring_engine_build/releases/cvsdk-2018-r2/ie_bridges/python/inference_engine/ie_api_impl.cpp:155

 

The thing is, a model trained from ssd_mobilenet_v1_coco works with the exact same process.

7 Replies
Severine_H_Intel
Employee

Hi Balaji, 

Can you try your model with the C++ sample object_detection_demo_ssd_async? Does it work with that sample?

Are you using Python 2.7 or Python 3? 

Best, 

Severine

OM__Balaji
Beginner

Hi Severine,

I have not tried the C++ samples yet, and I'm using Python 3.

Thank you,

Balaji
OM__Balaji
Beginner

Hi Severine,

I tried the C++ sample; I get the same error:

./object_detection_demo_ssd_async -m ~/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/vest_v5_conv/vest_v5.xml -i ~/Downloads/inf.mp4 
InferenceEngine: 
	API version ............ 1.1
	Build .................. 12419
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin

	API version ............ 1.1
	Build .................. lnx_20180510
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the sample expects
[ INFO ] Checking that the outputs are as the sample expects
[ INFO ] Loading model to the plugin
[ ERROR ] Dimentions of input layers are not equal for FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/moments/SquaredDifference/add_

Thanks,

Balaji

Severine_H_Intel
Employee

Hi Balaji, 

Can you send me your model via PM? I would like to study it. What did you train it from?

Best, 

Severine

OM__Balaji
Beginner

Hi Severine,

Let me provide the information in detail. We trained our model by following this tutorial: https://pythonprogramming.net/custom-objects-tracking-tensorflow-object-detection-api-tutorial/. We also have the author's model, which detects mac and cheese, and we converted it to an IR model (linked below in this post); that model works well on both CPU and MYRIAD devices. Following the same instructions from that blog, we created a model that detects safety vests, and it works fine with the normal TensorFlow inference code. We used the http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz fine-tuned checkpoint to start our training (as mentioned in the blog). When I tried to convert it, Model Optimizer asked me to set "preprocessed_image_width" and "preprocessed_image_height" in the config file. After I did that, it converted with a success message, but the resulting IR model throws this error:

python object_detection_demo_ssd_async.py --model PATH/frozen_inference_graph.xml --input PATH/inf.mp4 -d CPU --cpu_extension PATH/libcpu_extension_avx2.so 
Initializing plugin for CPU device...
Reading IR...
Loading IR to the plugin...
Traceback (most recent call last):
  File "object_detection_demo_ssd_async.py", line 164, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo_ssd_async.py", line 66, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 167, in inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 179, in inference_engine.ie_api.IEPlugin.load
RuntimeError: Dimentions of input layers are not equal for FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/moments/SquaredDifference/add_
/teamcity/work/scoring_engine_build/releases/cvsdk-2018-r2/ie_bridges/python/inference_engine/ie_api_impl.cpp:155
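Since the error names a specific layer, one low-tech way to investigate is to parse the generated IR .xml and compare the input/output dims recorded for each layer. This is only a sketch against a hypothetical minimal IR snippet (the real file is much larger, but uses the same `<layer>`/`<port>`/`<dim>` structure):

```python
# Sketch: list per-layer input/output dims from an OpenVINO 2018.x IR .xml
# to spot mismatched shapes (e.g. around the BatchNorm/moments subgraph).
# The IR text below is a hypothetical, minimal example, not the real model.
import xml.etree.ElementTree as ET

def layer_dims(ir_xml_text):
    """Return {layer_name: ([input port dims...], [output port dims...])}."""
    root = ET.fromstring(ir_xml_text)
    dims = {}
    for layer in root.iter("layer"):
        ins = [[int(d.text) for d in port.findall("dim")]
               for port in layer.findall("./input/port")]
        outs = [[int(d.text) for d in port.findall("dim")]
                for port in layer.findall("./output/port")]
        dims[layer.get("name")] = (ins, outs)
    return dims

sample_ir = """
<net name="demo" version="2" batch="1">
  <layers>
    <layer id="0" name="data" type="Input" precision="FP32">
      <output><port id="0"><dim>1</dim><dim>3</dim><dim>300</dim><dim>300</dim></port></output>
    </layer>
  </layers>
</net>
"""

for name, (ins, outs) in layer_dims(sample_ir).items():
    print(name, "in:", ins, "out:", outs)
```

Running this on the converted frozen_inference_graph.xml and grepping for the layer in the error message should show which input shapes disagree.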

Here are the links to the mac and cheese detector and vest detector IR models:

1. Maccheese:  https://optisolbusinessindia-my.sharepoint.com/:u:/g/personal/balaji_om_optisolbusiness_com/EWajCKxW7_9HvpcLkxgvlzoB3D1F7ggvCydZXkzpr7cqkQ?e=XJqGlQ

2. Vest:  https://optisolbusinessindia-my.sharepoint.com/:u:/g/personal/balaji_om_optisolbusiness_com/EWhC6IjashtDi2aO9fJe2NsBPOwao1MCKDHNuSu-qrCDbQ?e=fenfxV

This is my ssd_support.json config file:

[
    {
        "custom_attributes": {
            "preprocessed_image_width":300,
            "preprocessed_image_height":300
        },
        "id": "PreprocessorReplacement",
        "inputs": [
            [
                {
                    "node": "map/Shape$",
                    "port": 0
                },
                {
                    "node": "map/TensorArrayUnstack/Shape$",
                    "port": 0
                },
                {
                    "node": "map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3$",
                    "port": 2
                }
            ]
        ],
        "instances": [
            ".*Preprocessor/"
        ],
        "match_kind": "scope",
        "outputs": [
            {
                "node": "sub$",
                "port": 0
            },
            {
                "node": "map/TensorArrayStack_1/TensorArrayGatherV3$",
                "port": 0
            }
        ]
    },
    {
        "custom_attributes": {
            "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
            "confidence_threshold": 0.01,
            "keep_top_k": 200,
            "nms_threshold": 0.6,
            "pad_mode": "caffe.ResizeParameter.CONSTANT",
            "resize_mode": "caffe.ResizeParameter.WARP"
        },
        "id": "TFObjectDetectionAPIDetectionOutput",
        "include_inputs_to_sub_graph": true,
        "include_outputs_to_sub_graph": true,
        "instances": {
            "end_points": [
                "detection_boxes",
                "detection_scores",
                "num_detections"
            ],
            "start_points": [
                "Postprocessor/Shape",
                "Postprocessor/Slice",
                "Postprocessor/ExpandDims",
                "Postprocessor/Reshape_1",
                "Postprocessor/ToFloat"
            ]
        },
        "match_kind": "points"
    }
]
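In case it helps anyone reproduce this, the "instances" scope and the node names in this file are regular expressions, so a quick sanity check is to dump the node names from the frozen graph (e.g. `[n.name for n in graph_def.node]` in TensorFlow) and test them against the patterns. A sketch with a hypothetical node list standing in for the real dump:

```python
# Sketch: verify that node names from the frozen graph match the regex
# patterns used in ssd_support.json. The node list below is a hypothetical
# stand-in; in practice you would dump names from the real GraphDef.
import re

patterns = [
    "map/Shape$",
    "map/TensorArrayUnstack/Shape$",
    "map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3$",
]
scope = ".*Preprocessor/"  # the "instances" scope from the config

node_names = [  # hypothetical dump of graph_def node names
    "Preprocessor/map/Shape",
    "Preprocessor/map/TensorArrayUnstack/Shape",
    "Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3",
    "FeatureExtractor/MobilenetV1/Conv2d_0/weights",
]

# Keep only nodes inside the Preprocessor scope, then check each pattern.
in_scope = [n for n in node_names if re.match(scope, n)]
for pat in patterns:
    hits = [n for n in in_scope if re.search(pat, n)]
    print(pat, "->", hits or "NO MATCH")
```

If any pattern prints NO MATCH against the real graph, the replacement subgraph in the config will not line up with the model.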

I don't know how to PM you; please let me know how.

Thanks,

Balaji

OM__Balaji1
Beginner

Hi,

I trained on my dataset with ssd_mobilenet_v2 using TensorFlow, and it's working fine.

Thanks,

Balaji

Zhou__KF
Beginner

Hi,

I tried: python mo_tf.py --input_model ~/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP32

And it worked. But I'm wondering: can I merge the label strings into the model? I tested the pre-trained model and saw there are label strings inside.

How can I make use of mscoco_label_map.pbtxt at mo.py conversion time? Or must that happen at run time?
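From what I can tell, the IR .xml/.bin pair stores only the topology and weights, so label strings cannot be merged in at mo.py time; the demos map class IDs to names at run time instead. A minimal sketch of that mapping, parsing a hypothetical excerpt of mscoco_label_map.pbtxt (not the full file):

```python
# Sketch: map detection class IDs to label strings at run time by parsing
# a TF Object Detection label_map .pbtxt. The sample text below is a
# hypothetical two-entry excerpt of mscoco_label_map.pbtxt.
import re

def parse_label_map(pbtxt_text):
    """Return {id: display_name} from a label map in .pbtxt format."""
    labels = {}
    for item in re.finditer(r"item\s*{(.*?)}", pbtxt_text, re.DOTALL):
        body = item.group(1)
        id_m = re.search(r"\bid:\s*(\d+)", body)
        name_m = re.search(r'display_name:\s*"([^"]+)"', body)
        if id_m and name_m:
            labels[int(id_m.group(1))] = name_m.group(1)
    return labels

sample = '''
item {
  name: "/m/01g317"
  id: 1
  display_name: "person"
}
item {
  name: "/m/0199g"
  id: 2
  display_name: "bicycle"
}
'''

labels = parse_label_map(sample)
print(labels[1])  # prints "person"
```

With a table like this loaded once at startup, the class index coming out of the SSD detection output can be turned into a human-readable label when drawing boxes.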

 
