mred
Beginner

NCS2 SSD Mobilenet v2 returns zeros

Hi,

I'm trying to use the NCS2 with SSD MobileNet v2 to detect objects. My problem is that when I use the converted model for detection, all I get is a DetectionOutput blob of shape [1,1,100,7] that consists only of zeros, except for the first element, which is -1. I get the same result for different example images.
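As I understand the DetectionOutput format, each of the 100 rows in that [1,1,100,7] blob should be [image_id, label, confidence, x_min, y_min, x_max, y_max], with image_id == -1 marking the end of the valid detections; here every value before the -1 is zero as well. This is roughly how I read the results (a sketch, with res being the dict returned by infer() as in the code further down):

# Each row: [image_id, label, confidence, x_min, y_min, x_max, y_max],
# with normalized coordinates; image_id == -1 ends the list of detections.
for det in res["DetectionOutput"][0][0]:
    if det[0] == -1:
        break
    if det[2] > 0.5:  # confidence threshold
        print("label:", int(det[1]), "confidence:", float(det[2]))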

I have tried 2 different models, both with the same result. I got the models from this page: https://software.intel.com/en-us/forums/computer-vision?page=1

I used "SSD MobileNet V2 COCO" and "SSD Lite MobileNet V2 COCO". Since they are from the supported model zoo, I think the base models are not the problem.

 

This is how I convert the TensorFlow model to the IR:

python3 mo.py \
    --input_shape [1,300,300,3] \
    --tensorflow_use_custom_operations_config ../../../model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config \
    --data_type FP16 \
    --input_model object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb

 

This is the output I get when running the conversion command above:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
	- Path for generated IR: 	/opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	detection_boxes,detection_scores,num_detections
	- Input shapes: 	[1,300,300,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/../../../model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 	2019.2.0-436-gf5827d4
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2019.2.242/deployment_tools/open_model_zoo/tools/downloader/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 27.89 seconds. 

 

And this is a minimal code sample that reproduces the problem:

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IECore

model_xml = "./ssdlite_mobilenet_v2_coco_2018_05_09/mobilenetv2.xml"
model_bin = "./ssdlite_mobilenet_v2_coco_2018_05_09/mobilenetv2.bin"

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
net.batch_size = 1
print(net.outputs['DetectionOutput'].shape)

n, c, h, w = net.inputs[input_blob].shape
print(n, c, h, w)

image = cv2.imread("dog.jpg")
img = cv2.resize(image, (w,h))

exec_net = ie.load_network(network=net, device_name="MYRIAD")
res = exec_net.infer(inputs={input_blob: np.reshape(img, [n,c,w,h])})
print(res)

 

There is probably something wrong with the mo.py command, or I'm not using the Python API correctly. Any pointers would be appreciated.

Thanks

Shubha_R_Intel
Employee

Dear mred,

Please try one of our SSD samples, such as the Python object_detection_demo_ssd_async under demos. Does it produce proper output? Documentation for this demo sample is here.
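You can point the demo at your converted IR and the NCS2 roughly like this (a sketch; the input can also be a video file instead of the camera):

python3 object_detection_demo_ssd_async.py \
    -m frozen_inference_graph.xml \
    -i cam \
    -d MYRIAD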

Thanks,

Shubha

 

mred
Beginner

Hello Shubha R.,

Thank you for making me aware of the Python demos; I didn't know about them before. Yes, the Python demo produces proper output, and while taking a look at the demo code I found my error.

The demo preprocesses the frame like this:

in_frame = cv2.resize(frame, (w, h))
in_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHW
in_frame = in_frame.reshape((n, c, h, w))

In my own code, I did the resizing and reshaping, but not the transposing. After adding the line that changes the layout to CHW, the object detection works now!
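For reference, this is roughly what the preprocessing in my minimal sample above looks like with the fix applied (same variable names as in that sample):

image = cv2.imread("dog.jpg")
img = cv2.resize(image, (w, h))  # resize to the network input size (HWC layout)
img = img.transpose((2, 0, 1))   # HWC -> CHW; this was the missing step
img = img.reshape((n, c, h, w))  # add the batch dimension
res = exec_net.infer(inputs={input_blob: img})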

Thank you,

mred

Shubha_R_Intel
Employee

Dear mred,

I'm thrilled that the OpenVINO demo helped to fix your mistake.

Thanks for using OpenVINO!

Shubha

es__we
Beginner

Shubha R. (Intel) wrote:

Dear mred,

I'm thrilled that the OpenVINO demo helped to fix your mistake.

Thanks for using OpenVINO!

Shubha

I'm confused: why did he get the right result even though he didn't set the "--reverse_input_channels" option? When should I add this option? By the way, must I set the "--input_shape" option? The model seems to work well even though I did not set it.
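As I understand it, without that flag the generated IR keeps the RGB channel order the original TensorFlow model was trained with, so a BGR frame from OpenCV would need a manual swap before inference, something like this sketch:

# cv2.imread returns BGR; swap to RGB if the IR was generated
# without --reverse_input_channels from an RGB-trained model.
image = cv2.cvtColor(cv2.imread("dog.jpg"), cv2.COLOR_BGR2RGB)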
