Kaiser__Timo
Beginner
359 Views

Cannot infer Faster RCNN on MYRIAD

Hi all,

I just created a "Faster R-CNN ResNet 50" IR with OpenVINO computer_vision_sdk_2018.3.343 as shown in your documentation: ./deployment_tools/documentation/docs/TensorFlowObjectDetectionFasterRCNN.html

The frozen inference graph was downloaded from the link given in your tutorial: faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz

The conversion is done with the command from your tutorial as well, modified with --data_type FP16, because MYRIAD only supports FP16. The command I used is this:
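The device-to-precision rule I follow can be sketched as a tiny helper (the HDDL entry and the FP32 default for other devices are assumptions, not something from the tutorial):

```python
# Minimal sketch: pick the Model Optimizer --data_type for a target device.
# MYRIAD (and, as an assumption, HDDL) run FP16 only; everything else
# is assumed to accept FP32.
def data_type_for_device(device):
    fp16_only = {"MYRIAD", "HDDL"}
    return "FP16" if device.upper() in fp16_only else "FP32"
```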

~/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer$ ./mo.py --input_model=~/Desktop/faster_rcnn/frozen_inference_graph.pb --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config extensions/front/tf/legacy_faster_rcnn_support.json --output_dir=~/Desktop/faster_rcnn/IR --model_name frozen_inference_graph_FP16 --data_type FP16

The full output of the conversion is this:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/up-board/Desktop/faster_rcnn/frozen_inference_graph.pb
	- Path for generated IR: 	/home/up-board/Desktop/faster_rcnn/IR
	- IR output name: 	frozen_inference_graph_FP16
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	detection_boxes,detection_scores,num_detections
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/legacy_faster_rcnn_support.json
Model Optimizer version: 	1.2.185.5335e231
WARNING: the "PreprocessorReplacement" is a legacy replacer that will be removed in the future release. Please, consider using replacers defined in the "extensions/front/tf/ObjectDetectionAPI.py"
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
WARNING: the "TFObjectDetectionAPIFasterRCNNProposalAndROIPooling" is a legacy replacer that will be removed in the future release. Please, consider using replacers defined in the "extensions/front/tf/ObjectDetectionAPI.py"
WARNING: the "SecondStagePostprocessorReplacement" is a legacy replacer that will be removed in the future release. Please, consider using replacers defined in the "extensions/front/tf/ObjectDetectionAPI.py"
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the Inference Engine documentation for information about this layer.
The "object_detection_sample_ssd" sample can be used to run the generated model.
/home/up-board/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/common/partial_infer/slice.py:90: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  value = value[slice_idx]

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/up-board/Desktop/faster_rcnn/IR/frozen_inference_graph_FP16.xml
[ SUCCESS ] BIN file: /home/up-board/Desktop/faster_rcnn/IR/frozen_inference_graph_FP16.bin
[ SUCCESS ] Total execution time: 124.86 seconds. 

 

In a Python script I load the network into the "GPU" plugin and it works well. If I change the plugin to "MYRIAD", the following exception is thrown:

Traceback (most recent call last):
  File "~/InferenceEngine.py", line 43, in load_network
    self.exec_network = self.plugin.load(network=self.network)
  File "ie_api.pyx", line 237, in inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 249, in inference_engine.ie_api.IEPlugin.load
RuntimeError: [VPU] Reshape input or output reshape_4d_ has invalid batch
/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/ie_bridges/python/inference_engine/ie_api_impl.cpp:226

Does anyone know what's happening? Does MYRIAD not support the models shown in the tutorials?
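For context, the loading logic around line 43 of my script roughly follows this pattern: a pure fallback helper that tries a list of devices in order and keeps the first one whose load succeeds. Everything here is a sketch, not my exact code; the device list and the idea of catching RuntimeError (the type shown in the traceback above) are my assumptions.

```python
# Hypothetical sketch of falling back to another plugin when
# IEPlugin.load() raises, as it does here for MYRIAD.
def load_with_fallback(load_fn, devices=("MYRIAD", "GPU", "CPU")):
    """Call load_fn(device) for each device in order and return the
    first (device, exec_network) pair that loads without RuntimeError."""
    errors = {}
    for device in devices:
        try:
            return device, load_fn(device)
        except RuntimeError as exc:
            errors[device] = str(exc)  # remember why this device failed
    raise RuntimeError("No device could load the network: %r" % errors)
```

With the 2018 R3 Python API this would be driven by something like `load_with_fallback(lambda d: IEPlugin(device=d).load(network=network))`; for me the MYRIAD attempt is the one that raises.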

Greetings,

Timo

6 Replies
Severine_H_Intel
Employee

Dear Timo,

Indeed, MYRIAD does not support TF Faster RCNN. This is actually indicated in our documentation: https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-2-1 , but we can definitely make it clearer in our tutorial.

Best, 

Severine

RDeBo
Novice

It's now supported, right?

Shubha_R_Intel
Employee

Dear De Boer, Ronald,

Yes, today Faster R-CNN is supported on MYRIAD. Please see the Object Detection Demo sample documentation, and please update to the latest release, 2019 R1.1.

Thanks,

Shubha

Dutta__Jeet
Beginner

Hi Shubha, I tried to write and run a Faster RCNN Python script, but all I get are background predictions. Is there something special we need to do to get normal output predictions?
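Roughly, my parsing logic looks like the sketch below. The [1, 1, N, 7] blob layout, the confidence threshold, and the "label 0 means background" convention are my assumptions about the model's DetectionOutput layer, so any of them could be where I'm going wrong:

```python
# Hedged sketch: parse a DetectionOutput-style blob of shape [1, 1, N, 7],
# where each row is [image_id, label, conf, xmin, ymin, xmax, ymax].
# Assumptions: label 0 is background, and a negative image_id marks
# the end of valid detections.
def parse_detections(blob, conf_threshold=0.5):
    results = []
    for image_id, label, conf, xmin, ymin, xmax, ymax in blob[0][0]:
        if image_id < 0:  # sentinel row: no more detections
            break
        if conf >= conf_threshold and int(label) != 0:
            results.append((int(label), conf, (xmin, ymin, xmax, ymax)))
    return results
```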
Shubha_R_Intel
Employee

Dear Dutta, Jeet,

Kindly study the Faster RCNN C++ sample for guidance on how you should write your code.

Thanks,

Shubha

 

Dutta__Jeet
Beginner

Hi Shubha, I used a Python interface to do it. I will, however, take a look at the C++ sample for better results, though I assume it is the same as for all other networks. The TensorFlow Object Detection API sample works fine.