Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO model optimizer fails for YOLO-tiny ONNX conversion

christophebrown
Beginner

Hello everyone,

To preface, I am a new grad student working on a research project, so I am very new to the OpenVINO platform and doing what I can to learn so that I can use the toolkit to build interesting projects.

Today I installed the newest version of the toolkit for macOS (2020.2). My goal was to run the Model Optimizer on an ONNX implementation of the YOLO-tiny neural network, with the hope of deploying it on an Intel Neural Compute Stick 2. To start, I created a virtual environment in Anaconda Navigator, installed the toolkit's dependencies, and then installed the toolkit itself. All files were found and everything appeared to install smoothly.
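For reference, my setup steps looked roughly like this (a sketch from memory; the environment name is arbitrary, and the prerequisites script and install path are from the default 2020.2 layout on my machine):

# Create and activate a conda environment for the toolkit (name is arbitrary)
conda create -n openvino python=3.7
conda activate openvino

# Install the Model Optimizer prerequisites for ONNX (default 2020.2 macOS path)
cd /opt/intel/openvino_2020.2.117/deployment_tools/model_optimizer/install_prerequisites
./install_prerequisites_onnx.sh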

Fast forward: I downloaded an ONNX model of YOLO-tiny here: https://github.com/onnx/models

I expected to be able to convert the ONNX model to the .xml and .bin intermediate representation files as instructed here: https://docs.openvinotoolkit.org/2020.2/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html

When running the command below on the command line:

python3 mo_onnx.py --input_model <INPUT_MODEL>.onnx

I failed with this output:

(openvino)[redacted]:model_optimizer [redacted]$ python mo_onnx.py --input_model /Users/[redacted]/openvino/tiny-yolov3-11.onnx 
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/Users/[redacted]/openvino/tiny-yolov3-11.onnx
	- Path for generated IR: 	/opt/intel/openvino_2020.2.117/deployment_tools/model_optimizer/.
	- IR output name: 	tiny-yolov3-11
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
ONNX specific parameters:
Model Optimizer version: 	2020.2.0-60-g0bc66e26ff
[ ERROR ]  Cannot infer shapes or values for node "TFNodes/yolo_evaluation_layer_1/Squeeze".
[ ERROR ]  Trying to squeeze dimension not equal to 1 for node "TFNodes/yolo_evaluation_layer_1/Squeeze"
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Squeeze.infer at 0x11d244c10>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "TFNodes/yolo_evaluation_layer_1/Squeeze" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 

 

At this time I haven't been able to interpret the issue or the FAQ (still learning a lot of things at the moment!), but would anybody know why this happens? My ONNX model comes from an official GitHub repository, and I did a full installation of the toolkit. Nothing I've done up to this point is custom, so I don't know if this failure is something within my control.
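In case it helps, I can re-run with the debug log level the error message suggests; I believe the full command would be the one below (same model path as before):

python3 mo_onnx.py --input_model /Users/[redacted]/openvino/tiny-yolov3-11.onnx --log_level=DEBUG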

Any advice is appreciated. Thank you!

Edit: I am also aware of the guide for converting YOLO* models to IR format: https://docs.openvinotoolkit.org/2020.2/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html#yolov3-to-ir

In my attempt to follow this guide, I did not find it very helpful: the referenced code is for an old version of TensorFlow (the script crashes on TF 2.0 and later), and my research team's workflow is in PyTorch, which I noticed supports exporting to ONNX format. Unless absolutely necessary, I would like to avoid incorporating this method into our workflow.
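For context, our PyTorch export step looks roughly like the sketch below (a placeholder model with an assumed 416x416 input, not our actual network):

import torch
import torch.nn as nn

# Placeholder for our real network; any nn.Module exports the same way
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
model.eval()

# The dummy input fixes the input shape baked into the exported graph
dummy = torch.randn(1, 3, 416, 416)

torch.onnx.export(
    model, dummy, "model.onnx",
    opset_version=11,             # assumption; matches the "-11" in tiny-yolov3-11
    input_names=["images"],
    output_names=["output"],
)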

SuryaPSC_Intel
Employee

Hi christophebrown,

YOLO-tiny is not one of the supported ONNX topologies. Kindly use the DarkNet YOLO models, as described in Converting YOLO* Models to the Intermediate Representation (IR).
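At a high level, the flow in that guide is roughly the following (commands paraphrased from the 2020.2 documentation; <MO_ROOT> stands for your model_optimizer directory, and repository and file names may change between releases):

# Freeze the DarkNet weights to a TensorFlow .pb (the guide's converter targets TF 1.x)
git clone https://github.com/mystic123/tensorflow-yolo-v3.git
cd tensorflow-yolo-v3
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny

# Convert the frozen graph to IR using the YOLO transformations config
python3 mo_tf.py --input_model frozen_darknet_yolov3_model.pb --transformations_config <MO_ROOT>/extensions/front/tf/yolo_v3_tiny.json --batch 1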

Best Regards,

Surya
