Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

Model Optimizer: [ ERROR ] The ExpandDims node Unsqueeze_XXX has more than 1 input

TonyWong
Beginner
5,233 Views

Hi, 

What I'm doing is converting an ONNX model to IR.

My Command:

python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model "model.onnx" --output_dir  "cur_folder/"

 

And here is the brief error:

[ ERROR ] The ExpandDims node Unsqueeze_583 has more than 1 input
[ ERROR ] Cannot infer shapes or values for node "Slice_4".
[ ERROR ] Output shape: [0 3 0 0] of node "Slice_4" contains non-positive values

[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Slice.infer at 0x7fda43996f70>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

 

Please see the attachment for more details.

13 Replies
IntelSupport
Community Manager
5,208 Views

 

Hi TonyWong,

 

Thanks for reaching out.

From the error message, it shows that Model Optimizer cannot infer shapes or values for the specified node. It can happen because of a bug in the custom shape infer function, because the node inputs have incorrect values/shapes, or because the input shapes are incorrect. You have to specify the input shapes of your model.
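To illustrate why (a simplified sketch in Python, not Model Optimizer's actual implementation): if a dynamic dimension is represented as 0 during shape inference, the zero propagates through nodes such as Slice, and the positivity check then rejects the result, exactly like the "[0 3 0 0] contains non-positive values" error in your log.

```python
# Illustrative sketch (not Model Optimizer's real code): why dynamic
# dimensions can surface as a "non-positive values" shape-inference error.

def infer_slice_shape(input_shape, starts, ends):
    """Naive shape inference for a Slice over every axis.

    Dynamic (unknown) dimensions are represented as 0 here, so any
    arithmetic on them keeps producing 0.
    """
    return [max(0, min(dim, end) - start) if dim > 0 else 0
            for dim, start, end in zip(input_shape, starts, ends)]

def check_shape(shape):
    """Mimic the positivity check: every inferred dim must be > 0."""
    return all(dim > 0 for dim in shape)

# batch/height/width unknown (0), channels known (3) -> inference fails
dynamic = infer_slice_shape([0, 3, 0, 0], [0, 0, 0, 0], [1, 3, 512, 512])
print(dynamic, check_shape(dynamic))   # [0, 3, 0, 0] False

# fully specified input shape -> inference succeeds
static = infer_slice_shape([1, 3, 512, 512], [0, 0, 0, 0], [1, 3, 512, 512])
print(static, check_shape(static))     # [1, 3, 512, 512] True
```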

 

You can share your model with us for further investigation. Meanwhile, please refer to the parameter information in Converting an ONNX* Model for your specific model type.

 

Regards,

Aznie


TonyWong
Beginner
5,193 Views

Aznie,

Many thanks for your reply.

I've read the link you shared, and I've also tried to specify the shape (though the only fixed dimension is the 3 color channels).

Here is the input shape as shown in Netron: float32[batch, 3, height, width]

I've tried specifying the input shape, with no luck.

 

As the model is 95 MB, here is the Google Drive link.

Many thanks for your kind reply and help.

 

Tony

TonyWong
Beginner
5,192 Views

Oh, one thing I want to mention:

the input dimensions, unlike in all the existing samples, do not specify the batch, height, and width of the image.

Thus I do not know how to specify these values.

 

TonyWong
Beginner
5,132 Views

It's something like a dynamic input shape (and it looks like OpenVINO does not support that yet).
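For reference, here's a small sketch (a hypothetical helper, not part of OpenVINO) of what mo.py's --input_shape expects: a fully static, positive shape string, which is why the dynamic [batch, 3, height, width] input has to be pinned down first.

```python
# Hypothetical helper (not part of OpenVINO): validate a static shape
# string of the form "[1,3,512,512]" before passing it to mo.py, since
# --input_shape needs every dimension to be a concrete positive integer.

def parse_input_shape(text):
    dims = [int(d) for d in text.strip("[] ").split(",")]
    if any(d <= 0 for d in dims):
        raise ValueError(f"all dimensions must be static and positive, got {dims}")
    return dims

print(parse_input_shape("[1,3,512,512]"))  # [1, 3, 512, 512]
```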

IntelSupport
Community Manager
5,120 Views

Hi TonyWong,

I have converted your model into Intermediate Representation (IR) successfully. For your yolov5 model, there are 3 output nodes. You can use Netron to visualize the Yolov5 ONNX weights and get the information. Use the command below to get the IR files.

 

Model Optimizer command: python mo.py --input_model model.onnx --output Conv_410,Conv_322,Conv_498 --input_shape [1,3,512,512]
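Once the conversion finishes, you can do a quick check that the IR pair was written; Model Optimizer names the output files after the input model. A small sketch using only the Python standard library:

```python
# Quick sanity check for the IR pair produced by Model Optimizer.
# MO names the outputs after the input model: model.xml and model.bin.
from pathlib import Path

def ir_files_present(output_dir, model_name="model"):
    out = Path(output_dir)
    return all((out / f"{model_name}{ext}").is_file() for ext in (".xml", ".bin"))

print(ir_files_present("."))  # True only if model.xml and model.bin exist here
```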

 

I attached the screenshot of the conversion below.

 

Regards,

Aznie

 

TonyWong
Beginner
5,111 Views

Wow. Many thanks.

Quick question: may I ask why the width and height are 512?

 

IntelSupport
Community Manager
5,069 Views

Hi TonyWong,

The 512 value is a standard size commonly used with Model Optimizer; it is not taken from your model. If you do not specify an input shape, Model Optimizer uses the input shape embedded in the original model. If you specify 512, the generated IR will use 512 as its input size.

 

Regards,

Aznie


TonyWong
Beginner
5,046 Views

Thank you, Aznie.

One thing I want to mention: although your 512 parameter does make mo.py pass,

here is my modification that actually produces some results:

mo_command = f"""mo
--input_model "{onnx_path}"
--input_shape [1,3,480,640]
--output Conv_410,Conv_322,Conv_498
--output_dir "{model_path.parent}"
"""
But the result format is not as expected; I'm still working on understanding these results.
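For context on what I see: if those three Conv nodes are YOLOv5's detection heads at strides 8/16/32 (my assumption), then the IR outputs raw feature maps rather than decoded boxes, with sizes like these for a 480x640 input:

```python
# Assumption: Conv_322/Conv_410/Conv_498 are YOLOv5's three detection heads
# at strides 8, 16 and 32. Their raw outputs are undecoded feature maps,
# so the IR's output shapes differ from the post-processed ONNX outputs.

def head_shapes(height, width, channels=255, strides=(8, 16, 32)):
    """Expected raw feature-map shapes for a YOLOv5-style model.

    channels=255 assumes 3 anchors x (80 classes + 5), as in COCO models.
    """
    return [(1, channels, height // s, width // s) for s in strides]

for shape in head_shapes(480, 640):
    print(shape)
# (1, 255, 60, 80)
# (1, 255, 30, 40)
# (1, 255, 15, 20)
```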

 

FYI, your documentation is really full of problems.

For example:

this tutorial is badly outdated:

https://www.youtube.com/watch?v=6Ww_zLDGfII&list=PLg-UKERBljNxdIQir1wrirZJ50yTp4eHv&index=5&ab_channel=IntelSoftware

 

Also, your official documentation, even the first page:

https://docs.openvino.ai/latest/openvino_docs_get_started_get_started_linux.html

 

has problems. When I ran this command:

./demo_squeezenet_download_convert_run.sh

I got these errors (I'm using Ubuntu 20.04 and OpenVINO 2021.4, the latest so far):

CMake Error at object_detection_sample_ssd/config/InferenceEngineConfig.cmake:36 (message):
File or directory
/opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp/object_detection_sample_ssd/external/tbb/cmake
referenced by variable _tbb_dir does not exist !
Call Stack (most recent call first):
object_detection_sample_ssd/config/InferenceEngineConfig.cmake:114 (set_and_check)
CMakeLists.txt:237 (find_package)
object_detection_sample_ssd/CMakeLists.txt:5 (ie_add_sample)

 

Hope this feedback is of some help.

 

TonyWong
Beginner
5,044 Views

FYI, for the error mentioned in my last post:

it can be fixed by changing line 31 of the /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp/object_detection_sample_ssd/config/InferenceEngineConfig.cmake file to:


get_filename_component(PACKAGE_PREFIX_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../" ABSOLUTE)

 

But immediately, another error appears:

set_target_properties Can not find target to add properties to:
IE::inference_engine

 

These tutorial errors are really a torture for engineers who want to use OpenVINO.

 

TonyWong
Beginner
5,040 Views

Sigh, even after I managed to fix all these path issues,

I got stuck at line 133:

include("${CMAKE_CURRENT_LIST_DIR}/InferenceEngineTargets.cmake")

I've searched everywhere on my local machine and still cannot find InferenceEngineTargets.cmake.

 

Can you help?

(Yeah, I already skipped it and moved forward, but similar issues keep happening.)

 

 

 

IntelSupport
Community Manager
5,024 Views

Hi TonyWong,

Are you unable to get the IR files using the command? What do you mean by the result not being as expected? Please share the error you got when running Model Optimizer.

 

Meanwhile, the demo_squeezenet_download_convert_run.sh failure might be related to nGraph. This demo verifies that your installation was successful. Could you confirm you completed all the steps in Install Intel® Distribution of OpenVINO™ toolkit for Linux*?

Please make sure you installed all OpenVINO packages and followed the installation steps carefully. You may have forgotten to install the external dependencies. Please refer to the following link to install them:

https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-external-dependencies

 

Meanwhile, I can run the demo_squeezenet_download_convert_run.sh successfully as below:

(screenshot of the successful demo run attached)

 

Regards,

Aznie

 

TonyWong
Beginner
5,012 Views

Hi Aznie,

 

I'm not blaming anyone.

I know you are here trying to help me.

And I do appreciate it.

My company is trying to use OpenVINO on our cloud infrastructure for all our ML inference machines, which could potentially save us a lot.

 

Firstly, the 512,512 parameters you provided work for producing the IR file (but when doing inference it complains about a dimension error, so I changed it to

--input_shape [1,3,480,640]

This is just FYI, though the output format is no longer the same as the ONNX model's in Netron.

Maybe that's by design (can you confirm that?).

At least it can produce something.

 

And yes, I have followed this link:

https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html#...

to install everything.

 

As you can see, the build errors I mentioned above are path-related issues,

and I can fix those paths myself.

 

As for the screenshot you posted above: yes, at one point I did run that script and got your result.

But not last time, when I found the path configuration errors.

 

Best wishes.

I will create a new Ubuntu 20.04 VM and start from scratch. Hopefully this time there will be no silly errors.

 

Let's close this question, and I will post other questions separately.

 

Again, thank you, Aznie.

Best wishes.

IntelSupport
Community Manager
4,994 Views

Hi TonyWong,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

