Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Check 'PartialShape(shape).refines(get_partial_shape())' failed at core/src/runtime/host_tensor.cpp:

yaswanthan
Beginner

Hi guys,

          I am trying to convert a custom EfficientDet model to ONNX and then to the OpenVINO IR format. I am able to convert the model to ONNX, but when converting from ONNX to OpenVINO, I get the following error:

"Check 'PartialShape(shape).refines(get_partial_shape())' failed at core/src/runtime/host_tensor.cpp:141:
Allocation shape {1} must be compatible with the partial shape: {0}"

Note: I am attaching a compressed file so you can reproduce this issue on your end (I would appreciate it!).

Thank you, and I will be waiting for your reply.

1 Solution
Peh_Intel
Moderator

Hi yaswanthan,


For your information, you can directly load ONNX model into OpenVINO™ toolkit Inference Engine without converting into Intermediate Representation (IR).


I was able to convert your ONNX model into Intermediate Representation (IR) with OpenVINO™ toolkit 2022.1.0. However, when running inference on the IR with the Benchmark App, I got the same errors as you. I then ran the ONNX model directly through the Benchmark App and got the same errors as well.


Please check your ONNX model to validate that the model itself works correctly.



Regards,

Peh



Peh_Intel
Moderator

Hi yaswanthan,


Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.



Regards,

Peh

