Intel® Distribution of OpenVINO™ Toolkit

Is Einsum (opset7) implemented for Google OWLv2?

AlbertS
Beginner

Hi, I'm trying to compile "owlv2-base-patch16-finetuned" in ONNX format and get:

RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported

; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)

Aznie_Intel
Moderator

Hi AlbertS,

Thank you for reaching out. Could you please confirm the version of your OpenVINO installation? The error indicates that the Intel GPU and CPU plugins in your build do not support the Einsum operation. However, the latest release, 2024.6, does support it; see the Supported Operations for ONNX documentation for details.
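A quick way to confirm the version from Python (a minimal sketch; get_version() is part of the standard openvino package):

import openvino as ov

# Print the full runtime version string, e.g. '2024.6.0-...'
print(ov.get_version())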

Regards,

Aznie


AlbertS
Beginner

Hi,

Thank you for the reply. Yes, I used 2024.6.0 and saw support for Einsum in the documentation, so I was also surprised. Could you, for example, try to compile this one: https://huggingface.co/Xenova/owlv2-base-patch16-finetuned ?
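For reference, one way to fetch the ONNX export is with huggingface_hub (a sketch; the in-repo path onnx/model.onnx is an assumption about the repository layout, not confirmed here):

from huggingface_hub import hf_hub_download

# Download the ONNX export; the filename inside the repo is an assumption
model_path = hf_hub_download(
    repo_id="Xenova/owlv2-base-patch16-finetuned",
    filename="onnx/model.onnx",
)
print(model_path)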

Aznie_Intel
Moderator

Hi AlbertS,

Could you please let me know how you downloaded the model files?

Regards,

Aznie


Aznie_Intel
Moderator

 

Hi AlbertS,

 

We have validated the issue and will escalate it to the engineering team for further investigation. I will provide you with updates as soon as the information becomes available.

Regards,

Aznie


Witold_Intel
Employee

Hi Albert,


Thanks for reporting this case to us. Could you tell us your platform and NPU driver version? We should check if there's a compatibility issue.
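One quick way to check which devices the OpenVINO runtime itself detects (a minimal sketch using the standard Python API; an NPU would appear as "NPU" in this list if a driver were present):

import openvino as ov

# List the inference devices OpenVINO can see on this machine
core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']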


AlbertS
Beginner

Hi,

This is an Intel NUC12WSKi7 with a 12th Gen Intel(R) Core(TM) i7-1260P, 2.10 GHz.

I don't think I have an NPU driver installed; I can't see any "Neural Processor" in Device Manager (Windows)...

Witold_Intel
Employee

Thank you. True, this is an ONNX model, so an NPU driver is not required.


What command was used for compilation/prediction? Was it similar to this demo? https://docs.openvino.ai/2024/openvino-workflow/model-server/ovms_demo_using_onnx_model.html


AlbertS
Beginner
import openvino as ov

model_path = "owlv2-base-patch16-finetuned.onnx"
core = ov.Core()
model_onnx = core.read_model(model=model_path)
compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")

This fails with:

compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")
File ".\venv\lib\site-packages\openvino\runtime\ie_api.py", line 543, in compile_model
super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported

; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)

Witold_Intel
Employee

Thank you for the command; I will try it out and get back to you.


Witold_Intel
Employee

I would just like to check whether the issue still occurs on your side. Perhaps this case can be closed?


Witold_Intel
Employee

I just wanted to share that I am still diagnosing your case and will return to you as soon as possible. Thank you for your patience.


AlbertS
Beginner

Hi!

Thank you for the information and your involvement. I believe it is important to clarify this for the community, as the issue is not limited to just this model...

Witold_Intel
Employee

Hello,


I reported your issue to OpenVINO developers. In the meantime, I can suggest a few workarounds:


a) retrying with OpenVINO 2025.0


b) using ONNX Runtime until a permanent fix is found


import onnxruntime as ort

# Load the ONNX model
model_path = "owlv2-base-patch16-finetuned.onnx"
session = ort.InferenceSession(model_path)

# Prepare input data
input_name = session.get_inputs()[0].name
input_shape = session.get_inputs()[0].shape
input_data = ...  # Replace with your input data as a numpy array

# Run inference
outputs = session.run(None, {input_name: input_data})
print(outputs)
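If you need to pin a specific execution provider, InferenceSession also accepts a providers argument, e.g. ort.InferenceSession(model_path, providers=["CPUExecutionProvider"]).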


c) replacing Einsum with MatMul and ReduceSum


import onnx
from onnx import helper

# Load the ONNX model
model_path = "owlv2-base-patch16-finetuned.onnx"
model = onnx.load(model_path)

# Find the Einsum node
einsum_node = next(node for node in model.graph.node if node.op_type == "Einsum")

# Replace the Einsum node with supported operations
# Example: replace with MatMul and ReduceSum (this is just a placeholder;
# the actual decomposition depends on the einsum equation and must be
# implemented for your model)
new_node_1 = helper.make_node(
    'MatMul',
    einsum_node.input,
    ['intermediate_output'],
    name=einsum_node.name + '_matmul'
)
new_node_2 = helper.make_node(
    'ReduceSum',
    ['intermediate_output'],
    einsum_node.output,
    name=einsum_node.name + '_reduce_sum'
)

# Remove the old Einsum node and add the new nodes
model.graph.node.remove(einsum_node)
model.graph.node.extend([new_node_1, new_node_2])

# Save the modified model
modified_model_path = "owlv2-base-patch16-finetuned_modified.onnx"
onnx.save(model, modified_model_path)
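After graph surgery like this, it is also worth validating the modified graph, e.g. with onnx.checker.check_model(model), before trying to compile it again.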


d) conversion with Model Optimizer


mo --input_model owlv2-base-patch16-finetuned.onnx
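Note that in recent OpenVINO releases the mo tool is deprecated in favor of ovc / openvino.convert_model. A minimal Python equivalent, assuming a 2023.1-or-newer install:

import openvino as ov

# Convert the ONNX model to OpenVINO IR and save it to disk
ov_model = ov.convert_model("owlv2-base-patch16-finetuned.onnx")
ov.save_model(ov_model, "owlv2-base-patch16-finetuned.xml")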


You're welcome to try one or more of those and let us know if it has worked for you.


Witold_Intel
Employee

Hello, I will step in on this issue as it has not been assigned to a developer yet. Please bear with me.


Witold_Intel
Employee

Hi Albert,


I have an update for you. A developer responsible for the Einsum feature extensions ran your model on both OpenVINO 2024.6 and the 2025.1.0.dev20250314 nightly build. The first showed the same issue, but the second worked. I would therefore recommend moving to the nightly build as the best fix at the moment.


OpenVINO version: 2025.1.0-18477-ac3469bb5f3

Model compiled successfully
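If you installed OpenVINO through pip, the nightly builds come from a separate package index; per the OpenVINO documentation the install is along these lines (the exact index URL may change between releases):

pip install --pre --upgrade openvino --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly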


Can we support you further?




AlbertS
Beginner

Ok, that explains it all, thanks for including the fix in the new version!

Witold_Intel
Employee

Thanks for the acknowledgment; in that case I will de-escalate this issue. Good luck with your further endeavors with OpenVINO.


Aznie_Intel
Moderator

Hi AlbertS,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question. 



Regards,

Aznie



