Hi, I'm trying to compile "owlv2-base-patch16-finetuned" in ONNX format and receive:
RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported
; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)
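For what it's worth, the plugins can be asked directly which operations they support; a minimal sketch with core.query_model, assuming the same model file as above:
import openvino as ov
core = ov.Core()
model = core.read_model("owlv2-base-patch16-finetuned.onnx")
# query_model returns the subset of operations the plugin can handle;
# anything missing from it (such as /class_head/Einsum here) is unsupported
supported = core.query_model(model, "CPU")
unsupported = [op.get_friendly_name() for op in model.get_ops()
               if op.get_friendly_name() not in supported]
print(unsupported)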
Hi Alberts,
Thank you for reaching out. Could you please confirm your OpenVINO version? The error suggests that the Intel GPU and CPU plugins do not support the "Einsum" operation. However, the latest version, 2024.6, does support this operation. You may refer to the Supported Operations for ONNX documentation for more details.
Regards,
Aznie
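For reference, the installed version can be confirmed with the public get_version() API; a minimal check:
import openvino as ov
# Prints the full OpenVINO build string, e.g. "2024.6.0-..."
print(ov.get_version())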
Hi
Thank you for the reply. Yes, I used 2024.6.0 and saw support for Einsum in the documentation, so I was also surprised. Could you, for example, try to compile this one: https://huggingface.co/Xenova/owlv2-base-patch16-finetuned ?
Hi Alberts,
Could you please let me know how you downloaded the model files?
Regards,
Aznie
Hi
You can download it from here: https://huggingface.co/Xenova/owlv2-base-patch16-finetuned/tree/main/onnx
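For anyone reproducing this, the file can also be fetched programmatically; a minimal sketch with huggingface_hub (the filename "onnx/model.onnx" is an assumption; check the repo tree for the exact name):
from huggingface_hub import hf_hub_download
# Download the ONNX export; the filename is assumed from the usual
# repo layout and should be verified against the repo tree linked above
path = hf_hub_download(
    repo_id="Xenova/owlv2-base-patch16-finetuned",
    filename="onnx/model.onnx",
)
print(path)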
Hi Alberts,
We have validated the issue and will escalate it to the engineering team for further investigation. I will provide you with updates as soon as the information becomes available.
Regards,
Aznie
Hi Albert,
Thanks for reporting this case to us. Could you tell us your platform and NPU driver version? We should check whether there's a compatibility issue.
Hi
This is an Intel NUC12WSKi7 with a 12th Gen Intel(R) Core(TM) i7-1260P, 2100 MHz.
I don't think I have an NPU driver installed; I can't see any "Neural Processor" entry in Device Manager (Windows)...
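For reference, the devices the runtime actually detects can be listed directly; a minimal check (an NPU would appear as "NPU"):
import openvino as ov
# Lists detected devices, e.g. ['CPU', 'GPU']; no "NPU" entry means no usable NPU
print(ov.Core().available_devices)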
Thank you, true, this is an ONNX model, so an NPU driver is not required.
What command was used for compilation/prediction? Was it similar to this demo? https://docs.openvino.ai/2024/openvino-workflow/model-server/ovms_demo_using_onnx_model.html
import openvino as ov

model_path = "owlv2-base-patch16-finetuned.onnx"
core = ov.Core()
model_onnx = core.read_model(model=model_path)
compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")
compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")
File ".\venv\lib\site-packages\openvino\runtime\ie_api.py", line 543, in compile_model
super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported
; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)
Thank you for the command; I will try it out and get back to you.
I would just like to check whether the issue still occurs on your side. Perhaps this case can be closed?
I just wanted to share that I am still diagnosing your case and will get back to you as soon as possible. Thank you for your patience.
Hi!
Thank you for the information and your involvement. I believe it is important for the community to clarify this, as the issue is not limited to just this model...
Hello,
I reported your issue to OpenVINO developers. In the meantime, I can suggest a few workarounds:
a) retrying with OpenVINO 2025.0
b) using ONNX Runtime until a permanent fix is found
import numpy as np
import onnxruntime as ort

# Load the ONNX model
model_path = "owlv2-base-patch16-finetuned.onnx"
session = ort.InferenceSession(model_path)

# Prepare input data
input_name = session.get_inputs()[0].name
input_shape = session.get_inputs()[0].shape
input_data = ...  # Replace with your input data as a numpy array

# Run inference
outputs = session.run(None, {input_name: input_data})
print(outputs)
c) Einsum replacement with MatMul and ReduceSum (a validation sketch follows after this list)
import onnx
from onnx import helper

# Load the ONNX model
model_path = "owlv2-base-patch16-finetuned.onnx"
model = onnx.load(model_path)

# Find the Einsum node and remember its position so the replacement
# nodes keep the graph topologically ordered
einsum_index, einsum_node = next(
    (i, node) for i, node in enumerate(model.graph.node)
    if node.op_type == "Einsum"
)

# Replace the Einsum node with supported operations
# Example: MatMul followed by ReduceSum (this is just a placeholder;
# the actual replacement must match the model's Einsum equation)
new_node_1 = helper.make_node(
    'MatMul',
    einsum_node.input,
    ['intermediate_output'],
    name=einsum_node.name + '_matmul'
)
new_node_2 = helper.make_node(
    'ReduceSum',
    ['intermediate_output'],
    einsum_node.output,
    name=einsum_node.name + '_reduce_sum'
)

# Remove the old Einsum node and splice the new nodes in at its position
model.graph.node.remove(einsum_node)
model.graph.node.insert(einsum_index, new_node_2)
model.graph.node.insert(einsum_index, new_node_1)

# Save the modified model
modified_model_path = "owlv2-base-patch16-finetuned_modified.onnx"
onnx.save(model, modified_model_path)
d) conversion with Model Optimizer
mo --input_model owlv2-base-patch16-finetuned.onnx
You're welcome to try one or more of those and let us know whether any of them worked for you.
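As a follow-up to workaround c), the rewritten graph should be validated before use; a minimal sketch with onnx.checker and shape inference, assuming the modified file saved above:
import onnx
# Load the modified model saved by the replacement script
modified_model_path = "owlv2-base-patch16-finetuned_modified.onnx"
model = onnx.load(modified_model_path)
# Raises if the graph is malformed or no longer topologically ordered
onnx.checker.check_model(model)
# Re-run shape inference to confirm the replacement nodes produce
# tensors of the shapes the rest of the graph expects
inferred = onnx.shape_inference.infer_shapes(model)
print("Model check passed")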
Hello, I will take over this issue, as it has not yet been assigned to a developer. Please bear with me.
Hi Albert,
I have an update for you. A developer responsible for the Einsum feature extension ran your model on both OpenVINO 2024.6 and the 2025.1.0.dev20250314 nightly build. The first showed the same issue, but the second worked. In that case, I would recommend moving to the nightly build as the best fix at the moment.
OpenVINO version: 2025.1.0-18477-ac3469bb5f3
Model compiled successfully
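For reference, nightly wheels can be installed with pip; the command below follows the OpenVINO documentation (verify the index URL against the current docs):
pip install --pre -U openvino --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly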
Can we support you further?
OK, that explains it all. Thanks for including the fix in the new version!
Thanks for the acknowledgment; in that case I will de-escalate this issue. Good luck with your further endeavors with OpenVINO.
Hi AlbertS,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie
