Hi, I'm trying to compile "owlv2-base-patch16-finetuned" in ONNX format and receive:
RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported
; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)
Hi Alberts,
Thank you for reaching out. Could you please confirm your OpenVINO version? The error indicates that the Intel GPU and CPU plugins in your build do not support the "Einsum" operation. However, the latest version, 2024.6, does support this operation. You may refer to the Supported Operations for ONNX documentation for more details.
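For example, to confirm the installed version from Python (a minimal check, assuming the standard openvino pip package):
import openvino as ov
# Print the installed OpenVINO runtime version string
print(ov.get_version())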
Regards,
Aznie
Hi
Thank you for the reply. Yes, I used 2024.6.0 and saw support for Einsum in the documentation, so I was also surprised. Can you, for example, try to compile this one: https://huggingface.co/Xenova/owlv2-base-patch16-finetuned ?
Hi
You can download it from here: https://huggingface.co/Xenova/owlv2-base-patch16-finetuned/tree/main/onnx
Hi Alberts,
We have validated the issue and will escalate it to the engineering team for further investigation. I will provide you with updates as soon as the information becomes available.
Regards,
Aznie
Hi
This is an Intel NUC12WSKi7 / 12th Gen Intel(R) Core(TM) i7-1260P, 2100 MHz.
I don't think I have the NPU driver installed; I can't see any "Neural Processor" in Device Manager (Windows)...
Thank you, true, this is an ONNX model, so the NPU driver is not required.
What command was used for compilation/prediction? Was it similar to this demo? https://docs.openvino.ai/2024/openvino-workflow/model-server/ovms_demo_using_onnx_model.html
model_path = "owlv2-base-patch16-finetuned.onnx"
core = ov.Core()
model_onnx = core.read_model(model=model_path)
compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")
compiled_model = core.compile_model(model=model_onnx, device_name="AUTO")
File ".\venv\lib\site-packages\openvino\runtime\ie_api.py", line 543, in compile_model
super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src\inference\src\cpp\core.cpp:107:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\auto\src\auto_schedule.cpp:443:
[AUTO] compile model failed, GPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_gpu\src\plugin\program_builder.cpp:249:
Operation: /class_head/Einsum of type Einsum(opset7) is not supported
; CPU:Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\node.cpp:1535:
Unsupported operation of type: Einsum name: /class_head/Einsum
Details:
Exception from src\plugins\intel_cpu\src\nodes\reference.cpp:17:
Not Implemented:
Cannot fallback on ngraph reference implementation (Ngraph::Node::evaluate() is not implemented)
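As a side note (a diagnostic sketch, not part of the original posts), Core.query_model can report in advance which operations a given plugin supports, which pinpoints failures like this one before compiling:
import openvino as ov

core = ov.Core()
model = core.read_model("owlv2-base-patch16-finetuned.onnx")
# query_model maps each supported node name to the device; nodes absent
# from the result (such as /class_head/Einsum here) are unsupported
supported = core.query_model(model, "CPU")
unsupported = [op.get_friendly_name() for op in model.get_ops()
               if op.get_friendly_name() not in supported]
print(unsupported)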
Hello,
I reported your issue to OpenVINO developers. In the meantime, I can suggest a few workarounds:
a) retrying with OpenVINO 2025.0
b) using ONNX Runtime until a permanent fix is found
import onnxruntime as ort

# Load the ONNX model, explicitly selecting the CPU execution provider
model_path = "owlv2-base-patch16-finetuned.onnx"
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Prepare input data
input_name = session.get_inputs()[0].name
input_shape = session.get_inputs()[0].shape
input_data = ...  # Replace with your input data as a numpy array matching input_shape

# Run inference; passing None as the first argument returns all model outputs
outputs = session.run(None, {input_name: input_data})
print(outputs)
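As a related option (an assumption about your environment, not from the original reply), the separate onnxruntime-openvino package exposes an "OpenVINOExecutionProvider" that can be passed to InferenceSession instead, if you still want Intel-accelerated execution through ONNX Runtime.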
c) Einsum replacement with MatMul and ReduceSum
import onnx
from onnx import helper

# Load the ONNX model
model_path = "owlv2-base-patch16-finetuned.onnx"
model = onnx.load(model_path)

# Find the Einsum node and remember its position so the graph stays topologically sorted
einsum_idx, einsum_node = next(
    (i, node) for i, node in enumerate(model.graph.node) if node.op_type == "Einsum"
)

# Replace the Einsum node with supported operations.
# Example: MatMul followed by ReduceSum (this is just a placeholder; the correct
# decomposition depends on the node's "equation" attribute)
intermediate = einsum_node.name + '_matmul_out'
new_node_1 = helper.make_node(
    'MatMul',
    einsum_node.input,
    [intermediate],
    name=einsum_node.name + '_matmul'
)
new_node_2 = helper.make_node(
    'ReduceSum',
    [intermediate],
    einsum_node.output,
    name=einsum_node.name + '_reduce_sum'
)

# Remove the old Einsum node and insert the replacements at the same position
model.graph.node.remove(einsum_node)
model.graph.node.insert(einsum_idx, new_node_2)
model.graph.node.insert(einsum_idx, new_node_1)

# Validate and save the modified model
onnx.checker.check_model(model)
modified_model_path = "owlv2-base-patch16-finetuned_modified.onnx"
onnx.save(model, modified_model_path)
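Before attempting such a rewrite, it is worth printing the node's equation so you know which decomposition is actually required (a small helper sketch, reusing the einsum_node variable from above):
# The "equation" attribute is stored as bytes; decode it to see the einsum spec
for attr in einsum_node.attribute:
    if attr.name == "equation":
        print(onnx.helper.get_attribute_value(attr).decode("utf-8"))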
d) conversion with Model Optimizer
mo --input_model owlv2-base-patch16-finetuned.onnx
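Note that on newer OpenVINO releases, where the legacy mo tool is deprecated, the ovc converter is its replacement (assuming a default installation):
ovc owlv2-base-patch16-finetuned.onnx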
You're welcome to try one or more of these and let us know if any of them works for you.
Hi Alberts,
I have an update for you. A developer responsible for the Einsum feature extension has run your model on both OpenVINO 2024.6 and the 2025.1.0.dev20250314 nightly build. The first showed the same issue, but the latter worked. In that case I would recommend moving to the nightly build as the best fix at the moment.
OpenVINO version: 2025.1.0-18477-ac3469bb5f3
Model compiled successfully
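For reference, the nightly wheels can be installed with pip from the nightly channel (adjust if your setup differs):
pip install --pre -U openvino --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly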
Can we support you further?
Ok, that explains it all, thanks for including the fix in the new version!
