Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Is it possible to use ONNX Runtime in an OpenVINO pipeline with RemoteTensor?

UlkuTuncerKucuktas

https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

The page above shows that it is possible to define a remote context. Is it possible to connect it with a RemoteTensor?

ov::Core core;
auto gpu_context = core.get_default_context("GPU.1");
std::shared_ptr<ov::Model> model = core.read_model("model.xml");
ov::CompiledModel compiled_model = core.compile_model(model, "GPU.1");
ov::InferRequest model_infer = compiled_model.create_infer_request();

ov::RemoteTensor output_tensor = gpu_context.create_tensor(..);
model_infer.set_tensor(.., output_tensor); // set_tensor is called on the infer request, not the model

How would I feed this tensor into an ONNX Runtime model, or get the output back from ONNX Runtime as a remote tensor?
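
For context, the linked EP page configures the provider with a device context roughly like this. The exact option fields vary by ONNX Runtime version, so treat this as a sketch; "model.onnx" and my_cl_context are placeholders:

#include <onnxruntime_cxx_api.h>

Ort::Env env;
Ort::SessionOptions session_options;
OrtOpenVINOProviderOptions options;
options.device_type = "GPU_FP32";
// options.context = my_cl_context;  // OpenCL context handle, per the EP docs
session_options.AppendExecutionProvider_OpenVINO(options);
Ort::Session session(env, "model.onnx", session_options);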

Iffa_Intel
Moderator

Hi,

 

For inferencing, you need to feed a converted model into the inference code.

What you have shown above is inference code.

 

This part should point to the location of your converted ONNX model:

std::shared_ptr<ov::Model> model = core.read_model("model.xml");

 

Any changes required to your original model need to be made before integrating it, as that is beyond the scope of OpenVINO.

 

General flow:

Neural network model (ONNX, TF, etc.) --> convert to OpenVINO IR format (.xml & .bin) --> run inference with OpenVINO
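
The conversion step can also be done from C++. A minimal sketch, assuming a recent OpenVINO release where ov::save_model is available ("model.onnx" is a placeholder path):

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // read_model() accepts an ONNX file directly, no prior conversion needed
    std::shared_ptr<ov::Model> model = core.read_model("model.onnx");
    // serialize to OpenVINO IR (writes model.xml and model.bin)
    ov::save_model(model, "model.xml");
    return 0;
}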

 

Cordially,

Iffa

 

 

UlkuTuncerKucuktas

Thanks for the answer, but what I want to do is not convert that model to OpenVINO IR. My model is dynamic, and some of its operations cause problems on GPU. I want to take an output from an OpenVINO IR model and feed it to ONNX Runtime.

Pipeline:

OpenVINO IR --------> ONNX Runtime --------> OpenVINO IR

 

I am asking if it is possible to make these transfers with a RemoteTensor, and if so, how?
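
If a RemoteTensor hand-off is not possible, the only bridge I can see is a host-memory copy between the two runtimes, something like the sketch below (model paths and tensor names are placeholders, and I assume f32 outputs):

#include <onnxruntime_cxx_api.h>
#include <openvino/openvino.hpp>
#include <vector>

int main() {
    // Stage 1: run the first OpenVINO IR model on GPU.
    ov::Core core;
    ov::CompiledModel stage1 = core.compile_model("stage1.xml", "GPU.1");
    ov::InferRequest req = stage1.create_infer_request();
    // ... set inputs here ...
    req.infer();
    ov::Tensor ov_out = req.get_output_tensor();  // host tensor; device data is copied back

    // Stage 2: hand the result to ONNX Runtime through host memory.
    Ort::Env env;
    Ort::SessionOptions opts;
    Ort::Session session(env, "stage2.onnx", opts);
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

    ov::Shape shape = ov_out.get_shape();
    std::vector<int64_t> dims(shape.begin(), shape.end());
    Ort::Value ort_in = Ort::Value::CreateTensor<float>(
        mem, ov_out.data<float>(), ov_out.get_size(), dims.data(), dims.size());

    const char* in_names[] = {"input"};    // placeholder tensor names
    const char* out_names[] = {"output"};
    auto ort_out = session.Run(Ort::RunOptions{nullptr},
                               in_names, &ort_in, 1, out_names, 1);

    // ort_out[0].GetTensorMutableData<float>() could then be wrapped in an
    // ov::Tensor for the third stage; every hop here is a host copy,
    // which is exactly what I was hoping RemoteTensor could avoid.
    return 0;
}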

Iffa_Intel
Moderator

Hi,


After some clarification, this seems not to be possible.


OpenVINO Runtime can only read IR or ONNX models, and ONNX Runtime cannot read IR files.

Once the original model is optimized to IR, we can't get the model back to ONNX and recompress it to IR again.
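
To illustrate the asymmetry (paths are placeholders):

ov::Core core;
auto from_ir   = core.read_model("model.xml");   // OpenVINO Runtime reads IR
auto from_onnx = core.read_model("model.onnx");  // ...and reads ONNX directly
// Ort::Session session(env, "model.xml", opts); // ONNX Runtime cannot load an IR file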



Cordially,

Iffa


Iffa_Intel
Moderator

Hi,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Cordially,

Iffa

