https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
The page above shows that it is possible to define a remote context. Is it possible to connect it with a RemoteTensor?
ov::Core core;
// get_default_context() returns the device's ov::RemoteContext
ov::RemoteContext gpu_context = core.get_default_context("GPU.1");
std::shared_ptr<ov::Model> model = core.read_model("model.xml");
ov::CompiledModel compiled_model = core.compile_model(model, "GPU.1");
ov::InferRequest model_infer = compiled_model.create_infer_request();
// create_tensor() needs an element type and shape (example values here)
ov::RemoteTensor output_tensor = gpu_context.create_tensor(ov::element::f32, ov::Shape{1, 1000});
// set_tensor() is called on the infer request; "output" is a placeholder name
model_infer.set_tensor("output", output_tensor);
How can I feed this tensor to an ONNX Runtime model, or get the output from ONNX Runtime back as a remote tensor?
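To be concrete, my understanding from the page above is that an existing OpenCL context can be handed to the OpenVINO execution provider through the provider options, roughly like the sketch below (the cl_context comes from my own setup; the device_type string and model path are placeholders):

#include <CL/cl.h>
#include <onnxruntime_cxx_api.h>

// Sketch: pass our own OpenCL context to the OpenVINO execution provider,
// so the ORT session and the OpenVINO-side code share one device context.
void create_session_with_shared_context(cl_context my_cl_context) {
    OrtOpenVINOProviderOptions options{};
    options.device_type = "GPU_FP32";        // example device string
    options.context = (void*)my_cl_context;  // the shared OpenCL context
    Ort::SessionOptions session_options;
    session_options.AppendExecutionProvider_OpenVINO(options);
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ov-ep");
    Ort::Session session(env, "model.onnx", session_options);
    // ... run the session via Session::Run or Ort::IoBinding ...
}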
Hi,
For inferencing, you need to feed a converted model into your inferencing code. What you have shown above is inferencing code.
This part should point to the location of your converted model:
std::shared_ptr<ov::Model> model = core.read_model("model.xml");
Any changes required to your original model need to be made before integration, as that is beyond the scope of OpenVINO.
General flow:
Neural network model (ONNX, TF, etc.) --> convert to OpenVINO IR format (.xml & .bin) --> infer it with OpenVINO
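For completeness, the last step of that flow looks roughly like this in the C++ API (the model path, device, and input shape are placeholders):

#include <openvino/openvino.hpp>

// Minimal sketch of the "infer it with OpenVINO" step: read the converted IR,
// compile it for a device, and run one synchronous inference.
int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    ov::CompiledModel compiled = core.compile_model(model, "GPU.1");
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input(ov::element::f32, ov::Shape{1, 3, 224, 224});  // example shape
    request.set_input_tensor(input);
    request.infer();
    ov::Tensor output = request.get_output_tensor();
    return 0;
}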
Cordially,
Iffa
Thanks for the answer, but what I want to do is not convert the model to OpenVINO IR. My model is dynamic, and some of its operations cause problems on the GPU. I want to take an output from OpenVINO IR and feed it to ONNX Runtime.
Pipeline:
OpenVINO IR --------> ONNX Runtime --------> OpenVINO IR
I am asking whether it is possible to make these transfers with a remote tensor, and if so, how.
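To make the question concrete, here is what I have in mind on the OpenVINO side, assuming both runtimes are set up to share one OpenCL context (the element type and shape are example values):

#include <CL/cl.h>
#include <openvino/openvino.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Wrap an existing OpenCL buffer (e.g. one an ONNX Runtime session reads or
// writes) as an OpenVINO remote tensor, so both sides see the same device memory.
ov::RemoteTensor wrap_shared_buffer(ov::Core& core, cl_context ctx, cl_mem shared_buf) {
    ov::intel_gpu::ocl::ClContext gpu_context(core, ctx);  // reuse the shared context
    return gpu_context.create_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224}, shared_buf);
}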
Hi,
After some clarification, this conversion does not seem to be possible.
The OpenVINO Runtime can only read IR or ONNX models, and ONNX Runtime cannot read IR files.
Once the original model has been optimized to IR, we cannot convert it back to ONNX and then compress it to IR again.
Cordially,
Iffa
Hi,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Cordially,
Iffa