Hi,
I have an IR model that runs inference correctly on CPU and iGPU, but when we run it on an Intel discrete graphics card the whole system crashes and inference never starts.
It hangs at the point where we call the infer request:
results = compiled_model.infer_new_request({0:I0,1:I1})
I am using the OpenVINO API 2.0 for inference.
Hi ShashankKumar,
Thanks for reaching out.
The only discrete graphics cards supported by OpenVINO are the Intel® Data Center GPU Flex Series and Intel® Arc™ GPUs. You may refer to the System Requirements documentation.
Meanwhile, below is an example of multi-device execution with GPUs as a target device.
compiled_model=core.compile_model(model,"MULTI:GPU.1,GPU.0")
The Hello Query Device Python Sample can be used to print all available devices with their supported metrics and default values for configuration parameters.
Regards,
Aznie
Hi ShashankKumar,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie