I have an IR model that runs properly on the CPU for multiple batch sizes. On the iGPU it runs with batch size 1, but it fails with any batch size greater than 1.
I am using "set_batch(compiled_model, batch_size)" to set the batch size for inference, with the OpenVINO API 2.0.
Can you please suggest how I could enable inference with batch_size > 1 on the iGPU?
Hi ShashankKumar,
Thanks for reaching out to us.
Referring to the Core class, compile_model returns an openvino.runtime.CompiledModel, whereas set_batch requires an openvino.runtime.Model.
Could you please call set_batch on the openvino.runtime.Model (before compiling it) instead of on the openvino.runtime.CompiledModel, and see if that resolves your issue?
On the other hand, if you still encounter the same error, could you please share the following information with us so we can assist you further?
· Inference scripts
· Model in IR format
· Input file
Regards,
Wan
Hi ShashankKumar,
Thanks for your question.
If you need any additional information from Intel, please submit a new question, as this thread will no longer be monitored.
Best regards,
Wan