Since OpenVINO 2024, the latency of multi-process inference has skyrocketed. For example, when two processes each call OpenVINO to deploy the same model, single-inference time doubles. What should I do about this?
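To make the report reproducible, the scenario above can be sketched as a minimal timing harness. This is a hypothetical sketch: `fake_infer` is a stand-in for a real OpenVINO compiled-model call (e.g., `compiled_model(input_tensor)` from `openvino.Core.compile_model`), so the harness runs without a model file; the structure, not the numbers, is the point.

```python
# Hypothetical sketch: compare mean per-call latency with 1 vs. 2 worker
# processes. Replace fake_infer with a real OpenVINO inference call to
# reproduce the reported slowdown.
import time
from multiprocessing import Pool


def fake_infer(_):
    # Placeholder for compiled_model(input_tensor): CPU busy-work that
    # occupies one core, returning the elapsed time for this single call.
    start = time.perf_counter()
    sum(i * i for i in range(100_000))
    return time.perf_counter() - start


def mean_latency(num_procs, runs_per_proc=20):
    # Spread runs across num_procs worker processes and average the
    # per-call latencies they report.
    with Pool(num_procs) as pool:
        latencies = pool.map(fake_infer, range(num_procs * runs_per_proc))
    return sum(latencies) / len(latencies)


if __name__ == "__main__":
    single = mean_latency(1)
    dual = mean_latency(2)
    print(f"1 process: {single * 1e3:.2f} ms/call, "
          f"2 processes: {dual * 1e3:.2f} ms/call")
```

With a real model plugged in, a large ratio between the two means would confirm the contention; OpenVINO's performance hints (e.g., limiting streams/threads per process when compiling the model) are the usual levers to investigate.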
2 Replies
Hi,
Could you clarify:
- Which OpenVINO sample application/code are you using?
- Which model are you using for this use case? If possible, please share model details (e.g., custom or pretrained), the relevant model files, and the conversion steps you followed.
- Your OS and hardware details
- How did you measure the processing time?
- What is your expected time/benchmark timing for the processing?
- Did this issue start recently, where it previously worked as expected?
Cordially,
Iffa
Hi,
Thank you for your question. If you need any additional information from Intel, please submit a new question as Intel is no longer monitoring this thread.
Cordially,
Iffa