Hi sir,
When I run models using OVMS, I find that CPU or GPU utilization is much lower than when deployed locally.
Here is my configuration:
CPU model name: Intel(R) Core(TM) Ultra 7 165H
Driver Version: i915 | 24.13.29138.7
Openvino version: 2024.2
Please refer to the link in the attachment; a video of the test results is included.
Thanks
Best Regards
Jacko
Hi WT_Jacko,
Thank you for reporting the issue.
We are checking this out and will get back to you soon.
Regards,
Zul
Hi Zul,
May I know if there are any updates?
Thanks
Best Regards
Jacko
Hi WT_Jacko,
Thank you for your patience. This issue could be related to the parameters that were used. Could you share the parameters you used to train your model?
Regards
Zul
Hi Zul,
Could you please let me know which training model parameters you require?
This link documents my test results. You can clearly see that performance differs significantly when the same model and hardware are used locally versus with OVMS. Is this difference normal (locally: 50 ms, OVMS: 550 ms)?
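For context, one common source of a gap this large is request serialization: posting image tensors to a REST endpoint as JSON is far more expensive than an in-process call or a binary gRPC payload. A self-contained illustration (the frame size and payload shape are assumptions, not taken from the customer's model):

```python
import json
import time

# A fake 640x480 RGB frame flattened to a Python list of byte values,
# standing in for an image tensor posted to a REST endpoint as JSON.
pixels = [127] * (640 * 480 * 3)

# Encoding the tensor as a JSON document (what a naive REST client does).
t0 = time.perf_counter()
json_payload = json.dumps({"inputs": pixels})
json_ms = (time.perf_counter() - t0) * 1000.0

# Packing the same data as raw bytes (roughly what a binary/gRPC path sends).
t0 = time.perf_counter()
raw_payload = bytes(pixels)
raw_ms = (time.perf_counter() - t0) * 1000.0

print(f"JSON encode: {json_ms:.1f} ms, raw bytes: {raw_ms:.1f} ms")
```

The JSON path is typically orders of magnitude slower per frame, which is why measuring only end-to-end request time can make a remote server look much slower than the model itself is.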
Thanks for your help
Best Regards
Jacko
Hi WT_Jacko,
I'd like to remind you that the source files would help us understand this issue and conduct tests. Could you please send us the code?
Kind regards,
Hi Witold,
The command currently executed by the customer in OVMS is as follows:
- sudo docker run -it --rm --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -v ${PWD}/models:/models -p 8000:8000 -p 9000:9000 openvino/model_server:2024.2-gpu --model_name medicine --model_path /models/medicine --port 9000 --rest_port 8000 --target_device GPU
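As a side note, when comparing the local and OVMS paths it helps to time the same inference call with identical warm-up on both sides, so one-time costs do not skew either number. A minimal sketch (the `infer_fn` callable is an assumption; it could wrap an `ovmsclient` predict call or a local OpenVINO compiled-model call):

```python
import statistics
import time

def measure_latency_ms(infer_fn, warmup=5, iters=50):
    """Time repeated calls to infer_fn; return (median_ms, p90_ms).

    Warm-up calls are discarded so one-time costs (connection setup,
    first-inference GPU kernel compilation) do not skew the comparison.
    """
    for _ in range(warmup):
        infer_fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer_fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    p90_index = max(0, int(0.9 * len(samples)) - 1)
    return statistics.median(samples), samples[p90_index]
```

Running this harness against both the local script and the OVMS endpoint with the same input would show whether the 10x gap persists once warm-up and client-side overhead are accounted for.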
We have confirmed with the customer that both locally and on OVMS they are running the same demo.py and using the same model. Note: demo.py is the customer's own script.
We are asking the customer whether we could obtain the source files for demo.py and the AI model.
Thanks
Best Regards
Jacko
Hi WT_Jacko,
You mentioned that you are running the same model both locally and on OVMS, but we noticed that the file names and command-line prompts differ. The first model uses ObjectDetectionV2IR.Inference, and the second one seems to use a different, apparently more time-consuming, inference method. Could you share the source files with us? From the videos alone, we are unable to tell the difference.
Regards,
Zul
Hello @WT_Jacko, thank you for the detailed explanation. In this case we are waiting for the "medicine" model and "demo.py" script.
Hello Jacko, can we receive the files for testing? Is further support required from our side? Thank you.
Hello Jacko, can we receive the files for testing? Is further support required from our side?
I will have to close the support ticket if there is no reply from your side for 7 business days. Thanks for understanding.
Hi Sir,
Due to their company's policies, the customer is currently unable to share their internal software. We've asked them to update the Compute Runtime libraries from https://github.com/intel/compute-runtime/releases to see whether that improves performance. If any further issues arise, they will open a new ticket.
Thanks
Best Regards
Jacko
Hi Jacko,
Many thanks for the update. I'm de-escalating the case then.
Kind regards,
Witold
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.