Hi sir,
When I run models using OVMS, I find that CPU or GPU utilization is much lower than when deployed locally.
Here is my configuration:
CPU model name: Intel(R) Core(TM) Ultra 7 165H
Driver Version: i915 | 24.13.29138.7
Openvino version: 2024.2
Please refer to the link in the attachment; a video of the test results is included.
Thanks
Best Regards
Jacko
Hi WT_Jacko,
Thank you for reporting the issue.
We are checking this out and will get back to you soon.
Regards,
Zul
Hi Zul,
May I know if there are any updates?
Thanks
Best Regards
Jacko
Hi WT_Jacko,
Thank you for your patience. This issue could be related to the parameters that were used. Could you share the parameters you used to train your model?
Regards
Zul
Hi Zul,
Could you please let me know which training model parameters you require?
This link documents my test results. You can clearly see that performance differs significantly between running the same model on the same hardware locally versus with OVMS. Is this difference normal (locally: 50 ms, OVMS: 550 ms)?
Thanks for your help
Best Regards
Jacko
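A fair way to compare the 50 ms local vs. 550 ms OVMS numbers is to time both paths with the same harness, using warmup iterations so one-off costs (such as first-request model loading in the server) do not skew either side. Below is a minimal sketch of such a harness; the warmup/run counts and the stand-in workload are assumptions, to be replaced with the actual local OpenVINO inference call or OVMS request:

```python
import time
import statistics

def measure_latency_ms(infer_fn, warmup=5, runs=50):
    """Time a single-inference callable; return (mean, p90) in milliseconds."""
    # Warmup runs absorb one-off costs (JIT, caches, model load on first request).
    for _ in range(warmup):
        infer_fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer_fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    mean_ms = statistics.mean(samples)
    p90_ms = samples[int(0.9 * len(samples)) - 1]
    return mean_ms, p90_ms

# Stand-in workload (an assumption): replace the lambda with the real
# local inference call or the OVMS client request to compare both paths.
mean_ms, p90_ms = measure_latency_ms(lambda: sum(range(10000)))
print(f"mean={mean_ms:.2f} ms  p90={p90_ms:.2f} ms")
```

Reporting a percentile alongside the mean also helps show whether the OVMS gap is a constant per-request overhead or occasional slow outliers.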
Hi WT_Jacko,
I'd like to remind you that the source files would help us understand this issue and conduct tests. Could you please send the code to us?
Kind regards,
Hi Witold
The command currently executed by the customer in OVMS is as follows:
We have confirmed with the customer that both locally and on OVMS, they are running the same demo.py and using the same model. Note: demo.py is the customer's own script.
We are asking the customer if there's a chance we could get the source files for demo.py and the AI model.
Thanks
Best Regards
Jacko
Hi WT_Jacko,
You mentioned that you are running the same model both locally and on OVMS, but we noticed that the file names and command-line prompts differ. The first model uses ObjectDetectionV2IR.Inference, and the second one seems to use a different, more time-consuming inference method. Can you share the source files with us? From the videos, we are unable to tell the difference.
Regards,
Zul
Hello @WT_Jacko, thank you for the detailed explanation. In this case we are waiting for the "medicine" model and the "demo.py" script.
Hello Jacko, can we receive the files for testing? Is further support required from our side? Thank you.
Hello Jacko, can we receive the files for testing? Is further support required from our side?
I will have to close the support ticket if there is no reply from your side within 7 business days. Thanks for understanding.
Hi Sir,
Due to the customer's company policies, our client is currently unable to share their internal software. We've asked them to update the Compute Runtime libraries from https://github.com/intel/compute-runtime/releases to see if that improves performance. If there are any further issues, they will create a new ticket. Thanks!
Thanks
Best Regards
Jacko
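Before and after updating from the compute-runtime releases page, it helps to record the installed GPU runtime version so the effect of the update can be verified. A minimal sketch for Debian/Ubuntu systems; the package name intel-opencl-icd is the one those releases usually ship, but treat it as an assumption on other distributions:

```shell
# Print the installed Intel OpenCL compute-runtime version (Debian/Ubuntu).
# Falls back to a message if the package is not registered with dpkg.
dpkg -s intel-opencl-icd 2>/dev/null | grep '^Version' \
  || echo "intel-opencl-icd not found via dpkg"
```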
Hi Jacko,
Many thanks for the update. I'm de-escalating the case then.
Kind regards,
Witold