Acer can run our own speech model in the Live Speech Recognition Demo app (CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz). Do you have any suggestions on how we can observe GNA resource allocation, such as memory usage and GNA load, so that we can evaluate how large a model can be loaded with the best performance?
Any advice from the Intel side would be a great help for us. Many thanks.
Thanks for reaching out to us.
For your information, the GNA plugin is designed to offload continuous inference workloads, including but not limited to noise reduction and speech recognition, to save power and free up CPU resources.
When running the Live Speech Recognition Demo with the GNA plugin, CPU memory usage decreases compared to running the demo with the CPU plugin.
You can observe the CPU memory usage in Performance Monitor:
- Search for Performance Monitor in the Windows search box and run the app.
- From the menu on the left side, expand Monitoring Tools and click on Performance Monitor.
- Observe the memory usage in the line graph.
- Run the demo with the CPU plugin first, then with GNA. Performance Monitor shows a drop in CPU residency with the GNA plugin.
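Performance Monitor can also log counters to a CSV file (for example via a Data Collector Set, or with the `typeperf` command). The steps above can then be checked offline; below is a minimal Python sketch for summarizing such a log, assuming a hypothetical two-column export (timestamp plus one memory counter). The file contents shown are made-up illustration data, not real measurements.

```python
import csv
import io

# Hypothetical CSV export from Performance Monitor / typeperf:
# first column is the timestamp, second is the sampled counter value.
sample_log = """\
"Time","\\\\PC\\Memory\\Committed Bytes"
"10:00:01","5100000000"
"10:00:02","5200000000"
"10:00:03","4800000000"
"""

def summarize(csv_text):
    """Return (average, peak) of the counter column, in bytes."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    values = [float(row[1]) for row in rows[1:]]  # skip the header row
    return sum(values) / len(values), max(values)

avg, peak = summarize(sample_log)
print(f"average: {avg / 1e9:.2f} GB, peak: {peak / 1e9:.2f} GB")
```

Comparing the average and peak between a CPU-plugin run and a GNA-plugin run makes the memory difference easy to quantify.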
I would like to observe the GNA status, especially when loading more than one model.
While GNA is running multiple threads, do you have any tool that can show GNA load, just like CPU usage %?
Unfortunately, we do not have a tool to directly observe GNA status.
You may be interested in How to Interpret Performance Counters, which explains how to collect performance counters with the Inference Engine API.
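As a sketch of what can be done with those counters once collected: the Inference Engine API returns per-layer statistics as a dict of dicts (layer name mapped to fields such as status, layer type, and execution time in microseconds). The snippet below aggregates such a structure; the layer names and timings are made-up illustration data, and the exact field names should be checked against the API version in use.

```python
# Illustrative per-layer counters, shaped like the dict-of-dicts the
# Inference Engine API's get_perf_counts() is documented to return
# (layer name -> status / layer_type / real_time in microseconds).
sample_counters = {
    "affine_1": {"status": "EXECUTED", "layer_type": "FullyConnected", "real_time": 120},
    "affine_2": {"status": "EXECUTED", "layer_type": "FullyConnected", "real_time": 95},
    "reorder_1": {"status": "NOT_RUN", "layer_type": "Reorder", "real_time": 0},
}

def total_time_by_type(counters):
    """Sum real_time (microseconds) of executed layers, grouped by layer type."""
    totals = {}
    for stats in counters.values():
        if stats["status"] == "EXECUTED":
            layer_type = stats["layer_type"]
            totals[layer_type] = totals.get(layer_type, 0) + stats["real_time"]
    return totals

print(total_time_by_type(sample_counters))  # {'FullyConnected': 215}
```

Aggregating by layer type like this gives an indirect view of where GNA spends its time, even though there is no direct "GNA usage %" counter.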
This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel, please submit a new question.