Is there any way or method to obtain the per-layer inference time of a deep learning model on OpenVINO?
Thank you.
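(For reference, OpenVINO exposes per-layer performance counters on the inference request, which is the usual way to get this breakdown. Below is a minimal sketch assuming the legacy openvino.inference_engine Python API; the IR file names and input shape are placeholders, not part of this thread.)

```python
# Sketch: per-layer timing via OpenVINO performance counters
# (legacy openvino.inference_engine Python API; paths and shapes are placeholders).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")   # hypothetical IR files
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))          # on older releases: next(iter(net.inputs))
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)       # placeholder input
exec_net.infer(inputs={input_name: dummy})

# Each entry reports the layer type, execution status, and real_time/cpu_time in microseconds.
for layer, stats in exec_net.requests[0].get_perf_counts().items():
    print(f"{layer:40s} {stats['layer_type']:15s} {stats['real_time']:8d} us  ({stats['status']})")
```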
Hi,
May I know which device you are targeting to use OpenVINO with? Are you targeting one of the CPU processors or the FPGA?
-Hazlina
Hi Hazlina,
I am targeting both CPUs and FPGA.
Hi,
We will be able to answer questions related to inference time on the FPGA. For processor/CPU-related questions, please raise a new question on the OpenVINO forum here: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit
Regarding your question on inference time, which deep learning model are you interested in? Can you be more specific about your use case? I will get an engineer to look into this.
-Hazlina
Hi Hazlina,
I am using a VGG16/19 deep learning model. My goal is to run inference targeting the Arria 10 PAC card. I can measure the time taken for the whole inference process, but I would like to measure the inference time per layer of the model as well.
Thank you.
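(A possible sketch for this FPGA case, assuming the same performance-counter query but with the network loaded through the HETERO plugin; the VGG16 IR file names are placeholders, and layers fused into FPGA subgraphs may be reported as aggregate entries rather than individually.)

```python
# Sketch only: same counter query, targeting the Arria 10 PAC through the HETERO plugin.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="vgg16.xml", weights="vgg16.bin")   # hypothetical VGG16 IR
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

# ... run exec_net.infer(...) as usual, then list layers sorted by measured time.
counts = exec_net.requests[0].get_perf_counts()
for layer, stats in sorted(counts.items(), key=lambda kv: kv[1]['real_time'], reverse=True):
    print(f"{layer:40s} {stats['exec_type']:20s} {stats['real_time']:8d} us")
```

(The bundled benchmark_app tool can also print per-layer performance counters via its -pc option, which may be a quicker way to get the same breakdown without writing code.)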