Getting OpenVINO Inference Time per Layer

E-Hong
New Contributor I

Is there any way to obtain the inference time per layer of a deep learning model in OpenVINO?

 

Thank you.

Hazlina_R_Intel
Moderator

Hi,

May I know which device you are targeting with OpenVINO? Are you targeting one of the CPU processors or the FPGA?


-Hazlina


E-Hong
New Contributor I

Hi Hazlina, 

I am targeting both the CPU and the FPGA.

Hazlina_R_Intel
Moderator

Hi,

We will be able to answer questions related to inference time on the FPGA. For processor/CPU-related questions, please raise a new question in the OpenVINO forum here: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit


Regarding your question on inference time, which deep learning model are you interested in? Could you be more specific about your use case? I will get an engineer to look into this.


-Hazlina


E-Hong
New Contributor I

Hi Hazlina,

I am using a VGG16/19 deep learning model. My goal is to run inference targeting the Arria 10 PAC card. I am able to measure the time taken for the whole inference, but I would also like to measure the inference time per layer of the model.

 

Thank you.
