
Getting OpenVINO Inference Time per Layer

E-Hong
New Contributor I

Is there any way to obtain the inference time per layer of a deep learning model in OpenVINO?

 

Thank you.

Hazlina_R_Intel
Moderator

Hi,

May I know which device you are targeting with OpenVINO? Are you targeting one of the CPU processors or the FPGA?


-Hazlina


E-Hong
New Contributor I

Hi Hazlina, 

I am targeting both the CPU and the FPGA.

Hazlina_R_Intel
Moderator

Hi,

We will be able to answer questions related to inference time on the FPGA. For processor/CPU-related questions, please raise a new question in the OpenVINO forum here: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit


For your question on inference time, which deep learning model are you interested in? Can you be more specific about your use case? I will get an engineer to look into this.


-Hazlina


E-Hong
New Contributor I

Hi Hazlina,

I am using a VGG16/19 deep learning model. My goal is to run inference targeting the Arria 10 PAC card. I am able to measure the time taken for the whole inference process, but I would also like to measure the inference time for each layer of the model.
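
For context, here is a minimal sketch of the kind of measurement I mean, assuming the 2020-era Python Inference Engine API (the model file names, the dummy input, and the HETERO:FPGA,CPU device string are placeholders for my setup):

import time

import numpy as np
from openvino.inference_engine import IECore

# Read the model; the file names here are placeholders.
ie = IECore()
net = ie.read_network(model="vgg16.xml", weights="vgg16.bin")
input_blob = next(iter(net.input_info))

# PERF_COUNT asks the plugin to collect per-layer performance counters.
# "HETERO:FPGA,CPU" is a placeholder device string; adjust to your setup.
exec_net = ie.load_network(network=net,
                           device_name="HETERO:FPGA,CPU",
                           config={"PERF_COUNT": "YES"})

# Dummy input matching the network's expected NCHW shape.
n, c, h, w = net.input_info[input_blob].input_data.shape
image = np.random.rand(n, c, h, w).astype(np.float32)

# Whole-inference timing, which is what I can already measure.
start = time.perf_counter()
exec_net.infer(inputs={input_blob: image})
print("Total inference: %.2f ms" % ((time.perf_counter() - start) * 1000.0))

# Per-layer timings from the performance counters (real_time is in microseconds).
for name, stats in exec_net.requests[0].get_perf_counts().items():
    print(name, stats["layer_type"], stats["status"], stats["real_time"], "us")

The benchmark_app sample that ships with OpenVINO also exposes these counters through its -pc option, so that may be a quicker way to check per-layer numbers from the command line. I am not sure how much per-layer detail the FPGA plugin reports compared with the CPU plugin, though.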

 

Thank you.
