We can answer questions related to the FPGA side of the inference time. For processor/CPU-related questions, please raise a new question on the OpenVINO forum here: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit
Regarding your question on inference time, which deep learning model are you interested in? Can you be more specific about your use case? I will get an engineer to look into this.
I am using a VGG16/19 deep learning model. My goal is to run inference targeting the Arria 10 PAC card. I am able to measure the time taken for the whole inference process, but I would also like to measure the inference time for each layer of the model.
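For reference, here is a minimal sketch of how per-layer timing can be obtained with the legacy OpenVINO Inference Engine Python API (the releases that still shipped the FPGA plugin used with the Arria 10 PAC): enable the PERF_COUNT config key and read the request's performance counters after inference. The IR file names and the dummy input below are placeholders, and HETERO:FPGA,CPU is assumed as the target device.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# IR files produced by the Model Optimizer from the VGG16 model (placeholder paths)
net = ie.read_network(model="vgg16.xml", weights="vgg16.bin")
input_blob = next(iter(net.inputs))

# Enable per-layer performance counters and target the FPGA with CPU fallback
exec_net = ie.load_network(network=net,
                           device_name="HETERO:FPGA,CPU",
                           config={"PERF_COUNT": "YES"})

# Run one synchronous inference on dummy data shaped like the network input
n, c, h, w = net.inputs[input_blob].shape
exec_net.infer(inputs={input_blob: np.zeros((n, c, h, w), dtype=np.float32)})

# Per-layer execution times (in microseconds) from the request's performance counters
perf_counts = exec_net.requests[0].get_perf_counts()
for layer_name, stats in perf_counts.items():
    print(f"{layer_name:40s} {stats['layer_type']:15s} "
          f"{stats['real_time']:>8d} us  ({stats['status']})")
```

If a ready-made tool is preferred, the benchmark_app sample accepts a -pc flag that prints the same per-layer performance counters.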