Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Difference between inference time and actual running time for a sample model

RD_UTS
Beginner

Hello,

I am using Ultralytics YOLOv8 for a detection task and running it in OpenVINO IR format. The model reports timings for pre-processing, inference, and post-processing, and there I see the benefit of using OpenVINO (almost 3 times faster). But when I used the line_profiler tool to profile my code, the line where I call the model to predict takes much more time than with the normal format (base model from Ultralytics).

So I would like to know why that is, and whether I can improve the overall time taken by the model prediction function.

I have converted the model into OpenVINO format; it consists of three files (.bin, .xml, and metadata.yaml) in one folder, and when loading the model I pass the path of that folder.

The way I am calling the prediction function is:

results = model(input_img)
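
For context, here is a minimal sketch of the export/load flow I am using (assuming the standard Ultralytics API; model names and paths are illustrative):

from ultralytics import YOLO

# Export the base PyTorch model to OpenVINO IR format.
# This creates a folder (e.g. yolov8n_openvino_model/) containing
# the .xml, .bin, and metadata.yaml files.
model = YOLO("yolov8n.pt")
model.export(format="openvino")

# Load the exported IR model by passing the folder path, then predict.
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model(input_img)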

 

Thank you.

Aznie_Intel
Moderator

Hi RD_UTS,

 

Thanks for reaching out.

There are several factors that could affect your model's prediction time. One common cause is fluctuation in computing resource availability: for instance, sharing resources on a multi-user system, or a background process that occasionally uses significant processing power, can affect your model's prediction time. Another aspect to consider is the complexity of the images; higher complexity can slow down predictions, as the model needs to dissect and analyze more components within the image. You may submit a request on the Ultralytics GitHub page if you need more details regarding this.

 

As for OpenVINO, running OpenVINO™ inference with the IR model format offers the best possible results, as the model is already converted. This format provides lower first-inference latency and options for model optimization. Since it is the most optimized format for OpenVINO™ inference, it can be deployed with maximum performance.
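
For illustration, here is a minimal sketch of loading and running an IR model directly with the OpenVINO Runtime Python API (file names and the input shape are illustrative; this assumes a recent OpenVINO package):

import numpy as np
import openvino as ov

core = ov.Core()
# read_model takes the .xml file; the matching .bin is located automatically.
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "CPU")

# Run inference on a dummy input shaped like the model's input (e.g. 1x3x640x640).
input_tensor = np.zeros((1, 3, 640, 640), dtype=np.float32)
result = compiled_model(input_tensor)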

 

Hope this helps.

 

 

Regards,

Aznie


RD_UTS
Beginner

Hi Aznie,

 

Here are more details about my system and the execution times.

 

I am using Ubuntu on an Intel Core i7.

Here are the exact results I got from the profiling tool, line_profiler.

 

The model itself produces output in which I see:

Normal model (without OpenVINO) -> 4.4ms preprocess, 230.7ms inference, 1.7ms postprocess

OpenVINO IR model -> 3.7ms preprocess, 85.4ms inference, 2.7ms postprocess

 

Here I see the gain of the OV IR model. But when I use line_profiler to profile the code, it reports:

code line: results = model(sample_img)

The above line takes:

Normal model -> 2.6 sec

OV IR -> 4.2 sec

I do not understand why it takes so much more time in OV IR format than the normal model, given that I have an advantage in inference time.
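
For what it's worth, here is a minimal sketch of how one might time the full prediction call over repeated runs, to separate one-time warm-up cost (e.g. first-inference model compilation) from steady-state latency (the image path and iteration count are illustrative):

import time
from ultralytics import YOLO

model = YOLO("yolov8n_openvino_model/")  # illustrative path to the exported IR folder

# The first call often includes one-time setup such as model compilation.
start = time.perf_counter()
model("sample.jpg")
print(f"first call: {time.perf_counter() - start:.3f} s")

# Average over repeated calls to measure steady-state latency.
n = 20
start = time.perf_counter()
for _ in range(n):
    model("sample.jpg")
print(f"steady state: {(time.perf_counter() - start) / n:.3f} s per call")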

Aznie_Intel
Moderator

Hi RD_UTS,

 

Which Ultralytics YOLOv8 model are you using? Can you run the model in both formats with benchmark_app and compare the output?

You can also share your model so we can validate it on our end.
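
For reference, a typical benchmark_app invocation against the exported IR looks like this (the model path is illustrative):

benchmark_app -m yolov8n_openvino_model/yolov8n.xml -d CPU -hint latency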

 

 

Regards,

Aznie


RD_UTS
Beginner

Hi Aznie,

 

Is there a way to include the time spent on input filling and output post-processing in benchmark_app?

benchmark_app shows some steps (1-11) and some information about them, but not the time taken in those steps. Is there a way to get the time spent in those steps?

 

Thank you.

 

Aznie_Intel
Moderator

Hi RD_UTS,

 

The OpenVINO benchmark only measures the time spent on actual inference (excluding any pre- or post-processing) and then reports inferences per second (or frames per second).

 

You can check out all configuration options of benchmark_app and see if any of the available options suit your needs.
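
For example, benchmark_app can emit per-layer performance counters and statistics reports; a sketch (the model path is illustrative, and flag availability may vary slightly by OpenVINO version):

benchmark_app -m yolov8n_openvino_model/yolov8n.xml -d CPU -pc -report_type average_counters -report_folder ./reports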

 

 

Regards,

Aznie

 


Aznie_Intel
Moderator

Hi RD_UTS,


This thread will no longer be monitored since we have provided the information. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie

