GAnthony_R_Intel
Employee

Benchmarking tool gives more iterations than requested

When I use the Python benchmark_app.py tool, it appears to run additional iterations. For example, if I set -niter 1, I get 14 inference requests. I'm not sure whether I'm doing something wrong here.

 

(decathlon3D) [bduser@merlin-param01 FP32]$ python /opt/intel/openvino/deployment_tools/inference_engine/samples/python_samples/benchmark_app/benchmark_app.py -m saved_model.xml -b 1 -niter 1
[Step 1/11] Parsing and validating input arguments
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         2.0.custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
[ INFO ] Device is CPU
         CPU
         MKLDNNPlugin............ version 2.0
         Build................... 27579

[Step 3/11] Read the Intermediate Representation of the network
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1, precision FP32
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[Step 8/11] Setting optimal runtime parameters
[ WARNING ] Number of iterations was aligned by request number from 1 to 14 using number of requests 14
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'MRImages' precision FP32, dimensions (NCHW): 1 4 144 144
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 8 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 9 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 10 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 11 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 12 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[ INFO ] Infer Request 13 filling
[ INFO ] Fill input 'MRImages' with random values (some binary data is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 14 inference requests using 14 streams for CPU, limits: 14 iterations)
[Step 11/11] Dumping statistics report
Count:      14 iterations
Duration:   72.51 ms
Latency:    31.3802 ms
Throughput: 193.07 FPS

Shubha_R_Intel
Employee

Dear G Anthony R.,

You are definitely not doing anything wrong. Could it be that there is a minimum number of iterations required for the model to converge?

Thanks,

Shubha

GAnthony_R_Intel
Employee

I don't believe so. It's a pre-trained model and I am just doing inference, so there shouldn't be any model convergence.

 

Best.

Very respectfully,

-Tony

Shubha_R_Intel
Employee

Dear G Anthony R.,

Fair enough. I guess "converge" was a bad word choice. What I meant was that a minimum number of iterations is required in order to gain confidence in the measurements. I can double-check this for you, though, and will let you know.

Shubha

GAnthony_R_Intel
Employee

Oh, I see. Yes, I understand now. I agree that there should probably be several iterations to determine the true latency and FPS of the model. I was just puzzled that when I specified -niter 1 it did not run just that many iterations, so I wanted to check and get a sense of why it was doing more iterations than specified.

Best.

-Tony

Shubha_R_Intel
Employee

Dear G Anthony R. 

I am still checking on the precise reason why. Will post here once I find out.

Shubha

Shubha_R_Intel
Employee

Dear G Anthony R. ,

The default is an "optimal number of async infer requests", which is device-specific. Please set -api sync and -niter 1, and benchmark_app should work as you expect.
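For reference, a minimal sketch of the alignment behavior the Step 8 warning describes: in async mode the requested iteration count appears to be rounded up to a multiple of the number of parallel infer requests (14 on this CPU). The helper name below is ours for illustration, not part of benchmark_app.

```python
def align_iterations(niter: int, nireq: int) -> int:
    """Round niter up to the nearest multiple of nireq (a hypothetical
    reconstruction of the alignment reported in the Step 8 warning)."""
    return ((niter + nireq - 1) // nireq) * nireq

# Matches the warning: "aligned by request number from 1 to 14
# using number of requests 14"
print(align_iterations(1, 14))   # -> 14

# With -api sync there is effectively one request, so -niter is honored:
print(align_iterations(1, 1))    # -> 1
```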

Thanks,

Shubha

GAnthony_R_Intel
Employee

Cool. Thanks Shubha.

Best.

Very respectfully,

-Tony
