Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.
6594 Discussions

Python inference always gives the same result on a quantized model, regardless of input

dkwiatko
Employee
2,747 Views

Hello, I described my situation here:

https://github.com/openvinotoolkit/openvino/issues/4317 

In general, my inference code in Python "just works": I get results that correspond to the input.

However, after default POT quantization, inference always gives the same result, regardless of the input I pass to the executable network.

Has anyone spotted similar behavior?
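For reference, inference code of this kind typically follows the standard Inference Engine pattern; a minimal sketch (assuming the 2021.x Python API; file names are placeholders) that reproduces the check:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder paths; substitute the actual IR files
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
shape = net.input_info[input_name].input_data.shape

# Two different random inputs should give two different outputs;
# the symptom described above is that, after default POT quantization,
# the outputs come back identical.
for _ in range(2):
    data = np.random.rand(*shape).astype(np.float32)
    result = exec_net.infer({input_name: data})
    print(result[output_name].flatten()[:5])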

 

Labels (1)
0 Kudos
1 Solution
Iffa_Intel
Moderator
2,715 Views

Greetings,

 

I've checked POT with mobilenet-v2-pytorch and tested the original model, the converted FP32 model, and the quantized model with benchmark_app. Each shows different performance, as listed below.

 

For the original model:

Latency: 18.90 ms

Throughput: 191.67 FPS

 

For the FP32 model:

Latency: 13.02 ms

Throughput: 299.82 FPS

 

For the quantized model:

Latency: 9.12 ms

Throughput: 456.67 FPS
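(For reference, each IR was measured with a benchmark_app invocation of the form below; exact flags vary by release.)

benchmark_app -m <model>.xml -d CPU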

 

Besides, I ran inference on the quantized model with different inputs, and the results have been correct so far.
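To repeat this check yourself, run the quantized IR on two distinct random inputs and confirm the outputs differ; a minimal sketch, assuming the 2021.x Python API and a placeholder IR name:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder IR name; the matching .bin file is picked up automatically
net = ie.read_network(model="mobilenet-v2-pytorch_int8.xml")
exec_net = ie.load_network(network=net, device_name="CPU")
inp = next(iter(net.input_info))
out = next(iter(net.outputs))
shape = net.input_info[inp].input_data.shape

a = exec_net.infer({inp: np.random.rand(*shape).astype(np.float32)})[out]
b = exec_net.infer({inp: np.random.rand(*shape).astype(np.float32)})[out]
print("identical outputs:", np.allclose(a, b))  # expect False on a healthy model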

 

You may refer to my attachments for further details (download them).

I have attached the config files as well.

 

Hope this helps!

 

Sincerely,

Iffa


2 Replies

Iffa_Intel
Moderator
2,616 Views

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 



Sincerely,

Iffa

