Hello, I described my situation here:
https://github.com/openvinotoolkit/openvino/issues/4317
In general, my inference code in Python "just works": I get results corresponding to the input.
However, after default POT quantization, inference always returns the same result regardless of the input I pass to the executable network.
Has anyone seen similar behavior?
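For anyone trying to reproduce this, here is a minimal sketch (my own helper, not an OpenVINO API) of how to check whether a model's outputs actually vary across different inputs, once you have collected the inference results as numpy arrays:

```python
import numpy as np

def outputs_vary(outputs, tol=1e-6):
    """Return True if the collected inference outputs are not all identical.

    `outputs` is a list of numpy arrays obtained by running the compiled
    model on several *different* inputs. If quantization broke the model
    as described above, every entry will be identical and this returns False.
    """
    first = outputs[0]
    return any(not np.allclose(first, o, atol=tol) for o in outputs[1:])

# Dummy data standing in for real inference results:
same = [np.ones(3), np.ones(3), np.ones(3)]          # symptom: constant output
diff = [np.ones(3), np.arange(3.0), np.zeros(3)]     # healthy: output depends on input
print(outputs_vary(same))  # False -> symptom reproduced
print(outputs_vary(diff))  # True  -> model responds to input
```

Replace the dummy lists with the actual output blobs from the quantized executable network to confirm whether the issue reproduces.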
Greetings,
I've checked POT with mobilenet-v2-pytorch and benchmarked the original model, the converted FP32 model, and the quantized model with benchmark_app. Each gives different performance.
For the original model:
Latency: 18.90 ms
Throughput: 191.67 FPS
For the FP32 model:
Latency: 13.02 ms
Throughput: 299.82 FPS
For the quantized model:
Latency: 9.12 ms
Throughput: 456.67 FPS
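Reading off the numbers above, quantization gives roughly a 2.4x throughput gain over the original model; a tiny script to compute the ratios (the figures are copied from the post, the layout is mine):

```python
# benchmark_app results as reported in this thread
results = {
    "original": {"latency_ms": 18.90, "throughput_fps": 191.67},
    "fp32":     {"latency_ms": 13.02, "throughput_fps": 299.82},
    "int8":     {"latency_ms": 9.12,  "throughput_fps": 456.67},
}

base = results["original"]["throughput_fps"]
for name, r in results.items():
    speedup = r["throughput_fps"] / base
    print(f"{name}: {speedup:.2f}x throughput vs original")
```

Note that benchmark_app runs several inference requests asynchronously by default, which is why throughput is higher than 1000 / latency would suggest.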
Besides, I ran inference on the quantized model with different inputs, and the results are good so far.
You may refer to my attachments for further detail (download them).
I have attached the config files as well.
Hope this helps!
Sincerely,
Iffa
Greetings,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa