Hello, I described my situation here:
https://github.com/openvinotoolkit/openvino/issues/4317
In general, my inference code in Python "just works": I get results corresponding to the input.
However, after default POT quantization, inference always gives the same result regardless of the input I pass to the executable network.
Has anyone spotted similar behavior?
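A quick way to confirm the constant-output symptom is to feed the quantized IR two different random inputs and compare the outputs directly. Below is a minimal sketch using the 2021-era Inference Engine Python API; the IR file names are hypothetical placeholders, not the actual files from the issue:

```python
# Minimal sanity check: run the quantized IR on two different random
# inputs and compare the outputs. File names are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model_int8.xml", weights="model_int8.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
shape = net.input_info[input_blob].input_data.shape  # e.g. [1, 3, 224, 224]

out_a = exec_net.infer({input_blob: np.random.rand(*shape).astype(np.float32)})[output_blob]
out_b = exec_net.infer({input_blob: np.random.rand(*shape).astype(np.float32)})[output_blob]

# If the quantized network has collapsed to a constant, this difference
# will be (near) zero for any pair of inputs.
print("max abs difference:", float(np.abs(out_a - out_b).max()))
```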
Greetings,
I've checked POT with mobilenet-v2-pytorch and tested the original model, the converted FP32 model, and the quantized model with benchmark_app. Each produces different performance.
For the original model:
Latency: 18.90 ms
Throughput: 191.67 FPS
For the FP32 model:
Latency: 13.02 ms
Throughput: 299.82 FPS
For the quantized model:
Latency: 9.12 ms
Throughput: 456.67 FPS
Besides that, I tested inference on the quantized model with different inputs, and the results have been correct so far.
You may refer to my attachments for further detail (download them). I have attached the config files as well.
Hope this helps!
Sincerely,
Iffa
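For reference, the default POT quantization that such config files drive can also be expressed through the POT Python API. The following is a hedged sketch under the 2021-era `compression` package; the IR paths and the random calibration loader are placeholders (a real run needs properly preprocessed calibration images):

```python
# Hedged sketch of DefaultQuantization via the POT Python API
# (OpenVINO 2021.x "compression" package). Paths and the calibration
# loader below are placeholders for the real model and dataset.
import numpy as np
from addict import Dict
from compression.api import DataLoader
from compression.engines.ie_engine import IEEngine
from compression.graph import load_model, save_model
from compression.pipeline.initializer import create_pipeline

class RandomCalibrationLoader(DataLoader):
    """Placeholder loader; replace with real preprocessed images."""
    def __init__(self):
        super().__init__(Dict())

    def __len__(self):
        return 300

    def __getitem__(self, index):
        # DefaultQuantization only needs data, so the annotation is None.
        return None, np.random.rand(3, 224, 224).astype(np.float32)

model_config = Dict({"model_name": "mobilenet-v2-pytorch",
                     "model": "mobilenet-v2-pytorch.xml",
                     "weights": "mobilenet-v2-pytorch.bin"})
engine_config = Dict({"device": "CPU"})
algorithms = [{"name": "DefaultQuantization",
               "params": {"target_device": "CPU",
                          "preset": "performance",
                          "stat_subset_size": 300}}]

model = load_model(model_config)
engine = IEEngine(config=engine_config, data_loader=RandomCalibrationLoader())
pipeline = create_pipeline(algorithms, engine)
compressed_model = pipeline.run(model)
save_model(compressed_model, save_path="optimized")
```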
Greetings,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa
