OpenVINO benchmark_app: this tool gives two options to specify precision.
--infer_precision Optional. Specifies the inference precision. Example #1: '-infer_precision bf16'. Example #2: '-infer_precision CPU:bf16,GPU:f32'
The other is under the preprocessing options:
-op <value> Optional. Specifies precision for all output layers of the model.
Which one is appropriate for running at int8 precision?
Hi Bhuvaneshwara,
Thank you for reaching out.
You can run benchmark_app without any model precision option: an int8 (quantized) model already encodes its precision in the IR, so no extra flag is needed. You can try running benchmark_app with this command:
benchmark_app -m model_name.xml
The Benchmark Tool estimates deep learning inference performance on supported devices.
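To make the distinction between the two flags concrete, here is a short usage sketch (the model file `model_name.xml` is a placeholder from the reply above, and the flag semantics are as quoted in the question):

```shell
# Baseline: let benchmark_app take precisions from the model itself.
# An int8 (quantized) IR already encodes its precision, so no flag is needed.
benchmark_app -m model_name.xml

# -infer_precision is a runtime execution hint (e.g. requesting bf16 on CPU);
# it is not the way to select int8 for a quantized model.
benchmark_app -m model_name.xml -infer_precision bf16

# -op is a preprocessing option: it sets the element type of the output
# tensors only, not the precision the network computes in.
benchmark_app -m model_name.xml -op f32
```

In short, neither flag is how int8 is selected; int8 comes from quantizing the model before benchmarking.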
Regards,
Zul
