OpenVINO benchmark_app: this tool gives two options to specify precision.
--infer_precision Optional. Specifies the inference precision. Example #1: '-infer_precision bf16'. Example #2: '-infer_precision CPU:bf16,GPU:f32'
The other is a preprocessing option:
-op <value> Optional. Specifies precision for all output layers of the model.
Which one is appropriate for running at int8 precision?
Hi Bhuvaneshwara,
Thank you for reaching out.
You can run benchmark_app without the model precision option. Try running benchmark_app with this command:
benchmark_app -m model_name.xml
The Benchmark Tool demonstrates how to use the benchmark_app to estimate deep learning inference performance on supported devices.
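For reference, here is a minimal sketch of typical invocations, using only flags shown in the help text quoted above. The model file names are hypothetical placeholders; an INT8 model here means one that has already been quantized, so the precision is carried by the model itself rather than set on the command line:

```shell
# Sketch (assumes OpenVINO is installed and benchmark_app is on PATH).
# Hypothetical file names: model_int8.xml, model_fp32.xml.

# Benchmark an already-quantized INT8 model: no precision flag is needed,
# because the quantized precision is part of the model.
benchmark_app -m model_int8.xml -d CPU

# Optionally hint the runtime execution precision, per the help text above.
benchmark_app -m model_fp32.xml -d CPU -infer_precision bf16
```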
Regards,
Zul
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
