I wanted to know why there is not enough documentation on the post-training quantization used by the toolkit.
It would be great to know the quantization scheme (method) used to represent the float32 model footprint in int8 format, because there are plenty of methods that have been proposed.
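To illustrate what I mean by a scheme, one common generic approach is symmetric per-tensor quantization, sketched below in plain NumPy. This is only an illustration of the kind of float32-to-int8 mapping I am asking about, not a claim about what the toolkit actually does.

```python
import numpy as np

def symmetric_int8_quantize(weights: np.ndarray):
    """Generic symmetric per-tensor INT8 quantization (illustration only;
    not necessarily the scheme the OpenVINO toolkit uses internally)."""
    # The scale maps the largest absolute value onto the int8 range [-127, 127].
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float32 values.
    return q.astype(np.float32) * scale

# Quick check of the round-trip error on random float32 weights.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = symmetric_int8_quantize(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))
```

Other schemes (asymmetric/affine quantization, per-channel scales, calibration based on activation statistics) make different accuracy trade-offs, which is why I would like to know which one the toolkit implements.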
Hello Akash.
We have just recently introduced a completely new INT8 quantization tool within the Post-Training Optimization Toolkit, as part of the latest OpenVINO toolkit 2020.1 release.
Please find all the available documentation for this tool here: http://docs.openvinotoolkit.org/latest/_README.html
You can download the OpenVINO toolkit 2020.1 build at https://software.intel.com/en-us/openvino-toolkit/choose-download
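If it helps you get started, the tool is driven by a JSON configuration that names the model IR, the calibration engine, and the quantization algorithm to apply. The sketch below shows roughly what such a configuration might look like; the exact field and algorithm names (e.g. "DefaultQuantization", "preset", "stat_subset_size") are illustrative assumptions and may differ between releases, so please rely on the documentation linked above for the authoritative schema.

```python
import json

# Rough sketch of a Post-Training Optimization Toolkit configuration.
# Field and algorithm names here are assumptions for illustration only;
# check the linked documentation for the exact schema of your release.
pot_config = {
    "model": {
        "model_name": "my_model",    # arbitrary label for the run
        "model": "my_model.xml",     # IR graph produced by the Model Optimizer
        "weights": "my_model.bin",   # IR weights
    },
    "engine": {
        # Accuracy Checker configuration describing the calibration dataset.
        "config": "./accuracy_checker.yml",
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",   # assumed algorithm name
                "params": {
                    "preset": "performance",     # illustrative preset name
                    "stat_subset_size": 300,     # number of calibration samples
                },
            }
        ],
    },
}

# Serialize the configuration so it can be passed to the tool.
with open("pot_config.json", "w") as f:
    json.dump(pot_config, f, indent=4)
```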
Best regards, Max.