I wanted to know why there is not enough documentation on the post-training quantization used by the toolkit.
It would be great to know the quantization scheme (method) used to convert the float32 model into the int8 format, because there are plenty of methods proposed in the literature.
We have just introduced a completely new INT8 quantization tool, the Post-Training Optimization Toolkit, as part of the latest OpenVINO toolkit 2020.1 release.
You can find all the available documentation for this tool here: http://docs.openvinotoolkit.org/latest/_README.html
You can download the OpenVINO toolkit 2020.1 build at https://software.intel.com/en-us/openvino-toolkit/choose-download
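For general context on how float32 values can be represented in int8, here is a minimal sketch of a common affine (scale/zero-point) quantization scheme. This is an illustrative example of one widely used method, not a statement of the exact scheme the Post-Training Optimization Toolkit implements; all function names below are hypothetical.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    # Hypothetical helper: map a float32 tensor onto the signed int8 grid
    # using a single scale and zero-point (per-tensor affine quantization).
    qmin = -2 ** (num_bits - 1)          # -128 for int8
    qmax = 2 ** (num_bits - 1) - 1       #  127 for int8
    # Include 0.0 in the range so that real zero maps exactly to an integer.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    # Recover an approximate float32 tensor from the int8 representation.
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.5], dtype=np.float32)
q, scale, zp = quantize_affine(x)
x_hat = dequantize_affine(q, scale, zp)
# x_hat approximates x with error bounded by roughly scale / 2 per element.
```

The key design point is that only the scale and zero-point need to be stored alongside the int8 tensor, so the model footprint shrinks by about 4x relative to float32 while dequantization remains a cheap linear operation.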
Best regards, Max.