The last paragraph of the low-precision optimization guide mentions quantization-aware training, whereby a user can obtain an accurate optimized model that can be converted to OpenVINO Intermediate Representation. However, no further details are provided. Are we wrong to assume that OpenVINO can successfully convert our INT8 models? Has anyone had success with this?
Hi Kester,
Thanks for reaching out to us. Quantization-Aware Training is supported through OpenVINO-compatible training frameworks: models can be prepared either with TensorFlow QAT or with PyTorch using the Neural Network Compression Framework (NNCF) optimization extensions.
NNCF is a PyTorch-based framework that supports a wide range of deep-learning models for various use cases. It implements quantization-aware training with different quantization modes and settings, and supports several compression algorithms, including quantization, binarization, sparsity, and filter pruning.
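For context, here is a minimal sketch of how NNCF's quantization-aware training is typically wired into an existing PyTorch training loop. The model, data loader, and config values below are placeholders, and the exact import paths vary between NNCF releases (older releases expose these helpers directly under `nncf`), so treat this as illustrative rather than definitive:

```python
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

# NNCF imports; in older NNCF releases these live directly under `nncf`
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

# Placeholder model and data loader -- substitute your own
model = torchvision.models.resnet18()
dummy_data = TensorDataset(torch.randn(8, 3, 224, 224),
                           torch.zeros(8, dtype=torch.long))
train_loader = DataLoader(dummy_data, batch_size=4)

# Describe the input shape and request INT8 quantization
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})
# Let NNCF initialize quantizer ranges from a few training batches
nncf_config = register_default_init_args(nncf_config, train_loader)

# Wrap the model; fake-quantization operations are inserted automatically
compression_ctrl, model = create_compressed_model(model, nncf_config)

# Fine-tune exactly as you normally would; the inserted quantizers are
# trained alongside the weights (this is the quantization-aware part):
# for epoch in range(epochs):
#     train_one_epoch(model, train_loader, optimizer)
```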
When fine-tuning finishes, the optimized model can be exported to ONNX format with its accuracy preserved. Model Optimizer then converts the ONNX file into Intermediate Representation (IR) files, which can subsequently be run with the OpenVINO™ Inference Engine.
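Continuing the sketch above, the export-and-convert step looks roughly like this. The `export_model` call is NNCF's compression-controller method; the `mo` command and the `IECore` Python API shown correspond to pre-2022.1 OpenVINO releases (older releases invoke Model Optimizer as `python mo.py`), and the file names are placeholders:

```python
# Export the fine-tuned, compressed model to ONNX using the
# compression controller returned by create_compressed_model()
compression_ctrl.export_model("model_int8.onnx")

# Convert the ONNX file to IR with Model Optimizer (shell command):
#   mo --input_model model_int8.onnx --output_dir ir/
# This produces ir/model_int8.xml and ir/model_int8.bin

# Load and run the IR with the (pre-2022.1) Inference Engine Python API
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model_int8.xml",
                      weights="ir/model_int8.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
# result = exec_net.infer(inputs={input_name: input_tensor})
```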
The article 'Enhanced Low-Precision Pipeline to Accelerate Inference with OpenVINO Toolkit' contains more information on this workflow.
The paper 'Introducing a Training Add-on for OpenVINO™ toolkit: Neural Network Compression Framework' explains the steps needed to apply NNCF's optimization methods, both through the supported training samples and by integrating NNCF into custom training code. It is available at:
https://www.intel.com/content/www/us/en/artificial-intelligence/posts/openvino-nncf.html
The training samples are available at:
https://github.com/openvinotoolkit/nncf/tree/develop/examples
Regards,
Munesh
Thanks @Munesh_Intel. I'll look into everything that you provided.
Hi Kester,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Munesh
