Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Conversion of INT8 Models to Optimized IR

kes
Novice

In the last paragraph of the low precision optimization guide, quantization-aware training is mentioned as a way for a user to get an accurate, optimized model that can be converted to OpenVINO Intermediate Representation. However, no other details are provided. Therefore, are we wrong to assume that OpenVINO can successfully convert our INT8 models? Has anyone had success with this?


3 Replies
Munesh_Intel
Moderator

Hi Kester,

Thanks for reaching out to us. Quantization-aware training with OpenVINO-compatible training frameworks is supported for models written in TensorFlow (using its QAT tooling) or PyTorch (using NNCF), both of which extend the framework with optimization capabilities.


NNCF is a PyTorch-based framework that supports a wide range of deep learning models across various use cases. It implements quantization-aware training with different quantization modes and settings, and supports several compression algorithms, including Quantization, Binarization, Sparsity, and Filter Pruning.
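
For illustration, here is a minimal sketch (not from the original reply) of wrapping an existing PyTorch model with NNCF for INT8 quantization-aware training. The model, data loader, and config values below are stand-ins for your own, and import paths differ slightly between NNCF releases:

import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

# Stand-in FP32 model and data loader; substitute your own.
model = torchvision.models.resnet18(pretrained=True)
dataset = TensorDataset(torch.randn(8, 3, 224, 224),
                        torch.zeros(8, dtype=torch.long))
train_loader = DataLoader(dataset, batch_size=4)

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},  # INT8 QAT
})
# The loader lets NNCF initialize quantizer ranges before fine-tuning.
nncf_config = register_default_init_args(nncf_config, train_loader)

# compression_ctrl drives the compression algorithm; compressed_model
# is then fine-tuned with your ordinary training loop.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)

After wrapping, compressed_model is fine-tuned exactly like the original model; NNCF's training samples show where to call the compression scheduler within the loop.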

 

When fine-tuning finishes, the accurate, optimized model can be exported to ONNX format, which the Model Optimizer can then convert into Intermediate Representation (IR) files for inference with the OpenVINO™ Inference Engine.
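
As a hedged sketch of that pipeline, continuing from the NNCF example above (file names are illustrative, and older OpenVINO releases invoke Model Optimizer as "python mo.py" rather than "mo"):

# Export the fine-tuned model to ONNX via the NNCF compression controller.
compression_ctrl.export_model("model_int8.onnx")

# Convert to IR with Model Optimizer (run in a shell):
#   mo --input_model model_int8.onnx --output_dir ir/

# Run the resulting IR with the Inference Engine Python API.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model_int8.xml", weights="ir/model_int8.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))
result = exec_net.infer({input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})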

 

The following article, ‘Enhanced Low-Precision Pipeline to Accelerate Inference with OpenVINO Toolkit’, contains more information:

https://www.intel.com/content/www/us/en/artificial-intelligence/posts/open-vino-low-precision-pipeli...

 

The following paper, ‘Introducing a Training Add-on for OpenVINO™ toolkit: Neural Network Compression Framework’, explains the steps needed to use NNCF to implement optimization methods, both through the supported training samples and through integration into custom training code.

https://www.intel.com/content/www/us/en/artificial-intelligence/posts/openvino-nncf.html

 

The training samples are available at:

https://github.com/openvinotoolkit/nncf/tree/develop/examples

 

 

Regards,

Munesh


kes
Novice

Thanks @Munesh_Intel. I'll look into everything that you provided.

Munesh_Intel
Moderator

Hi Kester,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

