Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Relating OpenVINO and the NNCF ONNX exporter

timosy
New Contributor I

The following is my quantization sequence:

 

import os

# NNCF 2.x import (older releases use: from nncf import create_compressed_model)
from nncf.torch import create_compressed_model

# Create a quantized model from a pre-trained FP32 model and configuration object.
compress_ctrl, compress_model = create_compressed_model(model, nncf_config)

# warnings.filterwarnings("ignore", category=TracerWarning)  # Ignore export warnings
# warnings.filterwarnings("ignore", category=UserWarning)

# Join the path explicitly so outdir does not need a trailing separator.
compress_ctrl.export_model(os.path.join(str(outdir), "model_int8.onnx"))

 

When I quantize a model, I'd like to use the option operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK, or to set opset_version, because I'd like to avoid this error: RuntimeError: Unsupported: ONNX export of operator adaptive_avg_pool2d, since output size ...

 

When I used the options above while converting the FP32 model to ONNX, they allowed me to convert without error, roughly as in the sketch below. I'd like to use the same options when converting the INT8 model.
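For reference, this is roughly the FP32 export that works for me (a minimal sketch; model is the pre-trained network and the input shape is only an example):

import torch

# Minimal sketch of the FP32 export (input shape is an example placeholder).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,                 # pre-trained FP32 model
    dummy_input,
    "model_fp32.onnx",
    opset_version=11,      # raising the opset can also avoid some unsupported ops
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)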

 

Is this possible? I checked the function compress_ctrl.export_model, but I could not find a suitable argument.

 

Best regards,

 

Peh_Intel
Moderator

Hi timosy,


Thanks for reaching out to us.


Unfortunately, I am also not aware of any way to pass the operator_export_type option to compress_ctrl.export_model, as it only takes a single parameter specifying the path to the output ONNX model. You can post NNCF-related questions on the NNCF GitHub.
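That said, since the object returned by create_compressed_model is a regular torch.nn.Module, one unverified workaround you could try is calling torch.onnx.export on it directly, so the option can be passed. This bypasses NNCF's own export path, so the correctness of the resulting INT8 graph is not guaranteed:

import torch

# Unverified sketch: export the NNCF-wrapped model with the plain PyTorch
# exporter instead of compress_ctrl.export_model. `compress_model` comes
# from create_compressed_model; the input shape is a placeholder.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    compress_model,
    dummy_input,
    "model_int8_direct.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)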



Sincerely,

Peh


timosy
New Contributor I

Dear Peh_Intel

Thanks for your comment.

I'm also testing an alternative procedure: quantizing the model within the ONNX framework and then converting it to an IR model, so that NNCF is not needed. A rough sketch of what I mean is below.
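For example, something along these lines, using ONNX Runtime's static quantization (a rough sketch: the paths, the model's input name "input", the input shape, and the random calibration data are all placeholders; real calibration should use representative inputs):

import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Placeholder calibration data; "input" is an assumed input name."""
    def __init__(self, n=10):
        self.samples = iter(
            [{"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
             for _ in range(n)]
        )
    def get_next(self):
        return next(self.samples, None)

# Quantize the FP32 ONNX model with ONNX Runtime instead of NNCF.
quantize_static(
    "model_fp32.onnx",
    "model_int8.onnx",
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,  # QDQ ops are what Model Optimizer expects
    weight_type=QuantType.QInt8,
)

# Then convert the quantized model to OpenVINO IR with Model Optimizer, e.g.:
#   mo --input_model model_int8.onnx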

Peh_Intel
Moderator (accepted solution)

Hi timosy,


Yes, you can do so as well. The current positioning of NNCF is such that we target OpenVINO™ as the inference framework for NNCF-created models and do not guarantee functionality for other inference frameworks.



Regards,

Peh


Peh_Intel
Moderator

Hi timosy,


This thread will no longer be monitored since we have provided answers and suggestions. If you need any additional information from Intel, please submit a new question. 



Regards,

Peh

