Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Can I compress an IR model from FP32 to FP16?

rrobin
Beginner
1,140 Views

I have models in FP32 that I would like to compress to FP16. Is there a way to do this?

0 Kudos

5 Replies
Zulkifli_Intel
Moderator
1,111 Views

Hello Robin,

Thank you for reaching out to us.

 

Model Optimizer can convert all floating-point weights to the FP16 data type. To compress the model, use the --data_type option:

 

mo --input_model INPUT_MODEL --data_type FP16
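
For example, if the original model were a TensorFlow frozen graph named model.pb (the file and directory names here are only illustrative), the call might look like this:

mo --input_model model.pb --data_type FP16 --output_dir ir_fp16   # model.pb and ir_fp16 are hypothetical names

This produces the .xml and .bin IR files in the output directory with the weights stored in FP16.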

 

Sincerely,

Zulkifli 


0 Kudos
rrobin
Beginner
1,096 Views

Thanks for the answer. Yes, I know about that option, but I would like to convert the IR model directly. I don't have access to the model's previous format, only the IR.

0 Kudos
Zulkifli_Intel
Moderator
1,065 Views

Hello Robin,


Sorry for the misinterpretation. I'm looking into this and will get back to you soon.


Sincerely,

Zulkifli


0 Kudos
Zulkifli_Intel
Moderator
1,065 Views

Hello Robin,

 

It is not currently possible to convert an IR model's precision from FP32 to FP16. However, our developers are working to enable this feature in a future release. Please refer to the Release Notes for updates.

 

Sincerely,

Zulkifli 


Zulkifli_Intel
Moderator
1,025 Views

Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.


Sincerely,

Zulkifli


0 Kudos