Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Inference Engine Precision Clarification

sphrz
New Contributor I

Hello,

OpenVINO Version: 2019 R3

Inference Engine Version: 2.1

Device: MYRIAD

 

I am using the Inference Engine and have successfully run inference on my model at FP32 with the Inference Engine API. I have been looking at other C++ Inference Engine examples on the OpenVINO GitHub. I am confused by the following part of the classification_sample_async sample's main.cpp:

        CNNNetwork network = ie.ReadNetwork(input_model);
        // ...
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 3. Configure input & output ---------------------------------------------
        // --------------------------- Prepare input blobs -----------------------------------------------------
        InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
        // ...

        /* Specifying the precision and layout of input data provided by the user.
         * This should be called before loading the network to the device. */
        input_info->setLayout(Layout::NHWC);
        input_info->setPrecision(Precision::U8);
        // ...

Specifically:

input_info->setPrecision(Precision::U8);

If my model was converted into FP32 IR using --data_type FP32 in the Model Optimizer, what is the above line saying? From my standpoint, it seems like the code is setting the precision to UInt8 when my model (.xml file) is FP32. Why is it Precision::U8 instead of Precision::FP32, or whatever the network precision is?

I read the API reference online and found this on the InferenceEngine::InputInfo Class Reference:

setPrecision():
Changes the precision of the input data provided by the user.

This function should be called before loading the network to the plugin

Parameters
p	A new precision of the input data to set

So when calling setPrecision() in the code above, how does the Inference Engine pick up the precision from the model IR if the input precision is hard-coded to UInt8?

Any help is greatly appreciated. Thank you

7 Replies
Adli
Moderator

Hi sphrz,

 

U8 is generally the preferred supported input precision, as it is the most ubiquitous. For more information, please refer to the following link: https://docs.openvinotoolkit.org/2021.2/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html...

 

Regards,

Adli


sphrz
New Contributor I

Hi Adli,

 

Thank you for your response. I had come across the link you attached and have read it extensively before. If it isn't too much to ask, can you clarify what you mean by it being the most ubiquitous? My main confusion remains how the Inference Engine inherits the model's precision (from the IR .xml file) if we set the input precision to U8, for example.

 

Ultimately, should I keep the setPrecision() method at U8 if my IR file is FP32/FP16?

 

Thank you again!

Adli
Moderator

Hi sphrz,

 

My sincere apologies for the delayed response. We are investigating this issue and will get back to you as soon as possible.

 

Regards,

Adli

 

Adli
Moderator

Hi sphrz,


Thank you for your patience. "Most ubiquitous" here means widely utilized compared to other formats.


setPrecision(Precision::U8) performs the internal conversion for the input data only. Plugins make it possible for U8 input data to be processed by FP32/FP16 models. This is done for performance acceleration in this specific IE sample.


I suggest you keep the setPrecision() function at U8. Feel free to set FP32/FP16 as the input precision instead; just please check that the plugin is able to support the format: https://docs.openvinotoolkit.org/2021.2/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html...


Regards,

Adli



sphrz
New Contributor I
Hi Adli,

Thank you for getting back to me. I believe I have only one more question, if you will allow it, about the same topic.

My input data is in the form of FP32, so I set the input precision to FP32 and get the accuracy I expect. Will I be missing out on any performance if my data is FP32 rather than U8? U8 seems to produce low accuracy, since I assume it is truncating the input data to the nearest integer.
Adli
Moderator

Hi sphrz,

 

I apologize for the late response. 8-bit computations offer better performance because they allow loading more data into a single processor instruction. Usually, the cost of this significant boost is reduced accuracy; however, in practice the accuracy drop can be negligible.

 

Regards,

Adli


Munesh_Intel
Moderator

Hi sphrz,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

