Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot infer with I16 input precision and I8 model precision


I am trying to run inference using I16 input precision with an I8 MNIST model.

Other input precisions (such as FP32, U8, and U16) produce correct output values, but I16 input precision produces incorrect values. (I16 input precision with FP32 model precision produces correct values.)


I obtained the I8 model by running:

$python3 -sm -m <FP32 MNIST model>.xml --output-dir <output directory>
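For reference, the workaround I am currently using is to convert the I16 data to FP32 on the host before feeding it to the Inference Engine, since FP32 input precision gives correct results. A minimal sketch with NumPy (the input values and shape are only illustrative, assuming a standard 1x1x28x28 MNIST blob):

```python
import numpy as np

# Illustrative I16 MNIST-style input blob (values are made up for this sketch)
x_i16 = np.arange(0, 784, dtype=np.int16).reshape(1, 1, 28, 28)

# Host-side workaround: cast to FP32 before passing the blob to inference,
# since FP32 input precision produces correct results with the I8 model.
x_f32 = x_i16.astype(np.float32)

print(x_f32.dtype)   # float32
print(x_f32.shape)   # (1, 1, 28, 28)
```

This avoids the I16 input path entirely, but of course doubles the input buffer size, so I would still like to know whether I16 input is expected to work.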


Is I16 input precision not supported with I8 model precision?


Thank you for your help.
