Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

device="MYRIAD" output logits are much larger magnitude and the predictions are incorrect.

Lam__Carson
Beginner

I have a custom model that I converted to a .onnx file and then used mo.py to convert to .xml and .bin files. The .onnx file was exported from PyTorch as an FP32 model and converted to FP16 by the Model Optimizer (mo.py) using --data_type FP16:

python3 ~/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo.py --input_model /media/carson/New\ Volume/vision/onnx/test3.onnx --scale_values [255,255,255] --input_shape [1,3,1024,1024] --output_dir /media/carson/New\ Volume/vision/NCS/bin_xml/FP16 --data_type FP16 --model_name test3 --disable_fusing --disable_gfusing
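For reference, the ONNX file itself was produced with a standard FP32 export along these lines (a minimal sketch; MyModel and the checkpoint path stand in for my actual network):

import torch

# Placeholder for the actual network definition and FP32 weights.
model = MyModel()
model.load_state_dict(torch.load("test3.pth", map_location="cpu"))
model.eval()

# Dummy input matching the --input_shape passed to mo.py: [1, 3, 1024, 1024].
dummy_input = torch.randn(1, 3, 1024, 1024)

# PyTorch exports the graph in FP32; the FP16 conversion is left to mo.py.
torch.onnx.export(model, dummy_input, "test3.onnx")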

when device= "CPU" , FP32,  the inference works fine and I get correct outputs in the form of 2 dimensional vectors, with small numbers in my vector, However when I used my Neural compute stick 2 by making device="MYRIAD", FP16, the output logits are much larger magnitude. and the predictions are incorrect.

cross_check_tool results:

~/intel/computer_vision_sdk_2018.5.445/deployment_tools/inference_engine/tools/ubuntu_18.04/cross_check_tool -i /media/carson/New\ Volume/vision/sample_images/flame.jpg -m /media/carson/New\ Volume/vision/NCS/bin_xml/FP16/test2.xml -ref_d CPU -ref_m /media/carson/New\ Volume/vision/NCS/bin_xml/FP32/test2.xml -d MYRIAD 
InferenceEngine: 
    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] CPU vs MYRIAD
    IR for CPU : /media/carson/New Volume/vision/NCS/bin_xml/FP32/test2.xml
    IR for MYRIAD : /media/carson/New Volume/vision/NCS/bin_xml/FP16/test2.xml

[ INFO ] No extensions provided

    API version ............ 1.5
    Build .................. 19154
    Description ....... myriadPlugin

    API version ............ 1.5
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin
[ INFO ] Inputs detected: 0 
[ INFO ] Statistics will be dumped for 1 layers: 81
[ INFO ] Layer 81 statistics 
    Max absolute difference: 567.802
    Min absolute difference: 394.305
    Max relative difference: 109.226%
    Min relative difference: 94.7126%
    Min reference value: -33.3047
    Min absolute reference value: 31.6979
    Max reference value: -31.6979
    Max absolute reference value: 33.3047
    Min actual value: -599.5
    Min absolute actual value: 361
    Max actual value: 361
    Max absolute actual value: 599.5
                    Devices:         MYRIAD_FP16            CPU_FP32
        Real time, microsec:       159708.961844         3929.086961
[ INFO ] Execution successful
 

Why does this happen and how can I fix it? PyTorch's ONNX export does not support half precision, so I was very happy to see that the Model Optimizer can do the conversion for me, but does it really work?
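For what it's worth, this is roughly how I compare the two runs outside of cross_check_tool (a sketch; cpu_logits and myriad_logits are the raw output blobs from the FP32 CPU and FP16 MYRIAD inferences):

import numpy as np

# cpu_logits / myriad_logits: output arrays from the two runs of the snippet above.
diff = np.abs(cpu_logits - myriad_logits)
rel = diff / (np.abs(cpu_logits) + 1e-6)
print("max abs diff:", diff.max(), " max rel diff:", rel.max())

# The predicted classes should agree even if the raw values drift slightly in FP16.
print("argmax match:", np.array_equal(np.argmax(cpu_logits, -1), np.argmax(myriad_logits, -1)))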
