I was trying to run inference with a face detection model (face-detection-adas-0001) on an NCS2 I bought recently, and noticed that I am able to run an FP32 model. As far as I know the NCS2 can run only FP16 models, but when I load the FP32 model no error occurs (the NCS2 is connected and working, and the device is set to MYRIAD). Does the NCS2 really support FP32?
The models I use are from the official Open Model Zoo; I tried both FP16 and FP32. Also, the benchmarks for the FP16 and FP32 models show the same speed metrics (they differ between the NCS2 and the PC, the former being slower), even though the models are definitely different (their file sizes differ and the xml file states the corresponding precision). At this point I'm confused about the optimization. I also converted the EAST text detection model to IR and had the same issues (both FP32 running on the NCS2 and identical benchmarks).
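For what it's worth, one way to double-check which precision an IR actually declares is to parse the `.xml` file and collect the `element_type` attributes of its layers. This is a minimal sketch using only the standard library; the inline sample IR fragment is illustrative, and you would point the parser at your real model `.xml` instead:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of an IR .xml; a real file has many more layers.
SAMPLE_IR = """<?xml version="1.0"?>
<net name="face-detection-adas-0001" version="10">
    <layers>
        <layer id="0" name="data" type="Parameter" version="opset1">
            <data element_type="f32" shape="1,3,384,672"/>
        </layer>
    </layers>
</net>
"""

def declared_precisions(xml_text):
    """Collect the element_type values declared by layers in an IR file."""
    root = ET.fromstring(xml_text)
    return {
        layer.find("data").get("element_type")
        for layer in root.iter("layer")
        if layer.find("data") is not None
    }

print(declared_precisions(SAMPLE_IR))  # an FP32 IR reports {'f32'}
```

For a file on disk, replace `ET.fromstring(xml_text)` with `ET.parse(path).getroot()`. An FP16 IR would report `f16` instead, so this confirms the two model variants really differ regardless of what the benchmark shows.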
I use the latest version of OpenVINO, 2021.4, and Python 3.6 with the requirements installed.
Could you please help me solve this issue?
Thank you in advance!
According to the Supported Model Formats documentation, VPU plugins only support FP16 models. Some FP32 models might still run on the Intel® Neural Compute Stick 2 (NCS2), but we cannot guarantee their performance, nor that every FP32 model can be inferenced by the NCS2. Hence, it is always recommended to choose FP16 models for inference on the NCS2.