I am able to run an FP32 model on NCS2. Does it really support FP32?

Kewin
Beginner

Hello, everyone

 

I was trying to run inference with a face detection model (face-detection-adas-0001) on an NCS2 I bought recently and noticed that I am able to run the FP32 version of the model. As far as I know, the NCS2 can run only FP16 models, but when I load the FP32 model no error occurs (the NCS2 is connected and working, and the device is set to MYRIAD). Does the NCS2 really support FP32?
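For reference, here is a minimal sketch of the kind of loading code I use (the model path is a placeholder for wherever the downloaded IR sits):

from openvino.inference_engine import IECore

ie = IECore()

# Read the FP32 IR; the .bin weights file is picked up automatically
# when it sits next to the .xml with the same name.
net = ie.read_network(model="face-detection-adas-0001.xml")

# Loading onto the NCS2 (MYRIAD device) completes without any error,
# even though the IR is FP32.
exec_net = ie.load_network(network=net, device_name="MYRIAD")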

The models I use are from the official Open Model Zoo; I tried both the FP16 and FP32 versions. Also, the benchmark shows the same speed metrics for the FP16 and FP32 models (the metrics do differ between the NCS2 and my PC, with the NCS2 being slower), even though the two models are definitely different (their file sizes differ and each XML file states the corresponding precision). At this point I'm confused about what optimization is actually happening. I also converted the EAST text detection model to IR and ran into the same issue (FP32 runs on the NCS2 and shows the same timings as FP16). A sketch of how I compare the two precisions is below.
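This is roughly how I compare the two precisions (a minimal sketch: the paths follow the Open Model Zoo download layout on my machine, the iteration count is arbitrary, and I feed random data instead of real images):

import time

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

def average_latency_ms(model_xml, device="MYRIAD", iterations=50):
    """Run synchronous inference repeatedly and return the mean latency in ms."""
    net = ie.read_network(model=model_xml)
    exec_net = ie.load_network(network=net, device_name=device)

    input_blob = next(iter(net.input_info))
    shape = net.input_info[input_blob].input_data.shape
    dummy = np.random.rand(*shape).astype(np.float32)

    start = time.perf_counter()
    for _ in range(iterations):
        exec_net.infer(inputs={input_blob: dummy})
    return (time.perf_counter() - start) / iterations * 1000

for xml in ("intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml",
            "intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml"):
    print(xml, "->", "%.1f ms" % average_latency_ms(xml))

Both precisions give me essentially identical numbers on the NCS2, which is what confuses me.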

I use the latest version of OpenVINO (2021.4) and Python 3.6 with the requirements installed.

 

Could you please help me solve this issue?

Thank you in advance!

Peh_Intel
Moderator

Hi Kewin,


According to the Supported Model Formats documentation, the VPU plugin only supports FP16 models. Some FP32 models might still be inferenced by the Intel® Neural Compute Stick 2 (NCS2), but we cannot guarantee their performance, nor that every FP32 model can be inferenced by the NCS2. Hence, it is always recommended to choose FP16 models for inference on the NCS2.
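As a quick check on your side, you can query the optimization capabilities that the MYRIAD plugin advertises through the Inference Engine Python API (a minimal sketch; for the NCS2 this is expected to list FP16 only):

from openvino.inference_engine import IECore

ie = IECore()

# Ask the MYRIAD plugin which precisions it reports as supported.
print(ie.get_metric("MYRIAD", "OPTIMIZATION_CAPABILITIES"))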



Regards,

Peh


Peh_Intel
Moderator

Hi Kewin,


This thread will no longer be monitored since we have provided answers. If you need any additional information from Intel, please submit a new question. 



Regards,

Peh

