Hi,
We have trained an EfficientNet model for classification. When we test the FP16 model on the NCS2 and on the CPU, the output labels differ between the two devices. FP32 and FP16 execution on the CPU gives the same results. Could you please support us?
Looking forward to your reply.
Thank you,
Suchithra
Hi Suchithra,
Thank you for reaching out.
The FP16 and FP32 results on CPU are expected to match because they run on the same processor. If you compare the MYRIAD output to the CPU output, it may differ slightly, but the classification results should be close. I ran the demo_squeezenet demo on CPU (FP16 and FP32) versus MYRIAD (FP16), and the results are close enough (see below, followed by a short sketch for comparing devices on your own model). May I ask how the results differ on your end?
MYRIAD X FP16
Top 10 results:
Image <C:\openvino space\openvino\deployment_tools\demo\\car.png>
classid probability label
------- ----------- -----
817 0.6708984 sports car, sport car
479 0.1922607 car wheel
511 0.0936890 convertible
436 0.0216064 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751 0.0075760 racer, race car, racing car
656 0.0049667 minivan
717 0.0027428 pickup, pickup truck
581 0.0019779 grille, radiator grille
468 0.0014219 cab, hack, taxi, taxicab
661 0.0008636 Model T
CPU FP16
Top 10 results:
Image <C:\openvino space\openvino\deployment_tools\demo\\car.png>
classid probability label
------- ----------- -----
817 0.6853030 sports car, sport car
479 0.1835197 car wheel
511 0.0917197 convertible
436 0.0200694 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751 0.0069604 racer, race car, racing car
656 0.0044177 minivan
717 0.0024739 pickup, pickup truck
581 0.0017788 grille, radiator grille
468 0.0013083 cab, hack, taxi, taxicab
661 0.0007443 Model T
CPU FP32
Top 10 results:
Image <C:\openvino space\openvino\deployment_tools\demo\\car.png>
classid probability label
------- ----------- -----
817 0.6851521 sports car, sport car
479 0.1835010 car wheel
511 0.0918672 convertible
436 0.0200784 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751 0.0069436 racer, race car, racing car
656 0.0044373 minivan
717 0.0024768 pickup, pickup truck
581 0.0017814 grille, radiator grille
468 0.0013093 cab, hack, taxi, taxicab
661 0.0007501 Model T
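To check how the two devices compare on your own model, something along these lines can run the same IR on CPU and MYRIAD and print the top-10 classes for each. This is only a minimal sketch: it assumes the OpenVINO 2021.x Inference Engine Python API, and the model file name, image path, and preprocessing are placeholders you would need to adapt to your network.

# Minimal sketch: run one IR on CPU and on MYRIAD and compare top-10 outputs.
# Assumptions: OpenVINO 2021.x Python API, NCHW input, no extra preprocessing.
import numpy as np
import cv2
from openvino.inference_engine import IECore

def top_k(model_xml, image_path, device, k=10):
    ie = IECore()
    net = ie.read_network(model=model_xml)
    input_name = next(iter(net.input_info))
    output_name = next(iter(net.outputs))
    n, c, h, w = net.input_info[input_name].input_data.shape

    # Resize to the network input size and convert HWC -> NCHW.
    image = cv2.resize(cv2.imread(image_path), (w, h))
    blob = image.transpose((2, 0, 1))[np.newaxis, :].astype(np.float32)

    exec_net = ie.load_network(network=net, device_name=device)
    scores = exec_net.infer({input_name: blob})[output_name].flatten()
    return sorted(enumerate(scores), key=lambda x: -x[1])[:k]

for device in ("CPU", "MYRIAD"):
    print(device, top_k("model.xml", "car.png", device))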
If you have more questions, feel free to post here again.
Regards,
David
We trained the model using the PyTorch framework, converted the trained .pth model to ONNX, and then used the Intel OpenVINO Model Optimizer to convert the ONNX model to IR format.
We then deployed the IR model (FP16) on the MYRIAD X VPU (NCS2) and on CPUs (Intel Pentium N4200, Core i7, and Core i5).
We found that our trained EfficientNetB4 model outputs different results on the NCS2 and the CPU, and we would like to know why this happens. When we tested other models on both devices, we did not find any difference.
Please find the attached table with a sample of the outputs that vary, for your reference.
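For illustration, the PyTorch-to-ONNX step was of this general form (a minimal sketch only; the efficientnet_pytorch package, checkpoint path, 380x380 input resolution, and opset version shown here are assumptions rather than the exact settings used):

# Minimal sketch of the .pth -> ONNX export step.
# Assumptions: efficientnet_pytorch weights, 380x380 input, opset 10.
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_name("efficientnet-b4")
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.set_swish(memory_efficient=False)  # plain Swish needed for ONNX export
model.eval()

dummy = torch.randn(1, 3, 380, 380)
torch.onnx.export(model, dummy, "model.onnx", opset_version=10,
                  input_names=["input"], output_names=["output"])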
Hi Suchithra,
Thank you for your reply.
Unfortunately, no files were attached, so we cannot see the table you mentioned. Could you send it again?
Regarding your results, could you please provide the following for us to test on our end:
- The frozen model you are using and the model optimizer command used to convert the IR files.
- The program/sample you used to test.
- Expected input/output samples.
Best regards,
David
Hi,
I have a similar issue with a ResNetV2 model: the output from the TensorFlow model does not match the NCS2 output sufficiently for some inputs. However, when setting 'VPU_HW_STAGES_OPTIMIZATION': 'NO', both outputs are sufficiently close.
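For reference, disabling the hardware stages looks roughly like this with the Inference Engine Python API (a minimal sketch; the model file names are placeholders and the OpenVINO 2020/2021 API is assumed):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Default behaviour: HW stages optimization enabled on MYRIAD.
exec_net_hw = ie.load_network(network=net, device_name="MYRIAD")

# Same network with HW stages optimization disabled, for comparison.
exec_net_sw = ie.load_network(network=net, device_name="MYRIAD",
                              config={"VPU_HW_STAGES_OPTIMIZATION": "NO"})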
Is there a way to share the model non-publicly?
Hi I, tim,
Thanks for your response. I have sent you a PM so that you can share the model with us for testing.
Regards,
Luis
Hi David,
Thank you for your reply.
RESULTS (FP16 model)
IMAGE NAME          CPU          NCS2
b96b518596b3.png    2.7880216    0.52001953
21abd36095a1.png    3.9605513    0.26293945
789434d095d1.png    3.8480554    0.24133301
3c78bfca247b.png    3.8562613    1.3671875
As you can see, the results from the CPU and the NCS2 differ even though the same model is used.
- I trained the model using PyTorch. For OpenVINO optimization, I converted it to an ONNX model and then converted the ONNX model to IR format. The command used is given below.
python3 mo_onnx.py --input_model INPUT_ONNX_MODEL --output_dir OUTPUT_DIR
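(For deployment on the NCS2 the IR needs to be FP16, so the conversion would typically also pass the data type flag; a possible variant of the above command, with the input and output paths still as placeholders, is:
python3 mo_onnx.py --input_model INPUT_ONNX_MODEL --output_dir OUTPUT_DIR --data_type FP16)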
Thanks,
Suchithra
Hi V S, Suchithra,
Thank you for the information; that is very interesting behavior. Would it be possible for you to share your ONNX model (or the PyTorch model and how you converted it to ONNX), your sample program, and a sample input image with the expected result? I can send you a PM in case you don't want to share them publicly.
Regards,
Luis