UNet-like architecture works fine on CPU but generates corrupted results on GPU.
We have a UNet-like network that works fine on both devices (CPU and GPU), but when we reduce the number of convolutions generated in each layer by some constant factor, GPU inference stops producing valid results (we start to see garbage such as vertical stripes instead of the image). There are no errors or warnings during model generation or inference. I am wondering whether there is some OpenCL/GPU memory-alignment requirement that causes this issue and is not taken into account by OpenVINO?
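A minimal sketch of the kind of CPU-vs-GPU comparison described above (the IR file names and random input are assumptions, and it uses the pre-2022 IECore Python API):

```python
# Run the same IR on CPU and GPU and compare the outputs.
# File names and input data are hypothetical placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")  # hypothetical IR
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
data = np.random.rand(*shape).astype(np.float32)

results = {}
for device in ("CPU", "GPU"):
    exec_net = ie.load_network(network=net, device_name=device)
    results[device] = next(iter(exec_net.infer({input_name: data}).values()))

# A healthy GPU plugin should track the CPU result closely; garbage such as
# vertical stripes shows up as a large maximum absolute difference here.
print("max |CPU - GPU| =", np.abs(results["CPU"] - results["GPU"]).max())
```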
Hi Andrey,
I just wanted to ask: do you see the "same" UNet output in GPU FP32 and GPU FP16 modes?
My issue with UNet is that the GPU FP16 output is incorrect while the GPU FP32 output is OK.
And the MYRIAD FP16 output is OK.
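A minimal sketch of that FP32-vs-FP16 comparison on GPU (the IR file names are hypothetical; it assumes the same pre-2022 IECore Python API as above):

```python
# Feed the same input through the FP32 and FP16 IRs of one model on GPU.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
data, out = None, {}
for prec in ("fp32", "fp16"):
    # Hypothetical file names for the two precisions of the same model.
    net = ie.read_network(model=f"unet_{prec}.xml", weights=f"unet_{prec}.bin")
    name = next(iter(net.input_info))
    if data is None:
        shape = net.input_info[name].input_data.shape
        data = np.random.rand(*shape).astype(np.float32)
    exec_net = ie.load_network(network=net, device_name="GPU")
    out[prec] = next(iter(exec_net.infer({name: data}).values()))

# FP16 adds some rounding error, but the outputs should still agree closely;
# a broken FP16 path shows up as a huge difference.
print("max |FP32 - FP16| on GPU =", np.abs(out["fp32"] - out["fp16"]).max())
```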
Do you get GPU issues with both FP16 and FP32 IR?
Yes, the issue appears with both FP16 and FP32 IRs.
Interesting! So FP32 CPU is good but FP32 GPU has incorrect results.
Any way to repro this?
BTW what is the GPU (or CPU) model?
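For the repro report, a quick way to print the exact hardware the plugins see (pre-2022 IECore Python API):

```python
# List every available device and its full name, e.g. the exact GPU model.
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    print(device, "->", ie.get_metric(device, "FULL_DEVICE_NAME"))
```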
Hi Nikos,
I can't share the code outside of Intel; I hope someone from Intel support will assist internally.