Hello,
I am developing an MPI program in which each rank offloads computation to a GPU. I am using the latest version of Intel Parallel Studio 2019 (Update 3, I think) and CUDA unified memory, to keep the code more maintainable. Unfortunately, the results are quite non-deterministic, even with all synchronization mechanisms in place. After some searching, I found it might be that Intel MPI doesn't recognize unified memory, so I should fall back to duplicate variables/allocations/tedious copies...
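To make the fallback concrete, here is a minimal sketch of the kind of "tedious copy" I mean: staging a managed (unified-memory) buffer through a plain host buffer before handing it to MPI, in case the MPI library does not handle unified-memory pointers correctly. The function and variable names are my own, just for illustration:

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

/* Hypothetical helper: send n doubles that live in a cudaMallocManaged
 * buffer, without ever passing the managed pointer to MPI. */
void send_results(double *managed_buf, size_t n, int dest, MPI_Comm comm)
{
    /* Make sure the GPU has finished writing into the managed buffer. */
    cudaDeviceSynchronize();

    /* Duplicate into a host-only buffer that any MPI library understands. */
    double *host_buf = (double *)malloc(n * sizeof(double));
    cudaMemcpy(host_buf, managed_buf, n * sizeof(double), cudaMemcpyDefault);

    MPI_Send(host_buf, (int)n, MPI_DOUBLE, dest, 0, comm);
    free(host_buf);
}
```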
Well, I personally ended up writing two separate functions, with the right one selected at compile time by a flag (a sketch follows below)... I don't know whether the Parallel Studio 2020 MPI library supports unified memory; I have not tried it. But after running a lot of tests, there would really be no point in using "legacy" allocation instead of UVA anyway. In my specific case, the legacy allocation happened to be faster for smaller problem sizes, but the gap narrowed as the size grew, and at some point it was even surpassed by UVA. So: easier to write and similar or better performance when running!
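For what it's worth, this is roughly what the compile-time switch looks like in my code. The flag name USE_UVA and the helper alloc_field are my own choices, not anything standard; the point is only that the call sites stay identical and the allocator changes:

```c
#include <cuda_runtime.h>

#ifdef USE_UVA
/* Unified-memory version: one pointer visible to host and device. */
static void alloc_field(double **buf, size_t n)
{
    cudaMallocManaged((void **)buf, n * sizeof(double));
}
#else
/* "Legacy" version: device-only allocation; explicit cudaMemcpy calls
 * elsewhere move data to/from host buffers before MPI communication. */
static void alloc_field(double **buf, size_t n)
{
    cudaMalloc((void **)buf, n * sizeof(double));
}
#endif
```

Then one build is compiled with -DUSE_UVA and the other without, and everything else in the source stays the same.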
