Hello,
I am developing an MPI program in which each rank offloads computation to a GPU. I am using the latest version of Intel Parallel Studio 2019 (Update 3, I think) together with CUDA unified memory, to keep the code more maintainable. Unfortunately, the results are quite non-deterministic, even with all the synchronization mechanisms in place. After some searching, I found it might be that Intel MPI doesn't recognize unified memory, so I would have to fall back to duplicate variables/allocations/tedious copies...
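Here is a minimal sketch of the pattern in question (the kernel, names, and sizes are illustrative, not my actual code): each rank computes in a managed buffer, synchronizes, and then passes the managed pointer straight to MPI. Whether the MPI library can safely touch such a buffer is exactly the open question.

```c
/* Minimal sketch of the setup (illustrative only): each rank computes
   in managed memory, synchronizes, then hands the managed pointer
   directly to MPI. Run with at least two ranks. */
#include <mpi.h>
#include <cuda_runtime.h>

__global__ void scale(double *x, int n, double a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double *buf;
    cudaMallocManaged(&buf, n * sizeof(double));  /* unified memory */
    for (int i = 0; i < n; ++i) buf[i] = 1.0;

    scale<<<(n + 255) / 256, 256>>>(buf, n, 2.0);
    cudaDeviceSynchronize();  /* kernel finished before MPI sees buf */

    /* Managed pointer passed straight to MPI -- this is where the
       non-deterministic results showed up. */
    if (rank == 0)
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```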
I'm interested in the same thing! I'd like to hear some useful advice, thank you!
Well, I personally ended up writing two separate functions, with the right one selected at compile time via a flag... I don't know whether the Parallel Studio 2020 MPI library supports unified memory; I have not tried it. But after running lots of tests, there would really be no point in using "legacy" allocation instead of UVA anyway: in my specific case the explicit-copy version was actually faster for smaller problem sizes, but the gap narrowed as the size grew, and at some point unified memory even pulled ahead. So: easier to write and similar or better at runtime!
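For what it's worth, the compile-time switch looked roughly like this (a sketch; the USE_UVA flag and the exchange function are my own illustrative names, not from any library): one path hands the managed buffer to MPI directly, the other stages the data through an explicit host copy.

```c
/* Build with -DUSE_UVA to pass managed memory straight to MPI;
   otherwise the data is staged through a host buffer first. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#ifdef USE_UVA
/* Unified-memory path: the managed pointer goes straight to MPI. */
void exchange(double *buf, int n, int peer) {
    cudaDeviceSynchronize();  /* make sure GPU work on buf is done */
    MPI_Sendrecv_replace(buf, n, MPI_DOUBLE, peer, 0,
                         peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
#else
/* Fallback path: explicit device<->host copies around the MPI call. */
void exchange(double *buf, int n, int peer) {
    double *host = (double *)malloc(n * sizeof(double));
    cudaMemcpy(host, buf, n * sizeof(double), cudaMemcpyDeviceToHost);
    MPI_Sendrecv_replace(host, n, MPI_DOUBLE, peer, 0,
                         peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cudaMemcpy(buf, host, n * sizeof(double), cudaMemcpyHostToDevice);
    free(host);
}
#endif
```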
I still can't find this information.