I am using a parallel code which works fine with a small array size on each slave node of a Linux cluster, say 40x40x40. But once I increased the array size, e.g. to 80x80x80 on each node of the same cluster, the code failed with a segmentation fault (SIGSEGV). I suspected the stack size limit, so I set ulimit to unlimited, but the problem was still there. GDB tells me the problem always happens in array operations, i.e. A = 0.2*B + C, where A, B, and C are all arrays. If I change the array operations to do-loops, then the code works for large arrays. This seems really strange to me. Can anyone shed some light? My system is ifort 8.1, Red Hat 9.0, and MPICH 1.2.5. Thanks.
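For concreteness, the two forms being compared might look like the following sketch (the names A, B, C and the 80x80x80 size come from the description above; the program itself is a hypothetical reconstruction, not the original code):

```fortran
program array_vs_loop
  implicit none
  integer, parameter :: n = 80
  real :: A(n,n,n), B(n,n,n), C(n,n,n)
  integer :: i, j, k

  B = 1.0
  C = 2.0

  ! Array-syntax form: the compiler may create a temporary for the
  ! right-hand side, and ifort typically places such temporaries on
  ! the stack, which can overflow for large n.
  A = 0.2*B + C

  ! Equivalent do-loop form: evaluated element by element,
  ! so no array temporary is needed.
  do k = 1, n
     do j = 1, n
        do i = 1, n
           A(i,j,k) = 0.2*B(i,j,k) + C(i,j,k)
        end do
     end do
  end do
end program array_vs_loop
```

Strictly speaking, a simple expression like this should not require a temporary at all, which is part of why a crash here looks like compiler behavior rather than a coding error.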
Hello,
This sounds like an ifort bug, probably in its handling of Fortran 90 array notation, but it's a strange situation. The array notation works for 40x40x40 arrays but fails for 80x80x80 arrays. An 80x80x80 array is still pretty small, even at double precision. I don't see why such a small array would cause problems unless you've got dozens of them. How are the arrays declared? Are they allocatable? Static?
Please submit this issue to Intel Premier Support.
Thanks,
Henry
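For reference, the distinction being asked about looks like this (hypothetical declarations, not taken from the original post; the answer matters because static arrays live in static storage, while allocatable arrays come from the heap, which changes where a stack limit would bite):

```fortran
program declaration_styles
  implicit none

  ! Static (fixed-size) arrays: storage is reserved at compile time
  ! in static memory (or on the stack for local arrays, depending
  ! on compiler settings).
  real :: S(80,80,80)

  ! Allocatable arrays: storage is obtained from the heap at run time.
  real, allocatable :: A(:,:,:)

  allocate(A(80,80,80))
  S = 0.0
  A = 0.0
  deallocate(A)
end program declaration_styles
```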
