Ok, I tried to compile this simple test program:
[plain]program main
  integer, parameter :: prec = 8
  real(prec) r1, r2
  integer seedsize, i, n
  integer, allocatable, dimension(:) :: seed

  call random_seed(size=seedsize)
  allocate(seed(seedsize))
  seed(1:seedsize) = 0
  seed(1) = 1
  call random_seed(put=seed(1:seedsize))
  call random_number(r1)
  n = huge(n)
  write(*,*) 'seedsize=', seedsize
  write(*,*) 'n=', n
  write(*,*) 'r1=', r1
  do i = 2, n
    seed(1) = i
    call random_seed(put=seed(1:seedsize))
    call random_number(r2)
    if (abs(r2-r1) < 1.0D-13) then
      write(*,*) 'almost same random number i=', i, ' r2=', r2
    endif
  enddo
  write(*,*) 'test completed'
  deallocate(seed)
end program main[/plain]
The program is not very meaningful, but as I said, it is just a little test. What I need for my actual code is the ability to set the random generator seed very frequently. When I run the above test program, it generates a segmentation fault. If n is set to some relatively small number (say 1000), it all goes well. For some n around 500000 the program sometimes finishes normally, but sometimes I still see a segmentation fault message. Changing the optimization level does not make any difference. I also tried to compile this code with gfortran, and it has absolutely no problems with it. Does anyone have any idea how to deal with this issue? I use Intel Fortran 11.0 for Linux (64-bit).
8 Replies
Could be our old friend, stack exhaustion. Try either of these:
compile with -heap-arrays
ulimit -s unlimited before you run
ron
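For reference, the two suggestions above look like this on the command line (the source file name seedtest.f90 is just an illustration):

```shell
# Lift the shell's stack size limit before running the program:
ulimit -s unlimited

# Or compile so that array temporaries go on the heap instead of the
# stack (source file name is illustrative):
ifort -heap-arrays seedtest.f90 -o seedtest
./seedtest
```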
Quoting - Ronald Green (Intel)
could be our old friend stack exhaustion. Try any of these:
compile with -heap-arrays
ulimit -s unlimited before you run
ron
That is actually exactly what I was thinking yesterday: stack overflow. Apparently this is indeed the case.
Compiling with -heap-arrays is not a very good solution, unfortunately. It only eases the problem but does not eliminate it. I monitored memory usage by the program compiled with this option, and it looks like the program "eats" RAM on my machine until the memory is exhausted...
I am in no way a computer guru, but I think there is a bug in the RANDOM_SEED code in Intel Fortran. There is probably some kind of memory leak in this routine (i.e., some temporary data is allocated and never deallocated). Of course, I have never seen the source of RANDOM_SEED, so it is difficult to make any definite statements here. I am sure, though, that RANDOM_SEED is a very simple procedure that does not involve large arrays. There must be a bug there. After all, gfortran has no problems with that test program (though its random number generator is not the same as the one used in Intel Fortran).
Anyhow, I decided that the best way to proceed is to give up the internal random generator for good. There are other reasons to do so as well. I am now using Marsaglia's KISS (Keep It Simple Stupid) generator, similar to the one used in gfortran. Not only does this eliminate the problem with stack exhaustion in RANDOM_SEED, but it also makes the computations pretty much compiler independent (though not platform independent). It is also faster and seems to have a longer period than the random number generator implemented in Intel Fortran.
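For anyone curious, a minimal sketch of the 32-bit KISS combination, following the constants in Marsaglia's widely circulated Usenet posting (this is a transcription of that posting, not the gfortran source):

```fortran
! Sketch of Marsaglia's KISS generator: a congruential generator, a
! 3-shift register, and two multiply-with-carry sequences, combined by
! addition.  Assumes default 32-bit integers that wrap on overflow,
! which most compilers (including ifort and gfortran) provide.
module kiss_module
  implicit none
  integer :: x = 123456789, y = 362436069, z = 21288629, w = 14921776
contains
  integer function kiss()
    x = 69069 * x + 1327217885                  ! congruential
    y = ieor(y, ishft(y, 13))                   ! 3-shift register
    y = ieor(y, ishft(y, -17))
    y = ieor(y, ishft(y, 5))
    z = 18000 * iand(z, 65535) + ishft(z, -16)  ! multiply-with-carry
    w = 30903 * iand(w, 65535) + ishft(w, -16)  ! multiply-with-carry
    kiss = x + y + ishft(z, 16) + w
  end function kiss
end module kiss_module
```

Reseeding is then just a matter of assigning new values to x, y, z, and w, which costs essentially nothing compared to a library call.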
Sergiy
What happens if you change the call to RANDOM_SEED to use "put=seed"? The array slice reference is not needed in your case. My guess is that it is not RANDOM_SEED itself that is the culprit. On the other hand, I've been unable to reproduce the problem using 11.0 so I am not sure what is going on here.
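A sketch of the distinction (the behavior noted is what was observed in this thread with ifort, not a documented guarantee):

```fortran
! Both calls put the same seed values, but the sliced form may cause
! the compiler to build a stack temporary for the actual argument on
! every call, while the whole-array form can be passed directly.
call random_seed(put=seed(1:seedsize))  ! slice: possible stack temporary
call random_seed(put=seed)              ! whole array: no temporary needed
```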
Quoting - Steve Lionel (Intel)
What happens if you change the call to RANDOM_SEED to use "put=seed"? The array slice reference is not needed in your case. My guess is that it is not RANDOM_SEED itself that is the culprit. On the other hand, I've been unable to reproduce the problem using 11.0 so I am not sure what is going on here.
I was able to reproduce the problem on all the machines I have tried (Core 2 Duo, Xeon, Itanium 2), using both the 10.1 and 11.0 versions of ifc. I just needed to make n big enough.
However, when I removed the array slice, as you suggested, the problem disappeared. I guess you were right that it is not RANDOM_SEED itself that causes the stack exhaustion.
It's just my programming habit to show array bounds explicitly whenever it makes sense (it helps me keep the size in mind), particularly when arrays are dynamically allocated. I never thought it made any difference to the compiler as long as the slice has no gaps (i.e., the stride is 1).
Quoting - bubin
It's just my programming habit to show array bounds explicitly whenever it makes sense (it helps me keep the size in mind), particularly when arrays are dynamically allocated. I never thought it made any difference to the compiler as long as the slice has no gaps (i.e., the stride is 1).
Hm... It was very useful to read that blog post. Thanks for giving the link.
There is a little paradox here, though. On the one hand, one of the most valuable and attractive features of Fortran (at least for me) is the ability to use array slices/sections, which simplifies programming considerably and makes things much more readable. On the other hand, one is advised not to use this feature because of potential problems with the compiler not always being able to get things right (which is apparently the case with my test program above). There are actually a lot of situations where using explicit bounds is unavoidable. Should we go back to the good old do loops then? :-)
Anyhow, thanks again for your help, Steve.
Sergiy
Use array slices when you need them. Don't use them when you don't. In particular, don't do A(:) or A(1:ubound) when you mean the whole array.
Quoting - Steve Lionel (Intel)
Use array slices when you need them. Don't use them when you don't. In particular, don't do A(:) or A(1:ubound) when you mean the whole array.