Intel® Fortran Compiler

Random numbers with coarrays

Arjen_Markus
Honored Contributor I
1,656 Views

Hello,

I have run into a small riddle. I was trying to use the random_number subroutine in a small program with coarrays and discovered that on Linux, with the Intel compiler version I have access to (18.0.3), the random numbers on each image are the same (well, sometimes I see two sets, but not clearly independent sequences). When I try this on Windows (version 18.0.5), I do get independent random numbers. In both cases I use random_seed to initialise the random number generator. Here is a small demo program:

program chkrandom
    implicit none

    real :: r

    call random_seed
    call random_number( r )

    write(*,*) this_image(), r
end program chkrandom

Typical output on Windows:

           3  0.2110407
           6  0.4997412
           1  0.1226530
           7  0.5719163
           8  0.2361415
           5  0.1315411
           2  0.5468156
           4  0.5232784

Typical output on Linux:

           1  0.8679414
           2  0.8679414
           3  0.8679414
           4  0.8679414
           5  0.8679414
           6  0.8679414
           7  0.8679414
           8  0.8679414

and every once in a while something like this:

           1  0.5012199
           2  0.5980424
           3  0.5012199
           4  0.5012199
           5  0.5012199
           6  0.5012199
           7  0.4597191
           8  0.5012199

Clearly not the most independent random numbers.

Does anyone know how I can get better results?

0 Kudos
15 Replies
JAlexiou
New Contributor I

The problem is that

call random_seed

uses the tick count (timer) for the seed, so all images end up with the same seed value, since they all start at (almost) the same time. I think the resolution is in the 100-millisecond range.

 

Try initializing the random number generator only once (on the first image) and have the other images wait for a sync before proceeding. I think all images should use the same seed, but this requires testing first.
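The per-image seeding idea can be sketched as follows; the mixing formula (clock count plus a multiple of the image index) is purely illustrative, and any recipe that gives each image a distinct seed array would do:

```fortran
program seed_per_image
    implicit none
    integer :: n, i, t
    integer, allocatable :: seed(:)
    real :: r

    call random_seed(size=n)      ! length of the seed array
    allocate(seed(n))
    call system_clock(count=t)    ! (nearly) identical across images
    seed = [(t + 37*this_image() + i, i=1,n)]  ! offset by image index
    call random_seed(put=seed)

    call random_number(r)
    write(*,*) this_image(), r
end program seed_per_image
```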

 

PS. See a related problem on StackOverflow here that shows two separate routines producing the same random numbers because they both seed the random number generator at the same time.

FortranFan
Honored Contributor III

@Arjen Markus,

See this thread, Quote #12: https://software.intel.com/en-us/forums/intel-visual-fortran-compiler-for-windows/topic/801082

where Steve Lionel has an attachment with an example of how to work with the intrinsics toward a "pseudo" RNG in the language.

Arjen_Markus
Honored Contributor I

Thanks - that was what I was looking for. Using random_seed in a non-trivial manner is always a trifle messy ;)

Steve_Lionel
Honored Contributor III

Fortran 2018 adds an intrinsic RANDOM_INIT that provides the option for distinct sequences in different images. That's not yet available in Intel Fortran. RANDOM_INIT also codifies the method of "randomizing" the seed which, prior to F18, was left implementation-dependent.
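For reference, once RANDOM_INIT is available, distinct per-image sequences reduce to a single call (Fortran 2018 syntax, not supported by the 18.x compilers discussed in this thread):

```fortran
program chkrandom_f18
    implicit none
    real :: r

    ! Fortran 2018: non-repeatable runs, distinct sequence per image
    call random_init(repeatable=.false., image_distinct=.true.)
    call random_number(r)
    write(*,*) this_image(), r
end program chkrandom_f18
```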

Arjen_Markus
Honored Contributor I

That will certainly be an improvement :). Meanwhile, I tested the solution that FortranFan pointed me to. The first random numbers I get are still quite close to each other. This is the result of asking for two such numbers in each image:

           1  0.6096058      0.3789599
           2  0.6165187      0.9890364
           3  0.6234315      0.5991130
           4  0.6303443      0.2091895
           5  0.6372572      0.8192660
           6  0.6441700      0.4293426
           7  0.6510828      3.9419141E-02
           8  0.6579956      0.6494957

Some runs give more variation in the first numbers ... the second set seems to have enough variation.

FortranFan
Honored Contributor III

@Arjen Markus:

Since it appears the issue you face is more pronounced on Linux, perhaps you want to ask on the Linux forum also?

Also, you appear to have noticed enough of a concern to warrant submitting a request at the Intel support center.

And can you run Steve Lionel's full example that calculates the value of PI using the Monte Carlo approach in both your Windows and Linux environments?  Any systematic differences in the errors of the calculated value of PI, possibly due to PRNG issues, might be revealing to you and to Intel support.

Arjen_Markus
Honored Contributor I

Yes, that is a good idea, well, both are ;).

FortranFan
Honored Contributor III

Arjen Markus wrote:

Yes, that is a good idea, well, both are ;).

Another option you may want to consider for checking the Intel Fortran PRNG implementation is to modify Steve Lionel's code slightly to "waste some time" before the FIRST invocation of the intrinsic RANDOM_SEED: say, calculate the Nth Fibonacci number, where N is some number (10?) raised to the power of the COARRAY image number, up to a certain upper limit to prevent overflow, of course.  That might perturb the Intel PRNG sufficiently to introduce more "randomness" per its default "seeding" approach?
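A minimal sketch of that "waste some time" idea, with a simpler image-dependent workload than the exponential one suggested above; whether this actually separates the timer readings enough is exactly what would need testing:

```fortran
program waste_then_seed
    implicit none
    integer :: i, n
    integer(8) :: a, b, tmp
    real :: r

    ! Image-dependent amount of busywork before the first seeding;
    ! the cap keeps the Fibonacci values well inside integer(8) range.
    n = min(50, 10*this_image())
    a = 0; b = 1
    do i = 1, n
        tmp = a + b
        a = b
        b = tmp
    end do

    call random_seed               ! timer-based seeding, now (maybe) staggered
    call random_number(r)
    write(*,*) this_image(), r, b  ! print b so the loop is not optimized away
end program waste_then_seed
```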

jimdempseyatthecove
Honored Contributor III

Another option could be:

Read the timestamp counter RDTSC (__rdtsc in C++)
Reverse the bit order
Scale to the appropriate size

Then use that in some manner to seed your RNG.

The idea is to generate a dispersed starting point for each rank.
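A sketch of this in standard Fortran, using SYSTEM_CLOCK as a stand-in for the RDTSC counter; the bit reversal moves the fast-changing low bits into the most significant positions, and the image index is mixed in for good measure:

```fortran
program reversed_clock_seed
    implicit none
    integer(8) :: t, rev
    integer :: n, i
    integer, allocatable :: seed(:)
    real :: r

    call system_clock(count=t)    ! stand-in for the RDTSC counter

    ! Reverse the bit order of the clock count
    rev = 0
    do i = 0, 63
        if (btest(t, i)) rev = ibset(rev, 63 - i)
    end do

    call random_seed(size=n)
    allocate(seed(n))
    do i = 1, n
        ! Scale down to default-integer size; mix in the image index
        seed(i) = int(ibits(ieor(rev, int(i*this_image(), 8)), 0, 31))
    end do
    call random_seed(put=seed)

    call random_number(r)
    write(*,*) this_image(), r
end program reversed_clock_seed
```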

Jim Dempsey

 

Steve_Lionel
Honored Contributor III

While I like the pi approximation sample for demonstrating coarrays in an easy-to-understand fashion, I've been disappointed with the limited accuracy of the result, no matter how many trials I throw at it. I have a suspicion that the PRNG results are sparser than I'd expect, but haven't taken the time to try other RNGs. The one Intel Fortran uses is supposed to be pretty good - it's L'Ecuyer '91 - but there are newer ones that are supposed to be better (Mersenne Twister, for example.)

It's been two and a half years since I submitted that sample, along with explanatory documentation, to the product team. About six months after I retired I asked what had happened to it, and slowly, pieces have been making their way into the samples bundle. In the current one, the code and project are there (and the idiotic "Hello World" that didn't have any coarrays is gone), but the extensive writeup I provided is painfully abbreviated. I am told that the full text will come soon.
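For anyone who wants to reproduce the experiment, a minimal per-image version of the pi estimate might look like this (a sketch, not Steve's sample; note that with the identical-seed problem from the original post, every image would sample exactly the same points and the extra images would add nothing):

```fortran
program mc_pi
    implicit none
    integer(8), parameter :: ntrials = 10000000_8
    integer(8) :: i, hits
    integer(8) :: total[*]        ! per-image hit count, combined on image 1
    real :: x, y
    integer :: img

    call random_seed              ! the per-image seeding issue applies here too
    hits = 0
    do i = 1, ntrials
        call random_number(x)
        call random_number(y)
        if (x*x + y*y <= 1.0) hits = hits + 1
    end do
    total = hits
    sync all

    if (this_image() == 1) then
        do img = 2, num_images()
            total = total + total[img]
        end do
        write(*,*) 'pi estimate:', 4.0d0*total/(ntrials*num_images())
    end if
end program mc_pi
```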

Arjen_Markus
Honored Contributor I

@Steve: This type of calculation is always less accurate than one hopes. I have recently experimented with quasi-random numbers as described in http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/ (and with other, more classical methods). My first impression, from getting rather neat results for an (albeit simple) two-dimensional integral with only 100 evaluations, was that they were so much better than pseudorandom numbers that I wanted to adopt them immediately. Then I continued experimenting and found that a second evaluation of the same integral (a different part of the sequence) was far less accurate :(.

I still think they are useful, but, alas, not the exciting solution I was hoping for :).
 

Juergen_R_R
Valued Contributor II

Steve Lionel (Ret.) (Blackbelt) wrote:

While I like the pi approximation sample for demonstrating coarrays in an easy-to-understand fashion, I've been disappointed with the limited accuracy of the result, no matter how many trials I throw at it. I have a suspicion that the PRNG results are sparser than I'd expect, but haven't taken the time to try other RNGs. The one Intel Fortran uses is supposed to be pretty good - it's L'Ecuyer '91 - but there are newer ones that are supposed to be better (Mersenne Twister, for example.)

We use L'Ecuyer 2002 (cf. below) for distributing seeds to the different images of an MPI-parallelized Monte Carlo integration (not using coarrays, yet). The original implementation was in C++, and we reimplemented that PRNG in Fortran. It works really well.

P. L'Ecuyer, R. Simard, E. J. Chen, and W. D. Kelton, "An Object-Oriented Random-Number Package with Many Long Streams and Substreams," Operations Research, vol. 50, no. 6, pp. 1073-1075, Dec. 2002.

 

FortranFan
Honored Contributor III

Arjen Markus wrote:

.. I have recently experimented with quasi-random numbers as described in http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-... (and other, more classical methods). ..

@Arjen Markus,

What "randomness tests" do you apply for the methods you experiment with?  How do they stack up with Intel implementation with their Parallel Studio product(s)?  https://en.wikipedia.org/wiki/Randomness_tests

From my rather simplistic view on the matter, I suggest you note that such tests typically take 32/64-bit inputs. With this in mind, another suggestion is to follow up with @Ronald W Green (Intel) on his other thread at the two Fortran forums about the number of images in COARRAY runs, and to inquire whether it makes sense to run your code (like in the original post) with a suitable number (say 32) of COARRAY images and whether, as a user of the Intel Fortran product, you can expect the resultant coarray of real values to pass the randomness tests with the same measure as expected generally from Intel's selection of PRNG method, whether it be L'Ecuyer '91 or whatever. And whether you should expect a difference on Windows vs. Linux in such tests.

Intel Software Team, if you are reading this: would it be possible for you to improve the product documentation in terms of how Intel "validates" its implementation?  Considering the importance of even pseudorandom generators in scientific and technical computing, and as a basic tenet of product stewardship, it may serve you well to include better details on the PRNG method as well as on the tests you run to validate your PRNG implementation.

Arjen_Markus
Honored Contributor I

@FortranFan,

My tests are purely visual and do not concern the series as produced: the first random numbers the images produce lie in a rather short interval most of the time, whereas I expected numbers more or less covering the whole (0,1) interval. As for the quasi-random numbers, since they are not pseudo-random, such tests are meaningless. They are supposed to give better results because they cover the parameter space more uniformly than pseudo-random numbers do. My remark was inspired by the observation that the first estimate of my integral came much closer to the exact value than the subsequent ones. I still have to examine the results more closely (among other things, compare them with the results from using pseudo-random numbers). Anecdotally speaking, though, a lazy integrand (not too much variation on a short scale) works much better than an integrand where most of the "mass" is concentrated in a small part of the space. Understandable, of course, but I hoped quasi-random numbers could magically get around such problems ;).
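The visual check can be turned into a cheap numeric one by gathering the first random number from every image and binning the values; roughly flat bin counts indicate the spread one would hope for (a sketch; the bin count of 4 is arbitrary):

```fortran
program spread_check
    implicit none
    integer, parameter :: nbins = 4
    real :: r[*]                  ! first random number of each image
    integer :: hist(nbins), img, b

    call random_seed
    call random_number(r)
    sync all

    if (this_image() == 1) then
        hist = 0
        do img = 1, num_images()
            b = min(nbins, 1 + int(r[img]*nbins))
            hist(b) = hist(b) + 1
        end do
        write(*,*) 'bin counts:', hist
    end if
end program spread_check
```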
