#define SEED 1
#define BRNG VSL_BRNG_MCG31
#define METHOD VSL_RNG_METHOD_GAUSSIAN_ICDF
VSLStreamStatePtr stream;
double a=0.0,sigma=0.3;
errcode = vslNewStream( &stream, BRNG, SEED );
// simulate one path by generating 5000 variates.
double r[5000];
vdRngGaussian( METHOD, stream, 5000, r, a, sigma );
for (int j=0;j<4999;j++){
// simulate random walk using the variates
}
}
I would like to parallelize the outer loop. My question is: is it safe to call vdRngGaussian from multiple threads? And am I guaranteed to get independent variates?
The second scenario would be to parallelize multiple simulations. In this case I would like to do one full simulation per thread, and I need to generate independent variates for all simulations. My question here is: what is the right approach to generating the random variates? Should I use one RNG per thread and initialize them with different seeds? I have been told that this is not the best way of getting independent variates. Another method would be to use the leapfrog method. Which is best?
Anwar
Hello Anwar,
Intel MKL Random Number Generators support parallel Monte Carlo simulations by means of the following methodologies:
1. Block-splitting, which allows you to split the original sequence into k non-overlapping blocks, where k is the number of independent streams. The first stream would generate the random numbers x(1),...,x(nskip), the second stream would generate the numbers x(nskip+1),...,x(2*nskip), etc. The service function vslSkipAheadStream( stream, nskip ) is the way to use this methodology in your app.
VSLStreamStatePtr stream[nstreams];
int k;
for ( k=0; k<nstreams; k++ ) {
    vslNewStream( &stream[k], BRNG, SEED );
    vslSkipAheadStream( stream[k], nskip*k );
}
2. Leap-frogging, which allows you to split the original sequence into k disjoint subsequences, where k is the number of independent streams. The first stream would generate the random numbers x(1), x(k+1), x(2k+1),..., the second stream would generate the numbers x(2), x(k+2), x(2k+2),..., etc. The service function vslLeapfrogStream( stream, k, nstreams ) will help you to parallelize the application by means of this approach.
VSLStreamStatePtr stream[nstreams];
int k;
for ( k=0; k<nstreams; k++ ) {
    vslNewStream( &stream[k], BRNG, SEED );
    vslLeapfrogStream( stream[k], k, nstreams );
}
3. A generator family which supports parallel Monte Carlo simulations by design: the Wichmann-Hill Basic Random Number Generator (BRNG) gives you up to 273 independent random streams, and the Mersenne Twister MT2203 family up to 6024 independent random streams.
#define nstreams 6024
VSLStreamStatePtr stream[nstreams];
int k;
for ( k=0; k< nstreams; k++ )
{
vslNewStream( &stream[k], VSL_BRNG_MT2203+k, SEED );
}
All those techniques will help you to get streams of independent variates. As soon as you create nstreams random streams you can safely call the MKL Gaussian or any other RNG with the k-th stream, stream[k], from the k-th thread.
1. To parallelize one simulation in your code you might use the Block-Splitting or Leapfrog methodologies with MCG31, MCG59 or another MKL BRNG which supports them. Before the simulations, please check that the BRNG period addresses the needs of your application in random numbers.
2. Please avoid using the same VSL RNG in each thread initialized with different seeds. It can result in biased output of the Monte Carlo simulations. Instead, please use one of the methodologies described above.
3. To parallelize multiple simulations you can use one of the methodologies above. Considerations related to the BRNG period apply to this scenario too. Also, please keep in mind the following aspects:
- to apply the SkipAhead technique you will need to compute the number of variates per simulation, the parameter nskip. If this number is not available (or difficult to compute) in advance for some reason, you might want to use the Leapfrog technique
- the Leapfrog technique, however, is recommended when the number of independent streams k is fairly small
- use of the WH or MT2203 BRNG family could probably be the suitable option for your needs (a brief sketch follows below)
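For instance, a minimal sketch of the MT2203-family approach with one OpenMP thread per stream could look like the following; the thread count, variate count and Gaussian parameters here are illustrative assumptions, not values from this thread.
#include <omp.h>
#include "mkl_vsl.h"
#define SEED      1
#define NTHREADS  4      /* illustrative thread count */
#define NVARIATES 5000   /* illustrative number of variates per thread */
int main(void)
{
    VSLStreamStatePtr stream[NTHREADS];
    int k;
    /* one member of the MT2203 family per thread: VSL_BRNG_MT2203 + k */
    for (k = 0; k < NTHREADS; k++)
        vslNewStream(&stream[k], VSL_BRNG_MT2203 + k, SEED);
    #pragma omp parallel num_threads(NTHREADS)
    {
        int tid = omp_get_thread_num();
        double r[NVARIATES];
        /* each thread draws its Gaussian variates from its own stream */
        vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream[tid], NVARIATES, r, 0.0, 0.3);
        /* ... simulate one path per thread using r ... */
    }
    for (k = 0; k < NTHREADS; k++)
        vslDeleteStream(&stream[k]);
    return 0;
}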
You can find the details behind each methodology in the VSL Notes, http://software.intel.com/sites/products/documentation/hpc/mkl/vsl/vslnotes.pdf, section 7.3.5, and in the Intel MKL Manual, Chapter Statistical Functions, Section Random Number Generators, Service Routines, LeapfrogStream and SkipAheadStream, and evaluate which of the methodologies fits your environment best.
Also, it might be helpful to have a look at the KB article that considers aspects of seed choice for MKL BRNG initialization, http://software.intel.com/en-us/articles/initializing-Intel-MKL-BRNGs
Please let me know if this addresses your questions. Feel free to ask more questions on parallelization of Monte Carlo simulations with MKL RNGs; we will be happy to help.
Best,
Andrey
Andrey,
Thanks a lot for your feedback and please forgive me for this delayed response. Indeed your explanations clarified things a lot. I will now play around with the various generators using Intel TBB and come back to you with further questions :)
Anwar
#include <cstdlib>
#include <iostream>
#include <tbb/tbb.h>                // blocked_range, parallel_reduce, task_scheduler_init
#include <trng/yarn2.hpp>           // TRNG random number engine
#include <trng/uniform01_dist.hpp>  // uniform [0,1) distribution
class parallel_pi {
trng::uniform01_dist<> u; // random number distribution
long in;
const trng::yarn2 &r;
public:
void operator()(const tbb::blocked_range<long> &range) {
trng::yarn2 r_local(r); // local copy of random number engine
r_local.jump(2*range.begin()); // jump ahead
for (long i=range.begin(); i!=range.end(); ++i) {
double x=u(r_local), y=u(r_local); // choose random x and y coordinates
if (x*x+y*y<=1.0) // is the point inside the circle?
++in; // increase thread local counter
} //for
}//operator
// join threads and counters
void join(const parallel_pi &other) {
in+=other.in;
}
long in_circle() const {
return in;
}
parallel_pi(const trng::yarn2 &r) : r(r), in(0) {
}
parallel_pi(const parallel_pi &other, tbb::split) : r(other.r), in(0) {
}
};
int main(int argc, char *argv[]) {
tbb::task_scheduler_init init;
const long samples=1000000l; // total number of points in square
trng::yarn2 r; // random number engine
parallel_pi pi(r); // functor for parallel reduce
// parallel MC computation of pi
tbb::parallel_reduce(tbb::blocked_range<long>(0, samples), pi);
// print result
std::cout << "pi = " << 4.0*pi.in_circle()/samples << std::endl;
return EXIT_SUCCESS;
}
Hi Anwar,
I have a couple of quick suggestions you might want to have a look at.
1. You have the number of samples whose processing you want to split across threads, samples = 1M. I assume that you can check the maximal number of cores of your CPU, say ncore. So, you can assign the processing of a range of size k = samples/ncore to one core.
For the TBB blocked range you can specify the grainsize parameter: your blocked_range will not be split into two sub-ranges if the size of the range is less than grainsize (please see Section 4.2.1 of the Intel TBB Manual for more details).
If you set the grain size to k you will avoid undersubscription (that is, the number of logical threads is not enough to keep the physical threads busy) and oversubscription (the number of logical threads exceeds the number of physical threads). In the next step you would associate a VSL random stream with a thread in the operator()(const tbb::blocked_range<long> &range) method:
a) Determine the number of cores on the CPU, ncore
b) Create ncore MKL random streams by applying one of the VSL parallelization techniques
c) Construct the object of type parallel_pi by providing the number of cores, the number of samples, and the array of random streams
d) Compute the index idx of the block being processed and obtain random numbers from the stream indexed idx. Use nsamples/ncore as the grainsize in the blocked_range. A sketch of this scheme follows right after this list.
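Here is a minimal sketch of that scheme, assuming TBB parallel_reduce over block indices and Leapfrog/MCG31 streams; the class name mc_pi and the sample counts are illustrative assumptions, not code from this thread.
#include <cstdio>
#include <vector>
#include "mkl_vsl.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/blocked_range.h"
#include "tbb/parallel_reduce.h"
class mc_pi {
    VSLStreamStatePtr *streams_;  // one independent stream per block
    long per_block_;              // number of samples processed by one block
public:
    long in;                      // points that fell inside the quarter circle
    mc_pi(VSLStreamStatePtr *s, long per_block)
        : streams_(s), per_block_(per_block), in(0) {}
    mc_pi(const mc_pi &other, tbb::split)
        : streams_(other.streams_), per_block_(other.per_block_), in(0) {}
    void operator()(const tbb::blocked_range<int> &blocks) {
        for (int idx = blocks.begin(); idx != blocks.end(); ++idx) {
            // block idx draws all of its variates from stream idx only
            double xy[2];
            for (long i = 0; i < per_block_; ++i) {
                vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, streams_[idx], 2, xy, 0.0, 1.0);
                if (xy[0]*xy[0] + xy[1]*xy[1] <= 1.0) ++in;
            }
        }
    }
    void join(const mc_pi &other) { in += other.in; }
};
int main() {
    const long samples = 1000000;
    const int ncore = tbb::task_scheduler_init::default_num_threads();
    const long per_block = samples / ncore;
    std::vector<VSLStreamStatePtr> streams(ncore);
    for (int k = 0; k < ncore; ++k) {
        // Leapfrog: stream k yields the k-th, (k+ncore)-th, ... numbers of MCG31
        vslNewStream(&streams[k], VSL_BRNG_MCG31, 1);
        vslLeapfrogStream(streams[k], k, ncore);
    }
    tbb::task_scheduler_init init;
    mc_pi pi(&streams[0], per_block);
    // grainsize 1: each block index is processed by exactly one task at a time
    tbb::parallel_reduce(tbb::blocked_range<int>(0, ncore, 1), pi);
    std::printf("pi = %f\n", 4.0 * pi.in / (double)(per_block * ncore));
    for (int k = 0; k < ncore; ++k) vslDeleteStream(&streams[k]);
    return 0;
}
Generating two uniforms per call keeps the sketch short; in practice you would request the variates in larger blocks, as the code further down in this thread does.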
2. Also, please have a look at the Chapter "Catalog of Recommended Task Patterns". The methodology described there would allow creating and spawning k tasks; each task would process the random numbers obtained from a specific VSL random stream created with one of the parallelization techniques.
Please, let me know how those approaches work for you.
Best,
Andrey
#include <iostream>
#include "mkl_vsl.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/task_group.h"
class PiCalculator {
public:
long numpoints;
long in;
VSLStreamStatePtr stream;
PiCalculator(long numpoints, long& in, VSLStreamStatePtr stream) :
numpoints(numpoints), in(in), stream(stream) {
}
void operator()(){
double variates[2*numpoints]; //we need 2 random variates per point
// crashes here EXC_BAD_ACCESS: Could not access memory
vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 2*numpoints, variates, 0.0, 1.0);
for(int i=0; i<numpoints; i++){
double x=variates[i];
double y=variates[numpoints+i];
if(x*x+y*y<=1.0) ++in;
}
};
};
int main() {
int errorcode;
const long samples = 1000000l;
int seed = 1;
int nstreams = 2;
VSLStreamStatePtr stream[nstreams];
for (int i=0; i<nstreams; i++)
{
errorcode = vslNewStream( &stream[i], VSL_BRNG_MCG31, seed );
if(errorcode){
return 1;
}
errorcode = vslLeapfrogStream( stream[i], i, nstreams );
if(errorcode){
return 1;
}
}
tbb::task_scheduler_init init;
tbb::task_group group;
long result1 = 0;
long result2 = 0;
group.run(PiCalculator(samples/2, result1, stream[0]));
group.run(PiCalculator(samples/2, result2, stream[1]));
group.wait();
std::cout << "pi = " << 4.0*(result1+result2)/samples << std::endl;
for(int i=0;i<nstreams;i++)
vslDeleteStream(&stream[i]);
return 0;
}
void operator()( )
{
const int block_size = 1024;
double variates[2*block_size];
int nblocks, ntail, i, j;
nblocks = numpoints / block_size;
ntail = numpoints - nblocks * block_size;
for( j = 0; j < nblocks; j++ )
{
vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 2*block_size, variates, 0.0, 1.0 );
for( i = 0; i < block_size; i++ )
{
double x = variates[2*i + 0];
double y = variates[2*i + 1];
if(x*x+y*y<=1.0) ++(in);
}
}
vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 2*ntail, variates, 0.0, 1.0 );
for( i = 0; i < ntail; i++ )
{
double x = variates[2*i + 0];
double y = variates[2*i + 1];
if(x*x+y*y<=1.0) ++(in);
}
}
public:
long& in; // note: the counter becomes a reference so the caller sees the result
#include <iostream>
#include "mkl_vsl.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/task_group.h"
#include "tbb/tick_count.h"
class PiCalculator {
public:
long numpoints;
long& in;
VSLStreamStatePtr stream;
PiCalculator(long numpoints, long& in, VSLStreamStatePtr stream) :
numpoints(numpoints), in(in), stream(stream) {
in = 0; // make sure to initialize to zero.
}
void operator()() {
const int block_size = 2048;
double variates[2 * block_size];
int nblocks, ntail, i, j;
nblocks = numpoints / block_size;
ntail = numpoints - nblocks * block_size;
for (j = 0; j < nblocks; j++) {
vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 2 * block_size,variates, 0.0, 1.0);
for (i = 0; i < block_size; i++) {
double x = variates[2 * i + 0];
double y = variates[2 * i + 1];
if (x * x + y * y <= 1.0)
++(in);
}
}
vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 2 * ntail, variates,0.0, 1.0);
for (i = 0; i < ntail; i++) {
double x = variates[2 * i + 0];
double y = variates[2 * i + 1];
if (x * x + y * y <= 1.0)
++(in);
}
};
};
int main() {
int errorcode;
const long samples = 10000000000l;
int seed = 1;
int tasks = 50;
VSLStreamStatePtr stream[tasks];
for (int i = 0; i < tasks; i++) {
errorcode = vslNewStream(&stream[i], VSL_BRNG_MCG59, seed);
if (errorcode) {
return 1;
}
errorcode = vslLeapfrogStream(stream[i], i, tasks);
if (errorcode) {
return 1;
}
}
tbb::task_scheduler_init init;
tbb::task_group group;
long results[tasks];
long samplesPerTasks = samples/tasks;
tbb::tick_count t0 = tbb::tick_count::now();
for (int i = 0; i < tasks; i++) {
group.run(PiCalculator(samplesPerTasks, results[i], stream[i]));
}
group.wait();
tbb::tick_count t1 = tbb::tick_count::now();
long result = 0;
for(int i=0;i<tasks;i++){
result += results[i];
}
std::cout << "pi = " << 4.0 * result / samples << std::endl;
std::cout << "time : " << (t1-t0).seconds();
for (int i = 0; i < tasks; i++)
vslDeleteStream(&stream[i]);
return 0;
}
#include <stdio.h>
#include "omp.h"
#include "mkl_vsl.h"
int main() {
    int seed = 1;
    int tid;
    int numthreads = omp_get_num_procs();
    VSLStreamStatePtr stream[numthreads];
    int variatesPerThread = 5000;
    double variates[variatesPerThread];
    double a = 1.0;
    double sigma = 0.2;
    int result = 0;
    for (int i = 0; i < numthreads; i++) {
        int errorCode = 0;
        errorCode = vslNewStream(&stream[i], VSL_BRNG_MCG59, seed);
        if (errorCode) {
            printf("vslNewStream failed\n");
            return 1;
        }
        errorCode = vslLeapfrogStream(stream[i], i, numthreads);
        if (errorCode) {
            printf("vslLeapfrogStream failed\n");
            return 1;
        }
    }
    #pragma omp parallel private(tid, variates) reduction(+:result)
    {
        tid = omp_get_thread_num();
        // generate the random samples and do something interesting.
        vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream[tid], variatesPerThread, variates, a, sigma);
        // reduce the result
        result = result + tid;
    }
    printf("result is: %d", result);
    for (int i = 0; i < numthreads; i++)
        vslDeleteStream(&stream[i]);
}