Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

slowdown when traversing an STL container in parallel (OpenMP)

holomorph
Beginner
394 Views
Hello,

I am trying to parallelize an existing serial program with OpenMP. While doing so, I encountered a severe problem when traversing STL containers: traversing an STL container with a (bidirectional) iterator seems to hurt parallel performance badly. With random-access iterators [..], the slowdown does not occur. But I cannot see the reason for this behaviour...?

I tested the following two loops on a quad-core Intel machine with different compilers (including an Intel compiler). The first loop shows a speedup of nearly 4, as expected, whereas the second loop executes more than 10 times slower(!) than the serial code.

1.) this works fine, nice speedup!
vector< double > vec;
for (int k=0;k<100000;k++)
vec.push_back(0.0);

#pragma omp parallel for
for (int i=0; i<4; i++)
{
for (int index=0; index<(int)vec.size(); index++)
{
//do nothing
}
}


2.) this is very slow, it even lags behind the serial code!

vector< double > vec;
for (int k=0;k<100000;k++)
vec.push_back(0.0);

#pragma omp parallel for
for (int i=0; i<4; i++)
{
vector< double >::const_iterator iter;
for (iter=vec.begin();iter!=vec.end();iter++)
{
//do nothing
}
}

(So the vector is shared, while the index variable and the iterator are private to each thread.)
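For comparison, OpenMP 3.0 and later also accept random-access iterators directly in the canonical loop form, so a single traversal can be split across threads instead of each thread walking the whole container. A minimal sketch, assuming an OpenMP-3.0-capable compiler (not code from the tests above):

#include <vector>

void traverse(const std::vector<double>& vec)
{
    // vec is shared; iter and last are private to each thread.
    // Each thread receives a contiguous chunk of the iterator range.
    const std::vector<double>::const_iterator last = vec.end();
    #pragma omp parallel for
    for (std::vector<double>::const_iterator iter = vec.begin(); iter < last; ++iter)
    {
        // work on *iter would go here
    }
}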

I cannot understand why traversing an STL container can hurt performance in such a way. Are there some hidden conflicts in the implementation of the std::vector class? Does somebody know what may cause this problem?

Many thanks,
Christian
holomorph
Beginner
394 Views
Hi all,

...actually this problem was due to my wrong compiler settings. I measured the speedups with a "DEBUG" build, which seems to have caused these weird slowdowns. I am not sure exactly why debugging slows things down so much, but with "RELEASE" settings the speedup is as expected.
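For anyone who hits the same thing: in an unoptimized build the calls to begin(), end() and operator++ are not inlined, and some standard libraries additionally switch on checked "debug" iterators, so every element access turns into real function calls. A small code-level sketch (just an illustration, not from the original measurements) that trims part of that overhead by calling end() only once and using pre-increment; with optimization enabled both variants cost the same:

#include <vector>

void traverse_once(const std::vector<double>& vec)
{
    // end() is evaluated once instead of on every iteration, and
    // pre-increment avoids constructing a temporary iterator.
    const std::vector<double>::const_iterator last = vec.end();
    for (std::vector<double>::const_iterator iter = vec.begin(); iter != last; ++iter)
    {
        // work on *iter would go here
    }
}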

