seong_k_
Beginner

Slower parallel run in the modified example, "concurrent_hash_map"

I only changed srand(2) to srand(time(NULL)) in the main() function (line 186):

int main( int argc, char* argv[] ) {
    try {
        tbb::tick_count mainStartTime = tbb::tick_count::now();
//      srand(2);
        srand(time(NULL));

After this change, the parallel run is sometimes slower than the serial run.
It looks like certain randomly generated data sets cause this.
Can anyone explain why this behavior happens?
I'm a newbie with Intel TBB, so please let me know if I've misunderstood something.

seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                         
% make
g++ -O2 -DNDEBUG  -o count_strings count_strings.cpp -ltbb -lrt 
./count_strings 
Message from planet 'Ir': Eass, smer is ach brint pan!
Analyzing whole text...
serial run   total = 1000000  unique = 36332  time = 0.197769
parallel run total = 1000000  unique = 36332  time = 0.110404

elapsed time : 0.760568 seconds
seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                            
% ./count_strings
Message from planet 'Yart': Cheng, ster cild tinch att esm!
Analyzing whole text...
serial run   total = 1000000  unique = 36663  time = 0.196841
parallel run total = 1000000  unique = 36663  time = 0.1076

elapsed time : 0.735285 seconds
seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                            
% ./count_strings
Message from planet 'Aft': Ped, ash swir an wup bran!
Analyzing whole text...
serial run   total = 1000000  unique = 36378  time = 0.195981
parallel run total = 1000000  unique = 36378  time = 0.253899

elapsed time : 0.936799 seconds
seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                            
% ./count_strings
Message from planet 'En': Red, fund flal bet hult cas!
Analyzing whole text...
serial run   total = 1000000  unique = 36272  time = 0.196101
parallel run total = 1000000  unique = 36272  time = 0.246536

elapsed time : 0.976936 seconds
seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                            
% ./count_strings
Message from planet 'El': As, jon ess en tan brap!
Analyzing whole text...
serial run   total = 1000000  unique = 36414  time = 0.195219
parallel run total = 1000000  unique = 36414  time = 0.10454

elapsed time : 0.688636 seconds
seankim@extreme: /mnt/users/seankim/tbb44_20151115oss/examples/concurrent_hash_map/count_strings >                                                            
RafSchietekat
Black Belt

With such short runtimes you're going to see more variability from parallel execution, especially without a thread warm-up round before timing starts. Can you capture one of those random seeds and use it to consistently demonstrate slower performance from the parallel execution? What happens if you repeat the parallel test inside the same program execution, with the same seed value?
