Intel® MPI Library

Are my IMB-MPI1 results correct? They seem to be too high.

andersartig
Beginner

Hello,

 

I'm testing on 4 nodes of our HPC system. They are connected with an OmniPath network at 100 Gbit/s.

When I start

mpirun -genv I_MPI_FABRICS shm:tmi -genv  -host qn12 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1 Sendrecv : -host qn11 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1

#-----------------------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000         0.88         0.88         0.88         0.00
            1         1000         1.17         1.17         1.17         1.70
            2         1000         1.17         1.17         1.17         3.41
            4         1000         1.15         1.15         1.15         6.97
            8         1000         1.14         1.14         1.14        14.04
           16         1000         1.13         1.13         1.13        28.20
           32         1000         1.14         1.14         1.14        56.04
           64         1000         1.26         1.26         1.26       101.26
          128         1000         1.41         1.41         1.41       181.81
          256         1000         1.35         1.35         1.35       379.82
          512         1000         1.40         1.40         1.40       733.56
         1024         1000         1.70         1.70         1.70      1201.89
         2048         1000         1.88         1.88         1.88      2183.23
         4096         1000         2.67         2.67         2.67      3072.77
         8192         1000         4.16         4.16         4.16      3936.50
        16384         1000         6.24         6.24         6.24      5249.57
        32768         1000        10.00        10.00        10.00      6550.95
        65536          640         5.71         5.71         5.71     22951.32
       131072          320         8.74         8.74         8.74     29979.87
       262144          160        15.23        15.23        15.23     34420.24
       524288           80        33.72        33.72        33.72     31092.59
      1048576           40        88.25        88.25        88.25     23763.59
      2097152           20       217.40       217.40       217.40     19292.85
      4194304           10       419.69       419.81       419.75     19982.04

How can I get 34420 Mbytes/sec over a 100 Gbit/s link?
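If I understand the benchmark correctly, the Mbytes/sec column for Sendrecv counts the bytes sent plus the bytes received, so even fully bidirectional traffic over a 100 Gbit/s link should top out at roughly 2 x 100/8 = 25 GByte/s, i.e. about 25000 Mbytes/sec. A peak of ~34420 Mbytes/sec would be more than the link can carry.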

 

On the other nodes, the result is:

 mpirun -genv I_MPI_FABRICS shm:tmi -genv  -host on12 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1 Sendrecv : -host on11 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1

       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000         1.01         1.01         1.01         0.00
            1         1000         1.21         1.21         1.21         1.65
            2         1000         1.16         1.16         1.16         3.46
            4         1000         1.12         1.12         1.12         7.15
            8         1000         1.13         1.13         1.13        14.16
           16         1000         1.12         1.12         1.12        28.50
           32         1000         1.16         1.16         1.16        55.22
           64         1000         1.16         1.16         1.16       109.97
          128         1000         1.43         1.43         1.43       178.54
          256         1000         1.39         1.39         1.39       367.03
          512         1000         1.48         1.48         1.48       692.40
         1024         1000         1.59         1.59         1.59      1291.33
         2048         1000         1.96         1.96         1.96      2088.74
         4096         1000         2.85         2.85         2.85      2874.57
         8192         1000         4.04         4.04         4.04      4056.40
        16384         1000         6.37         6.37         6.37      5142.52
        32768         1000        10.07        10.07        10.07      6509.22
        65536          640         8.37         8.37         8.37     15665.35
       131072          320        15.00        15.00        15.00     17475.97
       262144          160        30.68        30.68        30.68     17088.92
       524288           80        59.58        59.59        59.58     17596.59
      1048576           40       115.33       115.33       115.33     18184.08
      2097152           20       244.81       244.86       244.83     17129.68
      4194304           10       482.49       482.49       482.49     17386.16

That seems OK.

What is going wrong here?

 

(Sorry for my bad English.)

 

Best regards, 

Axel

 

 

Sergey_G_Intel1
Employee

Axel, please take into account that you are specifying only 1 rank with "-n 1".
You have only one MPI rank, which sends a message to itself.
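
To double-check where the ranks actually run, you could re-run with I_MPI_DEBUG set to 5, which should make Intel MPI print the rank-to-host mapping at startup. For example (same hosts and install path as in your post; the stray "-genv" without a variable name has been dropped, since mpirun may otherwise misparse the arguments that follow it):

mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:tmi -host qn12 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1 Sendrecv : -host qn11 -n 1 /opt/intel/impi/2018.3.222/bin64/IMB-MPI1 Sendrecv

If the startup output shows both ranks on the same node, the large numbers would come from shared memory rather than from the OmniPath network.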
