Intel® MPI Library

Comparing GROMACS performance when built with oneAPI vs. OpenMPI

Allen_1215
Beginner
Hi All,

I attempted to build GROMACS with Intel oneAPI (mpiicx, mpiicpx, MKL) and with OpenMPI (mpicc, mpicxx, FFTW3), and then compared their performance using some benchmarks.
However, after running the benchmarks, the results were almost identical, with no significant improvement.

Ideally, the GROMACS version built with oneAPI should outperform the one built with OpenMPI.

Note that I only used Intel CPUs, without any GPU support.
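
For reference, the two builds were configured roughly along these lines. The install prefixes come from the version outputs below; the setvars.sh path and the exact CMake options are a simplified sketch and may not match my actual commands exactly:

# oneAPI build: Intel MPI compiler wrappers + MKL for FFT/BLAS/LAPACK
source /mnt/mount_test/tmp/oneapi/setvars.sh
CC=mpiicx CXX=mpiicpx cmake .. -DGMX_MPI=ON -DGMX_FFT_LIBRARY=mkl \
    -DCMAKE_INSTALL_PREFIX=/mnt/mount_test/gromacs/exec/oneapi/mkl
make -j $(nproc) && make install

# OpenMPI build: GCC wrappers + FFTW3 for FFT
CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=ON -DGMX_FFT_LIBRARY=fftw3 \
    -DCMAKE_INSTALL_PREFIX=/mnt/mount_test/gromacs/exec/openmpi/fftw
make -j $(nproc) && make install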


CPU info:
[root@ks-9268458 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Silver 4410Y
BIOS Model name: Intel(R) Xeon(R) Silver 4410Y
Stepping: 7
CPU MHz: 2778.955
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 30720K
NUMA node0 CPU(s): 0-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid cldemote pconfig flush_l1d arch_capabilities
[root@ks-9268458 ~]# echo $(nproc)
48


gmx version built with oneAPI:

/mnt/mount_test/gromacs/exec/oneapi/mkl/bin/gmx_mpi --version
MPI startup(): FI_PSM3_UUID was not generated, please set it to avoid possible resources ownership conflicts between MPI processes
GROMACS - gmx_mpi, 2023.3 (-:

Executable: /mnt/mount_test/gromacs/exec/oneapi/mkl/bin/gmx_mpi
Data prefix: /mnt/mount_test/gromacs/exec/oneapi/mkl
Working dir: /root
Command line:
gmx_mpi --version

GROMACS version: 2023.3
Precision: mixed
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: disabled
SIMD instructions: AVX_512
CPU FFT library: Intel MKL version 2024.0.2 Build 20240722
GPU FFT library: none
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /mnt/mount_test/tmp/oneapi/mpi/2021.13/bin/mpiicx IntelLLVM 2024.2.1
C compiler flags: -xCORE-AVX512 -qopt-zmm-usage=high -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /mnt/mount_test/tmp/oneapi/mpi/2021.13/bin/mpiicpx IntelLLVM 2024.2.1
C++ compiler flags: -xCORE-AVX512 -qopt-zmm-usage=high -Wno-reserved-identifier -Wno-missing-field-initializers -Wno-pass-failed -Weverything -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-source-uses-openmp -Wno-c++17-extensions -Wno-documentation-unknown-command -Wno-covered-switch-default -Wno-switch-enum -Wno-extra-semi-stmt -Wno-weak-vtables -Wno-shadow -Wno-padded -Wno-reserved-id-macro -Wno-double-promotion -Wno-exit-time-destructors -Wno-global-constructors -Wno-documentation -Wno-format-nonliteral -Wno-used-but-marked-unused -Wno-float-equal -Wno-conditional-uninitialized -Wno-conversion -Wno-disabled-macro-expansion -Wno-unused-macros -Wno-unsafe-buffer-usage -Wno-cast-function-type-strict -fiopenmp -O3 -DNDEBUG
BLAS library: Intel MKL version 2024.0.2 Build 20240722
LAPACK library: Intel MKL version 2024.0.2 Build 20240722


gmx version built with OpenMPI:

gmx_mpi --version

GROMACS version: 2023.3
Precision: mixed
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: disabled
SIMD instructions: AVX_512
CPU FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
GPU FFT library: none
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /mnt/mount_test/openmpi/bin/mpicc GNU 9.2.1
C compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /mnt/mount_test/openmpi/bin/mpicxx GNU 9.2.1
C++ compiler flags: -fexcess-precision=fast -funroll-all-loops -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -Wno-cast-function-type-strict -fopenmp -O3 -DNDEBUG
BLAS library: External - detected on the system
LAPACK library: External - detected on the system



oneAPI version:
2024.2.1

/mnt/mount_test/tmp/oneapi/mpi/2021.13/bin/mpiicx --version
Intel(R) oneAPI DPC++/C++ Compiler 2024.2.1 (2024.2.1.20240711)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mnt/mount_test/tmp/oneapi/compiler/2024.2/bin/compiler
Configuration file: /mnt/mount_test/tmp/oneapi/compiler/2024.2/bin/compiler/../icx.cfg



/mnt/mount_test/tmp/oneapi/mpi/2021.13/bin/mpiicpx --version
Intel(R) oneAPI DPC++/C++ Compiler 2024.2.1 (2024.2.1.20240711)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mnt/mount_test/tmp/oneapi/compiler/2024.2/bin/compiler
Configuration file: /mnt/mount_test/tmp/oneapi/compiler/2024.2/bin/compiler/../icpx.cfg



OpenMPI:
mpicc --version
gcc (GCC) 9.2.1 20191120 (Red Hat 9.2.1-2)
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


mpicxx --version
g++ (GCC) 9.2.1 20191120 (Red Hat 9.2.1-2)
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.



GROMACS version:
2023.3



I referenced this page to build and execute:




I used the benchmark which comes from:
0384_topol.tpr (which I renamed) was generated from "water-cut1.0_GMX50_bare/0384"
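
For context, a .tpr for this system is generated with gmx grompp roughly as follows; the input file names follow the water_GMX50_bare layout, and this is a sketch rather than my exact command:

cd water-cut1.0_GMX50_bare/0384
gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top -o topol.tpr    # build the run input
mv topol.tpr /mnt/mount_test/gromacs/benchmark/0384_topol.tpr      # renamed as mentioned above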


Command and output:

/mnt/mount_test/gromacs/exec/openmpi/fftw/bin/gmx_mpi mdrun -pin on -v --noconfout -nsteps 10000 -resetstep 5000 -ntomp 16 -s /mnt/mount_test/gromacs/benchmark//0384_topol.tpr -dlb yes
========
Benchmark: 0384_topol.tpr
Using 1 MPI process
Using 16 OpenMP threads

               Core t (s)   Wall t (s)        (%)
       Time:     1436.725       89.795     1600.0
                 (ns/day)    (hour/ns)
Performance:        9.624        2.494


=================

/mnt/mount_test/gromacs/exec/oneapi/mkl/bin/gmx_mpi mdrun -pin on -v --noconfout -nsteps 10000 -resetstep 5000 -ntomp 16 -s /mnt/mount_test/gromacs/benchmark//0384_topol.tpr -dlb yes
========
Benchmark: 0384_topol.tpr
Using 1 MPI process
Using 16 OpenMP threads

               Core t (s)   Wall t (s)        (%)
       Time:     1502.450       93.903     1600.0
                 (ns/day)    (hour/ns)
Performance:        9.203        2.608


Is there a setting or something else I'm overlooking that's preventing the performance from improving significantly?


Many thanks,

Best regards
TobiasK
Moderator

@Allen_1215 

GROMACS is a special case: almost all of the hot kernels are hand-coded, so there is not much left for the compiler to optimize.

I would, however, recommend not using OpenMP threads but just MPI.
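
For example, on your 2 x 12-core node an MPI-only run would look roughly like this; the rank count and pinning options are illustrative, so please adapt them to your setup:

# one MPI rank per physical core, a single OpenMP thread per rank
mpirun -np 24 gmx_mpi mdrun -ntomp 1 -pin on -v -noconfout \
    -nsteps 10000 -resetstep 5000 -s /mnt/mount_test/gromacs/benchmark/0384_topol.tpr -dlb yes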

Allen_1215
Beginner

Hi @TobiasK 

Thank you for your response. While waiting, I continued studying and analyzing related issues. I also found another benchmark set here, which provides many free benchmarks. The benchBFC and benchBFI tests gave me the idea that the water_GMX50_bare benchmark might be too lightweight to effectively assess the GROMACS builds with Intel MPI and OpenMPI. To address this, I adjusted the .mdp file and regenerated the benchmark data.

In particular, I modified pme.mdp as follows:

nstcalcenergy = 100 ; !autogen => nstcalcenergy = 1 ; !autogen

 

I regenerated the benchmark files and named them *TI.tpr.
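
In outline, the edit and regeneration were along these lines (a simplified sketch; the paths and the sed pattern are illustrative):

# change nstcalcenergy from 100 to 1 in pme.mdp, then rebuild the run input
sed -i 's/nstcalcenergy *= *100/nstcalcenergy = 1/' pme.mdp
gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top -o 0384_TI.tpr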

 

Based on this adjustment, I noticed a more pronounced performance difference. I think that more intensive or complex calculations in GROMACS benchmarks may highlight the performance advantages of using oneAPI. (I'm not sure if this is correct; anyone with more knowledge about GROMACS is welcome to discuss.)

 

/mnt/mount_test/gromacs/exec/openmpi/fftw/bin/gmx_mpi mdrun -pin on -v --noconfout -ntomp 16 -s /mnt/mount_test/gromacs/benchmark/waterTI/0384_TI.tpr -dlb yes -nsteps 3000 -resetstep 1000
========
Benchmark: 0384_TI.tpr
Using 1 MPI process
Using 16 OpenMP threads

               Core t (s)   Wall t (s)        (%)
       Time:      846.390       52.899     1600.0
                 (ns/day)    (hour/ns)
Performance:        6.536        3.672

/mnt/mount_test/gromacs/exec/oneapi/mkl/bin/gmx_mpi mdrun -pin on -v --noconfout -ntomp 16 -s /mnt/mount_test/gromacs/benchmark/waterTI/0384_TI.tpr -dlb yes -nsteps 3000 -resetstep 1000
========
Benchmark: 0384_TI.tpr
Using 1 MPI process
Using 16 OpenMP threads

               Core t (s)   Wall t (s)        (%)
       Time:      743.222       46.451     1600.0
                 (ns/day)    (hour/ns)
Performance:        7.444        3.224


Other benchmarks show the same phenomenon.

 

I would, however, recommend not using OpenMP threads but just MPI.

=> Could you explain the reason? With OpenMP threads enabled, the performance is better.

 

 
Many thanks,
Best regards

 

TobiasK
Moderator

@Allen_1215 

I would, however, recommend not using OpenMP threads but just MPI.

=> Could you explain the reason? With OpenMP threads enabled, the performance is better.

I have not run GROMACS benchmarks recently, but last time I checked, MPI-only parallelization was still superior to OpenMP-only parallelization. In your case, running only with OpenMP, you don't even need Intel MPI or OpenMPI. For single-node runs, GROMACS also offers an internal (thread-)MPI implementation. Please ask the GROMACS developers for guidance on how to run the benchmarks.
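
For example, a build configured without an external MPI library (-DGMX_MPI=OFF, which produces a gmx binary instead of gmx_mpi) can use the built-in thread-MPI like this (the counts are illustrative):

# thread-MPI ranks instead of external MPI processes; no mpirun needed
gmx mdrun -ntmpi 24 -ntomp 1 -pin on -s 0384_topol.tpr -dlb yes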

Allen_1215
Beginner

Hi @TobiasK 

 

Thank you for your input, and I understand your point. You are correct for the current case on a single node without MPI processes. However, we aim to deploy GROMACS on multiple nodes with MPI in the future; for now, we are experimenting on a single node to evaluate performance improvements. We have confirmed that using DPC++ to build GROMACS is efficient.

 

Thank you for your help, and have a great day.

Best regards
