Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Parallel CFD simulations with OpenFOAM


Hello! I am trying to run OpenFOAM-6 simulations on the Intel DevCloud cluster. 


I compiled OpenFOAM-6 as described here and loaded the MPI module with module load mpi/2021.7.1; the installation was successful. After the compilation, the serial (one core) simulation works fine. However, when I run parallel simulations with mpirun, the following errors appear.

$ qsub -l nodes=1:ppn=2 -I

$ module load mpi/2021.7.1

$ which mpirun

$ which icoFoam

$ mpirun -np 2 icoFoam -parallel
WARNING: There is at least non-excluded one OpenFabrics device found,
but there are no active ports detected (or Open MPI was unable to use
them). This is most certainly not what you wanted. Check your
cables, subnet manager configuration, etc. The openib BTL will be
ignored for this job.

Local host: s001-n059
[s001-n059:1507980] *** Process received signal ***
[s001-n059:1507980] Signal: Segmentation fault (11)
[s001-n059:1507980] Signal code: Address not mapped (1)
[s001-n059:1507980] Failing at address: 0x440000e8
[s001-n059:1507980] [ 0] /lib/x86_64-linux-gnu/[0x7f6a8522a090]
[s001-n059:1507980] [ 1] /lib/x86_64-linux-gnu/[0x7f6a84ebbc2b]
[s001-n059:1507980] [ 2] /home/u183499/OpenFOAM/OpenFOAM-6/platforms/linux64GccDPInt32Opt/lib/mpi-system/[0x7f6a851de25d]
[s001-n059:1507980] [ 3] /home/u183499/OpenFOAM/OpenFOAM-6/platforms/linux64GccDPInt32Opt/lib/[0x7f6a85aa0f50]
[s001-n059:1507980] [ 4] icoFoam(+0x25af8)[0x556268e63af8]
[s001-n059:1507980] [ 5] /lib/x86_64-linux-gnu/[0x7f6a8520b083]
[s001-n059:1507980] [ 6] icoFoam(+0x293ae)[0x556268e673ae]
[s001-n059:1507980] *** End of error message ***
[s001-n059:1507982] PMIX ERROR: NO-PERMISSIONS in file ../../../../../../src/mca/common/dstore/dstore_base.c at line 234
[s001-n059:1507982] PMIX ERROR: NO-PERMISSIONS in file ../../../../../../src/mca/common/dstore/dstore_base.c at line 243
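One thing that may be worth checking first (a hedged guess based on the mpi-system path in the backtrace and the PMIX errors): whether icoFoam was linked against the same MPI that mpirun launches. A quick diagnostic, assuming icoFoam and mpirun are on PATH:

```shell
# List the MPI-related shared libraries the solver binary resolves at run time.
ldd "$(which icoFoam)" | grep -i mpi
# Show which MPI implementation the launcher belongs to.
mpirun --version
```

If ldd shows an Open MPI libmpi while which mpirun points into the oneAPI tree (or vice versa), the binary and the launcher disagree, which commonly ends in exactly this kind of segfault and PMIX permission error.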


In order to debug, I tried a simple MPI hello-world program, and it worked well:

$ cat hello.cpp
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
// Initialize the MPI environment
MPI_Init(&argc, &argv);

// Get the number of processes
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

// Get the rank of the process
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

// Get the name of the processor
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);

// Print off a hello world message
printf("Hello world from processor %s, rank %d out of %d processors\n",
processor_name, world_rank, world_size);

// Finalize the MPI environment.
MPI_Finalize();
return 0;
}

$ mpicc hello.cpp -o hello.exe

$ mpirun -n 2 ./hello.exe
Hello world from processor s001-n059, rank 0 out of 2 processors
Hello world from processor s001-n059, rank 1 out of 2 processors


Do you have any idea how I could make my OpenFOAM installation work with mpirun?

Best regards,


3 Replies



Thanks for posting in Intel Communities, and thanks for the information.


From the link you provided for OpenFOAM-6, we see that it is supported/tested only up to Ubuntu 18.04, while the Ubuntu version on Intel DevCloud is 20.04.


Could you please try a supported/tested OpenFOAM version? If the issue persists, please let us know the complete steps you followed while compiling and using OpenFOAM with Intel MPI.


>>After the compilation, the serial (one core) simulation works fine.

Also, could you please provide the complete steps and the expected output when running on a single node?
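The requested details could be collected with something like the following (a sketch; the WM_* variables are set only after sourcing OpenFOAM's etc/bashrc and print empty otherwise):

```shell
# OS release on the compute node
head -n 2 /etc/os-release
# OpenFOAM build settings from the sourced environment
echo "WM_PROJECT_VERSION=$WM_PROJECT_VERSION"
echo "WM_COMPILER=$WM_COMPILER"
echo "WM_MPLIB=$WM_MPLIB"
```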


Thanks & Regards,




Good afternoon, and thank you for the suggestion!


I tried to install a newer OpenFOAM version, which was tested on Ubuntu 20.04. I followed the steps described there; all the commands are shown below:

qsub -I
mkdir OpenFOAM
cd OpenFOAM
wget -O - | tar xz
wget -O - | tar xz
mv OpenFOAM-10-version-10 OpenFOAM-10
mv ThirdParty-10-version-10 ThirdParty-10
cd OpenFOAM-10
source etc/bashrc

There was an error, so I changed the MPI selection on line 89 of etc/bashrc from SYSTEMOPENMPI to SYSTEMMPI, then loaded the MPI module and tried again:

module load mpi/2021.7.1
source etc/bashrc
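For reference, the edit to etc/bashrc amounts to a one-line substitution; this sketch demonstrates it on a throwaway copy (bashrc.demo is just an illustrative file name):

```shell
# Demonstrate the WM_MPLIB change on a scratch copy before touching the real file.
printf 'export WM_MPLIB=SYSTEMOPENMPI\n' > bashrc.demo
sed -i 's/^export WM_MPLIB=SYSTEMOPENMPI.*/export WM_MPLIB=SYSTEMMPI/' bashrc.demo
grep WM_MPLIB bashrc.demo
```

After editing the real etc/bashrc, it is safest to re-source it in a fresh shell so previously exported values do not linger.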


There was another error; based on its text, I added the following environment variables, sourced etc/bashrc again, and started the compilation:

export MPI_ROOT=/glob/development-tools/versions/oneapi/2023.0/oneapi/mpi/2021.8.0
export MPI_ARCH_INC="-isystem $MPI_ROOT/include"
export MPI_ARCH_LIBS="-L$MPI_ROOT/lib -lmpi"
source etc/bashrc
./Allwmake -j >log 2>&1 &
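Note that the exported MPI_ROOT points at 2021.8.0 while the loaded module was mpi/2021.7.1, so it may be worth confirming the two match. Before starting the long build, a quick sanity check that MPI_ROOT points at a real installation (a sketch, assuming the layout above):

```shell
# Verify the MPI_ROOT paths exist before launching Allwmake.
for f in "$MPI_ROOT/include/mpi.h" "$MPI_ROOT/lib"; do
  if [ -e "$f" ]; then echo "ok: $f"; else echo "MISSING: $f"; fi
done
```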


However, I got these errors in the log file:

error: conflicting types for ‘scotchyyerror’; have ‘void(const char *)’
parser_yy.c:70:25: note: previous declaration of ‘scotchyyerror’ with type ‘int(const char * const)’
70 | #define yyerror scotchyyerror
| ^~~~~~~~~~~~~
parser_yy.h:83:29: note: in expansion of macro ‘yyerror’
83 | static int yyerror (const char * const);
| ^~~~~~~
parser_yy.c:70:25: error: conflicting types for ‘scotchyyerror’; have ‘int(const char * const)’
70 | #define yyerror scotchyyerror
| ^~~~~~~~~~~~~
parser_yy.y:814:1: note: in expansion of macro ‘yyerror’
814 | yyerror (
| ^~~~~~~ note: previous declaration of ‘scotchyyerror’ with type ‘void(const char *)’
make[2]: *** [Makefile:50: parser_yy.o] Error 1


scotchDecomp.C:36:10: fatal error: scotch.h: No such file or directory
36 | #include "scotch.h"


This means that the scotch library from ThirdParty-10 did not compile successfully. Do you have any ideas about how to solve it? I changed line 65 in etc/bashrc to use the Intel compiler (because I thought it might be possible to compile scotch with icx); however, the scotch compilation during ./Allwmake still used gcc despite my changes.



Thanks for providing the information.

Could you please use the link below, where you can find the different libraries that need to be installed and the lines to add in the particular files to link the Intel compilers?

Could you please try and let us know if you are facing any issues?

Thanks & Regards,