
pgCC binder for MPI

Raaen__Håvard
Beginner

Hi

I'm trying to compile the binding libraries for the PGI C++ compiler. The readme states the following:

II.2.2. C++ Binding

To create the Intel(R) MPI Library C++ binding library using the
PGI* C++ compiler, do the following steps:

1. Make sure that the PGI* C++ compiler (pgCC) is in your PATH.

2. Go to the directory cxx

3. Run the command

   # make MPI_INST=<MPI_path> CXX=<C++_compiler> NAME=<name> \
     [ARCH=<arch>] [MIC=<mic option>]

   with

   <MPI_path>        - installation directory of the Intel(R) MPI Library
   <C++_compiler>    - compiler to be used
   <name>            - base name for the libraries and compiler script
   <arch>            - architecture to build for: `intel64` or `mic`; `intel64`
                       is used by default
   <mic option>      - compiler option to generate code for Intel(R) MIC
                       Architecture. Available only when ARCH=mic is set; `-mmic`
                       is used by default in that case

4. Copy the resulting <arch> directory to the Intel(R) MPI Library installation
   directory.
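
For reference, a minimal sketch of what those four steps can look like in practice (the unpack location of the binding sources and the installation path below are assumptions, not values taken from the readme):

   # Step 1: confirm the PGI C++ compiler is on the PATH
   which pgCC

   # Step 2: enter the cxx directory of the binding sources
   # (assumed unpack location; adjust to wherever the sources actually live)
   cd ~/impi_bindings/cxx

   # Step 3: build the C++ binding library with pgCC
   make MPI_INST=/opt/intel/impi CXX=pgCC NAME=pgCC

   # Step 4: copy the resulting intel64 directory into the MPI installation
   # (merge into the existing layout as appropriate)
   cp -r intel64 /opt/intel/impi/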

I am trying to compile with the following command:

make MPI_INST=/prog/Intel/studioxe2016/compilers_and_libraries_2016.3.210/linux/mpi CXX=pgCC NAME=pgCC

which gives this output:

pgCC  -c -fpic -I/prog/Intel/studioxe2016/compilers_and_libraries_2016.3.210/linux/mpi/intel64/include -Iinclude -Iinclude/intel64 -o initcxx.o initcxx.cxx
"include/intel64/mpichconf.h", line 1362: catastrophic error: cannot open
          source file "nopackage.h"
  #include "nopackage.h"
                        ^

1 catastrophic error detected in the compilation of "initcxx.cxx".
Compilation terminated.
make: *** [initcxx.o] Error 2

Does anybody have an idea where I can get this nopackage.h, or why this error occurs?
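
In case it helps, a quick diagnostic I can run (using the paths from my make command above) is to search for the missing header and to look at what mpichconf.h includes around the failing line:

   # Search the MPI installation and the binding sources for the missing header
   find /prog/Intel/studioxe2016/compilers_and_libraries_2016.3.210/linux/mpi -name 'nopackage.h'
   find . -name 'nopackage.h'

   # Show the include section of mpichconf.h around the failing line (1362)
   sed -n '1355,1370p' include/intel64/mpichconf.h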

I have successfully compiled the binding libraries for both pgcc and pgf90 without any issues.

James_T_Intel
Moderator

Do you see the same behavior on a current version of the Intel® MPI Library? This is from a version that is no longer supported.
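
For reference, a quick way to confirm which Intel MPI version a system is actually using (assuming the Intel MPI environment scripts have been sourced) is:

   # Print the Intel MPI Library version reported by the runtime
   mpirun --version

   # Show which installation the environment currently points at
   echo $I_MPI_ROOT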
