Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

MM5 Configure.user settings for Compiler v11.1

nickdaq
Beginner

I got rid of the Portland Group compiler and am now trying to compile MM5 with Intel 11.1. Are there any suggested settings for the configure.user file? So far I've had nothing but failures.

Our OS is CentOS 5.3 (Rocks).

Below is what I have tried (based on suggestions found by googling):

RUNTIME_SYSTEM = "linux"
MPP_TARGET=$(RUNTIME_SYSTEM)
### edit the following definition for your system
LINUX_MPIHOME = /opt/intel/impi/3.2.2 # change 'bin' to 'bin64' below
MFC = $(LINUX_MPIHOME)/bin64/mpif77
MCC = $(LINUX_MPIHOME)/bin64/mpicc
MLD = $(LINUX_MPIHOME)/bin64/mpif77
FCFLAGS = -I${LINUX_MPIHOME}/include -I. \
-convert big_endian -O2 -ip -fno-alias -safe-cray-ptr \
-mp1 -no-ftz -openmp -DDEC_ALPHA -static
LDOPTIONS = $(FCFLAGS)
LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/lib64 # for intel compilers
CPP = /lib/cpp -C -P -traditional
CFLAGS = -O -DMPI -DDEC_ALPHA -static
CPPFLAGS = -w -I${LINUX_MPIHOME}/include -DDEC_ALPHA -static

----------------------------

Thanks!

Gergana_S_Intel
Employee

Hi Nick,

I'm moving this to the Intel Fortran Compilers forums, where the Fortran experts can help you. Could you also provide the error you're seeing? That would be helpful.

I would also suggest using mpiifort instead of mpif77 and mpiicc instead of mpicc. The scripts in your Makefile (mpif77 and mpicc) use the GNU compilers by default.
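
In your configure.user that change would look something like this (keeping the bin64 directory you already point at):

MFC = $(LINUX_MPIHOME)/bin64/mpiifort
MCC = $(LINUX_MPIHOME)/bin64/mpiicc
MLD = $(LINUX_MPIHOME)/bin64/mpiifort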

Regards,
~Gergana

TimP
Honored Contributor III
As Gergana hints, any experience with MM5 at Intel probably involves Intel MPI. Since you don't mention your MPI version, I suppose at best it is an outdated Open MPI built with an outdated gfortran. If you want to use ifort with any MPI other than Intel MPI, you must rebuild the MPI, preferably using current procedures, such as those you will find on the Open MPI support site and in the Intel documentation.
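
If you do rebuild an open source MPI with the Intel compilers, a minimal Open MPI build looks roughly like this (the install prefix is only an example; see the Open MPI documentation for the current options):

./configure --prefix=/usr/local/openmpi-intel CC=icc CXX=icpc F77=ifort FC=ifort
make all install
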
nickdaq
Beginner

I'm trying to build with impi version 3.2.2.

Separately, I have successfully compiled MM5 using MPICH2 (compiled with Intel), but MM5 hangs when I try to run it. I also had the same problem with WRF hanging under MPICH2, but WRF runs perfectly fine with impi.

TimP
Honored Contributor III

Sorry, I do see your Intel MPI 3.2.2 setting.

I always source the mpivars and ifortvars scripts in my Makefile and cluster job submission, rather than trying to find all the individual path settings. What problems do you encounter?
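
Something like this, before invoking make or at the top of the cluster job script (the compiler build-number directory below is only an example; adjust it to your installation):

# Intel MPI 3.2.2 environment (bin64 holds the 64-bit tools)
source /opt/intel/impi/3.2.2/bin64/mpivars.sh
# Intel Fortran 11.1 environment; the build-number directory varies by install
source /opt/intel/Compiler/11.1/059/bin/ifortvars.sh intel64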

nickdaq
Beginner

Tim,

A sample error, right away in the build:

/opt/intel/impi/3.2.2/bin/mpiicc -c -I/opt/intel/impi/3.2.2/include -DMPI -DRSL_SYNCIO -Dlinux -DSWAPBYTES -O -DIMAX_MAKE= -DJMAX_MAKE= -DMAXDOM_MAKE=6 -DMAXPROC_MAKE=256 -DHOST_NODE=0 -DMON_LOW=1 -DALLOW_RSL_168PT=1 set_padarea.c

/opt/intel/impi/3.2.2/include/mpi.h(35): catastrophic error: #error directive: A wrong version of mpi.h file was included. Check include path.

-----------------

It looks like it may want the include64 directory instead, but when I add "-I${LINUX_MPIHOME}/include64" to the configure.user file, it has no effect; somehow it still looks in the include directory and not include64.

I will try sourcing the *vars.sh files within the Makefile, to see if that has an effect. Thanks.

TimP
Honored Contributor III

Yes, that looks as if an mpi.h may have been found in the CentOS installation. Sourcing mpivars.sh should correct that, when using mpiicc et al.
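
Another quick check is the wrapper's -show option, which prints the underlying icc command line, including the -I paths the wrapper adds, without compiling anything (the Intel MPI wrappers should accept -show, like the MPICH wrappers they are derived from):

/opt/intel/impi/3.2.2/bin64/mpiicc -show

If the Intel MPI include directory doesn't appear there, or your Makefile puts a conflicting -I ahead of it, that's where the wrong mpi.h is coming from.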

nickdaq
Beginner

I've been sourcing mpivars.sh and I still get the error.

TimP
Honored Contributor III

One way to check which mpi.h is coming in is to save the pre-processed source code with

ifort -E

or

ifort -keep .......

It should show a complete source path and expansion.
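
For the C file in the failing command above, something along these lines should do it (the grep is just a convenience; the line markers in the preprocessed output show the full path of every header that was pulled in):

# Preprocess only, using the same include flag the failing build used
/opt/intel/impi/3.2.2/bin64/mpiicc -E -I/opt/intel/impi/3.2.2/include set_padarea.c | grep 'mpi\.h'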

I've been bitten by this kind of thing. Whichever MPI came with CentOS would use conflicting coding for MPI data types, so the Intel MPI wrapper is doing us a favor by pointing out the error. The Intel MPI wrapper would set up the correct include path, but your Makefile, for example, might override it and cause this problem.

If you never use the CentOS MPI, you might find and remove it by some tactic like

rpm -qa | grep -i mpi

then run (as root)

rpm -e (list of rpms to remove)

It's really not good to have any MPI components present on default paths. When you build an open source MPI yourself, you can easily avoid the problem by configuring it to install in a specific place, like

/usr/local/openmpi1_4/

The Intel Cluster Ready checker (clck) enforces separation of MPI installation paths, among other things. That's definitely back in the territory of the cluster/HPC forum where you started.

nickdaq
Beginner

Despite specifying "include64" in configure.user, make seems to want to use "include". When I manually compile the following file, it succeeds.

/opt/intel/impi/3.2.2/bin/mpiicc -c -I/opt/intel/impi/3.2.2/include64 -DMPI -DRSL_SYNCIO -Dlinux -DSWAPBYTES -O -DIMAX_MAKE= -DJMAX_MAKE= -DMAXDOM_MAKE=6 -DMAXPROC_MAKE=256 -DHOST_NODE=0 -DMON_LOW=1 -DALLOW_RSL_168PT=1 set_padarea.c

I think part of the problem is that MM5 mostly predates the 64-bit era, so nobody thought to pass the 64-bit directories through the scripts/makefiles. I'll try to tunnel through the layers of scripts to get the 64-bit directories passed along correctly.

nickdaq
Beginner

I can now get a successful compile. The trick was to specify bin64, include64, and lib64 in the configure.user file, and add "include64" to the 7th line of /MPP/RSL/RSL/makefile.linux.
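
The relevant configure.user lines ended up roughly like this (a sketch only; mpiifort/mpiicc follow Gergana's earlier suggestion, and the rest mirrors my first post with the 64-bit directories), plus the include64 addition to /MPP/RSL/RSL/makefile.linux mentioned above:

LINUX_MPIHOME = /opt/intel/impi/3.2.2
MFC = $(LINUX_MPIHOME)/bin64/mpiifort
MCC = $(LINUX_MPIHOME)/bin64/mpiicc
MLD = $(LINUX_MPIHOME)/bin64/mpiifort
FCFLAGS = -I${LINUX_MPIHOME}/include64 -I. \
-convert big_endian -O2 -ip -fno-alias -safe-cray-ptr \
-mp1 -no-ftz -openmp -DDEC_ALPHA -static
LDOPTIONS = $(FCFLAGS)
LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/lib64
CFLAGS = -O -DMPI -DDEC_ALPHA -static
CPPFLAGS = -w -I${LINUX_MPIHOME}/include64 -DDEC_ALPHA -static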

However, now I have the same problem of mm5.mpp hanging when I try to run it (the same as when I tried MPICH2 with the Intel compilers). Sigh.

nickdaq
Beginner
Looks like I finally got something running. The trick was to add the -g debug option to the C flags and CPP flags. This kills some optimizations, but it does run now.
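
In configure.user terms that is roughly the following (same flags as my first post, with -g added):

CFLAGS = -O -g -DMPI -DDEC_ALPHA -static
CPPFLAGS = -w -g -I${LINUX_MPIHOME}/include64 -DDEC_ALPHA -static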