<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>ILP64 model: using MPI_IN_PLACE in MPI_REDUCE seems to yield wrong results - Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938613#M2668</link>
    <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;The problem is not related to gfortran.&amp;nbsp; The libmpigf.so library is used both for gfortran and the Intel® MPI Library.&amp;nbsp; I am able to get the same behavior here.&amp;nbsp; I'll check with the developers, but I'm expecting that MPI_IN_PLACE may not be correctly handled in ILP64.&lt;/P&gt;
&lt;P&gt;As a note, the MPI Fortran module is not supported for ILP64 programming in the Intel® MPI Library.&amp;nbsp; Please see Section 3.5.6 of the Intel® MPI Library Reference Manual for more information on ILP64 support.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
    <pubDate>Mon, 22 Apr 2013 17:18:50 GMT</pubDate>
    <dc:creator>James_T_Intel</dc:creator>
    <dc:date>2013-04-22T17:18:50Z</dc:date>
    <item>
      <title>ILP64 model: using MPI_IN_PLACE in MPI_REDUCE seems to yield wrong results</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938610#M2665</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I am using the ifort compiler v. 13.0.1 20121010 together with Intel MPI v. 4.1.0.024 on an x86_64 Linux cluster. With 64-bit integers as the default (ILP64 model), my little Fortran program produces wrong results when I use MPI_IN_PLACE in MPI_REDUCE calls (both for integer and real(8)):&lt;/P&gt;

&lt;P&gt;My code is as follows:&lt;/P&gt;

&lt;P&gt;[fortran]&lt;/P&gt;

&lt;P&gt;program test&lt;BR /&gt;
	include "mpif.h"&lt;BR /&gt;
	! use mpi&lt;BR /&gt;
	integer :: iraboof&lt;BR /&gt;
	integer :: mytid, numnod, ierr&lt;BR /&gt;
	real(8) :: rraboof&lt;BR /&gt;
	&lt;BR /&gt;
	mytid = 0&lt;BR /&gt;
	! initialize MPI environment&lt;BR /&gt;
	call mpi_init(ierr)&lt;BR /&gt;
	call mpi_comm_rank(mpi_comm_world, mytid,ierr)&lt;BR /&gt;
	call mpi_comm_size(mpi_comm_world, numnod,ierr)&lt;BR /&gt;
	&lt;BR /&gt;
	iraboof = 1&lt;BR /&gt;
	if (mytid == 0) then&lt;BR /&gt;
	call mpi_reduce(MPI_IN_PLACE, iraboof, 1, mpi_integer, mpi_sum, 0, mpi_comm_world, ierr)&lt;BR /&gt;
	else&lt;BR /&gt;
	call mpi_reduce(iraboof, 0 , 1, mpi_integer, mpi_sum, 0, mpi_comm_world, ierr)&lt;BR /&gt;
	end if&lt;BR /&gt;
	if (mytid == 0) then&lt;BR /&gt;
	print *, 'raboof mpi reduce', iraboof, numnod&lt;BR /&gt;
	end if&lt;BR /&gt;
	rraboof = 1.0d0&lt;BR /&gt;
	if (mytid == 0) then&lt;BR /&gt;
	call mpi_reduce(MPI_IN_PLACE, rraboof, 1, mpi_real8 , mpi_sum, 0, mpi_comm_world, ierr)&lt;BR /&gt;
	else&lt;BR /&gt;
	call mpi_reduce(rraboof, 0 , 1, mpi_real8 , mpi_sum, 0, mpi_comm_world, ierr)&lt;BR /&gt;
	end if&lt;BR /&gt;
	if (mytid == 0) then&lt;BR /&gt;
	print *, 'raboof mpi reduce', rraboof, numnod&lt;BR /&gt;
	end if&lt;BR /&gt;
	call mpi_finalize(ierr)&lt;BR /&gt;
	end program&lt;/P&gt;

&lt;P&gt;[/fortran]&amp;nbsp;&lt;/P&gt;

&lt;P&gt;Compilation is done with&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;mpiifort -O3 -i8 impi.F90&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;

&lt;P&gt;It compiles and links fine&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;ldd ./a.out&lt;/P&gt;

&lt;P&gt;linux-vdso.so.1 =&amp;gt; (0x00007ffff7893000)&lt;BR /&gt;
	libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x0000003357c00000)&lt;BR /&gt;
	libmpi_ilp64.so.4 =&amp;gt; /global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib/libmpi_ilp64.so.4 (0x00002ad1a4a3f000)&lt;BR /&gt;
	libmpi.so.4 =&amp;gt; /global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib/libmpi.so.4 (0x00002ad1a4c69000)&lt;BR /&gt;
	libmpigf.so.4 =&amp;gt; /global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib/libmpigf.so.4 (0x00002ad1a528e000)&lt;BR /&gt;
	librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x0000003358800000)&lt;BR /&gt;
	libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x0000003358000000)&lt;BR /&gt;
	libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x0000003357800000)&lt;BR /&gt;
	libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x0000003357400000)&lt;BR /&gt;
	libgcc_s.so.1 =&amp;gt; /lib64/libgcc_s.so.1 (0x0000003359c00000)&lt;BR /&gt;
	/lib64/ld-linux-x86-64.so.2 (0x0000003357000000)&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;

&lt;P&gt;Running the program, however, I obtain&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;mpirun -np 4 ./a.out&lt;BR /&gt;
	raboof mpi reduce 3 4&lt;BR /&gt;
	raboof mpi reduce 3.00000000000000 4&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;

&lt;P&gt;whereas it should produce&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;mpirun -np 4 ./a.out&amp;nbsp;&lt;BR /&gt;
	raboof mpi reduce 4 4&lt;BR /&gt;
	raboof mpi reduce 4.00000000000000 4&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;

&lt;P&gt;which is what I also obtain with other MPI libraries.&lt;/P&gt;

&lt;P&gt;I would appreciate any comment/help.&amp;nbsp;&lt;/P&gt;

&lt;P&gt;with best regards,&lt;/P&gt;

&lt;P&gt;stefan&lt;/P&gt;

&lt;P&gt;P.S.: when I use the F90 interface ("use mpi"), I obtain the following warnings at compile time:&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;mpiifort -O3 -i8 impi.F90&lt;BR /&gt;
	impi.F90(9): warning #6075: The data type of the actual argument does not match the definition. [IERR]&lt;BR /&gt;
	call mpi_init(ierr)&lt;BR /&gt;
	-----------------^&lt;BR /&gt;
	impi.F90(10): warning #6075: The data type of the actual argument does not match the definition. [MYTID]&lt;BR /&gt;
	call mpi_comm_rank(mpi_comm_world, mytid,ierr)&lt;BR /&gt;
	--------------------------------------^&lt;BR /&gt;
	impi.F90(10): warning #6075: The data type of the actual argument does not match the definition. [IERR]&lt;BR /&gt;
	call mpi_comm_rank(mpi_comm_world, mytid,ierr)&lt;BR /&gt;
	--------------------------------------------^&lt;BR /&gt;
	impi.F90(11): warning #6075: The data type of the actual argument does not match the definition. [NUMNOD]&lt;BR /&gt;
	call mpi_comm_size(mpi_comm_world, numnod,ierr)&lt;BR /&gt;
	--------------------------------------^&lt;BR /&gt;
	impi.F90(11): warning #6075: The data type of the actual argument does not match the definition. [IERR]&lt;BR /&gt;
	call mpi_comm_size(mpi_comm_world, numnod,ierr)&lt;BR /&gt;
	---------------------------------------------^&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;

&lt;P&gt;and a crash at runtime:&lt;/P&gt;

&lt;P&gt;[bash]&lt;/P&gt;

&lt;P&gt;mpirun -np 4 ./a.out&lt;BR /&gt;
	Fatal error in PMPI_Reduce: Invalid buffer pointer, error stack:&lt;BR /&gt;
	PMPI_Reduce(1894): MPI_Reduce(sbuf=MPI_IN_PLACE, rbuf=0x693828, count=1, MPI_INTEGER, MPI_SUM, root=0, MPI_COMM_WORLD) failed&lt;BR /&gt;
	PMPI_Reduce(1823): sendbuf cannot be MPI_IN_PLACE&lt;BR /&gt;
	Fatal error in PMPI_Reduce: Invalid buffer pointer, error stack:&lt;BR /&gt;
	PMPI_Reduce(1894): MPI_Reduce(sbuf=MPI_IN_PLACE, rbuf=0x693828, count=1, MPI_INTEGER, MPI_SUM, root=0, MPI_COMM_WORLD) failed&lt;BR /&gt;
	PMPI_Reduce(1823): sendbuf cannot be MPI_IN_PLACE&lt;BR /&gt;
	Fatal error in PMPI_Reduce: Invalid buffer pointer, error stack:&lt;BR /&gt;
	PMPI_Reduce(1894): MPI_Reduce(sbuf=MPI_IN_PLACE, rbuf=0x693828, count=1, MPI_INTEGER, MPI_SUM, root=0, MPI_COMM_WORLD) failed&lt;BR /&gt;
	PMPI_Reduce(1823): sendbuf cannot be MPI_IN_PLACE&lt;/P&gt;

&lt;P&gt;[/bash]&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 12:23:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938610#M2665</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T12:23:51Z</dc:date>
    </item>
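The "3 instead of 4" result reported above is exactly what one would see if the root's in-place contribution were dropped because the library failed to recognize the MPI_IN_PLACE sentinel. A minimal plain-Python model of MPI_REDUCE with MPI_SUM (the names below are illustrative stand-ins, not the real MPI API) makes the arithmetic concrete:

```python
# Stand-in for the MPI_IN_PLACE sentinel: recognized here by identity,
# much as the real library recognizes the sentinel by its address.
IN_PLACE = object()

def reduce_sum(other_ranks, root_recvbuf, root_sendbuf):
    """Model of MPI_Reduce with MPI_SUM at the root.

    other_ranks: values contributed by the non-root ranks.
    If the root passes IN_PLACE, its receive buffer counts as its
    contribution; otherwise the root's contribution is lost here,
    modeling a sentinel that went unrecognized.
    """
    total = sum(other_ranks)
    if root_sendbuf is IN_PLACE:
        total += root_recvbuf
    return total

# 4 ranks each holding 1; the root uses IN_PLACE -> expected sum 4:
print(reduce_sum([1, 1, 1], root_recvbuf=1, root_sendbuf=IN_PLACE))  # 4

# If the sentinel is not recognized (e.g. its address no longer matches
# under -i8), the root's 1 is lost -> the observed 3:
print(reduce_sum([1, 1, 1], root_recvbuf=1, root_sendbuf=None))      # 3
```

With four ranks each contributing 1, recognizing the sentinel yields 4; silently losing the root's contribution yields the observed 3.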
    <item>
      <title>Your ldd result showing that</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938611#M2666</link>
      <description>&lt;P&gt;Your ldd result, showing that you linked against the gfortran-compatible library, looks like a problem. This shouldn't happen if you use mpiifort consistently; the gfortran and ifort libraries can't coexist. Adding -# to the mpiifort command should give much more detail about what the compiler script passes on to ld.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 15:39:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938611#M2666</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2013-04-22T15:39:14Z</dc:date>
    </item>
    <item>
      <title>dear Tim,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938612#M2667</link>
      <description>&lt;P&gt;Dear Tim,&lt;/P&gt;
&lt;P&gt;Thanks for your immediate reply. Please find below the output from compiling my program (the one above, in the file impi.F90) with your suggested flag:&lt;/P&gt;
&lt;P&gt;[bash]&lt;/P&gt;
&lt;P&gt;mpiifort -i8 -# impi.F90&lt;/P&gt;
&lt;P&gt;[/bash]&lt;/P&gt;
&lt;P&gt;This compilation yields:&lt;/P&gt;
&lt;P&gt;[bash]&lt;/P&gt;
&lt;P&gt;mpiifort -i8 -# impi.F90 &lt;BR /&gt;/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/bin/intel64/fpp \&lt;BR /&gt; -D__INTEL_COMPILER=1300 \&lt;BR /&gt; -D__unix__ \&lt;BR /&gt; -D__unix \&lt;BR /&gt; -D__linux__ \&lt;BR /&gt; -D__linux \&lt;BR /&gt; -D__gnu_linux__ \&lt;BR /&gt; -Dunix \&lt;BR /&gt; -Dlinux \&lt;BR /&gt; -D__ELF__ \&lt;BR /&gt; -D__x86_64 \&lt;BR /&gt; -D__x86_64__ \&lt;BR /&gt; -D_MT \&lt;BR /&gt; -D__INTEL_COMPILER_BUILD_DATE=20121010 \&lt;BR /&gt; -D__INTEL_OFFLOAD \&lt;BR /&gt; -D__i686 \&lt;BR /&gt; -D__i686__ \&lt;BR /&gt; -D__pentiumpro \&lt;BR /&gt; -D__pentiumpro__ \&lt;BR /&gt; -D__pentium4 \&lt;BR /&gt; -D__pentium4__ \&lt;BR /&gt; -D__tune_pentium4__ \&lt;BR /&gt; -D__SSE2__ \&lt;BR /&gt; -D__SSE__ \&lt;BR /&gt; -D__MMX__ \&lt;BR /&gt; -I. \&lt;BR /&gt; -I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/mkl/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/tbb/include \&lt;BR /&gt; -I/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/include/intel64 \&lt;BR /&gt; -I/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/include \&lt;BR /&gt; -I/usr/local/include \&lt;BR /&gt; -I/usr/lib/gcc/x86_64-redhat-linux/4.4.7/include \&lt;BR /&gt; -I/usr/include \&lt;BR /&gt; -4Ycpp \&lt;BR /&gt; -4Ncvf \&lt;BR /&gt; -f_com=yes \&lt;BR /&gt; impi.F90 \&lt;BR /&gt; /tmp/ifortBOT7lB.i90&lt;/P&gt;
&lt;P&gt;/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/bin/intel64/fortcom \&lt;BR /&gt; -D__INTEL_COMPILER=1300 \&lt;BR /&gt; -D__unix__ \&lt;BR /&gt; -D__unix \&lt;BR /&gt; -D__linux__ \&lt;BR /&gt; -D__linux \&lt;BR /&gt; -D__gnu_linux__ \&lt;BR /&gt; -Dunix \&lt;BR /&gt; -Dlinux \&lt;BR /&gt; -D__ELF__ \&lt;BR /&gt; -D__x86_64 \&lt;BR /&gt; -D__x86_64__ \&lt;BR /&gt; -D_MT \&lt;BR /&gt; -D__INTEL_COMPILER_BUILD_DATE=20121010 \&lt;BR /&gt; -D__INTEL_OFFLOAD \&lt;BR /&gt; -D__i686 \&lt;BR /&gt; -D__i686__ \&lt;BR /&gt; -D__pentiumpro \&lt;BR /&gt; -D__pentiumpro__ \&lt;BR /&gt; -D__pentium4 \&lt;BR /&gt; -D__pentium4__ \&lt;BR /&gt; -D__tune_pentium4__ \&lt;BR /&gt; -D__SSE2__ \&lt;BR /&gt; -D__SSE__ \&lt;BR /&gt; -D__MMX__ \&lt;BR /&gt; -mGLOB_pack_sort_init_list \&lt;BR /&gt; -I. \&lt;BR /&gt; -I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/mkl/include \&lt;BR /&gt; -I/global/apps/intel/2013.1/tbb/include \&lt;BR /&gt; -I/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/include/intel64 \&lt;BR /&gt; -I/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/include \&lt;BR /&gt; -I/usr/local/include \&lt;BR /&gt; -I/usr/lib/gcc/x86_64-redhat-linux/4.4.7/include \&lt;BR /&gt; -I/usr/include \&lt;BR /&gt; "-integer_size 64" \&lt;BR /&gt; -O2 \&lt;BR /&gt; -simd \&lt;BR /&gt; -offload_host \&lt;BR /&gt; -mP1OPT_version=13.0-intel64 \&lt;BR /&gt; -mGLOB_diag_file=/tmp/ifort7GVk2e.diag \&lt;BR /&gt; -mGLOB_source_language=GLOB_SOURCE_LANGUAGE_F90 \&lt;BR /&gt; -mGLOB_tune_for_fort \&lt;BR /&gt; -mGLOB_use_fort_dope_vector \&lt;BR /&gt; -mP2OPT_static_promotion \&lt;BR /&gt; -mP1OPT_print_version=FALSE \&lt;BR /&gt; -mCG_use_gas_got_workaround=F \&lt;BR /&gt; -mP2OPT_align_option_used=TRUE \&lt;BR /&gt; -mGLOB_gcc_version=447 \&lt;BR /&gt; 
"-mGLOB_options_string=-I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include -I/global/apps/intel/2013.1/impi/4.1.0.024/intel64/include -ldl -i8 -# -L/global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.1 -lmpi_ilp64 -lmpi -lmpigf -lmpigi -lrt -lpthread" \&lt;BR /&gt; -mGLOB_cxx_limited_range=FALSE \&lt;BR /&gt; -mCG_extend_parms=FALSE \&lt;BR /&gt; -mGLOB_compiler_bin_directory=/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/bin/intel64 \&lt;BR /&gt; -mGLOB_as_output_backup_file_name=/tmp/ifortK2gIZoas_.s \&lt;BR /&gt; -mIPOPT_activate \&lt;BR /&gt; -mIPOPT_lite \&lt;BR /&gt; -mGLOB_machine_model=GLOB_MACHINE_MODEL_EFI2 \&lt;BR /&gt; -mGLOB_product_id_code=0x22006d91 \&lt;BR /&gt; -mCG_bnl_movbe=T \&lt;BR /&gt; -mGLOB_extended_instructions=0x8 \&lt;BR /&gt; -mP3OPT_use_mspp_call_convention \&lt;BR /&gt; -mP2OPT_subs_out_of_bound=FALSE \&lt;BR /&gt; -mGLOB_ansi_alias \&lt;BR /&gt; -mPGOPTI_value_profile_use=T \&lt;BR /&gt; -mP2OPT_il0_array_sections=TRUE \&lt;BR /&gt; -mP2OPT_offload_unique_var_string=ifort607026576Zo54LN \&lt;BR /&gt; -mP2OPT_hlo_level=2 \&lt;BR /&gt; -mP2OPT_hlo \&lt;BR /&gt; -mP2OPT_hpo_rtt_control=0 \&lt;BR /&gt; -mIPOPT_args_in_regs=0 \&lt;BR /&gt; -mP2OPT_disam_assume_nonstd_intent_in=FALSE \&lt;BR /&gt; -mGLOB_imf_mapping_library=/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/bin/intel64/libiml_attr.so \&lt;BR /&gt; -mIPOPT_obj_output_file_name=/tmp/ifort7GVk2e.o \&lt;BR /&gt; -mIPOPT_whole_archive_fixup_file_name=/tmp/ifortwarchNyvxkL \&lt;BR /&gt; "-mGLOB_linker_version=2.20.51.0.2-5.36.el6 20100205" \&lt;BR /&gt; -mGLOB_long_size_64 \&lt;BR /&gt; -mGLOB_routine_pointer_size_64 \&lt;BR /&gt; -mGLOB_driver_tempfile_name=/tmp/iforttempfilenQtt0t \&lt;BR /&gt; -mP3OPT_asm_target=P3OPT_ASM_TARGET_GAS \&lt;BR /&gt; -mGLOB_async_unwind_tables=TRUE \&lt;BR /&gt; 
-mGLOB_obj_output_file=/tmp/ifort7GVk2e.o \&lt;BR /&gt; -mGLOB_source_dialect=GLOB_SOURCE_DIALECT_FORTRAN \&lt;BR /&gt; -mP1OPT_source_file_name=impi.F90 \&lt;BR /&gt; -mP2OPT_symtab_type_copy=true \&lt;BR /&gt; /tmp/ifortBOT7lB.i90&lt;/P&gt;
&lt;P&gt;ld \&lt;BR /&gt; /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o \&lt;BR /&gt; /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crti.o \&lt;BR /&gt; /usr/lib/gcc/x86_64-redhat-linux/4.4.7/crtbegin.o \&lt;BR /&gt; --eh-frame-hdr \&lt;BR /&gt; --build-id \&lt;BR /&gt; -dynamic-linker \&lt;BR /&gt; /lib64/ld-linux-x86-64.so.2 \&lt;BR /&gt; -L/global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib \&lt;BR /&gt; -o \&lt;BR /&gt; a.out \&lt;BR /&gt; /global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/lib/intel64/for_main.o \&lt;BR /&gt; -L/global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib \&lt;BR /&gt; -L/global/apps/intel/2013.1/mkl/lib/intel64 \&lt;BR /&gt; -L/global/apps/intel/2013.1/tbb/lib/intel64 \&lt;BR /&gt; -L/global/apps/intel/2013.1/ipp/lib/intel64 \&lt;BR /&gt; -L/global/apps/intel/2013.1/composerxe/lib/intel64 \&lt;BR /&gt; -L/global/hds/home/install/intel/2013.1/composer_xe_2013.1.117/compiler/lib/intel64 \&lt;BR /&gt; -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/ \&lt;BR /&gt; -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64 \&lt;BR /&gt; -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/ \&lt;BR /&gt; -L/lib/../lib64 \&lt;BR /&gt; -L/lib/../lib64/ \&lt;BR /&gt; -L/usr/lib/../lib64 \&lt;BR /&gt; -L/usr/lib/../lib64/ \&lt;BR /&gt; -L/global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib/ \&lt;BR /&gt; -L/global/apps/intel/2013.1/mkl/lib/intel64/ \&lt;BR /&gt; -L/global/apps/intel/2013.1/tbb/lib/intel64/ \&lt;BR /&gt; -L/global/apps/intel/2013.1/ipp/lib/intel64/ \&lt;BR /&gt; -L/global/apps/intel/2013.1/composerxe/lib/intel64/ \&lt;BR /&gt; -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../ \&lt;BR /&gt; -L/lib64 \&lt;BR /&gt; -L/lib/ \&lt;BR /&gt; -L/usr/lib64 \&lt;BR /&gt; -L/usr/lib \&lt;BR /&gt; -ldl \&lt;BR /&gt; /tmp/ifort7GVk2e.o \&lt;BR /&gt; --enable-new-dtags \&lt;BR /&gt; -rpath \&lt;BR /&gt; /global/apps/intel/2013.1/impi/4.1.0.024/intel64/lib \&lt;BR /&gt; -rpath \&lt;BR /&gt; 
/opt/intel/mpi-rt/4.1 \&lt;BR /&gt; -lmpi_ilp64 \&lt;BR /&gt; -lmpi \&lt;BR /&gt; -lmpigf \&lt;BR /&gt; -lmpigi \&lt;BR /&gt; -lrt \&lt;BR /&gt; -lpthread \&lt;BR /&gt; -Bstatic \&lt;BR /&gt; -lifport \&lt;BR /&gt; -lifcore \&lt;BR /&gt; -limf \&lt;BR /&gt; -lsvml \&lt;BR /&gt; -Bdynamic \&lt;BR /&gt; -lm \&lt;BR /&gt; -Bstatic \&lt;BR /&gt; -lipgo \&lt;BR /&gt; -lirc \&lt;BR /&gt; -Bdynamic \&lt;BR /&gt; -lpthread \&lt;BR /&gt; -Bstatic \&lt;BR /&gt; -lsvml \&lt;BR /&gt; -Bdynamic \&lt;BR /&gt; -lc \&lt;BR /&gt; -lgcc \&lt;BR /&gt; -lgcc_s \&lt;BR /&gt; -Bstatic \&lt;BR /&gt; -lirc_s \&lt;BR /&gt; -Bdynamic \&lt;BR /&gt; -ldl \&lt;BR /&gt; -lc \&lt;BR /&gt; /usr/lib/gcc/x86_64-redhat-linux/4.4.7/crtend.o \&lt;BR /&gt; /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crtn.o&lt;/P&gt;
&lt;P&gt;rm /tmp/ifortlibgccyi9h59&lt;BR /&gt;rm /tmp/ifortgnudirs06mNow&lt;BR /&gt;rm /tmp/ifort7GVk2e.o&lt;BR /&gt;rm /tmp/ifortBOT7lB.i90&lt;BR /&gt;rm /tmp/ifortakfVFX.c&lt;BR /&gt;rm /tmp/ifortdashvdk0IZj&lt;BR /&gt;rm /tmp/ifortargC1wikG&lt;BR /&gt;rm /tmp/ifortgas65oTE2&lt;BR /&gt;rm /tmp/ifortK2gIZoas_.s&lt;BR /&gt;rm /tmp/ifortldashv7B4mF7&lt;BR /&gt;rm /tmp/iforttempfilenQtt0t&lt;BR /&gt;rm /tmp/ifortargvFMClQ&lt;BR /&gt;rm /tmp/ifortgnudirsMR2abY&lt;BR /&gt;rm /tmp/ifortgnudirsHeROwk&lt;BR /&gt;rm /tmp/ifortgnudirsDsnJSG&lt;BR /&gt;rm /tmp/ifortldashvJ79Ve3&lt;BR /&gt;rm /tmp/ifortgnudirsXiurBp&lt;BR /&gt;rm /tmp/ifortgnudirsp3WeYL&lt;BR /&gt;rm /tmp/ifortgnudirsmUDkl8&lt;BR /&gt;rm /tmp/ifort7GVk2e.o&lt;/P&gt;
&lt;P&gt;[/bash]&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 15:49:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938612#M2667</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T15:49:52Z</dc:date>
    </item>
    <item>
      <title>Hi Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938613#M2668</link>
      <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;The problem is not related to gfortran.&amp;nbsp; The libmpigf.so library is used both for gfortran and the Intel® MPI Library.&amp;nbsp; I am able to get the same behavior here.&amp;nbsp; I'll check with the developers, but I'm expecting that MPI_IN_PLACE may not be correctly handled in ILP64.&lt;/P&gt;
&lt;P&gt;As a note, the MPI Fortran module is not supported for ILP64 programming in the Intel® MPI Library.&amp;nbsp; Please see Section 3.5.6 of the Intel® MPI Library Reference Manual for more information on ILP64 support.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 17:18:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938613#M2668</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-04-22T17:18:50Z</dc:date>
    </item>
    <item>
      <title>hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938614#M2669</link>
      <description>&lt;P&gt;Hi James,&lt;/P&gt;
&lt;P&gt;Thanks for your detailed answer. I am looking forward to hearing the feedback from the developers. Code similar to the MPI-parallelized example above constitutes a central piece of core functionality in a quantum chemistry program package (called "Dirac") to which I am a contributing developer. It would be great to know that one of the next releases of Intel MPI will fully support the ILP64 model.&lt;/P&gt;
&lt;P&gt;with best regards,&lt;/P&gt;
&lt;P&gt;stefan&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 20:10:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938614#M2669</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T20:10:03Z</dc:date>
    </item>
    <item>
      <title>Hi Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938615#M2670</link>
      <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;Try compiling and running with -ilp64.&lt;/P&gt;
&lt;P&gt;[plain]mpiifort -ilp64 -O3 test.f90 -o test[/plain]&lt;/P&gt;
&lt;P&gt;[plain]mpirun -ilp64 -n 4 ./test[/plain]&lt;/P&gt;
&lt;P&gt;This works for me.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 20:29:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938615#M2670</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-04-22T20:29:23Z</dc:date>
    </item>
    <item>
      <title>hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938616#M2671</link>
      <description>&lt;P&gt;Hi James,&lt;/P&gt;
&lt;P&gt;Indeed, with that setup MPI_REDUCE with MPI_IN_PLACE works for me as well. However, MPI_COMM_SIZE no longer works:&lt;/P&gt;
&lt;P&gt;[fortran]&lt;/P&gt;
&lt;P&gt;program test&lt;BR /&gt; include "mpif.h"&lt;BR /&gt; integer :: mytid, numnod, ierr&lt;/P&gt;
&lt;P&gt;mytid = 0&lt;BR /&gt; ! initialize MPI environment&lt;BR /&gt; call mpi_init(ierr)&lt;BR /&gt; call mpi_comm_rank(mpi_comm_world, mytid,ierr)&lt;BR /&gt; call mpi_comm_size(mpi_comm_world, numnod,ierr)&lt;/P&gt;
&lt;P&gt;print *, 'mytid, numnod ', mytid, numnod&lt;/P&gt;
&lt;P&gt;call mpi_finalize(ierr)&lt;BR /&gt;end program&lt;/P&gt;
&lt;P&gt;[/fortran]&lt;/P&gt;
&lt;P&gt;Compiling and running&amp;nbsp;the above test program with&amp;nbsp;&lt;/P&gt;
&lt;P&gt;[bash]&lt;/P&gt;
&lt;P&gt;mpiifort -ilp64 -O3 test.F90 &lt;BR /&gt;mpirun -ilp64 -np 4 ./a.out &lt;BR /&gt; mytid, numnod 1 0&lt;BR /&gt; mytid, numnod 0 0&lt;BR /&gt; mytid, numnod 2 0&lt;BR /&gt; mytid, numnod 3 0&lt;/P&gt;
&lt;P&gt;[/bash]&lt;/P&gt;
&lt;P&gt;yields a "0" for the size of the communicator MPI_COMM_WORLD.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any idea what could be wrong?&lt;/P&gt;
&lt;P&gt;with best regards,&lt;/P&gt;
&lt;P&gt;stefan&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 20:48:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938616#M2671</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T20:48:28Z</dc:date>
    </item>
    <item>
      <title>Hi Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938617#M2672</link>
      <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;So I see.&amp;nbsp; I am able to get the correct results by compiling and linking with -ilp64, but without -i8, and changing the declaration of numnod to integer*8.&amp;nbsp; Let me check with the developers and see what we can do about this.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 21:16:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938617#M2672</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-04-22T21:16:49Z</dc:date>
    </item>
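James's observation above, that correct results come back once numnod is declared integer*8 while linking the ILP64 library, fits the general hazard of caller and library disagreeing about integer width: a library that writes a 64-bit result through a pointer the caller thinks addresses a 32-bit variable corrupts memory. A small ctypes sketch of that mismatch (purely illustrative, no MPI involved; it assumes a little-endian machine such as x86_64, and the exact corruption pattern in the real run may differ):

```python
import ctypes

# Two adjacent 32-bit integer slots, as laid out by a caller that was
# compiled without -i8 (default INTEGER is 4 bytes).
class Args(ctypes.Structure):
    _fields_ = [("numnod", ctypes.c_int32),
                ("neighbor", ctypes.c_int32)]

a = Args(numnod=0, neighbor=7)

# Library side: an ILP64 build stores a full 64-bit integer (value 4)
# through the address it was given for numnod.
ctypes.c_int64.from_address(ctypes.addressof(a)).value = 4

print(a.numnod)    # 4 on little-endian: the low half of the store
print(a.neighbor)  # 0: the high half clobbered the adjacent field
```

Widening the declaration to 8 bytes, as in the workaround, gives the library a slot of the width it actually writes.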
    <item>
      <title>hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938618#M2673</link>
      <description>&lt;P&gt;Hi James,&lt;/P&gt;
&lt;P&gt;Thanks for your feedback; I now get exactly the same result as you described above. What I should perhaps emphasize is that I was aiming for a working compilation with 64-bit integers as the default size (-i8 or -integer-size 64), which implies the ILP64 model as far as I can see.&lt;/P&gt;
&lt;P&gt;What exactly does the [bash]-ilp64[/bash] flag set during compilation? Obviously it does not imply 64-bit default integers in the Fortran code as such. Does it only enable linking to the ILP64 Intel libraries?&lt;/P&gt;
&lt;P&gt;with best regards,&lt;/P&gt;
&lt;P&gt;stefan&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 21:37:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938618#M2673</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T21:37:23Z</dc:date>
    </item>
    <item>
      <title>Hi Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938619#M2674</link>
      <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;Using -ilp64 links to libmpi_ilp64 instead of libmpi.&amp;nbsp; The correct way to utilize this is to compile with -i8, then link and run with -ilp64.&amp;nbsp; However, this is not giving correct results either.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 21:46:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938619#M2674</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-04-22T21:46:52Z</dc:date>
    </item>
    <item>
      <title>hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938620#M2675</link>
      <description>&lt;P&gt;Hi James,&lt;/P&gt;
&lt;P&gt;thanks for the clarification and your patience. Let's see what the developers can come up with.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;with best regards,&lt;/P&gt;
&lt;P&gt;stefan&lt;/P&gt;</description>
      <pubDate>Mon, 22 Apr 2013 21:51:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938620#M2675</guid>
      <dc:creator>Stefan_K_2</dc:creator>
      <dc:date>2013-04-22T21:51:25Z</dc:date>
    </item>
    <item>
      <title>Hi Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938621#M2676</link>
      <description>&lt;P&gt;Hi Stefan,&lt;/P&gt;
&lt;P&gt;There are two workarounds for this. The first is not to use MPI_IN_PLACE in a program compiled with -i8. The second is to modify mpif.h. Change&lt;/P&gt;
&lt;P&gt;[plain]&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; INTEGER MPI_BOTTOM, MPI_IN_PLACE, MPI_UNWEIGHTED[/plain]&lt;/P&gt;
&lt;P&gt;to&lt;/P&gt;
&lt;P&gt;[plain]&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; INTEGER*4 MPI_BOTTOM, MPI_IN_PLACE, MPI_UNWEIGHTED[/plain]&lt;/P&gt;
&lt;P&gt;This works for your test program. Try it on your full program as well.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Wed, 08 May 2013 13:33:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938621#M2676</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-05-08T13:33:19Z</dc:date>
    </item>
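The INTEGER*4 workaround above is consistent with sentinel constants like MPI_IN_PLACE being matched by address: mpif.h places MPI_BOTTOM, MPI_IN_PLACE, and MPI_UNWEIGHTED together, and the library compares the buffer address it receives against the known location of MPI_IN_PLACE. Under -i8 every default INTEGER doubles in size, shifting that location, so the comparison fails. The following layout arithmetic is a hypothetical sketch of the mechanism (the real Intel MPI internals may differ):

```python
def in_place_offset(int_bytes):
    """Byte offset of MPI_IN_PLACE inside a block laid out as
    MPI_BOTTOM, MPI_IN_PLACE, MPI_UNWEIGHTED, when each default
    INTEGER occupies int_bytes bytes."""
    return 1 * int_bytes  # one INTEGER (MPI_BOTTOM) precedes it

library_expects = in_place_offset(4)  # library built for 4-byte INTEGER
app_with_i8 = in_place_offset(8)      # application compiled with -i8

# The addresses no longer coincide, so the sentinel test fails:
print(library_expects, app_with_i8, library_expects == app_with_i8)
```

Pinning the sentinels to INTEGER*4, as in the edited mpif.h, keeps their offsets fixed regardless of -i8, which is why the workaround restores MPI_IN_PLACE detection.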
    <item>
      <title>Stefan,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938622#M2677</link>
      <description>&lt;P&gt;Stefan,&lt;/P&gt;

&lt;P&gt;If you're still watching this, how did the workarounds work for your program?&lt;/P&gt;</description>
      <pubDate>Tue, 02 Sep 2014 19:51:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/ILP64-model-using-MPI-IN-PLACE-in-MPI-REDUCE-seems-to-yield/m-p/938622#M2677</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2014-09-02T19:51:44Z</dc:date>
    </item>
  </channel>
</rss>

