
- Intel Community
- Software Development Tools (Compilers, Debuggers, Profilers & Analyzers)
- Intel® Fortran Compiler
- Incorrect result of mpi_reduce over real(16) sums. (2019)


artu72

Beginner


09-18-2018
02:25 AM


I have found that MPI_REDUCE does not correctly perform a sum reduction over real(16) variables.

Here is a simple test program:

program testred16
  use mpi_f08
  implicit none
  integer  :: me,np
  real(16) :: voltq,voltq0
  real(8)  :: voltd,voltd0
  !
  ! initialize mpi and get the rank and total number of processes
  !
  call mpi_init
  call mpi_comm_rank(mpi_comm_world,me)
  call mpi_comm_size(mpi_comm_world,np)
  !
  ! determine total volume of active computational domain and send to the master
  !
  voltq = 1.0q0
  voltd = 1.0d0
  write(*,*) 'voltq is',voltq,'in rank',me
  write(*,*) 'voltd is',voltd,'in rank',me
  voltq0 = 0.0q0
  voltd0 = 0.0d0
  call mpi_reduce(voltq,voltq0,1,mpi_real16,mpi_sum,0,mpi_comm_world)
  call mpi_reduce(voltd,voltd0,1,mpi_real8, mpi_sum,0,mpi_comm_world)
  if(me.eq.0) then
    write(*,*) 'voltq0 (16):',voltq0
    write(*,*) 'voltd0 ( 8):',voltd0
  endif
  call mpi_finalize
end program

I have compiled it by issuing the following command:

mpiifort -o test-mpi-real-16 test-mpi-real-16.f90 -check all -traceback -O0 -debug -warn all

Here are some results:

$ mpiexec -np 2 ./test-mpi-real-16
voltq is 1.00000000000000000000000000000000 in rank 1
voltd is 1.00000000000000 in rank 1
voltq is 1.00000000000000000000000000000000 in rank 0
voltd is 1.00000000000000 in rank 0
voltq0 (16): 1.00000000000000000000000000000000
voltd0 ( 8): 2.00000000000000

$ mpiexec -np 4 ./test-mpi-real-16
voltq is 1.00000000000000000000000000000000 in rank 1
voltq is 1.00000000000000000000000000000000 in rank 2
voltd is 1.00000000000000 in rank 2
voltq is 1.00000000000000000000000000000000 in rank 3
voltd is 1.00000000000000 in rank 3
voltd is 1.00000000000000 in rank 1
voltq is 1.00000000000000000000000000000000 in rank 0
voltd is 1.00000000000000 in rank 0
voltq0 (16): 1.00000000000000000000000000000000
voltd0 ( 8): 4.00000000000000

$ mpiexec -np 8 ./test-mpi-real-16
voltq is 1.00000000000000000000000000000000 in rank 1
voltd is 1.00000000000000 in rank 1
voltq is 1.00000000000000000000000000000000 in rank 2
voltd is 1.00000000000000 in rank 2
voltq is 1.00000000000000000000000000000000 in rank 4
voltd is 1.00000000000000 in rank 4
voltq is 1.00000000000000000000000000000000 in rank 6
voltd is 1.00000000000000 in rank 6
voltq is 1.00000000000000000000000000000000 in rank 7
voltd is 1.00000000000000 in rank 7
voltq is 1.00000000000000000000000000000000 in rank 3
voltd is 1.00000000000000 in rank 3
voltq is 1.00000000000000000000000000000000 in rank 5
voltd is 1.00000000000000 in rank 5
voltq is 1.00000000000000000000000000000000 in rank 0
voltd is 1.00000000000000 in rank 0
voltq0 (16): 1.00000000000000000000000000000000
voltd0 ( 8): 8.00000000000000

The reduction of the real(16) variable is wrong, whereas the real(8) reduction is correct. I encountered the same error in previous versions (2017, 2018), but setting the environment variable I_MPI_ADJUST_REDUCE=1 fixed it. Now I cannot recover the exact result, whatever value I set (or when leaving it unset).
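A workaround sometimes used for reductions over types that a library's built-in MPI_SUM mishandles is to register a user-defined reduction operation with MPI_Op_create, so the addition is performed by compiled Fortran code instead of the library's internal real(16) kernel. A minimal sketch of that idea (my own, not from this thread; it assumes the library at least transports MPI_REAL16 buffers intact, and the names red16_mod and sum_real16 are illustrative):

```fortran
module red16_mod
  use, intrinsic :: iso_c_binding
  use mpi_f08
  implicit none
contains
  ! User-defined reduction callback matching the mpi_f08
  ! MPI_User_function interface: inoutvec(i) = invec(i) + inoutvec(i),
  ! computed in native real(16) arithmetic.
  subroutine sum_real16(invec, inoutvec, len, datatype)
    type(c_ptr), value :: invec, inoutvec
    integer :: len
    type(MPI_Datatype) :: datatype
    real(16), pointer :: a(:), b(:)
    call c_f_pointer(invec, a, [len])
    call c_f_pointer(inoutvec, b, [len])
    b = a + b
  end subroutine
end module

program testred16_op
  use mpi_f08
  use red16_mod
  implicit none
  integer :: me
  type(MPI_Op) :: qsum
  real(16) :: voltq, voltq0
  call mpi_init
  call mpi_comm_rank(mpi_comm_world, me)
  ! Register the callback as a commutative reduction operation.
  call mpi_op_create(sum_real16, .true., qsum)
  voltq  = 1.0q0
  voltq0 = 0.0q0
  call mpi_reduce(voltq, voltq0, 1, mpi_real16, qsum, 0, mpi_comm_world)
  if (me == 0) write(*,*) 'voltq0 (16):', voltq0
  call mpi_op_free(qsum)
  call mpi_finalize
end program
```

If the failure is in the reduction algorithm rather than in transporting the 16-byte payload, this sidesteps it at the cost of losing any tree-optimized built-in implementation.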

Accepted Solutions


Juergen_R_

Valued Contributor I


09-18-2018
03:15 AM


I think there is also a dedicated MPI forum here on Intel. Maybe you should post this issue there.

3 Replies



artu72

Beginner


09-18-2018
04:10 AM


Quote: Juergen R. wrote:

I think there is also a dedicated MPI forum here on Intel. Maybe you should post this issue there.

I have posted the issue there. Perhaps you can close this topic here.


cruise__lina

Beginner


01-25-2019
11:07 PM


Agree with Juergen R.

