Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

[SOLVED] Inconsistent results from zheev() between serial and Open MPI



I have been encountering inconsistent results when calling zheev() from a program I am attempting to parallelize with MPI. zheev(), which computes the eigenvalues and eigenvectors of a complex Hermitian matrix, returns eigenvalues that agree very closely between the serial and parallel versions (differing only by about 1e-8). However, the eigenvectors it returns can vary considerably and strangely: some elements have a real or imaginary part that is reasonably close while the other part is nonsensical, and in other elements both parts differ substantially. I created a small test case (which just reads the matrix from a file) that exhibits the issue:

program zheevtest_serial
   implicit none
   double complex, allocatable :: matrix(:,:), work(:)
   integer :: msize, i, j, lwork, info
   double precision, allocatable :: rwork(:), eigv(:)

   read(7,*) msize

   ! Allocate the matrix, eigenvalue array, and LAPACK workspaces;
   ! zheev requires rwork of size at least max(1, 3*msize-2).
   allocate(matrix(msize,msize), eigv(msize), rwork(max(1,3*msize-2)), work(1))

   do i = 1, msize
     do j = 1, msize
       read(7,*) matrix(j,i)
     end do
   end do

   ! Workspace query: lwork = -1 returns the optimal workspace size in work(1).
   ! The query should use the same jobz ('V') as the actual call.
   call zheev('V', 'U', msize, matrix, msize, eigv, work, -1, rwork, info)
   lwork = int(work(1))
   deallocate(work)
   allocate(work(lwork))

   call zheev('V', 'U', msize, matrix, msize, eigv, work, lwork, rwork, info)

   write(8,*) matrix

end program zheevtest_serial

The MPI-parallelized version of this test case differs only in that each process writes to a different file at the end, plus the usual MPI setup calls.
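To make the comparison concrete, here is a hypothetical sketch of what that MPI variant looks like; the program name, output file naming, and units are assumptions, since the original post does not show this version. Only the MPI setup/teardown and the per-rank output file differ from the serial test case:

```fortran
program zheevtest_mpi_sketch
   use mpi
   implicit none
   integer :: rank, ierr
   character(len=32) :: fname

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   ! ... same read / allocate / zheev workspace query and solve
   !     as in the serial test case above ...

   ! Each rank writes its copy of the eigenvectors to its own file
   ! so the outputs can be diffed against the serial result.
   write(fname,'(a,i0,a)') 'eigvec_rank', rank, '.dat'
   open(unit=8, file=fname)
   ! write(8,*) matrix
   close(8)

   call MPI_Finalize(ierr)
end program zheevtest_mpi_sketch
```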

This discrepancy exists between the serial version and MPI with >1 process (1 process gives results identical to serial). I am compiling with the modules fftw/3.3.0-intel-openmpi, openmpi/1.4.4-intel-v12.1, and intel/15.0.1, with the flags -xHost and -mkl.

I've attached the matrix if anyone would like to try to replicate this issue. Any insight into what might be going on, or more detail on how zheev() operates, would be greatly appreciated.


After looking into the code further, I found that an MPI call I was making was truncating the imaginary part of the last matrix element, which explains the difference.
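The original post does not show the offending MPI call, but as a hypothetical illustration of how this kind of truncation can happen: if a double complex matrix is broadcast as pairs of doubles with an off-by-one count, the imaginary part of the last element never arrives on the receiving ranks.

```fortran
! Buggy (assumed example): the count is one double short, so the
! imaginary part of matrix(msize,msize) is not transferred.
call MPI_Bcast(matrix, 2*msize*msize - 1, MPI_DOUBLE_PRECISION, 0, &
               MPI_COMM_WORLD, ierr)

! Fixed: use the matching complex datatype with the element count.
call MPI_Bcast(matrix, msize*msize, MPI_DOUBLE_COMPLEX, 0, &
               MPI_COMM_WORLD, ierr)
```

Using the predefined MPI_DOUBLE_COMPLEX type avoids counting individual doubles by hand and sidesteps this class of bug entirely.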

