Hi,
I have been encountering inconsistent results when calling zheev() from a program I am attempting to parallelize with MPI. zheev(), which computes the eigenvalues and eigenvectors of a complex Hermitian matrix, returns eigenvalues that agree closely between the serial and parallel versions (differing only by about 1e-8). The eigenvectors it returns, however, can vary considerably and strangely: some elements have a real or imaginary part that is reasonably close while the other part is nonsensical, and for other elements both parts differ substantially. I created a small test case (which just reads the matrix from a file) that exhibits the issue:
```fortran
program zheevtest_serial
  implicit none
  double complex, allocatable :: matrix(:,:), work(:)
  integer :: msize, i, j, lwork, info
  double precision, allocatable :: rwork(:), eigv(:)

  open(7, file='pre-zheev_mat_ser.dat')
  read(7,*) msize
  allocate(matrix(msize,msize))
  allocate(eigv(msize))
  allocate(work(msize*msize))
  allocate(rwork(3*msize-2))
  do i = 1, msize
    do j = 1, msize
      read(7,*) matrix(j,i)
    enddo
  enddo
  close(7)

  ! Workspace query (lwork = -1); query with the same jobz ('V') as the
  ! actual call so the returned optimum applies to it.
  call zheev('V', 'U', msize, matrix, msize, eigv, work, -1, rwork, info)
  lwork = min(int(work(1)), msize*msize)   ! don't exceed the allocated work array

  call zheev('V', 'U', msize, matrix, msize, eigv, work, lwork, rwork, info)

  open(8, file='out_ser.dat')
  write(8,*) matrix
  close(8)
end program zheevtest_serial
```
The MPI-parallelized version of this test case differs only in the usual MPI setup calls and in that each process writes to a different output file at the end.
The discrepancy appears between the serial version and MPI runs with more than one process (a single MPI process gives results identical to serial). I am compiling with the modules fftw/3.3.0-intel-openmpi, openmpi/1.4.4-intel-v12.1, and intel/15.0.1, with the flags -xHost and -mkl.
I've attached the matrix if anyone would like to try to replicate this issue. If anyone could give me some insight into what might be going on, or more detail on how zheev() operates, that would be greatly appreciated.
-Duncan
Edit:
After looking into the code further, an MPI call I was making was truncating the imaginary part of the last matrix element, explaining the difference.
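For reference, the mistake was of this general form. This is a minimal sketch, not my actual code: the program name, matrix size, and the exact count/datatype mismatch shown are illustrative assumptions, but the pattern (sending a double complex buffer with a datatype/count that doesn't cover it) produces exactly this kind of truncated-tail corruption:

```fortran
program bcast_sketch
  use mpi
  implicit none
  double complex, allocatable :: matrix(:,:)
  integer :: msize, ierr, rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  msize = 4                          ! illustrative size
  allocate(matrix(msize,msize))
  matrix = (0.0d0, 0.0d0)
  if (rank == 0) matrix = (1.0d0, 2.0d0)

  ! Buggy pattern: sending the complex matrix as raw doubles with a count
  ! one short drops trailing data -- the imaginary part of the last element:
  !   call MPI_Bcast(matrix, 2*msize*msize - 1, MPI_DOUBLE_PRECISION, 0, &
  !                  MPI_COMM_WORLD, ierr)

  ! Correct: send whole complex elements with the matching MPI datatype.
  call MPI_Bcast(matrix, msize*msize, MPI_DOUBLE_COMPLEX, 0, &
                 MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program bcast_sketch
```

With the matching datatype and element count, all ranks receive the full matrix and zheev() sees identical input everywhere.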