Intel® oneAPI Math Kernel Library

dgetrf+dgetrs works fine - zgetrf+zgetrs does not!

jonwes
Beginner

Hi!

I am trying to use the LU factorization and subsequent solver in LAPACK for double complex matrices, zgetrf and zgetrs. However, it only works for very small matrices (20x20); for larger matrices it crashes with an "access violation". If, on the other hand, I switch to a double precision real matrix and use dgetrf and dgetrs, it works exactly as expected, even for large matrices. I have also tried cgetrf+cgetrs, which also seems to work as expected.

I use a Windows XP PC, Visual Studio .NET 2003, Intel Fortran compiler 10.1.019 and Intel MKL 10.0.2. I have also tried calling zgetrf+zgetrs on our cluster (which uses slightly older versions of the Intel Fortran compiler and MKL), and there it worked just as expected. There is thus little reason to believe that I am calling the subroutines incorrectly.

If anyone has a clue and can help me with this, it would certainly be much appreciated. The application I have in mind is the solution of complex linear equation systems for a number of different right-hand sides, with a matrix size of typically 2000x2000 and 1000 different load vectors. The system matrix is fully populated and non-symmetric.
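Roughly, the calling sequence looks like this (a minimal sketch with illustrative names, not my actual code; it follows the standard LAPACK pattern of factoring once with zgetrf and then solving all right-hand sides with zgetrs):

program solve_complex
  implicit none
  integer, parameter :: n = 2000, nrhs = 1000
  complex*16, allocatable :: a(:,:), b(:,:)
  integer, allocatable :: ipiv(:)
  integer :: info

  allocate(a(n,n), b(n,nrhs), ipiv(n))
  ! ... fill a with the system matrix and b with the load vectors ...

  ! Factor A = P*L*U once
  call zgetrf(n, n, a, n, ipiv, info)
  if (info /= 0) stop 'zgetrf failed'

  ! Re-use the factorization to solve for all right-hand sides at once
  call zgetrs('N', n, nrhs, a, n, ipiv, b, n, info)
  if (info /= 0) stop 'zgetrs failed'
end program solve_complex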

Thanks in advance!

Best regards,
Jonathan, Sweden.

jonwes
Beginner

With some help from my supervisor the problem has now been resolved. My compiler was set to allocate temporary arrays on the stack instead of the heap; after changing this setting, zgetrf and zgetrs now seem to work as they should.
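For reference, a sketch of the compiler setting involved, assuming it was Intel Fortran's heap-arrays option (in Visual Studio this corresponds to a project property under Fortran > Optimization):

ifort /heap-arrays0 solve_complex.f90    ! Windows; -heap-arrays0 on Linux

With heap-arrays set to 0, automatic and temporary arrays of any size are allocated on the heap instead of the stack, so a large complex*16 temporary (a 2000x2000 double complex matrix takes roughly 64 MB) no longer overflows the default stack, which would explain why the small matrices worked while the large ones crashed.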
