Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

real matrix -- eigenvector matrix R -- diagonal eigenvalue matrix L

diedro
Beginner
367 Views
Hi everyone,
I have the following problem:
I have a real n x n matrix A, and I would like to compute the eigenvector matrix R and the diagonal eigenvalue matrix L with the MKL libraries.
I don't know which subroutine I could use.
I have already used dgeev_f95 for another purpose, but it gives me a vector, not a matrix.
Thanks a lot
9 Replies
mecej4
Honored Contributor III
Diagonal matrices are rarely stored as full matrices, since that would waste a lot of memory. The routine GEEV returns two 1-D arrays containing the real and imaginary parts of the eigenvalues -- a real matrix, unless it is symmetric, may have complex eigenvalues. It is trivial to place these eigenvalues on the diagonal of a zeroed-out square complex matrix.
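To illustrate the point, here is a minimal sketch in numpy rather than Fortran (the test matrix is made up; like GEEV, numpy's eig returns the eigenvalues as a 1-D array and the eigenvectors as the columns of a matrix):

```python
import numpy as np

# A small real test matrix, made up for illustration.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])

# Like GEEV, eig returns the eigenvalues as a 1-D array w
# and the (right) eigenvectors as the columns of R.
w, R = np.linalg.eig(A)

# Placing the eigenvalues on the diagonal of a zeroed-out
# square matrix is a one-liner:
L = np.diag(w)

# Sanity check: R L R^{-1} reconstructs A (up to rounding).
print(np.allclose(R @ L @ np.linalg.inv(R), A))
```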

Please state if you know anything more about the matrix A, and explain why you found the output of GEEV unsatisfactory.
diedro
Beginner
Hi,
OK, thanks for your help and advice.
The matrix A is a real n x n matrix.
GEEV gives me the vector of eigenvalues, while I need the matrix of eigenvalues.
This is because I need to compute:
Q = 0.5*R*(Id+sign(L-xi*Id))*iR*QL + 0.5*R*(Id-sign(L-xi*Id))*iR*QR
where Q is an n-vector,
R is the eigenvector matrix,
L is the diagonal eigenvalue matrix,
QR and QL are n-vectors,
Id is the n x n identity matrix, and
iR is the inverse of the eigenvector matrix R.
GEEV does not give me L in matrix form.
Thanks a lot
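For reference, the formula can be coded directly with full matrices. Here is a numpy sketch (the matrix A, the scalar xi, and the vectors qL and qR are made-up test data; the later replies in this thread explain why this dense form is not the efficient way to do it):

```python
import numpy as np

n = 2
# Made-up test data: a diagonalizable real matrix and two state vectors.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])
w, R = np.linalg.eig(A)   # eigenvalue vector w, eigenvector matrix R
L = np.diag(w)            # eigenvalues placed on a diagonal matrix
iR = np.linalg.inv(R)     # inverse of the eigenvector matrix
Id = np.eye(n)
xi = 0.5                  # the scalar coordinate
qL = np.array([1.0, 0.0])
qR = np.array([0.0, 1.0])

# Q = 0.5*R*(Id+sign(L-xi*Id))*iR*qL + 0.5*R*(Id-sign(L-xi*Id))*iR*qR
# Elementwise sign() is safe here: the off-diagonal entries of
# L - xi*Id are zero, and sign(0) = 0 keeps them zero.
S = np.sign(L - xi * Id)
Q = 0.5 * R @ (Id + S) @ iR @ qL + 0.5 * R @ (Id - S) @ iR @ qR
print(Q)
```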
mecej4
Honored Contributor III
I think that you can compute Q quite easily using only the main diagonal held as a vector, but I must first ask you to clarify a couple of items.

(i) xi is a scalar, yes?

(ii) How is the function sign() defined when it operates on (a) a diagonal matrix and, if this is meaningful, (b) a vector?
diedro
Beginner
Hi,
i) xi is a scalar (my nondimensional coordinate)
ii) for example:
L = [ 1.4142       0 ]
    [      0 -1.4142 ]
then
sign(L) = [ 1  0 ]
          [ 0 -1 ]
Thanks a lot
mecej4
Honored Contributor III
I am answering purely from an algorithmic viewpoint, since I do not know the application domain and how the matrices relate to anything physical or conceptual.

Let M1 be a diagonal matrix, with elements

mu_i = +1 if lambda_i > xi and 0 if lambda_i < xi

Let matrix S1 = R.M1. You can compute column j of S1 by multiplying column j of R by mu_j. Then, you can compute the first part of the desired result q = q1 + q2 as

q1 = S1 R^(-1) qL

You can compute the second part q2 similarly, using M2 = the ones-complement of M1 and qR in place of M1 and qL, respectively.

Throughout what I wrote, you would compute a product such as L q not as a matrix-vector product, but simply by multiplying each element of q by the corresponding lambda_i. That is, L q is an element-by-element product of two vectors, diag(L) and q.

Please check the equations, since some browsers may not display subscripts, etc. correctly. For example, Firefox 3.16 does not show the inverse ("-1") in the equation for q1 correctly, but IE does.
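The recipe above can be sketched in numpy using only the eigenvalue vector, never a full diagonal matrix (the test matrix, xi, qL, and qR are made up; the variable names mu1, S1, etc. follow the post):

```python
import numpy as np

# Made-up test data, with real eigenvalues as in the example above.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])
lam, R = np.linalg.eig(A)   # eigenvalues as a 1-D vector, eigenvectors in R
xi = 0.5
qL = np.array([1.0, 0.0])
qR = np.array([0.0, 1.0])

# mu1 holds the diagonal of M1: 1 where lambda_i > xi, 0 where lambda_i < xi.
mu1 = (lam > xi).astype(float)
mu2 = 1.0 - mu1             # M2 = the ones-complement of M1

# S1 = R.M1: scale column j of R by mu1[j]; broadcasting does it in O(n^2).
S1 = R * mu1
S2 = R * mu2

# Instead of forming R^(-1) explicitly, solve R y = q for y.
q1 = S1 @ np.linalg.solve(R, qL)
q2 = S2 @ np.linalg.solve(R, qR)
q = q1 + q2
print(q)
```

Using `solve` rather than an explicit inverse also matches the advice about A.x = b later in this thread.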
diedro
Beginner
Thanks a lot, but what I am doing is a very simple code to use the LAPACK libraries in Fortran 95 and to use matrix notation.

This is because, after that, I will use the same source code for a more complex problem.
If I had L, I could compute Q with some matmul calls. This is the main reason why I am asking for a different MKL-LAPACK routine, to compute L as

L = [ a11   0 ]
    [   0 a22 ]

and not as

L = [ a11 ]
    [ a22 ]

What do you think about it?
mecej4
Honored Contributor III
Many procedures of mathematics, in particular matrix operations, are inefficient if implemented on a computer in the most direct and elementary way. For example, expressing the solution of A.x = b as x = A^(-1) b is fine in a mathematics book or a mathematical derivation. However, computing the inverse explicitly and then multiplying the vector b by it takes twice the computational effort and often gives less accurate results than Gaussian elimination with partial pivoting.

Multiplying two diagonal matrices of size n x n takes O(n) operations if done right, and O(n^3) operations if the full matrices are used in a MATMUL call.
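The cost difference is easy to see in a sketch (numpy, with made-up diagonal data): the elementwise product of the two diagonal vectors gives the same n nonzero entries that the full n x n matrix product spends O(n^3) multiply-adds producing.

```python
import numpy as np

n = 4
d1 = np.arange(1.0, n + 1.0)   # diagonals stored as vectors: [1, 2, 3, 4]
d2 = np.arange(2.0, n + 2.0)   # [2, 3, 4, 5]

# O(n): element-by-element product of the two diagonal vectors.
diag_fast = d1 * d2

# O(n^3): build the full matrices and multiply them.
full = np.diag(d1) @ np.diag(d2)

# Same diagonal either way.
print(np.allclose(np.diag(full), diag_fast))
```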

Please read a book such as Golub and Van Loan's Matrix Computations. There are also many good articles on this topic on the Web.

For these reasons, I think that I would be doing you a disservice by telling you how to form a diagonal matrix from a vector containing the main diagonal. Not only is that trivial to do, but doing it is a temptation that I wish to help you avoid.
diedro
Beginner
Hi,
I'm sorry for the delay. So what do you suggest for computing
Q = 0.5*R*(Id+sign(L-xi*Id))*iR*QL + 0.5*R*(Id-sign(L-xi*Id))*iR*QR
Could I use some LAPACK routines, or should I solve it another way?
Thanks a lot