Diagonal matrices are rarely stored as full matrices, since that would waste a lot of memory. The routine GEEV returns two 1-D arrays containing the real and imaginary parts of the eigenvalues -- a real matrix, even a symmetric one, may have complex eigenvalues. It is trivial to place these eigenvalues on the diagonal of a zeroed-out square complex matrix.
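As a minimal sketch (the 2×2 matrix A below is just a stand-in), NumPy's eig, which wraps LAPACK's *GEEV, shows the two-array eigenvalue convention and how trivial the diagonal placement is:

```python
import numpy as np

# A real, nonsymmetric matrix whose eigenvalues are complex.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# np.linalg.eig wraps LAPACK *GEEV; it returns the eigenvalues already
# combined into one complex array, whereas GEEV itself returns the real
# parts (wr) and imaginary parts (wi) as two separate 1-D arrays.
w, R = np.linalg.eig(A)
wr, wi = w.real, w.imag

# Placing the eigenvalues on the diagonal of a zeroed-out complex matrix:
Lambda = np.diag(wr + 1j * wi)
```

The eigenvalues of this rotation-like matrix are ±i, so both wr entries are zero, which is exactly the case where the two-array interface matters.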
Please state if you know anything more about the matrix A, and explain why you found the output of GEEV unsatisfactory.
I am answering purely from an algorithmic viewpoint, since I do not know the application domain and how the matrices relate to anything physical or conceptual.
Let A = R Λ R⁻¹ be the eigendecomposition of A, with the eigenvalues λ_i on the diagonal of Λ. Let M1 be a diagonal matrix with elements

m_i = 1 if λ_i > 0, and m_i = 0 if λ_i < 0
Let S1 = R·M1. You can compute column j of S1 by multiplying column j of R by m_j; because M1 is diagonal, no full matrix-matrix product is needed. Then you can compute the first part of the desired result q = q1 + q2 as

q1 = S1 R⁻¹ q_L
You can compute the second part q2 similarly, using M2 = I − M1 (the ones-complement of M1) in place of M1, and q_R in place of q_L.
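A short sketch of the whole splitting, under the assumption that A is real and diagonalizable; the matrix A and the vectors qL and qR below are hypothetical placeholders for the real data:

```python
import numpy as np

# Hypothetical stand-ins for the real problem data.
A = np.array([[2.0, 0.0],
              [0.0, -3.0]])
qL = np.array([1.0, 1.0])
qR = np.array([1.0, 1.0])

lam, R = np.linalg.eig(A)        # A = R Lambda R^-1
m1 = (lam > 0).astype(float)     # diag(M1): 1 where lam_i > 0, else 0
m2 = 1.0 - m1                    # diag(M2), the ones-complement of M1

# S1 = R.M1: scale column j of R by m1[j] -- no full matrix product.
S1 = R * m1
S2 = R * m2

Rinv = np.linalg.inv(R)
q1 = S1 @ (Rinv @ qL)            # part carried by the positive eigenvalues
q2 = S2 @ (Rinv @ qR)            # part carried by the negative eigenvalues
q = q1 + q2
```

Note that `R * m1` relies on NumPy broadcasting to do the column scaling in O(n²) work instead of an O(n³) matrix product.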
Throughout what I wrote, you would compute a product such as Λq not as a matrix-vector product, but simply by multiplying each element of q by the corresponding λ_i. That is, Λq is the element-by-element product of two vectors, q and diag(Λ).
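For instance, with hypothetical lam and q arrays:

```python
import numpy as np

lam = np.array([2.0, -3.0, 0.5])   # diag(Lambda), e.g. the wr array from GEEV
q = np.array([1.0, 2.0, 4.0])

Lq_fast = lam * q                  # element-by-element: O(n) work and storage
Lq_slow = np.diag(lam) @ q         # same result via the full diagonal matrix
```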
Please check the equations, since some browsers may not display subscripts, etc. correctly. For example Firefox 3.16 does not show the inverse ("-1") in the equation for q1 correctly, but IE does.
Many procedures of mathematics, matrix operations in particular, are inefficient if implemented on a computer in the most direct and elementary way. For example, expressing the solution of A·x = b as x = A⁻¹b is fine in a mathematics book or a mathematical derivation. However, computing the inverse explicitly and then multiplying b by it takes several times the computational effort of Gaussian elimination with partial pivoting, and often gives less accurate results.
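In NumPy terms, a sketch of the two approaches (the matrix A and vector b here are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Preferred: a factorization-based solve (LU with partial pivoting,
# LAPACK gesv under the hood).
x_good = np.linalg.solve(A, b)

# Discouraged: forming the inverse explicitly, then multiplying.
# More work, and usually less accurate on ill-conditioned problems.
x_bad = np.linalg.inv(A) @ b
```

On a well-conditioned 2×2 example the two answers agree, but the cost and accuracy gap grows with n and with the condition number of A.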
Multiplying two diagonal matrices of size n × n takes O(n) operations if done right, and O(n³) operations if the full matrices are formed and passed to a general MATMULT call.
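A small illustration, storing only the diagonals:

```python
import numpy as np

# Two diagonal matrices represented as 1-D arrays of their diagonals.
d1 = np.array([1.0, 2.0, 3.0])
d2 = np.array([4.0, 5.0, 6.0])

# O(n): multiply the stored diagonals element by element.
prod_diag = d1 * d2

# O(n^3): the same product via full n x n matrices and a general multiply.
prod_full = np.diag(d1) @ np.diag(d2)
```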
Please read a book such as Golub and Van Loan's Matrix Computations. There are also many good articles on this topic on the Web.
For these reasons, I think that I would be doing you a disservice by telling you how to form a diagonal matrix from a vector containing the main diagonal. Not only is that trivial to do, but doing it is a temptation that I wish to help you avoid.