Intel® oneAPI Math Kernel Library

Results of LAPACKE_dgesvd

Gharbi__Mokhtar
Beginner

Hi MKL gurus,

I have in my code a call to the LAPACKE_dgesvd function. This code is covered by autotests. During a compiler migration we decided to upgrade MKL as well, from 11.3.4 to 2019.0.5.

And the tests went red. After a deep investigation I found that this function no longer returns the same U & V matrices.

I extracted the code and ran it in a separate environment/project, with the same observation: the first column of U and the first row of V have the opposite sign.

Could you please tell me what I'm doing wrong there, or how I should use the new version to get the old results?

 

I attached a simple project that makes it easy to reproduce the issue. Here is the code if you are not using Visual Studio:

// MKL.cpp : This file contains the 'main' function. Program execution begins and ends there.
//

#include <iostream>
#include <algorithm>
#include <mkl.h>

int main()
{
    const int rows(3), cols(3);
    double covarMatrix[rows * cols] = {  0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
                                        -0.6088405718211041,  0.9992441421012869, -0.3357678733652218,
                                        -0.4935146797825398, -0.3357678733652218,  0.9992441421012761 };
    double U[rows * rows] = { -1, -1, -1,
                              -1, -1, -1,
                              -1, -1, -1 };
    double V[cols * cols] = { -1, -1, -1,
                              -1, -1, -1,
                              -1, -1, -1 };
    double superb[std::min(rows, cols) - 1];   // workspace for unconverged superdiagonal elements
    double eigenValues[std::max(rows, cols)];  // receives the singular values (only min(rows, cols) are written)

    MKL_INT info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A',
                                  rows, cols, covarMatrix, cols, eigenValues, U, rows, V, cols, superb);

    if (info > 0)
    {
        std::cout << "not converged!\n";
        return 1;   // bail out if the SVD did not converge
    }

    std::cout << "U\n";
    for (int row(0); row < rows; ++row)
    {
        for (int col(0); col < rows; ++col)
            std::cout << U[row * rows + col] << " ";
        std::cout << std::endl;
    }

	std::cout << "V\n";
	for (int row(0); row < cols; ++row)
	{
		for (int col(0); col < cols; ++col)
			std::cout << V[row * rows + col] << " ";
		std::cout << std::endl;
	}
       
    std::cout << "Converged!\n";
}

Here are some more numerical details:

A = 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
      -0.6088405718211041, 0.9992441421012869, -0.3357678733652218, 
      -0.4935146797825398, -0.3357678733652218, 0.9992441421012761

Results:

11.3.4

U
-0.765774   -0.13397     0.629
 0.575268   -0.579935    0.576838
 0.2875      0.803572    0.521168

V
-0.765774    0.575268    0.2875
-0.13397    -0.579935    0.803572
 0.629       0.576838    0.521168

2019.0.5 & 2020.1.216

U
 0.765774   -0.13397     0.629
-0.575268   -0.579935    0.576838
-0.2875      0.803572    0.521168

V
 0.765774   -0.575268   -0.2875
-0.13397    -0.579935    0.803572
 0.629       0.576838    0.521168

  
I tested using SciPy and the result is identical to the one from version 11.3.4.

from scipy import linalg
from numpy import array

A = array([[ 0.9992441421012894, -0.6088405718211041, -0.4935146797825398],
           [-0.6088405718211041,  0.9992441421012869, -0.3357678733652218],
           [-0.4935146797825398, -0.3357678733652218,  0.9992441421012761]])
print(A)
u,s,vt,info = linalg.lapack.dgesvd(A)
print(u)
print(s)
print(vt)
print(info)
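
As a further check, the two result sets differ only by the sign of the first singular vector. Here is a small NumPy sketch of that comparison (illustrative only; the values are copied from the U tables above):

import numpy as np

# U as printed by MKL 11.3.4 and by MKL 2019.0.5 (copied from the tables above)
U_11_3_4 = np.array([[-0.765774, -0.13397,   0.629   ],
                     [ 0.575268, -0.579935,  0.576838],
                     [ 0.2875,    0.803572,  0.521168]])
U_2019   = np.array([[ 0.765774, -0.13397,   0.629   ],
                     [-0.575268, -0.579935,  0.576838],
                     [-0.2875,    0.803572,  0.521168]])

# Negating the first column of the 11.3.4 result reproduces the 2019.0.5 result.
D = np.diag([-1.0, 1.0, 1.0])
print(np.allclose(U_11_3_4 @ D, U_2019))   # prints True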

 

 

Thanks for your help and best regards

Mokhtar
 

2 Replies
Gennady_F_Intel
Moderator

Thanks, Mokhtar, for the report. We will check the case ASAP.

Gennady_F_Intel
Moderator

Mokhtar, 

Please see below the answer from the LAPACK experts:

The singular value decomposition is not unique. For example, if we have an SVD (i.e. a set of matrices U, S, V) such that A = U*S*V^T, then the set of matrices (-U, S, -V) is also an SVD, because (-U)*S*(-V)^T = U*S*V^T = A. More generally, if D is a diagonal matrix whose diagonal entries are equal to -1 or 1, then the set of matrices (U*D, S, V*D) is also an SVD, because (U*D)*S*(V*D)^T = U*(D*S*D)*V^T = U*S*V^T = A (D and S are diagonal, so D*S*D = S*D*D = S, since D*D = I).

For this reason it is not a good idea to validate an SVD by comparing two sets of matrices element-wise. The LAPACK Users' Guide, like many other publications, recommends checking the following conditions for the computed decomposition:

  1. || A*V - U*S || / || A || should be small enough.
  2. || U^T*U - I || should be close to zero.
  3. || V^T*V - I || should be close to zero.
  4. All diagonal entries of S must be positive and sorted in decreasing order.

The error bounds for all of the expressions above can be found at https://www.netlib.org/lapack/lug/node97.html.

So both MKL versions mentioned in this post return singular values and singular vectors that satisfy all four error bounds. Because of this, and because the SVD is not unique, both results are correct. The change of sign in the first singular vectors happens because, for very small matrices, a different and faster method for the reduction to bidiagonal form is now used.
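
For completeness, here is a minimal sketch of those four checks in NumPy/SciPy (the helper name check_svd and the tolerance are illustrative choices, not part of MKL or LAPACK; the rigorous error bounds are given at the netlib.org link above):

import numpy as np
from scipy import linalg

def check_svd(A, U, s, Vt, tol=1e-12):
    # 1. || A*V - U*S || / || A || should be small
    r1 = np.linalg.norm(A @ Vt.T - U @ np.diag(s)) / np.linalg.norm(A)
    # 2. || U^T*U - I || should be close to zero
    r2 = np.linalg.norm(U.T @ U - np.eye(U.shape[1]))
    # 3. || V^T*V - I || should be close to zero
    r3 = np.linalg.norm(Vt @ Vt.T - np.eye(Vt.shape[0]))
    # 4. singular values must be positive and sorted in decreasing order
    ordered = np.all(s > 0) and np.all(np.diff(s) <= 0)
    return r1 < tol and r2 < tol and r3 < tol and ordered

# The covariance matrix from the post; the (U, s, Vt) from either MKL version passes.
A = np.array([[ 0.9992441421012894, -0.6088405718211041, -0.4935146797825398],
              [-0.6088405718211041,  0.9992441421012869, -0.3357678733652218],
              [-0.4935146797825398, -0.3357678733652218,  0.9992441421012761]])
u, s, vt, info = linalg.lapack.dgesvd(A)
print(check_svd(A, u, s, vt))   # prints True

Validating the decomposition this way keeps an autotest robust to harmless sign differences between library versions.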
