I want to compute the SVD of two fairly small square matrices (of size 32x32 or 16x16). I use the following code to compute SVD:
int iRows = m1; //=32
int iCols = m2; //=16
int INFO = 0;   // LAPACK status output (was missing from the fragment)
char JOBU = 'A';
char JOBVT = 'A';
float* A_Z1 = new float[iRows*iRows];
float* U_Z1 = new float[iRows*iRows];
float* VT_Z1 = new float[iRows*iRows];
float* S_Z1 = new float[iRows];
float* A_Z2 = new float[iCols*iCols];
float* U_Z2 = new float[iCols*iCols];
float* VT_Z2 = new float[iCols*iCols];
float* S_Z2 = new float[iCols];
int LWORK_Z1 = max(1, 5*iRows);
float* WORK_Z1 = new float[LWORK_Z1];
int LWORK_Z2 = max(1, 5*iCols);
float* WORK_Z2 = new float[LWORK_Z2];
// Compute SVD of vZ1a (copied into column-major storage first)
for (int i = 0; i < iRows; ++i)
    for (int j = 0; j < iRows; ++j)
        A_Z1[j + i*iRows] = vZ1a(j, i);
sgesvd(&JOBU, &JOBVT, &iRows, &iRows, A_Z1, &iRows, S_Z1, U_Z1, &iRows, VT_Z1, &iRows, WORK_Z1, &LWORK_Z1, &INFO);
// Compute SVD of vZ2a
for (int i = 0; i < iCols; ++i)
    for (int j = 0; j < iCols; ++j)
        A_Z2[j + i*iCols] = vZ2a(j, i);
sgesvd(&JOBU, &JOBVT, &iCols, &iCols, A_Z2, &iCols, S_Z2, U_Z2, &iCols, VT_Z2, &iCols, WORK_Z2, &LWORK_Z2, &INFO);
Later in my application I use the left and right singular vectors in a machine learning algorithm, and I have routines for measuring the accuracy of the results.
I linked the same program with ATLAS+CLAPACK and MKL.
Surprisingly, I get very inaccurate results using MKL (the PSNR with MKL is 53, while with CLAPACK it is 73!).
Note that I am using the same code for both; all I do is link against different libraries.
What's wrong here? I don't see a difference between the LAPACK function declarations (obviously there shouldn't be any), so my function call should be correct. I also take care to pass the transpose of my input matrix, since sgesvd expects column-major ordering.
I compiled with both gcc and the Intel compiler, with the same results. I use the sequential libraries just to be safe.
I hope this is one of those stupid mistakes that I make because it has been bugging me for quite some time :)
The algorithm iterates and computes the SVD several times. The results of each SVD are used in subsequent iterations, so the error accumulates. Unfortunately the code and the data are large, and there are dependencies on the Eigen 3 and Boost libraries.
If you are willing to take the time and compile it I can provide the code.
I am using the latest MKL 11 included in Cluster Studio.
Rather than providing your full code, take the code fragments that you posted in the first post of this thread and flesh them out into a complete program. All you have to do is add the missing declarations to the code, provide a data file containing the matrix data along with code to read it, and show sample correct/incorrect results.
Please check your post to make sure that it is complete. For instance, you refer to the value of PSNR, but we have no idea as to what PSNR is, since the code does not contain that as a variable name.
Sorry for replying late. It took some time to prepare a small app for you to test.
I attached the test application.
A few considerations:
The only dependency is Eigen 3 library. Download the latest from http://eigen.tuxfamily.org/index.php?title=Main_Page
You don't need to compile Eigen; it is a header-only template library. You just need to include it, which is already done in my makefile.
The makefile points to /usr/local/include/eigen3.
Run the test program without any arguments: $./compressor
You should see an output like: "Recons. error is: 53.22216"
On my system it takes less than a minute to run (Xeon 8 cores with HT).
Now to compare MKL and CLAPACK:
Open the makefile. Find these two lines:
SLF_SVD_CLAPACK = 0
SLF_SVD_MKL = 1
To switch libraries, just swap the 0 and 1 in these two lines; by default you link against MKL (as above). Also change "LAPA_INCLUDES" and "LAPA_LIBDIR" to point to the CLAPACK installation on your system.
The SVD is computed in function Train(...) where I call ComputeSVD() and ComputeDoubleSVD()
ComputeDoubleSVD() computes the SVD of a matrix and returns the product of left and right singular vectors.
By the way, PSNR is Peak Signal-to-Noise Ratio. The higher, the better.
I get ~73 PSNR for CLAPACK and 53 for MKL.
The difference is huge. Note that PSNR is computed on a logarithmic scale. http://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Please report the numbers you get or any problems you find with my code. It is very unlikely that Intel sacrifices so much accuracy for performance. I hope I am doing something wrong.
Thanks for providing the reproducer. It does look like an issue in MKL. This is now escalated to the MKL development team. I'll update here whenever there is progress made. Thanks!
Hi, Ehsan M.!
First of all, why do you think 53 is a wrong answer? According to http://www.ijesit.com/Volume%202/Issue%201/IJESIT201301_08.pdf, 53 is a good result.
Also, could you tell us which definition of PSNR you use? Following the paper linked above (see equation (10)), coefficients below a certain epsilon should be set to zero, and I could not find such thresholding in your code.
There is no absolute right or wrong PSNR; the values are only meaningful in comparison. I am not working with images, so the paper you posted is not relevant here: my input data is not an image. The application is computer-graphics related and my data is at least 4D. The "Max" value in http://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio is set to the maximum intensity of the light sources in a scene, and when this value changes, the PSNR changes too.
The bug in MKL's sgesvd is fixed now.