Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

an accuracy problem between MKL and MATLAB

shaopeng_z_
Beginner
755 Views

Dear all,

Recently, I ran a program using the Intel MKL sparse format and MATLAB, respectively, and found an accuracy problem in the results. Let me briefly summarize what I did.

First, I use the MKL library to compute K*X = KX, as follows:

The full-format symmetric matrix K is converted to the sparse CSC format (COPPTR, ROWIND, VALUES; or equivalently ROWIND, VALUES, pointer_B, pointer_E):

program exe101
implicit none
integer :: i
integer, parameter :: DP = 8
integer :: nRows, nNonzero
integer, allocatable :: COPPTR(:), ROWIND(:), pointer_B(:), pointer_E(:)
real(kind=DP), allocatable :: VALUES(:)
real(kind=DP), allocatable :: X(:), KX(:)
real(kind=DP) :: alpha, beta
character(6) :: matdescra

open(unit=10, file='nRows.txt')
read(10,*) nRows
close(10)

open(unit=10, file='nNonzero.txt')
read(10,*) nNonzero
close(10)

! column pointers (nRows+1 entries, one-based)
allocate(COPPTR(nRows+1))
open(unit=10, file='COPPTR.txt')
read(10,*) COPPTR
close(10)

! row indices of the non-zeros
allocate(ROWIND(nNonzero))
open(unit=10, file='ROWIND.txt')
read(10,*) ROWIND
close(10)

! non-zero values
allocate(VALUES(nNonzero))
open(unit=10, file='VALUES.txt')
read(10,*) VALUES
close(10)

allocate(X(nRows))
allocate(KX(nRows))
X = 1.0_DP

alpha = 1.0_DP
beta  = 0.0_DP
matdescra = 'SLNF'   ! symmetric, lower triangle stored, non-unit diagonal, one-based (Fortran) indexing

! split COPPTR into the begin/end pointer arrays expected by mkl_dcscmv
allocate(pointer_B(nRows))
allocate(pointer_E(nRows))
do i = 1, nRows
  pointer_B(i) = COPPTR(i)
  pointer_E(i) = COPPTR(i+1)
end do

! KX = alpha*K*X + beta*KX
call mkl_dcscmv('N', nRows, nRows, alpha, matdescra, VALUES, ROWIND, pointer_B, pointer_E, X, beta, KX)

open(unit=10, file='KX.txt')
write(10,'(1X,1F40.21)') KX
close(10)
end program exe101
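For reference, the mkl_dcscmv call above computes KX = alpha*K*X + beta*KX from the CSC arrays. A minimal plain-Python sketch of a general one-based CSC matrix-vector product, using toy data rather than the matrix from this post (note that with matdescra = 'SLNF' MKL would additionally mirror the stored lower triangle, which this sketch does not do):

```python
# Toy CSC matrix-vector product with one-based indices, mirroring the
# (VALUES, ROWIND, pointer_B, pointer_E) layout used in the Fortran code.
def cscmv(n, values, rowind, pointer_b, pointer_e, x):
    y = [0.0] * n
    for j in range(n):                                       # loop over columns
        for k in range(pointer_b[j] - 1, pointer_e[j] - 1):  # entries of column j
            y[rowind[k] - 1] += values[k] * x[j]
    return y

# 2x2 example matrix [[4, 1], [1, 3]] stored column by column
values    = [4.0, 1.0, 1.0, 3.0]
rowind    = [1, 2, 1, 2]   # one-based row indices
pointer_b = [1, 3]         # start of each column in values (one-based)
pointer_e = [3, 5]         # one past the end of each column
print(cscmv(2, values, rowind, pointer_b, pointer_e, [1.0, 1.0]))  # [5.0, 4.0]
```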

Output:

KX=

1999999999.999993562698364257812
-0.000006198883056640625
2000000000.000000000000000000000
0.000000000000000000000
0.000000000000000000000
0.000000000000000000000
0.000002384185791015625
0.000001907348632812500

Second, I use MATLAB to compute K*X = KX.

Output:

1999999999.99999
-6.19888305664063e-06
2000000000.00000
0
0
0
2.14576721191406e-06
1.90734863281250e-06

where the inputs are

K =

5.414213562373090e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 -2.000000000000000e+009 0.000000000000000e+000 -7.071067811865480e+008 -7.071067811865480e+008
0.000000000000000e+000 3.414213562373090e+009 0.000000000000000e+000 -2.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 -7.071067811865480e+008 -7.071067811865480e+008
0.000000000000000e+000 0.000000000000000e+000 4.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 -2.000000000000000e+009 0.000000000000000e+000
0.000000000000000e+000 -2.000000000000000e+009 0.000000000000000e+000 2.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000
-2.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 2.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000
0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 2.000000000000000e+009 0.000000000000000e+000 -2.000000000000000e+009
-7.071067811865480e+008 -7.071067811865480e+008 -2.000000000000000e+009 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 2.707106781186550e+009 7.071067811865480e+008
-7.071067811865480e+008 -7.071067811865480e+008 0.000000000000000e+000 0.000000000000000e+000 0.000000000000000e+000 -2.000000000000000e+009 7.071067811865480e+008 2.707106781186550e+009


X =
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0

I want to know why the two results are different, and how to improve the accuracy of a sparse (or dense) matrix-vector product.

Thank you,

5 Replies
mecej4
Honored Contributor III
Such small discrepancies are to be expected, given that the result has some elements of the order of 2 billion and some of the order of a millionth. The ratio of the two is about the same as the ratio of 1.0 to machine epsilon in double precision, and we know well that the subtractive cancellation that happens when the matrix-vector product is computed leads to precision loss.
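This can be seen directly by summing one row of K in double precision; for row 7 of the matrix in the original post, the underlying real numbers cancel exactly, but the stored doubles do not:

```python
# Row 7 of K dotted with X = (1,...,1): exact cancellation in real arithmetic,
# but the doubles leave a residual on the order of
# |largest term| * machine epsilon (~2.7e9 * 2.2e-16, i.e. around 1e-6).
row7 = [-7.071067811865480e8, -7.071067811865480e8, -2.000000000000000e9,
        0.0, 0.0, 0.0, 2.707106781186550e9, 7.071067811865480e8]
s = sum(row7)
print(s)                                    # tiny but nonzero (the OP saw 2.38e-06)
print(abs(s) / max(abs(v) for v in row7))   # relative size is near machine epsilon
```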
jimdempseyatthecove
Honored Contributor III
mecej4, why then does the 7th entry vary by a much larger factor? (Was this a typo by the OP?) Jim Dempsey
mecej4
Honored Contributor III
Jim, the coefficients of the matrix are all of the form (a + b/sqrt(2)) x 10^9, where a and b are small integers (plus or minus 0, 1, 2, 4). The M-V product should give 2 x 10^9 in rows 1 and 3, and zero in the rest of the rows. Relative to the non-zero coefficients, a sum such as 2 x 10^-6 is about the same as 1 is relative to machine epsilon. The non-zero values that the OP obtained will be sensitive to the order of evaluation. Compared to the exact value (zero), the values given by MKL and Matlab are equally bad (or good, depending on the point of view). The values in the file values.txt are not all given to the full number of digits that double precision permits. Correcting them caused the output KX(4:8) to become exactly zero, leaving KX(2) as the only number that is small instead of being identically zero.
John_Campbell
New Contributor II
Given that kx(i) = K(i,:) * x(:), you could define vectors kx_16 and x_16 as real*16 and accumulate kx_16 over all available values of K(i,j). You would not need to store K, but merely obtain all non-zero values and accumulate their effect. You could then compare the errors of kx_matlab and kx_mkl against kx_16 and report the sum of the absolute differences; this would show which one is worse. As you would probably calculate this in a Fortran environment, you would need to be careful to extract kx_matlab to the precision available, and to identify possible sources of error in the comparison and when obtaining the values of K. The results might be an interesting study. John
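John's higher-precision reference can be sketched with exact rational arithmetic standing in for real*16 (a minimal sketch using row 2 of the matrix; fractions.Fraction gives an error-free sum of the same double-precision inputs, against which the ordinary double sum can be compared):

```python
from fractions import Fraction

# Non-zero terms of row 2 of K dotted with X = (1,...,1).
row2 = [3.414213562373090e9, -2.000000000000000e9,
        -7.071067811865480e8, -7.071067811865480e8]
naive = sum(row2)                              # ordinary double summation
exact = float(sum(Fraction(v) for v in row2))  # exact sum, rounded once at the end
print(naive, exact, abs(naive - exact))        # the reference isolates the rounding error
```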
Bernard
Valued Contributor I
I suppose that catastrophic cancellation could be blamed for the slightly inaccurate results. It mainly occurs when two values that are nearly equal are subtracted, or added with opposite signs. Moreover, the exact implementations of the algorithms used by Matlab and MKL can also differ, leading to a minuscule variation in the final result.
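The order-of-evaluation sensitivity is easy to demonstrate: summing the same doubles in a different order, or with compensated summation, can shift the residual by a few last bits (a sketch using the non-zero terms of row 1; the three printed values may or may not coincide, but all sit within rounding error of 2e9):

```python
import math

# Non-zero terms of row 1 of K dotted with X = (1,...,1); exact value is 2e9.
row1 = [5.414213562373090e9, -2.000000000000000e9,
        -7.071067811865480e8, -7.071067811865480e8]
print(sum(row1))                  # left-to-right double summation
print(sum(list(reversed(row1))))  # same terms, opposite order
print(math.fsum(row1))            # correctly rounded sum of the same doubles
```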