projection operation in MKL
https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/projection-operation-in-MKL/m-p/1013374#M19330
<P>Dear all,</P>
<P>For a projection operation, I need to compute</P>
<P>F = ( I - Mf*Vfr*Vfr^T ) * F</P>
<P>where</P>
<P>Mf is a sparse matrix of size sz_f x sz_f,</P>
<P>Vfr is a vector of size sz_f x 1, and</P>
<P>F is a block of vectors of size sz_f x P, stored as tfBlock2 in the code. tfBlock3 is used to store intermediate results.</P>
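<P>For a quick sanity check, the projection can also be evaluated in one shot with dense arithmetic; a minimal NumPy sketch, using small placeholder sizes and random data rather than the actual Mf, Vfr, and F:</P>

```python
import numpy as np

# Placeholder sizes and random data; Mf, Vfr, F here are stand-ins,
# not the actual arrays from the Fortran code.
rng = np.random.default_rng(0)
sz_f, P = 6, 3

Mf  = np.diag(rng.random(sz_f))   # stand-in for the sparse matrix Mf
Vfr = rng.random((sz_f, 1))       # vector of size sz_f x 1
F   = rng.random((sz_f, P))       # block of P vectors

# One-shot dense evaluation of F = (I - Mf*Vfr*Vfr^T) * F
F_proj = (np.eye(sz_f) - Mf @ Vfr @ Vfr.T) @ F
print(F_proj.shape)               # (6, 3), i.e. sz_f x P
```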
<P>I implemented this in the following steps:</P>
<P>1.) compute Vfr^T*F using dgemm and store the result in r_dummy2:</P>
<PRE class="brush:fortran;">transa = 't'
transb = 'n'
a = 1.0_dbl
b = 0.0_dbl
r_dummy2 = 0.0_dbl
call dgemm( transa, transb, 1, P, sz_f, a, vfr_r, sz_f, &
            tfBlock2, sz_f, b, r_dummy2, sz_f )</PRE>
<P>2.) the matrix-vector product Mf*Vfr was computed and stored (as Mfvfr_r) in a previous step, so I re-use it here:</P>
<PRE class="brush:fortran;">transa = 'n'
call dgemm( transa, transb, sz_f, P, 1, a, Mfvfr_r, sz_f, &
            r_dummy2, P, b, tfBlock3, sz_f )</PRE>
<P>3.) do the block subtraction with a simple loop (I guess I could also use a level 3 BLAS routine with a P x P identity matrix for this):</P>
<PRE class="brush:fortran;">v_scalar = -1.0_dbl
do k = 1, P
   call daxpy( sz_f, v_scalar, tfBlock3(:,k), incx, tfBlock2(:,k), incy )
end do</PRE>
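<P>The three steps above amount to F = F - (Mf*Vfr)*(Vfr^T*F); as a cross-check, the same sequence can be mirrored in NumPy and compared against the one-shot projection (placeholder data, not the actual tfBlock2/tfBlock3 arrays):</P>

```python
import numpy as np

# Placeholder data; not the actual arrays from the Fortran code.
rng = np.random.default_rng(1)
sz_f, P = 6, 3
Mf  = np.diag(rng.random(sz_f))
Vfr = rng.random((sz_f, 1))
F   = rng.random((sz_f, P))

# step 1: r_dummy2 = Vfr^T * F            -> shape (1, P)
r_dummy2 = Vfr.T @ F

# step 2: tfBlock3 = (Mf*Vfr) * r_dummy2  -> shape (sz_f, P)
Mfvfr = Mf @ Vfr
tfBlock3 = Mfvfr @ r_dummy2

# step 3: the daxpy loop, i.e. column-wise subtraction
F_steps = F - tfBlock3

# compare with the one-shot projection
F_direct = (np.eye(sz_f) - Mf @ Vfr @ Vfr.T) @ F
print(np.allclose(F_steps, F_direct))     # True
```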
<P>However, I ran into some problems. I checked the code, and up to this point the results agree well with MATLAB: for instance, before performing this step the norms of the columns of F (as defined above) are almost the same, with a relative error between MATLAB and MKL on the order of 1e-13.</P>
<P>However, after this projection the norms start to differ. Sample output for the norms of the columns of the resulting matrix block F is given below.</P>
<P>MATLAB, for a block size of 3</P>
<P>col1 9.9209570763674840e-05<BR />
col2 6.5649761997653081e-05<BR />
col3 4.2422350151641556e-05</P>
<P>Fortran code with MKL with the same block size</P>
<P>col1 9.9300288425328761E-05<BR />
col2 6.6337064666627222E-05<BR />
col3 4.2422353749207066E-05</P>
<P>The last vector has the smallest error, while the second vector norm is off by about 1%. Can someone shed some light on the above and give me pointers on what could be going wrong? Any help is greatly appreciated.</P>
<P>My compiler options are given as</P>
<P>FC = ifort -ipo -unroll -fp-model strict -fp-stack-check -check all -fpe0 \<BR />
-traceback -ftrapuv -O2 -prec-div -prec-sqrt -debug all -fp-speculation=off</P>
<P>Best regards,</P>
<P>Umut</P>
Tue, 13 May 2014 17:04:10 GMT | utab

Do you use the same computing
https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/projection-operation-in-MKL/m-p/1013375#M19331
<P>Do you use the same computing sequence in MATLAB and MKL?</P>
<P>Can you attach the data that you feed to daxpy, namely the arrays tfBlock2 and tfBlock3, as well as the values of sz_f, P, incx, incy, etc.?</P>
Tue, 13 May 2014 18:46:06 GMT | Zhang_Z_Intel