
## Help Using Lapack for Getting Least Squares Fit Polynomial Coefficients

Hello all,

I am a first-time MKL user trying to use the library to fit a third-order polynomial in x and y to data f(x,y). The same approach works in Python, so I believe the algorithm to be conceptually sound.

I'm trying to use LAPACKE_dgelsy but my program dies whenever it's called, and I'm sure my function arguments are incorrect.

I created my A matrix to be 10 x 60,000, with each of the 10 rows representing a coefficient for {1, x, y, x^2, xy, y^2, ...} at each index of data, and B = f(x,y) (B is size 1 x 60,000). dgelsy only takes a single pointer, so I've had "a" take the form of the array:

`double * a = new double[n * 10];`

and I just concatenated each row of data one after the other into this array. I'm under the impression that dgelsy's argument values "LAPACK_ROW_MAJOR","m", "n", and "nrhs" will order the data in a more typical A*x=B, m x n, matrix form for all the calculations (at least conceptually, not in actual memory). Do I have the dimensions and input form correct in this?

The rest of the code looks like this so far:

```cpp
lapack_int matrixLayout = LAPACK_ROW_MAJOR;
lapack_int n = 60000;
lapack_int lda = n;
lapack_int ldb = n;
lapack_int m = 10;
lapack_int nrhs = n;
int *jpvt = new int; // No idea what this is
lapack_int rcond = -1; // This either
int *rank = new int; // This is supposed to be an output variable, how should I initialize it?
int LapackResult = LAPACKE_dgelsy(matrixLayout, m, n, nrhs, a, lda, b, ldb, jpvt, rcond, rank);
```

It crashes the program at the last line obviously. I've looked at all the documentation and I can't figure out what it wants for the arguments: jpvt, rcond, and rank. Could anyone give me more info about what these mean?

Finally, if/when I do get dgelsy to work, how can I interpret the output (i.e., the rank pointer?) to find the coefficients of my fitted polynomial?

Sorry if I'm coming at this completely wrong. I've gotten the IPP library to work very well for me so I've been taking some of the things I learned there and trying to apply them here to no avail.

4 Replies

---

**Reply 1 (Black Belt):**

I gather that you have 60000 "observations" to which you wish to fit a 3rd-order polynomial in x and y, i.e., m = 60000 and n = 1+2+3+4 = 10 model parameters. The regressor matrix A should be of size m × n, with m > n, for the overdetermined least-squares problem.

1. You have the row and column sizes interchanged, even at the conceptual level.

2. GELSY is suited for obtaining the minimum-norm x for the underdetermined problem. You should consider using GELS instead, which is intended for minimizing the norm of the error A.x - b.

3. You are using allocated arrays as arguments where a pointer to a scalar is all that is required. While this is not necessarily harmful, it indicates uncertainty about the interface and the possibility of errors in the argument list.

4. There is a pair of examples of using lapacke_dgels() in the MKL examples. Start with them and adapt them for your problem.

---

**Reply 2 (Beginner, original poster):**

Thanks a bunch for the response!

You're right about point 1, I'm changing it now so that m=60,000 and n = 10.

> I gather that you have 60000 "observations" to which you wish to fit a 3rd order polynomial in x and y, i.e., m = 60000 and n=1+2+3+4=10 model parameters.

Yes, I have 60,000 data points (f(x,y)) at locations (x,y). Each component of these observations is split into their own array of length 60,000. My matrix A is (now) size m x n.

n=10 comes from the desired fitting equation's coefficients:

f(x,y) = a1*1 + a2*x + a3*y + a4*x^2 + a5*xy + a6*y^2 + a7*x^3 + a8*x^2*y + a9*x*y^2 + a10*y^3

2. I will look into it, thanks.

3. I am aware that those arguments are incorrect. Could you point me to where I could find how to get those scalar values? Probably in the examples, I assume.

4. I will take a look at the example for GELS; I don't have any examples for GELSY, so maybe I will have more luck with this.

---

**Reply 3 (Black Belt):**

Here is code using LAPACKE_dgels(). I faked the "data" by calculating the values of f(x,y) over a rectangle from an assumed polynomial, and multiplying the resulting values by (1 + a small random perturbation). The program fits the polynomial and prints out the original and fitted values of the polynomial coefficients.

```c
#include <stdlib.h>
#include <stdio.h>
#include "mkl_lapacke.h"

#define NX 100
#define NY 100
#define N 10
#define NRHS 1
#define LDA N
#define LDB NRHS

int main() {
    MKL_INT m = NX*NY, n = N, nrhs = NRHS, lda = LDA, ldb = LDB, info;
    int i, j, k;
    double *A, *b, x, y;
    double c[] = {1.5, 2.1, -2.7, -3.1, 4.5, -6.7, 0.8, -0.7, 4.1, -3.7};
    A = (double *)malloc(m*n*sizeof(double));
    b = (double *)malloc(m*sizeof(double));
    k = 0;
    for (i = 0; i < NX; i++) {
        x = (i+1)*0.01;
        for (j = 0; j < NY; j++) {
            y = (j+1)*0.02;
            A[k*10]   = 1;
            A[k*10+1] = x;     A[k*10+2] = y;
            A[k*10+3] = x*x;   A[k*10+4] = x*y;   A[k*10+5] = y*y;
            A[k*10+6] = x*x*x; A[k*10+7] = x*x*y; A[k*10+8] = x*y*y; A[k*10+9] = y*y*y;
            b[k] = ( c[0] +
                     c[1]*x     + c[2]*y     +
                     c[3]*x*x   + c[4]*x*y   + c[5]*y*y   +
                     c[6]*x*x*x + c[7]*x*x*y + c[8]*x*y*y + c[9]*y*y*y
                   ) * (0.9995 + 0.001*rand()/(double)RAND_MAX);  /* random noise added */
            k++;
        }
    }
    printf( "Fitting cubic polynomial in x,y using LAPACKE_dgels\n" );
    /* Solve the overdetermined equations A*X = B */
    info = LAPACKE_dgels( LAPACK_ROW_MAJOR, 'N', m, n, nrhs, A, lda,
                          b, ldb );
    /* Check for errors */
    if( info > 0 ) {
        printf( "Error return from DGELS, info = %d\n", (int)info );
        exit( 1 );
    }
    /* Print least squares solution: the first n entries of b now hold the coefficients */
    printf( "\nLeast squares solution\n i  Original Fitted\n" );
    for (i = 0; i < n; i++) printf("%2d %8.5f %8.5f\n", i, c[i], b[i]);
    free(A); free(b);
    return 0;
}
```

I have slighted the requirements of code efficiency, favoring clarity instead.

---

**Reply 4 (Beginner, original poster):**

Got it working using dgels and following the LAPACKE_dgels example as well as yours, thanks for the help! 