Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Multi-dimensional function minimization

Vishnu
Novice
I have a non-linear function of many variables whose (local) minimum I want to find. Is there an appropriate routine in MKL for such a scenario? I can only find linear solvers and least-squares solvers; neither of them is what I want.
11 Replies
mecej4
Honored Contributor III

Not in MKL, but a number of packages/libraries are available, many as Fortran source: see http://plato.asu.edu/sub/nlounres.html .

Vishnu
Novice
@mecej4 Thanks for that link! It helps me decide which algorithm to use, depending on the properties of my problem. Any idea how GSL's minimization routines compare to those? I ask because GSL seems more recent than any of them: https://www.gnu.org/software/gsl/doc/html/multimin.html
mecej4
Honored Contributor III

You did not say which language you are using; GSL is awkward to use from Fortran, or if a C compiler other than GCC is used under Windows. Some of the algorithms, e.g., Fletcher-Reeves and Nelder-Mead, are quite old, and putting C wrappers around such old code does not make the algorithms work better. See https://www.lrz.de/services/software/mathematik/gsl/fortran/ .

You may find something that you can use at www.netlib.org/toms. On that page, search for "unconstrained".

If you simply want to solve a minimization problem a small number of times, the dedicated solvers packaged in AMPL and GAMS are worth considering. See www.ampl.com and https://www.gams.com .

Vishnu
Novice
I'm using Fortran. Yes, I did look into that Fortran interface to GSL, and have installed it. I'm considering the conjugate gradient or the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms, because they seem popular; GSL implements both. There also seems to be something called NLopt: https://nlopt.readthedocs.io/en/latest/. I'll look through both and decide.
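For reference, here is roughly how GSL's gradient-based multimin driver is set up in C++, pieced together from the GSL documentation; the two-variable quadratic objective, the starting point, the step size, and the tolerances below are made up purely for illustration:

#include <cstddef>
#include <cstdio>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_vector.h>

// Toy objective f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2 and its analytic gradient.
static double my_f(const gsl_vector *x, void *) {
    const double a = gsl_vector_get(x, 0) - 1.0;
    const double b = gsl_vector_get(x, 1) + 2.0;
    return a * a + 10.0 * b * b;
}
static void my_df(const gsl_vector *x, void *, gsl_vector *g) {
    gsl_vector_set(g, 0, 2.0 * (gsl_vector_get(x, 0) - 1.0));
    gsl_vector_set(g, 1, 20.0 * (gsl_vector_get(x, 1) + 2.0));
}
static void my_fdf(const gsl_vector *x, void *p, double *f, gsl_vector *g) {
    *f = my_f(x, p);
    my_df(x, p, g);
}

int main() {
    const std::size_t n = 2;

    gsl_multimin_function_fdf func;          // bundle of callbacks GSL expects
    func.n = n;
    func.f = my_f;
    func.df = my_df;
    func.fdf = my_fdf;
    func.params = nullptr;

    gsl_vector *x = gsl_vector_alloc(n);     // starting point
    gsl_vector_set_all(x, 0.0);

    gsl_multimin_fdfminimizer *s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_vector_bfgs2, n);
    gsl_multimin_fdfminimizer_set(s, &func, x, 0.1, 1e-4);

    int status = GSL_CONTINUE;
    for (int iter = 0; iter < 200 && status == GSL_CONTINUE; ++iter) {
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;                   // minimizer cannot make further progress
        status = gsl_multimin_test_gradient(s->gradient, 1e-6);
    }
    std::printf("x = (%g, %g), f = %g\n",
                gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1), s->f);

    gsl_multimin_fdfminimizer_free(s);
    gsl_vector_free(x);
    return 0;
}

All GSL needs is the set of callbacks in gsl_multimin_function_fdf (f, df, fdf, n, params), so presumably a Fortran wrapper mostly has to marshal those three procedures.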
mecej4
Honored Contributor III

If the function is general-nonlinear, without bounds on the variables, and you can compute the gradient, BFGS would be my choice. Once you write the code for the gradient, make sure to test it with several input values!
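A rough way to do that test (not tied to any particular library; the toy objective below is only a stand-in for the real function) is to compare the analytic gradient against central finite differences at a few points and look at the worst relative discrepancy:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Compare an analytic gradient against central finite differences at one point;
// returns the worst relative discrepancy over all coordinates.
double max_gradient_error(const std::function<double(const Vec&)> &f,
                          const std::function<void(const Vec&, Vec&)> &grad,
                          Vec x, double h = 1e-6) {
    Vec g(x.size());
    grad(x, g);
    double worst = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double xi = x[i];
        x[i] = xi + h; const double fp = f(x);
        x[i] = xi - h; const double fm = f(x);
        x[i] = xi;
        const double fd = (fp - fm) / (2.0 * h);   // O(h^2) central difference
        worst = std::max(worst, std::fabs(fd - g[i]) / std::max(1.0, std::fabs(fd)));
    }
    return worst;
}

int main() {
    // Toy objective f(x) = sum_i x_i^2 with gradient 2x, only to exercise the check.
    auto f    = [](const Vec &x) { double s = 0.0; for (double v : x) s += v * v; return s; };
    auto grad = [](const Vec &x, Vec &g) { for (std::size_t i = 0; i < x.size(); ++i) g[i] = 2.0 * x[i]; };
    std::printf("worst relative discrepancy: %g\n",
                max_gradient_error(f, grad, Vec{0.3, -1.2, 2.5}));
    return 0;
}

For a reasonably scaled function, discrepancies much larger than about 1e-6 at several test points usually indicate a bug in the analytic gradient.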

Vishnu
Novice
Yes, it is a non-linear scalar function of the variables, and I already have an analytical expression for the gradient, so I can use that. BFGS was my first choice too, but after reading up on it a little more, I'm now slightly doubtful. I can have up to ~10^5 variables, and BFGS and other Newton-like methods seem to need to store and update a dense Hessian approximation at every step, which might not scale well with the number of variables. Of course, if I use a package, I'll try it anyway.
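Reading a bit further, the limited-memory variant (L-BFGS) seems to sidestep exactly this concern: it keeps only the last m step and gradient-difference pairs instead of a dense Hessian approximation, so storage is O(m*n), roughly 16 MB for m = 10 and n = 10^5 in double precision. A rough sketch of the standard two-loop recursion, written from the textbook description rather than from any particular package:

#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec &a, const Vec &b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// L-BFGS two-loop recursion: given the current gradient g and the m most recent
// pairs s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i (stored oldest first),
// return the search direction d = -H*g using only the stored vectors (O(m*n) memory).
Vec lbfgs_direction(const Vec &g, const std::vector<Vec> &S, const std::vector<Vec> &Y) {
    const std::size_t m = S.size();
    Vec q = g;
    std::vector<double> alpha(m), rho(m);
    for (std::size_t i = 0; i < m; ++i) rho[i] = 1.0 / dot(Y[i], S[i]);

    // First loop: newest pair down to oldest.
    for (std::size_t k = m; k-- > 0; ) {
        alpha[k] = rho[k] * dot(S[k], q);
        for (std::size_t j = 0; j < q.size(); ++j) q[j] -= alpha[k] * Y[k][j];
    }

    // Scale by the initial Hessian approximation H0 = gamma * I.
    const double gamma = (m > 0) ? dot(S[m - 1], Y[m - 1]) / dot(Y[m - 1], Y[m - 1]) : 1.0;
    Vec r = q;
    for (double &v : r) v *= gamma;

    // Second loop: oldest pair up to newest.
    for (std::size_t k = 0; k < m; ++k) {
        const double beta = rho[k] * dot(Y[k], r);
        for (std::size_t j = 0; j < r.size(); ++j) r[j] += S[k][j] * (alpha[k] - beta);
    }

    for (double &v : r) v = -v;   // negate: descent direction, not H*g itself
    return r;
}

The line search and the logic for maintaining the stored pairs are omitted here; a real package handles those details.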
Andrey_N_Intel
Employee

Hi, as another option, please have a look at the optimization solvers available in Intel DAAL: https://software.intel.com/en-us/daal-programming-guide-optimization-solvers
mecej4
Honored Contributor III

Andrey N, thanks for the link; I did not know that an unconstrained minimization routine was available in DAAL. I had briefly looked at DAAL when it was announced, but set it aside because it seemed not to be designed to be used from Fortran or even C.

I followed the link that you posted and tried to understand how to specify the objective function and call DAAL from Fortran or C. I could not find that information, so I looked at the example code (after installing DAAL for that very purpose). The example refers to some parameters of the objective function that are read from a CSV file, but that still leaves me in the dark about the mathematical form of the objective function. In short, what is the problem that the example code is supposed to solve?

Please point me to a page where the call to the minimizer is documented in, say, the same style as the MKL ?TRNLSP, if available. Thanks.

Andrey_N_Intel
Employee

Hi, the description of the solvers is available in the section "Optimization Solvers" of the Developer Guide.

For example, the paragraphs https://software.intel.com/en-us/daal-programming-guide-iterative-solver and https://software.intel.com/en-us/daal-programming-guide-computation-6 discuss the general flow of the iterative solver and its parameters. 

Paragraphs such as https://software.intel.com/en-us/daal-programming-guide-limited-memory-broyden-fletcher-goldfarb-shanno-algorithm and https://software.intel.com/en-us/daal-programming-guide-computation-8 discuss details relevant to the specific solver, e.g., lBFGS.

Details related to the objective functions and their characteristics are available at https://software.intel.com/en-us/daal-programming-guide-objective-function.

Please let me know if the documentation contains a sufficient level of detail.
mecej4
Honored Contributor III

What I find missing is the prototype declaration for a function (in Fortran, an interface) and a description of the parameters (in Fortran, dummy arguments to the subroutine/function). I want to know how to write the code for the function, and how to hook up the function to the solver.

I think part of the problem is that the object-oriented descriptions in C++ terminology are not comprehensible to a Fortran user who is procedure-oriented.

Here is an example of documentation that makes a solver accessible from more than one language:

    http://www.inutech.de/nlp/NLP-1.1.0/reference/class_sqp_wrapper.html

It would, of course, be even better if Intel provided example DAAL codes in Fortran and C, as is done for MKL currently.

Andrey_N_Intel
Employee

Hi,

As you correctly mentioned, the library provides C++ and Java APIs, and a Python API is available as well; the library does not support Fortran.

The API Reference manual, which provides a brief description of the library's interfaces in addition to the Developer Guide, can be downloaded at https://software.intel.com/en-us/articles/daal-api-reference; you can choose the daal_cpp_api_2019_beta.zip file. The info on the parameters of the solvers is available under Classes -> Class List -> daal -> algorithms -> optimization_solver. If you choose lbfgs -> interface1 -> Parameter, you will see that one of the Parameter constructor arguments is a function of type sum_of_functions::BatchPtr. See also 'Batch', available under lbfgs -> interface1.

Let's have a look at the lbfgs_dense_batch.cpp example: the first part of the example loads the data from a CSV data source into memory, in the NumericTable format. The second part configures the parameters of the algorithm, including the objective function:

    // Objective function (MSE) with its data and dependent variables
    services::SharedPtr<optimization_solver::mse::Batch<> > mseObjectiveFunction(
        new optimization_solver::mse::Batch<>(data->getNumberOfRows()));
    mseObjectiveFunction->input.set(optimization_solver::mse::data, data);
    mseObjectiveFunction->input.set(optimization_solver::mse::dependentVariables, dependentVariables);

    // Create the object that computes the LBFGS result using the default method;
    // this is the place where the objective function is passed into the algorithm
    optimization_solver::lbfgs::Batch<> algorithm(mseObjectiveFunction);

    // Configure other parameters
    algorithm.parameter.nIterations = nIterations;
    algorithm.parameter.stepLengthSequence =
        NumericTablePtr(new HomogenNumericTable<>(1, 1, NumericTableIface::doAllocate, stepLength));

    // Provide the starting point (initial argument of the objective function)
    algorithm.input.set(optimization_solver::iterative_solver::inputArgument,
                        NumericTablePtr(new HomogenNumericTable<>(initialPoint, 1, nFeatures + 1)));

    // Run the solver
    algorithm.compute();

    // Extract the result
    algorithm.getResult()->get(optimization_solver::iterative_solver::minimum);