Hello everybody,

This is my second post with a set of questions about using the nonlinear least-squares solver from MKL (the first one, about the OMP parallelization, is here: http://software.intel.com/en-us/forums/topic/495859 ).

I implemented this algorithm following the example in the MKL documentation.

Unfortunately, the solver keeps escaping the region set by **LW** and **UP** (http://software.intel.com/sites/products/documentation/hpc/mkl/mklman/GUID-B6BADF1C-F90C-4D30-8B84-C...),

where I really don't want it to go, because my function misbehaves there.

At first I thought the culprit was the rs parameter (which I assume is the maximum step size s in s*J, though I'm not sure from the given explanation). In the example provided by Intel it is misleadingly initialized to 0.0, even though it must lie between 0.1 and 100 (the default). In any case, I tried the default of 100, then 10.0, 1.0, and 0.1, and changed iter2 from 100 to 10 and then to 1 (to keep the solver from extrapolating too far with the first-step derivatives), but it runs away again at the second step, to the same numbers, no matter what I do!

Sent to the solver at initialization:

x0(1) -47.270320  LW(1) -56.724384  UP(1) -37.816256

x0(2) -36.266918  LW(2) -43.520302  UP(2) -29.013534

Guess solutions sent by the solver to my function (printed from inside the function):

thread 0

x0(1) -47.270320 x_step2(1) **-70.905480**

x0(2) -36.266918 x_step2(2) -36.266918

thread 1

x0(1) -47.270320 x_step2(1) -47.270320

x0(2) -36.266918 x_step2(2) **-18.133459**

This is what I am sending to the solver at initialization (which reports success):

initialize solver (allocate memory, set initial values)

n1 in: number of refined parameters 15

m1 in: 1D function value F 16800

iter1 in: maximum number of iterations 100

iter2 in: maximum number of trial-steps 1

rs in: initial step bound 0.100000

SUCCESS

Has anybody had the same trouble? Can anybody give me any pointers?


Many algorithms for nonlinear constrained optimization do not restrict requests for function and constraint evaluations to points within the feasible region. In fact, the initial step in some algorithms is to find a feasible point, from which a path can be followed along which the objective function decreases.

Search the Web for FSQP and CFSQP -- these are software packages that search for an optimum while staying within the feasible region.
