Intel® oneAPI Data Analytics Library

Constrained optimization with DAAL

Gusev__Dmitry
Beginner

Hello all,

Can someone help me answer my question: is DAAL suitable for convex constrained optimization?

As stated in the article (https://software.intel.com/en-us/daal-programming-guide-objective-function), the proximal operator there can be used for the non-smooth part of the objective function, and the example (https://software.intel.com/en-us/daal-programming-guide-logistic-loss) shows this for L1 regularization. On the other hand, if the non-smooth part M(theta) is just an indicator (characteristic) function of some convex set (the constraints), the proximal operator is exactly the projection operator.

Is it possible to pass this projection operator to the objective function object to handle convex constraints in that way?

Thanks! Your help is much appreciated,

Dmitry.

Kirill_S_Intel
Employee

Hi Dmitry.

The most probable reason for this linkage error is linking against a reduced build of the library (built for a single CPU type): LogLossKernel<float, 0, avx2 = 4>.

Another reason is that on the Windows operating system __declspec(dllexport) should be used for linkage, but since LogLossKernel is an internal class it may be unavailable. The provided examples definitely work on Linux. Your variant also seems applicable.

Best regards,

Kirill

Gusev__Dmitry
Beginner

Hi Kirill,

Yes, it is a linkage issue specific to Windows.

Now, after some successful testing with trivial constraints, I would like to handle more realistic constraints. To do that, I need to pass some new parameters (matrices, vectors, etc.). I have tried to extend the logistic_loss::Parameter structure

struct Parameter : public daal::algorithms::optimization_solver::logistic_loss::Parameter

to add just a single new field (float test). This caused a run-time error. I have attached my implementation.

The modifications are marked with the comments  //CHANGED or //ADDED

Can you please advise what is wrong with this implementation?

Thank you,

Dmitry.

 

 

Kirill_S_Intel
Employee

Hello Dmitry,

There is definitely no need to add a new field "Parameter par;" in the Batch class (you can use an approach similar to parameter() in logistic_loss::Batch: allocate memory for a Parameter and assign it to sumOfFunctionsParameter). With your approach you would have to initialize all base-class fields; as I see it, sumOfFunctionsParameter is currently initialized with a logistic_loss::Parameter.

sumOfFunctionsParameter = &par;

 

Best regards,

Kirill
    

Gusev__Dmitry
Beginner

Hi Kirill,

Thank you for your prompt reply. 

I did not quite understand how to allocate memory for the Parameter. Simply mimicking logistic_loss::Batch does not work: the new Parameter field (float test) is not passed through (it is always zero).

My imitation of logistic_loss::Batch is below. How should I modify it?

 Thanks again for your help.

template<typename algorithmFPType = DAAL_ALGORITHM_FP_TYPE, logistic_loss::Method method = logistic_loss::defaultDense>
class Batch : public logistic_loss::Batch<algorithmFPType, method>
{
public:
    typedef daal::algorithms::optimization_solver::logistic_loss::Batch<algorithmFPType, method> super;

    Parameter& parameter() { return *static_cast<Parameter*>(_par); }

    const Parameter& parameter() const { return *static_cast<const Parameter*>(_par); }

    Batch(size_t numberOfTerms): super(numberOfTerms)
    {
        initialize();
    }

    virtual ~Batch() {}

    Batch(const Batch<algorithmFPType, method>& other): super(other)
    {
        initialize();
    }

    daal::services::SharedPtr<Batch<algorithmFPType, method> > clone() const
    {
        return services::SharedPtr<Batch<algorithmFPType, method> >(cloneImpl());
    }

    daal::services::Status allocate()
    {
        return allocateResult();
    }

    static daal::services::SharedPtr<Batch<algorithmFPType, method> > create(size_t numberOfTerms)
    {
        return logistic_loss::Batch<algorithmFPType, method>::create(numberOfTerms);
    }

protected:
    virtual daal::services::Status allocateResult() DAAL_C11_OVERRIDE
    {
        daal::services::Status s = _result->allocate<algorithmFPType>(&input, _par, (int)method);
        _res = _result.get();
        return s;
    }

    virtual Batch<algorithmFPType, method>* cloneImpl() const DAAL_C11_OVERRIDE
    {
        return new Batch<algorithmFPType, method>(*this);
    }

    void initialize()
    {
        daal::algorithms::Analysis<daal::batch>::_ac = new BatchContainer<algorithmFPType, method, daal::CpuType::avx2>(&(this->_env));
        _par = sumOfFunctionsParameter;
    }
};

Gusev__Dmitry
Beginner

Hi Kirill,

I made some attempts to use an inherited Parameter class (with the new field float test). So far it has been unsuccessful: the saga compute method calls the objective function with test = 0.

However, adding this new field to the existing logistic_loss::Parameter class works as expected (I changed logistic_loss_types.cpp and logistic_loss_types.h).

My questions are: 

1) Does it sound optimal to inherit the logistic_loss::Parameter, or is it better to write a new class similar to logistic_loss::Parameter with this new field, and possibly a new function similar to logistic_loss (logistic_loss_dense_default_batch_impl.i)?

2) Since the saga solver changes the argument to handle the step size in the proximal projection, I have to compute the Lipschitz constant to revert it back in my custom projection computation. What is the optimal way to do that? I am thinking about calculating the Lipschitz constant after creating the objective function object and passing it as a new parameter field. Does that sound reasonable to you?

Thanks a lot for your help!

Regards,

Dmitry. 

Kirill_S_Intel
Employee

Hello Dmitry,

Please see some comments regarding #25: you should modify your custom_proj.h (from #23) with only one line of code, sumOfFunctionsParameter = &par;, in the initialize() method.

If you delete the field "Parameter par;" from the Batch class, you have to point _par and sumOfFunctionsParameter to newly allocated memory for logistic_prox_function::Parameter:

sumOfFunctionsParameter = new logistic_prox_function::Parameter(numberOfTerms);
_par = sumOfFunctionsParameter;

#26 

1) Inheritance from logistic_loss::Batch was advised as the simplest way to re-use the kernel of the logistic loss function. But you can also follow the existing example (optimization_solvers/custom_obj_func.h) to create your own objective function and implement compute in alignment with the logistic loss function without inheritance (though it would be harder to use the same internal functions like xgemv).

2) Modifying the saga optimization solver flow is not recommended. The DAAL optimization solvers use the existing sum_of_functions API, and the Lipschitz constant is already computed via the results-to-compute mechanism. The change of the argument and its reversion are performed on the solver side, and you have to take this aspect into account when implementing your proximal projection (all components of the argument are divided by 'stepLength' before the proximal projection computation is called on the modified argument).

 

Best regards,

Kirill

 

Gusev__Dmitry
Beginner

Thank you Kirill, 

This extra line was not trivial for me. It works now!

I have a question about the Lipschitz constant, which I use in my projection code to revert the argument back. To access the constant from the projection code, I have to compute the objective function, requesting the Lipschitz constant as a result.

What is the optimal way to do that? I think I could compute it right after the objective function is created (in the cpp file) and pass it as a parameter; another way would be to clone the function in the projection code and request computation of the Lipschitz constant. I feel there should be a better way to get the constant.

Can you please advise?

Best regards,

Dmitry

Gusev__Dmitry
Beginner

Hi Kirill,

I am writing to you again as I have some more questions for you. But before I ask, I just wanted to let you know that the constrained optimization with the SAGA algorithm and the custom projection operator works as expected. Thanks a lot for your help.

I upgraded DAAL to 2020 version.

Here are my questions:

  1. Does the SAGA algorithm in DAAL 2020 take into account only batchSize = 1? Does it make sense to specify any other minibatch size for SAGA?
  2. Are the Adaptive Subgradient Method and Coordinate Descent algorithm implementations capable of handling the non-smooth component of the objective function via proximal projection?
  3. I would like to extend my constrained optimization problem to deal with a weighted logistic loss objective function. To do so, I am planning to re-implement the logistic loss value, gradient, and possibly the Lipschitz constant. Do you think there is a better way to deal with a weighted logistic loss objective function in DAAL? Any out-of-the-box solutions?
  4. I think the SAGA algorithm implementation does not require Hessian computation. Is that a correct statement?

Thanks again for your help.

Best regards,

Dmitry

 

Kirill_S_Intel
Employee

Hello, Dmitry

1. Does the SAGA algorithm in DAAL 2020 version take into account only batchSize = 1?  Does it make sense to specify any other size of minibatch for SAGA?

Currently only batchSize = 1 is used for SAGA computation. We found a way to support the case batchSize > 1, but there is no theoretical justification for it in the corresponding article.

2. Are Adaptive Subgradient Method and Coordinate Descent algorithm implementations capable  to handle non-smooth component of objective function via proximal projection?

Only SAGA and Coordinate Descent support non-smooth parts (Coordinate Descent works well with MSE only for now; some extensions are needed for logistic loss and cross entropy: componentOfGradient, ...). Adagrad, SGD, and LBFGS work only with smooth parts of gradients.

3. I would like to extend my constrained optimization problem to deal with weighted logistic loss objective function. To do so, I am planning to re-implement the logistic loss value, gradient and possibly lipschitz constant. Do you think there is a better way to to deal with weighted logistic loss objective function in DAAL? Any out of box solutions?
As far as I know, we have a plan to support weights for all classifiers, so this extension should cover your problem. For now you can work with your custom logistic loss function; that is the primary way while weights are not part of the DAAL logistic loss.

4. I think SAGA algorithm implementation does not require the Hessian computation. Is it a correct statement?
Yes. SAGA solver does not require the Hessian computation.

Best regards,

Kirill

Gusev__Dmitry
Beginner

Hi Kirill,

Thanks for the info. One more technical question:

What would be the better way (in terms of performance, memory, and safety) to access the elements of a numeric table?

  • Via a BlockDescriptor:

        BlockDescriptor<algorithmFPType> yBlock;
        yTable->getBlockOfRows(0, nTerms, writeOnly, yBlock);
        algorithmFPType *yArray = yBlock.getBlockPtr();

        //.........

        yTable->releaseBlockOfRows(yBlock);

  • Via a cast to HomogenNumericTable:

        HomogenNumericTable<algorithmFPType>* yHm = dynamic_cast<HomogenNumericTable<algorithmFPType>*>(yTable);
        algorithmFPType* yArray = yHm->getArray();

  • Also, when would it be appropriate to use the apparently inefficient call getValue<DAAL_ALGORITHM_FP_TYPE>(j, i) to access an element of a numeric table?

Best regards,

Dmitry

Kirill_S_Intel
Employee

Hello, Dmitry

The recommended way is to use block descriptors and call 'getBlockOfRows' or 'getBlockOfColumns'.

The approach with dynamic_cast requires an additional check for 'nullptr'.

In some cases you can also create a numeric table with memory allocated on your side. For example, the method for a homogeneous NT: services::SharedPtr<HomogenNumericTable<DataType> > create(DataType * const ptr, size_t nColumns = 0, size_t nRows = 0, services::Status * stat = NULL). [include/data_management/data/homogen_numeric_table.h:132]

Then you can access your data via a raw pointer. This is often applicable when you want to get the result of an algorithm with minimum overhead: you can set user-allocated memory.

 

Best regards,

Kirill
