To solve an optimization problem with bound constraints.

#include <OptSolver.h>

Public Types

enum  OptMethod {
  GRADIENT = 0 ,
  TRUNCATED_NEWTON = 1 ,
  SIMULATED_ANNEALING = 2 ,
  NELDER_MEAD = 3 ,
  NEWTON = 4
}
 Choose optimization algorithm.
 

Public Member Functions

 OptSolver ()
 Default constructor.
 
 OptSolver (Vect< real_t > &x)
 Constructor using vector of optimization variables.
 
 OptSolver (MyOpt &opt, Vect< real_t > &x)
 Constructor using vector of optimization variables.
 
 ~OptSolver ()
 Destructor.
 
void set (Vect< real_t > &x)
 Set solution vector.
 
int getNbFctEval () const
 Return the total number of function evaluations.
 
void setOptMethod (OptMethod m)
 Choose optimization method.
 
void setBC (const Vect< real_t > &bc)
 Prescribe boundary conditions as constraints.
 
void setObjective (string exp)
 Define the objective function to minimize by an algebraic expression.
 
void setGradient (string exp, int i=1)
 Define a component of the gradient of the objective function to minimize by an algebraic expression.
 
void setHessian (string exp, int i=1, int j=1)
 Define an entry of the Hessian matrix.
 
void setIneqConstraint (string exp, real_t penal=1./OFELI_TOLERANCE)
 Impose an inequality constraint by a penalty method.
 
void setEqConstraint (string exp, real_t penal=1./OFELI_TOLERANCE)
 Impose an equality constraint by a penalty method.
 
void setObjective (function< real_t(real_t)> f)
 Define the objective function by a user defined one-variable function.
 
void setObjective (function< real_t(Vect< real_t >)> f)
 Define the objective function by a user defined multi-variable function.
 
void setGradient (function< real_t(real_t)> f)
 Define the derivative of the objective function by a user defined function.
 
void setGradient (function< Vect< real_t >(Vect< real_t >)> f)
 Define the gradient of the objective function by a user defined function.
 
void setOptClass (MyOpt &opt)
 Choose user defined optimization class.
 
void setLowerBound (size_t i, real_t lb)
 Define lower bound for a particular optimization variable.
 
void setUpperBound (size_t i, real_t ub)
 Define upper bound for a particular optimization variable.
 
void setEqBound (size_t i, real_t b)
 Define the value to impose on a particular optimization variable.
 
void setUpperBound (real_t ub)
 Define upper bound for optimization variable.
 
void setUpperBounds (Vect< real_t > &ub)
 Define upper bounds for optimization variables.
 
void setLowerBound (real_t lb)
 Define lower bound for optimization variable.
 
void setLowerBounds (Vect< real_t > &lb)
 Define lower bounds for optimization variables.
 
void setSAOpt (real_t rt, int ns, int nt, int &neps, int maxevl, real_t t, Vect< real_t > &vm, Vect< real_t > &xopt, real_t &fopt)
 Set Simulated annealing options.
 
void setTolerance (real_t toler)
 Set error tolerance.
 
void setMaxIterations (int n)
 Set maximal number of iterations.
 
int getNbObjEval () const
 Return number of objective function evaluations.
 
real_t getTemperature () const
 Return the final temperature.
 
int getNbAcc () const
 Return the number of accepted objective function evaluations.
 
int getNbOutOfBounds () const
 Return the total number of trial function evaluations that would have been out of bounds.
 
real_t getOptObj () const
 Return the optimal value of the objective.
 
int run ()
 Run the optimization algorithm.
 
int run (real_t toler, int max_it)
 Run the optimization algorithm.
 
real_t getSolution () const
 Return solution in the case of a one variable optimization.
 
void getSolution (Vect< real_t > &x) const
 Get solution vector.
 

Detailed Description

To solve an optimization problem with bound constraints.

Author
Rachid Touzani
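
A minimal usage sketch (the quadratic objective is illustrative, and the naming of the expression variables as x1, x2, ... is an assumption to be checked against the OFELI expression syntax):

    #include <OptSolver.h>
    using namespace OFELI;

    int main()
    {
       Vect<real_t> x(2);                       // two optimization variables
       x(1) = 0.;  x(2) = 0.;                   // initial guess (assuming 1-based Vect indexing)
       OptSolver s(x);
       s.setObjective("(x1-1)*(x1-1) + (x2+0.5)*(x2+0.5)");
       s.setOptMethod(OptSolver::NELDER_MEAD);  // derivative-free: only the objective is needed
       int ret = s.run();                       // on success, x contains the computed minimizer
       return ret;
    }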

Member Enumeration Documentation

◆ OptMethod

enum OptMethod

Choose optimization algorithm.

Enumerator
GRADIENT 

Gradient method

TRUNCATED_NEWTON 

Truncated Newton method

SIMULATED_ANNEALING 

Simulated annealing global optimization method

NELDER_MEAD 

Nelder-Mead global optimization method

NEWTON 

Newton's method

Constructor & Destructor Documentation

◆ OptSolver() [1/2]

OptSolver ( Vect< real_t > &  x)

Constructor using vector of optimization variables.

Parameters
[in] x: Vector having as size the number of optimization variables. It contains the initial guess for the optimization algorithm.
Remarks
After using the member function run, the vector x contains the obtained solution if the optimization procedure was successful

◆ OptSolver() [2/2]

OptSolver ( MyOpt &  opt,
Vect< real_t > &  x 
)

Constructor using vector of optimization variables.

Parameters
[in] opt: Reference to an instance of a user-defined optimization class. This class inherits from the abstract class MyOpt. It must contain the member function double Objective(Vect<double> &x), which returns the value of the objective for a given solution vector x. If the optimization algorithm requires it, the user-defined class must also contain the member function Gradient(Vect<double> &x, Vect<double> &g), which stores the gradient of the objective in the vector g for a given optimization vector x. Depending on the chosen algorithm, further member functions may be required as well.
[in] x: Vector having as size the number of optimization variables. It contains the initial guess for the optimization algorithm.
Remarks
After using the member function run, the vector x contains the obtained solution if the optimization procedure was successful
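
A sketch of a user-defined optimization class and of this constructor (the quadratic objective is illustrative, and the exact virtual signatures must be checked against the MyOpt declaration in the OFELI headers):

    #include <OptSolver.h>
    using namespace OFELI;

    // Hypothetical user class minimizing f(x) = sum_i (x_i - 1)^2
    class MyProblem : public MyOpt {
     public:
       real_t Objective(Vect<real_t>& x)                // objective value for a given x
       {
          real_t f = 0.;
          for (size_t i=1; i<=x.size(); ++i)            // assuming 1-based Vect indexing
             f += (x(i)-1.)*(x(i)-1.);
          return f;
       }
       void Gradient(Vect<real_t>& x, Vect<real_t>& g)  // gradient, needed by gradient-based methods
       {
          for (size_t i=1; i<=x.size(); ++i)
             g(i) = 2.*(x(i)-1.);
       }
    };

    // Usage:
    //    Vect<real_t> x(10);
    //    MyProblem opt;
    //    OptSolver s(opt,x);
    //    s.setOptMethod(OptSolver::TRUNCATED_NEWTON);
    //    s.run();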

Member Function Documentation

◆ getNbAcc()

int getNbAcc ( ) const

Return the number of accepted objective function evaluations.

This function is meaningful only if the Simulated Annealing algorithm is used

◆ getNbOutOfBounds()

int getNbOutOfBounds ( ) const

Return the total number of trial function evaluations that would have been out of bounds.

This function is meaningful only if the Simulated Annealing algorithm is used

◆ getSolution() [1/2]

real_t getSolution ( ) const

Return solution in the case of a one variable optimization.

In the case of a one variable problem, the solution value is returned, if the optimization procedure was successful

◆ getSolution() [2/2]

void getSolution ( Vect< real_t > &  x) const

Get solution vector.

The vector x contains the solution of the optimization problem. Note that if the constructor using an initial vector was used, the vector will contain the solution once the member function run has been used (if the optimization procedure was successful)

Parameters
[out] x: Solution vector

◆ getTemperature()

real_t getTemperature ( ) const

Return the final temperature.

This function is meaningful only if the Simulated Annealing algorithm is used

◆ run() [1/2]

int run ( )

Run the optimization algorithm.

This function runs the optimization procedure using default values for its parameters. To modify these values, use the version of run that takes arguments

◆ run() [2/2]

int run ( real_t  toler,
int  max_it 
)

Run the optimization algorithm.

Parameters
[in] toler: Tolerance value for convergence testing
[in] max_it: Maximal number of iterations to achieve convergence
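
For example (the tolerance and iteration count are illustrative only, s being an already configured OptSolver instance):

    int ret = s.run(1.e-8, 1000);   // converge to 1e-8 within at most 1000 iterations

    // Equivalent, using the separate setters and the argument-free run():
    // s.setTolerance(1.e-8);
    // s.setMaxIterations(1000);
    // int ret = s.run();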

◆ setBC()

void setBC ( const Vect< real_t > &  bc)

Prescribe boundary conditions as constraints.

This member function is useful in the case of optimization problems where the optimization variable vector is the solution of a partial differential equation. For this case, Dirichlet boundary conditions can be prescribed as constraints for the optimization problem

Parameters
[in] bc: Vector containing the values to impose on degrees of freedom. This vector must have been constructed using the Mesh instance.
Remarks
Only degrees of freedom with a positive code are taken into account as prescribed

◆ setEqBound()

void setEqBound ( size_t  i,
real_t  b 
)

Define the value to impose on a particular optimization variable.

Method to impose a value for a component of the optimization variable

Parameters
[in] i: Index of the component to enforce (indices start at 1)
[in] b: Value to impose

◆ setEqConstraint()

void setEqConstraint ( string  exp,
real_t  penal = 1./OFELI_TOLERANCE 
)

Impose an equality constraint by a penalty method.

The constraint is of the form F(x) = 0, where F is any function of the optimization variable vector x

Parameters
[in] exp: Algebraic expression defining the constraint (the function F)
[in] penal: Penalty parameter (large number) [Default: 1./OFELI_TOLERANCE]
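
For instance, to enforce x1 + x2 = 1 (same assumptions on the expression syntax and variable names as for setObjective, s being an OptSolver instance):

    s.setEqConstraint("x1+x2-1");          // F(x) = x1 + x2 - 1 = 0, default penalty
    // or, with an explicit penalty parameter:
    // s.setEqConstraint("x1+x2-1",1.e12);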

◆ setGradient() [1/3]

void setGradient ( function< real_t(real_t)>  f)

Define the derivative of the objective function by a user defined function.

Parameters
[in] f: Function given as a function of a real variable and returning the derivative of the objective value. This function can be defined by the calling program as a C-function and then cast to an instance of class function

◆ setGradient() [2/3]

void setGradient ( function< Vect< real_t >(Vect< real_t >)>  f)

Define the gradient of the objective function by a user defined function.

Parameters
[in] f: Function given as a function of many real variables and returning the partial derivatives of the objective value. This function can be defined by the calling program as a C-function and then cast to an instance of class function
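
A sketch combining a multi-variable objective and its gradient supplied as instances of class function (lambdas are used here instead of cast C-functions; 1-based Vect indexing is assumed):

    #include <functional>
    using namespace OFELI;

    // f(x) = x1^2 + 2*x2^2, with gradient (2*x1, 4*x2)
    std::function<real_t(Vect<real_t>)> f =
       [](Vect<real_t> x) { return x(1)*x(1) + 2.*x(2)*x(2); };
    std::function<Vect<real_t>(Vect<real_t>)> g =
       [](Vect<real_t> x) {
          Vect<real_t> gr(2);
          gr(1) = 2.*x(1);
          gr(2) = 4.*x(2);
          return gr;
       };

    Vect<real_t> x(2);
    OptSolver s(x);
    s.setObjective(f);
    s.setGradient(g);
    s.setOptMethod(OptSolver::TRUNCATED_NEWTON);   // a gradient-based method
    s.run();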

◆ setGradient() [3/3]

void setGradient ( string  exp,
int  i = 1 
)

Define a component of the gradient of the objective function to minimize by an algebraic expression.

Parameters
[in] exp: Algebraic expression defining the i-th component of the gradient
[in] i: Component of the gradient [Default: 1]
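
For a two-variable objective, for example, the two gradient components can be given as follows (expression syntax and variable names assumed as above, s being an OptSolver instance):

    s.setObjective("(x1-1)*(x1-1) + (x2-2)*(x2-2)");
    s.setGradient("2*(x1-1)",1);   // derivative with respect to x1
    s.setGradient("2*(x2-2)",2);   // derivative with respect to x2
    s.setOptMethod(OptSolver::GRADIENT);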

◆ setHessian()

void setHessian ( string  exp,
int  i = 1,
int  j = 1 
)

Define an entry of the Hessian matrix.

Parameters
[in] exp: Algebraic expression defining the Hessian matrix entry
[in] i: Row index of the Hessian entry [Default: 1]
[in] j: Column index of the Hessian entry [Default: 1]
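
For example, for the objective f(x) = x1^2 + x1*x2 + x2^2 the four (here constant) Hessian entries can be supplied as expressions, e.g. for use with the NEWTON method (same syntax assumptions as above):

    s.setObjective("x1*x1 + x1*x2 + x2*x2");
    s.setGradient("2*x1+x2",1);
    s.setGradient("x1+2*x2",2);
    s.setHessian("2",1,1);
    s.setHessian("1",1,2);
    s.setHessian("1",2,1);
    s.setHessian("2",2,2);
    s.setOptMethod(OptSolver::NEWTON);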

◆ setIneqConstraint()

void setIneqConstraint ( string  exp,
real_t  penal = 1./OFELI_TOLERANCE 
)

Impose an inequality constraint by a penalty method.

The constraint is of the form F(x) <= 0, where F is any function of the optimization variable vector x

Parameters
[in] exp: Algebraic expression defining the constraint (the function F)
[in] penal: Penalty parameter (large number) [Default: 1./OFELI_TOLERANCE]
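
For instance (same assumptions on the expression syntax and variable names as for setObjective):

    s.setIneqConstraint("x1-2");          // x1 <= 2, i.e. F(x) = x1 - 2 <= 0, default penalty
    s.setIneqConstraint("-x1",1.e10);     // x1 >= 0, rewritten as -x1 <= 0, explicit penalty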

◆ setLowerBound() [1/2]

void setLowerBound ( real_t  lb)

Define lower bound for optimization variable.

Case of a one-variable problem

Parameters
[in] lb: Lower bound

◆ setLowerBound() [2/2]

void setLowerBound ( size_t  i,
real_t  lb 
)

Define lower bound for a particular optimization variable.

Method to impose a lower bound for a component of the optimization variable

Parameters
[in] i: Index of the component to bound (indices start at 1)
[in] lb: Lower bound

◆ setLowerBounds()

void setLowerBounds ( Vect< real_t > &  lb)

Define lower bounds for optimization variables.

Parameters
[in] lb: Vector containing the lower bounds of the variables

◆ setObjective() [1/3]

void setObjective ( function< real_t(real_t)>  f)

Define the objective function by a user defined one-variable function.

This function can be used when the objective is given as a user-defined function of a single variable.

Parameters
[in] f: Function given as a function of one real variable (the optimization variable) and returning the objective value. This function can be defined by the calling program as a C-function and then cast to an instance of class function
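
A one-variable sketch with the objective and its derivative given as lambdas (the function minimized and its bounds are illustrative; 1-based Vect indexing is assumed elsewhere in these examples):

    std::function<real_t(real_t)> f  = [](real_t x) { return (x-3.)*(x-3.); };
    std::function<real_t(real_t)> df = [](real_t x) { return 2.*(x-3.); };

    Vect<real_t> x(1);              // a single optimization variable
    OptSolver s(x);
    s.setObjective(f);
    s.setGradient(df);
    s.setLowerBound(0.);            // optional bounds for the single variable
    s.setUpperBound(10.);
    s.run();
    real_t xmin = s.getSolution();  // scalar solution in the one-variable case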

◆ setObjective() [2/3]

void setObjective ( function< real_t(Vect< real_t >)>  f)

Define the objective function by a user defined multi-variable function.

This function can be used when the objective is given as a user-defined function of several variables.

Parameters
[in] f: Function given as a function of many real variables and returning the objective value. This function can be defined by the calling program as a C-function and then cast to an instance of class function

◆ setObjective() [3/3]

void setObjective ( string  exp)

Define the objective function to minimize by an algebraic expression.

Parameters
[in] exp: Algebraic expression defining the objective function

◆ setOptClass()

void setOptClass ( MyOpt &  opt)

Choose user defined optimization class.

Parameters
[in] opt: Reference to an instance of a user-defined optimization class inheriting from MyOpt

◆ setOptMethod()

void setOptMethod ( OptMethod  m)

Choose optimization method.

Parameters
[in] m: Enumerated value to choose the optimization algorithm to use. Must be chosen among the enumerated values:
  • GRADIENT: Gradient (steepest descent) method with projection for bound-constrained problems
  • TRUNCATED_NEWTON: Nash's truncated Newton algorithm (S.G. Nash, Newton-type Minimization via the Lanczos Method, SIAM J. Numer. Anal. 21 (1984) 770-778)
  • SIMULATED_ANNEALING: Global optimization by simulated annealing. See the article by Corana et al., "Minimizing Multimodal Functions of Continuous Variables with the Simulated Annealing Algorithm", ACM Transactions on Mathematical Software, vol. 13, no. 3, September 1987, pp. 262-280
  • NELDER_MEAD: Global optimization by the Nelder-Mead method (J. Nelder and R. Mead, A simplex method for function minimization, Computer Journal, vol. 7, 1965, pp. 308-313), as implemented by R. O'Neill (Algorithm AS 47: Function Minimization Using a Simplex Procedure, Applied Statistics, vol. 20, no. 3, 1971, pp. 338-345)
  • NEWTON: Newton's method
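
A short selection sketch (which methods actually require user-supplied derivatives should be checked in the OFELI documentation; the grouping below follows the usual requirements of these algorithms):

    // Derivative-free methods: only the objective is needed
    s.setOptMethod(OptSolver::NELDER_MEAD);
    // s.setOptMethod(OptSolver::SIMULATED_ANNEALING);

    // Gradient-based methods: a gradient must be available (setGradient or the MyOpt class)
    // s.setOptMethod(OptSolver::GRADIENT);
    // s.setOptMethod(OptSolver::TRUNCATED_NEWTON);

    // Newton's method additionally uses the Hessian (setHessian)
    // s.setOptMethod(OptSolver::NEWTON);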

◆ setSAOpt()

void setSAOpt ( real_t  rt,
int  ns,
int  nt,
int &  neps,
int  maxevl,
real_t  t,
Vect< real_t > &  vm,
Vect< real_t > &  xopt,
real_t &  fopt 
)

Set Simulated annealing options.

Remarks
This member function is useful only if simulated annealing is used.
Parameters
[in] rt: The temperature reduction factor. The value suggested by Corana et al. is 0.85. See Goffe et al. for more advice.
[in] ns: Number of cycles. After ns*nb_var function evaluations, each element of vm is adjusted so that approximately half of all function evaluations are accepted. The suggested value is 20.
[in] nt: Number of iterations before temperature reduction. After nt*ns*nb_var function evaluations, the temperature (t) is changed by the factor rt. The value suggested by Corana et al. is max(100,5*nb_var). See Goffe et al. for further advice.
[in] neps: Number of final function values used to decide upon termination (see setTolerance). Suggested value is 4
[in] maxevl: The maximum number of function evaluations. If it is exceeded, the return code is 1.
[in] t: The initial temperature. See Goffe et al. for advice.
[in] vm: The step length vector. On input it should encompass the region of interest given the starting value x. For a point x[i], the next trial point is selected from x[i]-vm[i] to x[i]+vm[i]. Since vm is adjusted so that about half of all points are accepted, the input value is not very important (i.e., if the value is off, OptimSA adjusts vm to an appropriate value).
[out] xopt: Optimal values of the optimization variables
[out] fopt: Optimal value of the objective
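
A configuration sketch using the values suggested above (two optimization variables; all numerical values are illustrative, and neps, vm, xopt and fopt must be lvalues since they are passed by reference):

    int    neps = 4;                 // number of final function values used for termination
    real_t fopt;                     // receives the optimal objective value
    Vect<real_t> vm(2), xopt(2);     // step lengths and optimal point (nb_var = 2)
    vm(1) = 1.;  vm(2) = 1.;         // initial step lengths (assuming 1-based Vect indexing)

    s.setOptMethod(OptSolver::SIMULATED_ANNEALING);
    s.setSAOpt(0.85,                 // rt: temperature reduction factor (Corana et al.)
               20,                   // ns: cycles before step-length adjustment
               100,                  // nt: iterations before temperature reduction
               neps,                 // neps, passed by reference
               100000,               // maxevl: maximum number of function evaluations
               5.,                   // t: initial temperature (problem dependent)
               vm, xopt, fopt);
    s.run();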

◆ setTolerance()

void setTolerance ( real_t  toler)

Set error tolerance.

Parameters
[in] toler: Error tolerance for termination. If the final function values from the last neps temperatures differ from the corresponding value at the current temperature by less than toler, and the final function value at the current temperature differs from the current optimal function value by less than toler, execution terminates and the value 0 is returned.

◆ setUpperBound() [1/2]

void setUpperBound ( real_t  ub)

Define upper bound for optimization variable.

Case of a one-variable problem

Parameters
[in] ub: Upper bound

◆ setUpperBound() [2/2]

void setUpperBound ( size_t  i,
real_t  ub 
)

Define upper bound for a particular optimization variable.

Method to impose an upper bound for a component of the optimization variable

Parameters
[in] i: Index of the component to bound (indices start at 1)
[in] ub: Upper bound

◆ setUpperBounds()

void setUpperBounds ( Vect< real_t > &  ub)

Define upper bounds for optimization variables.

Parameters
[in] ub: Vector containing the upper bounds of the variables
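
For example, to impose the box constraints 0 <= x1 <= 1 and -1 <= x2 <= 1 (1-based Vect indexing assumed, s being an OptSolver instance with two variables):

    Vect<real_t> lb(2), ub(2);
    lb(1) = 0.;  lb(2) = -1.;
    ub(1) = 1.;  ub(2) =  1.;
    s.setLowerBounds(lb);
    s.setUpperBounds(ub);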