pdfo.cobyla
pdfo.cobyla(fun, x0, args=(), bounds=None, constraints=(), options=None)
Constrained Optimization BY Linear Approximations.
Deprecated since version 1.3: Calling the COBYLA solver via the cobyla function is deprecated. The COBYLA solver remains available in PDFO; call the pdfo function with the argument method='cobyla' to use it.
- Parameters:
- fun: callable
Objective function to be minimized.
fun(x, *args) -> float
where x is an array with shape (n,) and args is a tuple.
- x0: ndarray, shape (n,)
Initial guess.
- args: tuple, optional
Parameters of the objective function (see the examples at the end of this page). For example, pdfo(fun, x0, args, ...) is equivalent to pdfo(lambda x: fun(x, *args), x0, ...).
- bounds: {Bounds, ndarray, shape (n, 2)}, optional
Bound constraints of the problem. It can be one of the cases below; both forms are illustrated in the examples at the end of this page.
1. An instance of Bounds.
2. An ndarray with shape (n, 2). The bound constraint for x[i] is bounds[i, 0] <= x[i] <= bounds[i, 1]. Set bounds[i, 0] to \(-\infty\) or None if there is no lower bound, and set bounds[i, 1] to \(\infty\) or None if there is no upper bound.
- constraints: {dict, LinearConstraint, NonlinearConstraint, list}, optional
Constraints of the problem. It can be one of the cases below; the dictionary form is also illustrated in the examples at the end of this page.
1. A dictionary with fields:
   - type: str
     Constraint type: 'eq' for equality constraints and 'ineq' for inequality constraints.
   - fun: callable
     The constraint function.
   When type='eq', such a dictionary specifies an equality constraint fun(x) = 0; when type='ineq', it specifies an inequality constraint fun(x) >= 0.
2. An instance of LinearConstraint or NonlinearConstraint.
3. A list, each of whose elements is a dictionary described in 1 or an instance of LinearConstraint or NonlinearConstraint.
- options: dict, optional
The options passed to the solver. It may optionally contain the fields below; an illustrative options dictionary appears in the examples at the end of this page.
- rhobeg: float, optional
Initial value of the trust region radius, which should be a positive scalar. Typically, options['rhobeg'] should be of the order of one tenth of the greatest expected change to a variable. By default, it is 1 if the problem is not scaled, and 0.5 if the problem is scaled.
- rhoend: float, optional
Final value of the trust region radius, which should be a positive scalar.
options['rhoend'] should indicate the accuracy required in the final values of the variables. Moreover, options['rhoend'] should be no more than options['rhobeg']. By default, it is 1e-6.
- maxfev: int, optional
Upper bound of the number of calls of the objective function fun. Its value must be at least options['npt'] + 1. By default, it is 500 * n.
- ftarget: float, optional
Target value of the objective function. If a feasible iterate achieves an objective function value lower than or equal to options['ftarget'], the algorithm stops immediately. By default, it is \(-\infty\).
- scale: bool, optional
Whether to scale the problem according to the bound constraints. By default, it is False. If the problem is to be scaled, then rhobeg and rhoend will be used as the initial and final trust region radii for the scaled problem.
- quiet: bool, optional
Whether the interface is quiet. If it is set to True, the output message will not be printed. This flag does not interfere with warning and error printing.
- classical: bool, optional
Whether to call the classical Powell code or not. It is not encouraged in production. By default, it is False.
- eliminate_lin_eq: bool, optional
Whether the linear equality constraints should be eliminated. By default, it is True.
- debug: bool, optional
Debugging flag. It is not encouraged in production. By default, it is False.
- chkfunval: bool, optional
Flag used when debugging. If both options['debug'] and options['chkfunval'] are True, an extra function/constraint evaluation is performed to check whether the returned values of the objective function and constraints match the returned x. By default, it is False.
- Returns:
- res: OptimizeResult
The results of the solver. Check OptimizeResult for a description of the attributes.
References
[1] M. J. D. Powell. A direct search optimization method that models the objective and constraint functions by linear interpolation. In S. Gomez and J. P. Hennart, editors, Advances in Optimization and Numerical Analysis, pages 51–67. Springer, 1994.
Examples
The following example shows how to solve a simple nonlinearly constrained optimization problem. The problem considered below should be solved with a derivative-based method. It is used here only as an illustration.
We consider the 2-dimensional problem
\[\begin{split}\min_{x, y \in \mathbb{R}} \quad x^2 + y^2 \quad \text{s.t.} \quad \left\{ \begin{array}{l} 0 \le x \le 2,\\ 1/2 \le y \le 3,\\ 0 \le x + y \le 1,\\ x^2 - y \le 0. \end{array} \right.\end{split}\]
We solve this problem using cobyla starting from the initial guess \((x_0, y_0) = (0, 1)\) with at most 200 function evaluations.
>>> from pdfo import Bounds, LinearConstraint, NonlinearConstraint, cobyla
>>> bounds = Bounds([0, 0.5], [2, 3])
>>> linear_constraints = LinearConstraint([1, 1], 0, 1)
>>> nonlinear_constraints = NonlinearConstraint(lambda x: x[0]**2 - x[1], None, 0)
>>> options = {'maxfev': 200}
>>> res = cobyla(lambda x: x[0]**2 + x[1]**2, [0, 1], bounds=bounds, constraints=[linear_constraints, nonlinear_constraints], options=options)
>>> res.x
array([0. , 0.5])
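The bound constraints can equivalently be supplied as an ndarray with shape (n, 2). A minimal sketch of this form, mirroring the Bounds instance above (an entry could be set to None or an infinite value if a bound were missing):
>>> import numpy as np
>>> bounds_array = np.array([[0, 2], [0.5, 3]])  # row i holds the lower and upper bounds of x[i]
>>> res = cobyla(lambda x: x[0]**2 + x[1]**2, [0, 1], bounds=bounds_array, constraints=[linear_constraints, nonlinear_constraints], options=options)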
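The same constraints can also be written in the dictionary form described above. A sketch assuming the bounds and options already defined; since each 'ineq' entry requires fun(x) >= 0, the linear constraint 0 <= x + y <= 1 becomes two entries:
>>> con = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1]},       # x + y >= 0
...        {'type': 'ineq', 'fun': lambda x: 1 - x[0] - x[1]},   # x + y <= 1
...        {'type': 'ineq', 'fun': lambda x: x[1] - x[0]**2}]    # x**2 - y <= 0
>>> res = cobyla(lambda x: x[0]**2 + x[1]**2, [0, 1], bounds=bounds, constraints=con, options=options)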
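A more detailed options dictionary might look as follows; the keys are fields documented above, and the values are purely illustrative, not recommendations:
>>> options = {'rhobeg': 0.1,    # initial trust region radius
...            'rhoend': 1e-8,   # final trust region radius
...            'maxfev': 1000,   # at most 1000 objective function evaluations
...            'quiet': True}    # suppress the output message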
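To pass extra parameters to the objective function, use args. A minimal sketch with a hypothetical objective shifted_norm and shift parameter c:
>>> def shifted_norm(x, c):
...     return (x[0] - c)**2 + (x[1] - c)**2
>>> res = cobyla(shifted_norm, [0, 0], args=(1.0,))  # evaluates shifted_norm(x, 1.0); res.x should be close to [1, 1]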
Note that cobyla can also be used to solve unconstrained, bound-constrained, and linearly constrained problems.
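For instance, a minimal unconstrained sketch (the minimizer is \((1, 2)\), so res.x should be close to that point):
>>> res = cobyla(lambda x: (x[0] - 1)**2 + (x[1] - 2)**2, [0, 0])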