pdfo.pdfo(fun, x0, args=(), method=None, bounds=None, constraints=(), options=None)

Powell’s Derivative-Free Optimization solvers.

PDFO is an interface to call Powell’s derivative-free optimization solvers: UOBYQA, NEWUOA, BOBYQA, LINCOA, and COBYLA. They are designed to minimize a scalar function of several variables subject to (possibly) simple bound constraints, linear constraints, and nonlinear constraints.

Parameters

fun: callable

Objective function to be minimized.

fun(x, *args) -> float

where x is an array with shape (n,) and args is a tuple.

x0: ndarray, shape (n,)

Initial guess.

args: tuple, optional

Parameters of the objective function. For example,

pdfo(fun, x0, args, ...)

is equivalent to

pdfo(lambda x: fun(x, *args), x0, ...)
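The equivalence can be checked without calling the solver at all. The objective below is a hypothetical example used only to illustrate how args fixes the extra parameters:

```python
import numpy as np

def quadratic(x, a, b):
    # Hypothetical parameterized objective: b * ||x - a||^2.
    return b * np.sum((x - a)**2)

# Fixing the extra parameters through args is the same as wrapping
# the objective in a lambda that closes over them.
args = (1.0, 2.0)
wrapped = lambda x: quadratic(x, *args)

x = np.array([0.5, 1.5])
print(wrapped(x))  # 2 * ((0.5 - 1)^2 + (1.5 - 1)^2) = 1.0
```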

method: str, optional

Name of the Powell method that will be used. By default, the most appropriate method will be chosen automatically. The available methods are: 'uobyqa', 'newuoa', 'bobyqa', 'lincoa', and 'cobyla'.

bounds: {Bounds, ndarray, shape (n, 2)}, optional

Bound constraints of the problem. It can be one of the cases below.

  1. An instance of Bounds.

  2. An ndarray with shape (n, 2). The bound constraint for x[i] is bounds[i, 0] <= x[i] <= bounds[i, 1]. Set bounds[i, 0] to \(-\infty\) or None if there is no lower bound, and set bounds[i, 1] to \(\infty\) or None if there is no upper bound.
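As a sketch of case 2, consider a hypothetical two-variable problem where x[0] has only a lower bound and x[1] has only an upper bound:

```python
import numpy as np

# Bounds for n = 2 variables in the (n, 2) ndarray form:
# 0 <= x[0] (no upper bound) and x[1] <= 3 (no lower bound).
# Row i gives the lower and upper bound of x[i].
bounds = np.array([[0.0, np.inf],
                   [-np.inf, 3.0]])

print(bounds.shape)  # (2, 2)
```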

constraints: {dict, LinearConstraint, NonlinearConstraint, list}, optional

Constraints of the problem. It can be one of the cases below.

  1. A dictionary with fields:

    type: str

    Constraint type: 'eq' for equality constraints and 'ineq' for inequality constraints.

    fun: callable

    The constraint function.

    When type='eq', such a dictionary specifies an equality constraint fun(x) = 0; when type='ineq', it specifies an inequality constraint fun(x) >= 0.

  2. An instance of LinearConstraint or NonlinearConstraint.

  3. A list, each of whose elements is a dictionary described in 1, or an instance of LinearConstraint or NonlinearConstraint.
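The dictionary form (case 1) and the list form (case 3) can be illustrated with two hypothetical constraints following the conventions above:

```python
# Inequality constraint x[0] + x[1] - 1 >= 0, i.e. x[0] + x[1] >= 1.
ineq = {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1.0}

# Equality constraint x[0] - x[1] = 0, i.e. x[0] = x[1].
eq = {'type': 'eq', 'fun': lambda x: x[0] - x[1]}

# A list mixing both dictionaries (case 3) can be passed as
# the `constraints` argument.
constraints = [ineq, eq]
print(ineq['fun']([2.0, 0.0]))  # 1.0, so the constraint is satisfied
```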

options: dict, optional

The options passed to the solver. It may optionally contain the following fields:

rhobeg: float, optional

Initial value of the trust region radius, which should be a positive scalar. Typically, options['rhobeg'] should be in the order of one tenth of the greatest expected change to a variable. By default, it is 1 if the problem is not scaled (but min(1, min(ub - lb) / 4) if the solver is BOBYQA), 0.5 if the problem is scaled.

rhoend: float, optional

Final value of the trust region radius, which should be a positive scalar. options['rhoend'] should indicate the accuracy required in the final values of the variables. Moreover, options['rhoend'] should be no more than options['rhobeg'] and is by default 1e-6.

maxfev: int, optional

Upper bound on the number of calls of the objective function fun. Its value must be at least options['npt'] + 1. By default, it is 500 * n.

npt: int, optional

Number of interpolation points of each model used in Powell’s Fortran code. It is used only if the solver is NEWUOA, BOBYQA, or LINCOA.

ftarget: float, optional

Target value of the objective function. If a feasible iterate achieves an objective function value less than or equal to options['ftarget'], the algorithm stops immediately. By default, it is \(-\infty\).

scale: bool, optional

Whether to scale the problem according to the bound constraints. By default, it is False. If the problem is to be scaled, then rhobeg and rhoend will be used as the initial and final trust region radii for the scaled problem.

honour_x0: bool, optional

Whether to respect the user-defined x0. By default, it is False. It is used only if the solver is BOBYQA.

quiet: bool, optional

Whether the interface is quiet. If it is set to True, the output message will not be printed. This flag does not suppress the printing of warnings and errors.

classical: bool, optional

Whether to call the classical Powell code. Its use is discouraged in production. By default, it is False.

eliminate_lin_eq: bool, optional

Whether the linear equality constraints should be eliminated. By default, it is True.

debug: bool, optional

Debugging flag. It is not encouraged in production. By default, it is False.

chkfunval: bool, optional

Flag used when debugging. If both options['debug'] and options['chkfunval'] are True, an extra function/constraint evaluation is performed to check whether the returned values of the objective function and constraints match the returned x. By default, it is False.
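Putting several of the fields above together, a typical options dictionary might look as follows. The values are illustrative, not recommendations:

```python
# Illustrative options dictionary; all fields are optional.
options = {
    'rhobeg': 0.1,             # initial trust region radius
    'rhoend': 1e-8,            # final radius; must not exceed rhobeg
    'maxfev': 1000,            # budget of objective function evaluations
    'ftarget': -float('inf'),  # stop early if this value is reached
    'quiet': True,             # suppress the output message
}
print(options['rhoend'] <= options['rhobeg'])  # True
```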

Returns

res: OptimizeResult

The results of the solver. Check OptimizeResult for a description of the attributes.

See also

uobyqa
    Unconstrained Optimization BY Quadratic Approximation.
newuoa
    NEW Unconstrained Optimization Algorithm.
bobyqa
    Bounded Optimization BY Quadratic Approximations.
lincoa
    LINearly Constrained Optimization Algorithm.
cobyla
    Constrained Optimization BY Linear Approximations.



References

T. M. Ragonneau and Z. Zhang. PDFO: a cross-platform package for Powell’s derivative-free optimization solvers. arXiv:2302.13246 [math.OC], 2023.


Examples

The following example shows how to solve a simple constrained optimization problem. The problem considered below could be solved more efficiently with a derivative-based method; it is used here only as an illustration.

We consider the 2-dimensional problem

\[\begin{split}\min_{x, y \in \mathbb{R}} \quad x^2 + y^2 \quad \text{s.t.} \quad \left\{ \begin{array}{l} 0 \le x \le 2,\\ 1 / 2 \le y \le 3,\\ 0 \le x + y \le 1,\\ x^2 - y \le 0. \end{array} \right.\end{split}\]

We solve this problem using pdfo starting from the initial guess \((x_0, y_0) = (0, 1)\) with at most 200 function evaluations.

>>> from pdfo import Bounds, LinearConstraint, NonlinearConstraint, pdfo
>>> bounds = Bounds([0, 0.5], [2, 3])
>>> linear_constraints = LinearConstraint([1, 1], 0, 1)
>>> nonlinear_constraints = NonlinearConstraint(lambda x: x[0]**2 - x[1], None, 0)
>>> options = {'maxfev': 200}
>>> res = pdfo(lambda x: x[0]**2 + x[1]**2, [0, 1], bounds=bounds, constraints=[linear_constraints, nonlinear_constraints], options=options)
>>> res.x
array([0. , 0.5])