pdfo.pdfo(fun, x0, args=(), method=None, bounds=None, constraints=(), options=None)

Powell’s Derivative-Free Optimization solvers.

PDFO is an interface to call Powell’s derivative-free optimization solvers: UOBYQA, NEWUOA, BOBYQA, LINCOA, and COBYLA. They are designed to minimize a scalar function of several variables subject to (possibly) bound constraints, linear constraints, and nonlinear constraints.


This function does not accept any ‘solver’ option in options. To specify which solver to use, pass the method argument.


Parameters

fun : callable

Objective function to be minimized.

fun(x, *args) -> float

where x is an array with shape (n,) and args is a tuple.

x0 : array_like, shape (n,)

Initial guess.

args : tuple, optional

Extra arguments of the objective function. For example,

pdfo(fun, x0, args, ...)

is equivalent to

pdfo(lambda x: fun(x, *args), x0, ...)
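This equivalence can be checked directly. The objective shifted_quadratic and the shift value below are hypothetical, used only to illustrate how the extra argument is forwarded:

```python
import numpy as np

# Hypothetical objective taking an extra argument `c` (the shift).
def shifted_quadratic(x, c):
    return float(np.sum((np.asarray(x) - c) ** 2))

# Passing args=(2.0,) to pdfo is equivalent to wrapping the call yourself:
wrapped = lambda x: shifted_quadratic(x, 2.0)

x = np.array([1.0, 3.0])
print(shifted_quadratic(x, 2.0))  # 2.0
print(wrapped(x))                 # 2.0
```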

method : {‘uobyqa’, ‘newuoa’, ‘bobyqa’, ‘lincoa’, ‘cobyla’}, optional

Name of the Powell method that will be used. By default, ‘uobyqa’ is selected if the problem is unconstrained with 2 <= n <= 8, ‘newuoa’ is selected if the problem is unconstrained with n = 1 or n >= 9, ‘bobyqa’ is selected if the problem is bound-constrained, ‘lincoa’ is selected if the problem is linearly constrained, and ‘cobyla’ is selected otherwise.
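The default selection rule can be sketched as a small helper. This is a paraphrase of the rule stated above, not pdfo's actual implementation:

```python
def default_method(n, bounds=None, linear=None, nonlinear=None):
    """Sketch of the default solver selection described above (not pdfo's code)."""
    if nonlinear is not None:
        return 'cobyla'   # nonlinearly constrained
    if linear is not None:
        return 'lincoa'   # linearly constrained
    if bounds is not None:
        return 'bobyqa'   # bound-constrained
    # Unconstrained: UOBYQA for 2 <= n <= 8, NEWUOA otherwise.
    return 'uobyqa' if 2 <= n <= 8 else 'newuoa'

print(default_method(5))                   # uobyqa
print(default_method(9))                   # newuoa
print(default_method(3, bounds=[(0, 1)]))  # bobyqa
```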

bounds : {scipy.optimize.Bounds, array_like of shape (n, 2)}, optional

Bound constraints of the problem. It can be one of the cases below.

  1. An instance of scipy.optimize.Bounds.

  2. An array with shape (n, 2). The bound constraints for x[i] are bounds[i, 0] <= x[i] <= bounds[i, 1]. Set bounds[i, 0] to \(-\infty\) if there is no lower bound, and set bounds[i, 1] to \(\infty\) if there is no upper bound.
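For instance, the same box can be specified either way. The two-variable box below is an assumed example:

```python
import numpy as np
from scipy.optimize import Bounds

# The box 0 <= x0 <= 2 and x1 <= 3 (no lower bound on x1), both ways:
bounds_obj = Bounds([0.0, -np.inf], [2.0, 3.0])
bounds_arr = np.array([[0.0, 2.0],
                       [-np.inf, 3.0]])

# Column 0 of the array plays the role of Bounds.lb, column 1 of Bounds.ub.
assert np.array_equal(bounds_obj.lb, bounds_arr[:, 0])
assert np.array_equal(bounds_obj.ub, bounds_arr[:, 1])
```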

constraints : {dict, scipy.optimize.LinearConstraint, scipy.optimize.NonlinearConstraint, list}, optional

Constraints of the problem. It can be one of the cases below.

  1. A dictionary with fields:

    type : {‘eq’, ‘ineq’}

    Whether the constraint is fun(x) = 0 or fun(x) >= 0.

    fun : callable

    Constraint function.

  2. An instance of scipy.optimize.LinearConstraint.

  3. An instance of scipy.optimize.NonlinearConstraint.

  4. A list, each element of which is one of the cases described in 1, 2, and 3 above.
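For example, the same inequality x0 + x1 <= 1 can be written as a dict or as a scipy.optimize.LinearConstraint, and different forms can be mixed in a list. The constraints below are illustrative:

```python
import numpy as np
from scipy.optimize import LinearConstraint, NonlinearConstraint

# Dict form: 'ineq' means fun(x) >= 0, so x0 + x1 <= 1 becomes:
ineq = {'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[1]}

# Equivalent LinearConstraint form: -inf <= x0 + x1 <= 1.
lin = LinearConstraint([[1.0, 1.0]], -np.inf, 1.0)

# A nonlinear constraint x0**2 - x1 <= 0, and a mixed list of all three:
nonlin = NonlinearConstraint(lambda x: x[0] ** 2 - x[1], -np.inf, 0.0)
constraints = [ineq, lin, nonlin]

x = np.array([0.3, 0.4])
print(ineq['fun'](x) >= 0)  # True: the point satisfies x0 + x1 <= 1
```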

options : dict, optional

The options passed to the solver. Accepted options are:

radius_init : float, optional

Initial value of the trust-region radius. Typically, it should be on the order of one tenth of the greatest expected change to the variables.

radius_final : float, optional

Final value of the trust-region radius. It must be smaller than or equal to options['radius_init'] and should indicate the accuracy required in the final values of the variables.

maxfev : int, optional

Maximum number of function evaluations.

ftarget : float, optional

Target value of the objective function. The optimization procedure is terminated when the objective function value of a nearly feasible point is less than or equal to this target.

npt : int, optional

Number of interpolation points used by NEWUOA, BOBYQA, and LINCOA.

quiet : bool, optional

Whether to suppress the output messages.

scale : bool, optional

Whether to scale the problem according to the bound constraints.

eliminate_lin_eq : bool, optional

Whether to eliminate linear equality constraints.

honour_x0 : bool, optional

Whether to honour the initial guess. It is only used by BOBYQA.

classical : bool, optional

Whether to use the classical version of Powell’s methods. It is highly discouraged in production.

debug : bool, optional

Whether to perform debugging checks. It is highly discouraged in production.

chkfunval : bool, optional

Whether to check the values of the objective and constraint functions at the solution. This is only done in debug mode and requires one extra function evaluation. It is highly discouraged in production.
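Putting the options above together, a typical options dict might look as follows. The numerical values are illustrative assumptions, not pdfo's defaults:

```python
# Illustrative values only; pdfo supplies its own defaults for omitted keys.
options = {
    'radius_init': 1.0,    # ~1/10 of the greatest expected change in x
    'radius_final': 1e-6,  # accuracy required in the final variables
    'maxfev': 500,         # budget of function evaluations
    'quiet': True,         # suppress output messages
}

# radius_final must not exceed radius_init.
assert options['radius_final'] <= options['radius_init']
```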


Returns

res : scipy.optimize.OptimizeResult

Result of the optimization procedure, with the following fields:

message : str

Description of the exit status specified in the status field (i.e., the cause of the termination of the solver).

success : bool

Whether the optimization procedure terminated successfully.

status : int

Termination status of the optimization procedure.

fun : float

Objective function value at the solution point.

x : numpy.ndarray, shape (n,)

Solution point.

nfev : int

Number of function evaluations.

fun_history : numpy.ndarray, shape (nfev,)

History of the objective function values.

method : str

Name of the Powell method used.

For constrained problems, the following fields are also returned:


maxcv : float

Maximum constraint violation at the solution point.

maxcv_history : numpy.ndarray, shape (nfev,)

History of the maximum constraint violation.
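As an illustration of what the maximum constraint violation measures, it can be computed by hand for a trial point. This sketch assumes that violations of bound, linear, and nonlinear constraints are aggregated by taking their maximum; the constraints themselves are illustrative:

```python
import numpy as np

# Constraints: 0 <= x, x0 + x1 <= 1, x0**2 - x1 <= 0 (illustrative).
x = np.array([0.6, 0.2])
viol_bounds = float(np.maximum(0.0, -x).max())  # violation of 0 <= x
viol_linear = max(0.0, float(x.sum()) - 1.0)    # violation of x0 + x1 <= 1
viol_nonlin = max(0.0, x[0] ** 2 - x[1])        # violation of x0**2 - x1 <= 0

maxcv = max(viol_bounds, viol_linear, viol_nonlin)
print(maxcv)  # ~0.16: only the nonlinear constraint is violated here
```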

For linearly and nonlinearly constrained problems, the following field is also returned:

constraints : {numpy.ndarray, list}

The values of the constraints at the solution point. If a single constraint is passed, i.e., if the constraints argument is either a dict, a scipy.optimize.LinearConstraint, or a scipy.optimize.NonlinearConstraint, then the returned value is a numpy.ndarray, which is the value of the constraint at x. Otherwise, it is a list of numpy.ndarray, each element being the value of a constraint at x.

This function attempts to detect whether the constraints are infeasible (if no infeasibility is detected, however, this does not prove that the problem is feasible). If the optimization procedure terminated because some constraints are infeasible (i.e., when the exit status is -4), the following fields may also be returned:


Indices of the bounds that are infeasible.


Indices of the linear constraints that are infeasible.


Indices of the nonlinear constraints that are infeasible.

Finally, if warnings are raised during the optimization procedure, the following field is also returned:

warnings : list

A list of the warnings raised during the optimization procedure.

A description of the termination statuses is given below.

- The lower bound on the trust-region radius is reached.
- The target value of the objective function is reached.
- A trust-region step has failed to reduce the quadratic model.
- The maximum number of function evaluations is reached.
- Much cancellation occurred in a denominator.
- Rounding errors are becoming damaging.
- Rounding errors are damaging the solution point.
- A denominator has become zero.
- All variables are fixed by the bounds.
- A linear feasibility problem has been received and solved.
- A linear feasibility problem has been received but failed.
- NaN is encountered in the solution point.
- NaN is encountered in the objective/constraint function value. This is possible only in the classical mode.
- NaN is encountered in the model parameter.
- The problem is infeasible (exit status -4).



References

T. M. Ragonneau and Z. Zhang. PDFO: a cross-platform package for Powell’s derivative-free optimization solvers. arXiv:2302.13246 [math.OC], 2023.


Examples

The following example shows how to solve a simple optimization problem using pdfo. In practice, the problem considered below should be solved with a derivative-based method, as it is a smooth problem whose derivatives are known. We solve it here with pdfo only as an illustration.

We consider the 2-dimensional problem

\[\begin{split}\min_{x, y \in \mathbb{R}} \quad x^2 + y^2 \quad \text{s.t.} \quad \left\{ \begin{array}{l} 0 \le x \le 2,\\ 1/2 \le y \le 3,\\ 0 \le x + y \le 1,\\ x^2 - y \le 0. \end{array} \right.\end{split}\]

We solve this problem using pdfo starting from the initial guess \((x_0, y_0) = (0, 1)\) with at most 200 function evaluations.

>>> import numpy as np
>>> from pdfo import pdfo
>>> from scipy.optimize import Bounds, LinearConstraint, NonlinearConstraint
>>> # Build the constraints.
>>> bounds = Bounds([0, 0.5], [2, 3])
>>> linear_constraints = LinearConstraint([1, 1], 0, 1)
>>> nonlinear_constraints = NonlinearConstraint(lambda x: x[0]**2 - x[1], -np.inf, 0)
>>> constraints = [linear_constraints, nonlinear_constraints]
>>> # Solve the problem.
>>> options = {'maxfev': 200}
>>> res = pdfo(lambda x: x[0]**2 + x[1]**2, [0, 1], bounds=bounds, constraints=constraints, options=options)
>>> res.x
array([0. , 0.5])