SciPy Optimization

The scipy.optimize package provides several commonly used optimization algorithms. This module contains the following aspects −

  • Unconstrained and constrained minimization of multivariate scalar functions (minimize()) using a variety of algorithms such as BFGS, Nelder-Mead simplex, Newton conjugate gradient, COBYLA, or SLSQP.

  • Global (brute-force) optimization routines (e.g., brute(), basinhopping()).

  • Least squares (leastsq()) and curve fitting (curve_fit()) algorithms.

  • Scalar univariate function minimization (minimize_scalar()) and root finders (newton()); a quick sketch of minimize_scalar() appears after this list.

  • Multivariate equation solvers (root()), using various algorithms (e.g., hybrid Powell, Levenberg-Marquardt, or large-scale methods such as Newton-Krylov).
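
A minimal sketch of the scalar minimization interface mentioned above (the quadratic objective here is purely illustrative):

from scipy.optimize import minimize_scalar

# Minimize a simple univariate function; (x - 2)**2 has its minimum at x = 2.
res = minimize_scalar(lambda x: (x - 2) ** 2)
print(res.x)   # approximately 2.0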

Unconstrained and Constrained Minimization of Multivariate Scalar Functions

minimize() provides a general interface to the unconstrained and constrained minimization algorithms for multivariate scalar functions in scipy.optimize. To illustrate the minimization routines, consider the problem of minimizing the Rosenbrock function of N variables:

f(x) = sum_{i=1}^{N-1} [100(x_i - x_{i-1}^2)^2 + (1 - x_{i-1})^2]

The minimum value of this function is 0, which is achieved when x_i = 1 for every i.

Nelder-Mead Simplex Algorithm

In the following example, the minimize() routine is used with the Nelder-Mead simplex algorithm, selected via the method parameter (method = 'nelder-mead'). Let's consider the following example.

import numpy as np
from scipy.optimize import minimize

def rosen(x):
   """The Rosenbrock function."""
   return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead', options={'xatol': 1e-8})

print(res.x)

The above program will produce output similar to the following, converging to the known minimum −

[1. 1. 1. 1. 1.]

The simplex algorithm is probably the simplest way to minimize a fairly well-behaved function. It requires only function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.

Another optimization algorithm that requires only function calls to find the minimum is the Powell method, which can be selected by setting method = 'powell' in the minimize() call, as sketched below.
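
A minimal sketch, reusing the Rosenbrock function and starting point from the example above (the result in the comment is approximate):

import numpy as np
from scipy.optimize import minimize

def rosen(x):
   """The Rosenbrock function."""
   return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='powell')
print(res.x)   # also converges to approximately [1, 1, 1, 1, 1]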

Least Squares Method

Solve a nonlinear least-squares problem with bounds on the variables. Given the residuals f(x) (an m-dimensional real function of n real variables) and the loss function rho(s) (a scalar function), least_squares finds a local minimum of the cost function

F(x) = 0.5 * sum_{i=1}^{m} rho(f_i(x)^2),   subject to lb <= x <= ub

In the following example, we find a minimum of the Rosenbrock function without bounds on the independent variables.

# Rosenbrock function expressed as a vector of residuals
import numpy as np
from scipy.optimize import least_squares

def fun_rosenbrock(x):
   return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])

x0 = np.array([2, 2])
res = least_squares(fun_rosenbrock, x0)

print(res)

Note that we only provide a vector of residuals. This algorithm constructs the cost function as the sum of the squared residuals, resulting in the Rosenbrock function. The exact minimum is at x = [1.0, 1.0].

The above program will produce the following output.

active_mask: array([ 0.,  0.])
       cost: 9.8669242910846867e-30
        fun: array([ 4.44089210e-15,  1.11022302e-16])
       grad: array([ -8.89288649e-14,  4.44089210e-14])
        jac: array([[-20.00000015,  10.        ],
                    [ -1.        ,   0.        ]])
    message: '`gtol` termination condition is satisfied.'
       nfev: 3
       njev: 3
 optimality: 8.8928864934219529e-14
     status: 1
    success: True
          x: array([ 1.,  1.])

Root Finding

Let’s explore the role of root finding in SciPy.

Scalar Functions

If you have a single-variable equation, there are four bracketing root-finding algorithms you can try (brentq, brenth, ridder, and bisect). Each of them requires the endpoints of an interval in which a root is expected, because the function changes sign there. In general, brentq is the best choice, but the other methods may be useful in certain circumstances or for academic purposes.
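
A minimal sketch of the bracketing interface (the function and interval here are purely illustrative):

from scipy.optimize import brentq

# Find a root of f(x) = x**2 - 2 inside the bracketing interval [0, 2],
# across which f changes sign.
r = brentq(lambda x: x**2 - 2, 0, 2)
print(r)   # approximately 1.41421356 (sqrt(2))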

Fixed Point Solving

A problem closely related to finding the zeros of a function is the problem of finding a fixed point of a function. A fixed point of a function g is a point x at which g(x) = x. Clearly, the fixed points of g are the roots of f(x) = g(x) - x. Equivalently, the roots of f are the fixed points of g(x) = f(x) + x. The routine fixed_point provides a simple iterative method using Aitken's sequence acceleration to estimate the fixed point of g, given a starting point.
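
A minimal sketch using cos, whose fixed point satisfies cos(x) = x:

import numpy as np
from scipy.optimize import fixed_point

# Iterate toward the point where cos(x) = x, starting from x = 0.5.
x_fix = fixed_point(np.cos, 0.5)
print(x_fix)   # approximately 0.7390851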

System of Equations

Finding the roots of a nonlinear system of equations can be performed using the root() function. Several methods are available, including hybr (the default) and lm, which use Powell’s hybrid method and MINPACK’s Levenberg-Marquardt method, respectively.

The following example considers a transcendental equation in a single variable.

2x + 2cos(x) = 0

A root of which can be found as follows −

import numpy as np
from scipy.optimize import root

def func(x):
   return 2 * x + 2 * np.cos(x)

sol = root(func, 0.3)
print(sol)

The above program will produce the following output.

    fjac: array([[-1.]])
     fun: array([  2.22044605e-16])
 message: 'The solution converged.'
    nfev: 10
     qtf: array([ -2.77644574e-12])
       r: array([-3.34722409])
  status: 1
 success: True
       x: array([-0.73908513])
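
The same root() interface also handles systems of equations through a vector-valued residual function. A minimal sketch (the two equations below are purely illustrative):

from scipy.optimize import root

# Illustrative 2x2 nonlinear system:
#   x0 + 0.5*(x0 - x1)**3 - 1 = 0
#   0.5*(x1 - x0)**3 + x1     = 0
def system(x):
   return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
           0.5 * (x[1] - x[0])**3 + x[1]]

sol = root(system, [0, 0], method='hybr')
print(sol.x)   # approximately [0.841, 0.159]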
