SciPy Optimize Cookbook
Continuous and linear optimization with practical recipes for objectives, constraints, Jacobians, scaling, robust losses, and global strategies.
minimize: Bounds, Constraints, Jacobian
import numpy as np
from scipy.optimize import minimize, Bounds, LinearConstraint, NonlinearConstraint
# Objective (with gradient)
def f(x):
    # Rosenbrock value and gradient
    a, b = 1.0, 100.0
    x1, x2 = x
    fx = (a - x1)**2 + b*(x2 - x1**2)**2
    # gradient
    dfdx1 = -2*(a - x1) - 4*b*x1*(x2 - x1**2)
    dfdx2 = 2*b*(x2 - x1**2)
    return fx, np.array([dfdx1, dfdx2])
def fun(x): return f(x)[0]   # objective value only
def jac(x): return f(x)[1]   # gradient only
x0 = np.array([-1.2, 1.0])
# Box bounds
bounds = Bounds([ -2.0, -1.0 ], [ 2.0, 3.0 ])
# Linear constraint: A x = b
A = np.array([[1.0, 1.0]])
lin_con = LinearConstraint(A, [0.0], [0.0]) # x1 + x2 = 0
# Nonlinear constraint: g(x) >= 0
def g(x):
    x1, x2 = x
    val = x2 - x1**2                 # >= 0 (above parabola)
    grad = np.array([-2*x1, 1.0])    # gradient of g
    return val, grad
nl_con = NonlinearConstraint(lambda x: g(x)[0], 0.0, np.inf, jac=lambda x: g(x)[1])
res = minimize(fun, x0, method="trust-constr",
               jac=jac, bounds=bounds, constraints=[lin_con, nl_con],
               options=dict(verbose=3, maxiter=500))
print(res.x, res.success, res.message)
Prefer “trust‑constr” for constrained smooth problems. Provide analytic Jacobians for speed and reliability. Use scaling to normalize variable magnitudes.
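A minimal sketch of manual variable scaling for minimize; the characteristic scales and the placeholder objective below are assumptions for illustration:
import numpy as np
from scipy.optimize import minimize

scales = np.array([1e3, 1e-2])             # assumed characteristic magnitudes

def objective_raw(x):
    # placeholder objective in physical units, minimum at x = [2e3, 5e-2]
    return (x[0] - 2e3)**2 / 1e6 + (x[1] - 5e-2)**2 * 1e4

def objective_scaled(z):                    # solver works in z ~ O(1)
    return objective_raw(z * scales)

res = minimize(objective_scaled, np.array([1.0, 1.0]), method="L-BFGS-B")
print(res.x * scales)                       # map back to physical units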
least_squares: Robust Regression with Bounds
import numpy as np
from scipy.optimize import least_squares
def residuals(theta, t, y):
    a, b, c = theta
    return a*np.exp(-b*t) + c - y
t = np.linspace(0, 5, 200)
true = (2.5, 1.3, 0.5)
y = true[0]*np.exp(-true[1]*t) + true[2] + 0.2*np.random.randn(t.size)
x0 = np.array([1.0, 1.0, 0.0])
bounds = ([0.0, 0.0, -np.inf], [5.0, 5.0, np.inf])
ls = least_squares(residuals, x0, bounds=bounds,
                   loss="soft_l1", f_scale=0.2, args=(t, y), jac="2-point")
print(ls.x, ls.cost, ls.status)
# Weighted residuals
w = 1.0 / (0.1 + t) # example weights
ls_w = least_squares(lambda th: residuals(th, t, y)*w, x0, bounds=bounds)
Use robust losses (“soft_l1”, “huber”) to reduce outlier influence. Weights model heteroskedastic noise. Supply Jacobian (“2‑point”, “3‑point”, or analytic) for speed.
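For this exponential model the analytic Jacobian is cheap to write down; a sketch reusing the residuals, data, and bounds defined above:
def jac_analytic(theta, t, y):
    a, b, c = theta
    e = np.exp(-b * t)
    J = np.empty((t.size, 3))
    J[:, 0] = e              # d r / d a
    J[:, 1] = -a * t * e     # d r / d b
    J[:, 2] = 1.0            # d r / d c
    return J

ls_jac = least_squares(residuals, x0, jac=jac_analytic, bounds=bounds,
                       loss="soft_l1", f_scale=0.2, args=(t, y))
print(ls_jac.x)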
curve_fit: Quick Nonlinear Fits
import numpy as np
from scipy.optimize import curve_fit
def model(x, a, b, c): return a*np.sin(b*x) + c
x = np.linspace(0, 2*np.pi, 200)
y = model(x, 2.0, 1.5, 0.2) + 0.2*np.random.randn(x.size)
p0 = (1.0, 1.0, 0.0)
popt, pcov = curve_fit(model, x, y, p0=p0, method="trf", bounds=([0, 0, -1], [5, 5, 1]))
perr = np.sqrt(np.diag(pcov)) # 1-sigma
print(popt, perr)
curve_fit is convenient but limited. Use least_squares for robust losses, custom residuals, and finer control over bounds and Jacobians.
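If per-point measurement uncertainties are known, curve_fit can weight them via sigma; with absolute_sigma=True, pcov is reported in absolute units. A sketch reusing the model and data above (the sigma values are assumed for illustration):
sigma = 0.2 * np.ones_like(x)          # assumed 1-sigma noise per point
popt_w, pcov_w = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)
print(popt_w, np.sqrt(np.diag(pcov_w)))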
Linear Programming with linprog
import numpy as np
from scipy.optimize import linprog
# Minimize: c^T x
c = np.array([ 1.0, 2.0 ])
# Subject to: A_ub x <= b_ub
A_ub = np.array([
    [ 1.0, 1.0],
    [-1.0, 2.0],
])
b_ub = np.array([4.0, 2.0])
# Bounds on variables
bounds = [(0, None), (0, None)] # x1 >= 0, x2 >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun, res.status, res.message)
Use method="highs" (HiGHS) for fast, robust linear programming. SciPy's linprog handles LP only; for mixed-integer problems, scipy.optimize.milp (SciPy >= 1.9) uses the same HiGHS backend.
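A minimal mixed-integer sketch with scipy.optimize.milp (SciPy >= 1.9); the tiny model below is illustrative only:
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize x1 + 2*x2  ->  minimize -(x1 + 2*x2), with x integer and non-negative
c = np.array([-1.0, -2.0])
con = LinearConstraint(np.array([[1.0, 1.0]]), ub=4.0)   # x1 + x2 <= 4
res_mip = milp(c, constraints=con, integrality=np.ones(2),
               bounds=Bounds(0, np.inf))
print(res_mip.x, res_mip.fun)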
Global Optimization: basinhopping, differential_evolution
import numpy as np
from scipy.optimize import basinhopping, differential_evolution
# Nonconvex function (Rastrigin)
def rastrigin(x):
    A = 10.0
    return A*len(x) + np.sum(x**2 - A*np.cos(2*np.pi*x))
# Bounds for DE
bounds = [(-5.12, 5.12)]*2
# Differential evolution (global)
de = differential_evolution(rastrigin, bounds, maxiter=200, polish=True)
print("DE:", de.x, de.fun)
# Basinhopping (random restarts around local minimizer)
from scipy.optimize import minimize
x0 = np.array([3.0, 4.0])
bh = basinhopping(rastrigin, x0, niter=100,
                  minimizer_kwargs=dict(method="L-BFGS-B", bounds=bounds))
print("BH:", bh.x, bh.fun)
For rugged landscapes, try global methods first to get good seeds, then polish with local solvers (L‑BFGS‑B, trust‑constr). Respect bounds and scale variables.
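A sketch of that seed-then-polish pattern, reusing the differential_evolution result and bounds from above:
seed = de.x                              # global candidate from differential_evolution
polished = minimize(rastrigin, seed, method="L-BFGS-B", bounds=bounds)
print("polished:", polished.x, polished.fun)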
Scaling & Initialization
- Normalize variables to similar magnitudes (e.g., 0–1) for better conditioning.
- Provide a sensible initial guess close to the expected solution when possible.
- Beware: poor scaling leads to slow or failed convergence; rescale objectives and constraints.
- Use logarithmic reparameterization for inherently positive variables.
# Example: positive parameters via log-parameterization
import numpy as np
def f_logparam(z):
    x = np.exp(z)              # x > 0 by construction
    return objective_in_x(x)   # objective_in_x: your objective in the original variables
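Usage sketch: optimize in z, then map back with exp. The objective_in_x defined here is a hypothetical stand-in; substitute your own positive-parameter objective.
import numpy as np
from scipy.optimize import minimize

def objective_in_x(x):
    # hypothetical stand-in with its minimum at x = [2.0, 0.5] (both positive)
    return np.sum((np.log(x) - np.log(np.array([2.0, 0.5])))**2)

z0 = np.log(np.array([1.0, 1.0]))    # start from x = [1, 1]
res = minimize(f_logparam, z0, method="BFGS")
print(np.exp(res.x))                 # recovered positive parameters, ~[2.0, 0.5]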
Callbacks, Stopping Criteria, Diagnostics
import numpy as np
from scipy.optimize import minimize
def fun(x): return (x[0]-1)**2 + (x[1]+2)**2
iters = []
def cb(xk):
    iters.append(xk.copy())
    # Early-stop example: returning True stops only method="trust-constr";
    # for other methods, raising StopIteration terminates gracefully (SciPy >= 1.11).
    if len(iters) > 50:
        raise StopIteration
res = minimize(fun, x0=np.array([5.0, -5.0]), method="L-BFGS-B", callback=cb,
               options=dict(maxiter=200, ftol=1e-10, gtol=1e-8))
print("iters:", len(iters), "x:", res.x, "fun:", res.fun)
Inspect res.status/message for termination reasons. Tune ftol/gtol/maxiter. Log progress with callbacks and plot objective vs. iteration.
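A sketch of logging the objective per iteration for later plotting, reusing fun from above (matplotlib call shown commented out):
history = []
def cb_log(xk):
    history.append(fun(xk))   # record the objective value at each iterate

res_log = minimize(fun, x0=np.array([5.0, -5.0]), method="L-BFGS-B", callback=cb_log)
print("objective per iteration:", history)
# import matplotlib.pyplot as plt; plt.semilogy(history); plt.show()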
Troubleshooting & FAQ
Doesn’t converge / oscillates
- Rescale variables and objective; supply gradients.
- Try a different algorithm (CG, BFGS, L‑BFGS‑B, trust‑constr).
- Improve initial guess or warm‑start from a global method.
Constraint violation
- Use trust‑constr and provide Jacobians for constraints.
- Relax tolerances slightly; check feasibility of x0.
Sensitive to outliers
- Switch to least_squares with robust loss (“soft_l1”, “huber”).
- Use weighted residuals.
Ill‑conditioned problem
- Regularize (add small penalties), reparameterize, or constrain.
- Check Hessian approximation; try trust‑region methods.
Slow evaluations
- Cache expensive computations; use JAX/Numba for gradients.
- Parallelize residuals for least_squares if independent.
LP infeasible/unbounded
- Double‑check signs in A, b; add bounds; inspect solver status (see the status check below).
- Simplify model to isolate infeasible constraints.
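To act on "inspect solver status", a small sketch reusing res from the linprog recipe above (with HiGHS, status 2 means infeasible and 3 means unbounded):
if res.status == 2:
    print("infeasible:", res.message)
elif res.status == 3:
    print("unbounded:", res.message)
else:
    print("status", res.status, "-", res.message)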
FAQ
When to use L‑BFGS‑B vs trust‑constr? L‑BFGS‑B for box‑bounded smooth problems; trust‑constr for general constraints with smooth Jacobians.
How to handle positivity constraints elegantly? Reparameterize with logs or use bounds/inequalities; logs often improve conditioning.
How to detect local vs global minima? Run from multiple initial points; compare objective values; use global methods to seed locals.
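A minimal multi-start sketch; the Rastrigin test function and the 20 random starts are illustrative choices:
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10*len(x) + np.sum(x**2 - 10*np.cos(2*np.pi*x))

rng = np.random.default_rng(0)
bounds = [(-5.12, 5.12)] * 2
starts = rng.uniform(-5.12, 5.12, size=(20, 2))
results = [minimize(rastrigin, s, method="L-BFGS-B", bounds=bounds) for s in starts]
best = min(results, key=lambda r: r.fun)
print("best of 20 starts:", best.x, best.fun)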