I need to optimize a function f with respect to a vector x. The function also takes a constant matrix m as input and returns a scalar v >= 0.
MWE with random numbers:
import numpy as np
from scipy.optimize import minimize
np.random.seed(1)
m = np.array([[1,0,0.15],[2,0,0.15],[1.5,0.2,0.2],[3,0.5,0.1],[2.2,0.1,0.15]])
x0 = np.random.rand(5)*2
def f(x, m):
    pg = -np.concatenate((-arr[:, :2], x.reshape(-1, 1)), axis=1).sum(axis=1)
    return sum(arr[:, 2] * pg)

res = minimize(
    f, x0,
    method='nelder-mead', args=(m,),
    options={'xatol': 1e-8, 'maxiter': 1e+4, 'disp': True}
)
How do I set up a constraint on the output value? As far as I can tell from the docs, I can only set constraints on the inputs. I read this post suggesting minimize_scalar, but that can only be used when the input is a scalar as well.
Answer
Simply add the constraint f(x, m) >= 0. Note that Nelder-Mead cannot handle constraints (SciPy ignores them with a warning), so the snippet below uses SLSQP instead:
import numpy as np
from scipy.optimize import minimize

np.random.seed(1)
m = np.array([[1,0,0.15],[2,0,0.15],[1.5,0.2,0.2],[3,0.5,0.1],[2.2,0.1,0.15]])
x0 = np.random.rand(5)*2

def f(x, arr):
    pg = -np.concatenate((-arr[:, :2], x.reshape(-1, 1)), axis=1).sum(axis=1)
    return np.sum(arr[:, 2] * pg)

# add the constraint f(x, m) >= 0
con = [{'type': 'ineq', 'fun': lambda x: f(x, m)}]

res = minimize(
    f, x0,
    constraints=con,
    method='SLSQP', args=(m,),  # SLSQP supports inequality constraints
    options={'ftol': 1e-8, 'maxiter': 10000, 'disp': True}
)
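You can then check that the solution actually respects the constraint (up to the solver's feasibility tolerance):

print(res.x)
print(f(res.x, m))  # >= 0 up to the solver's tolerance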
Alternatively, you can enforce a non-negative objective value by minimizing a norm of your objective, e.g. f(x, m)**2. You wouldn't need a constraint then.
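A minimal sketch of that alternative (note it drives f toward zero rather than merely keeping it non-negative, so check that this is what you want):

# no constraints needed: the minimum of f**2 lies where f is (near) zero
res2 = minimize(
    lambda x: f(x, m)**2, x0,
    method='nelder-mead',
    options={'xatol': 1e-8, 'maxiter': 10000, 'disp': True}
)
print(f(res2.x, m))  # close to 0, possibly slightly negative within tolerance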
PS: The second argument of your function should probably be arr instead of m, since the function body references arr.
PPS: Since both your objective function and the constraint are continuously differentiable, a gradient-based algorithm will very likely perform much better than Nelder-Mead, even if the gradient is approximated by finite differences.
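For this MWE the objective is in fact linear in x, f(x) = sum(arr[:, 2] * (arr[:, 0] + arr[:, 1] - x)), so the exact gradient is simply -arr[:, 2]. A minimal sketch passing it to SLSQP via jac (grad_f is an illustrative helper, not part of the original code):

# exact gradient: df/dx = -arr[:, 2], since f is linear in x
def grad_f(x, arr):
    return -arr[:, 2]

res3 = minimize(
    f, x0,
    jac=grad_f,
    constraints=[{'type': 'ineq', 'fun': lambda x: f(x, m),
                  'jac': lambda x: grad_f(x, m)}],
    method='SLSQP', args=(m,),
    options={'ftol': 1e-8, 'maxiter': 10000, 'disp': True}
)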