Say I have a function f(x, y, z) that I want to minimize using scipy.optimize.minimize, subject to the constraint x < y < z.
I don't think I can use the bounds argument to do this, because it does not accept bounds that depend on the other variables (am I wrong?).
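For what it's worth, the constraints argument of minimize (unlike bounds) does accept variable-dependent inequalities. A minimal sketch, approximating the strict inequalities with a small margin eps (the starting point and eps value here are just illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Objective takes a single array argument, as minimize expects
def f(v):
    x, y, z = v
    return x**2 + y**2 + z**2

# Strict x < y < z approximated by y - x >= eps and z - y >= eps;
# 'ineq' constraints mean fun(v) >= 0 at the solution
eps = 1e-8
cons = [{"type": "ineq", "fun": lambda v: v[1] - v[0] - eps},
        {"type": "ineq", "fun": lambda v: v[2] - v[1] - eps}]

res = minimize(f, x0=[-1.0, 0.0, 1.0], method="SLSQP", constraints=cons)
print(res.x)  # close to the ordered point nearest the origin
```

SLSQP handles smooth inequality constraints directly, so no penalty trick is needed for this objective.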
Another option is to redefine my function so that it is large when the inequality is not satisfied:

import numpy as np

f = lambda x, y, z: x**2 + y**2 + z**2

def new_f(x, y, z):
    # Infinite penalty outside the feasible region x < y < z
    if x < y < z:
        return f(x, y, z)
    return np.inf
and that should work fine, at least for optimizers that are not gradient-based.
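Concretely, the penalized objective can be handed to the derivative-free Nelder-Mead method, with a small wrapper to unpack the single array that minimize passes in (the starting point here is just illustrative):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x, y, z: x**2 + y**2 + z**2

def new_f(v):
    # minimize passes one array, so unpack it here
    x, y, z = v
    if x < y < z:
        return f(x, y, z)
    return np.inf  # infinite penalty outside the feasible region

# Start from a point satisfying x < y < z, otherwise the initial
# simplex may be entirely infeasible and the search cannot begin
res = minimize(new_f, x0=[-1.0, 0.0, 1.0], method="Nelder-Mead")
print(res.x)
```

Since the feasible region x < y < z is convex, the simplex's shrink steps stay feasible, and the iterates approach the (infeasible) boundary point x = y = z = 0 from inside.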
But I was wondering whether there are opinions about the most proper and robust way to do this.