I am trying to run SciPy's minimize function, but I am getting wrong answers and it stops after a single iteration.
Problem introduction:
Once this code is working I would like to optimize a reactor size, so the variables are chosen with that in mind, but the objective function here is just for practice. When the objective is simply the sum of all variables, it correctly returns the lower bound for every variable. However, when I change the formula to 1/(sum of all variables), I expect every variable to reach the maximum of its range, but instead I just get back the initial guess values.
Question:
Can someone explain what error I made and how to get minimize() working, so I can apply the same method to a more complex situation?
Thanks in advance! Aike
Coding:
```python
import numpy as np
from scipy.optimize import minimize

def Optimization2(INPUT):
    return 1/(INPUT[0]+INPUT[1]+INPUT[2]+INPUT[3]+INPUT[4])

# bounds
feed_bound = (1, 150)    # molar feed
P_bound = (1, 93)        # pressure
T_bound = (900, 1150)    # temperature
L_bound = (0.1, 4.5)     # length of reactor
D_bound = (0.01, 0.2)    # diameter of reactor
bnds = (feed_bound, P_bound, T_bound, L_bound, D_bound)

# initial guesses
feed = 100  # np.mean(feed_bound)
P = 93      # np.mean(P_bound)
T = 1050
L = np.mean(L_bound)
D = np.mean(D_bound)
INPUT_0 = [feed, P, T, L, D]
print(INPUT_0)

Initial = Optimization2(INPUT_0)

# show initial objective
print('Initial Objective: ' + str(Optimization2(INPUT_0)))

solution = minimize(Optimization2, x0=INPUT_0, bounds=bnds, method='SLSQP')
INPUT = solution.x
print(solution)
```
Wrong results:
```
     fun: 0.0008029516502663793
     jac: array([-6.4472988e-07, -6.4472988e-07, -6.4472988e-07,
                 -6.4472988e-07, -6.4472988e-07])
 message: 'Optimization terminated successfully'
    nfev: 6
     nit: 1
    njev: 1
  status: 0
 success: True
       x: array([1.00e+02, 9.30e+01, 1.05e+03, 2.30e+00, 1.05e-01])
```
Answer
If you look through the docs for minimize, you'll find a numerical tolerance argument, tol. Setting it to 1e-16 causes the optimizer to run for a good number of iterations, converging to the upper bound for each variable.
The default tolerance is clearly too loose for your example: when the routine estimates derivatives, it finds that the changes in the function value are too small to matter, so it terminates the optimization loop after a single iteration.
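As a concrete sketch of the fix (reusing the bounds and initial guess from the question), the only change is passing tol to minimize:

```python
import numpy as np
from scipy.optimize import minimize

def Optimization2(INPUT):
    # same objective as in the question: 1 / (sum of all variables)
    return 1 / sum(INPUT)

bnds = ((1, 150), (1, 93), (900, 1150), (0.1, 4.5), (0.01, 0.2))
INPUT_0 = [100, 93, 1050, 2.3, 0.105]

# A much tighter tolerance keeps SLSQP iterating instead of
# declaring convergence after one step.
solution = minimize(Optimization2, x0=INPUT_0, bounds=bnds,
                    method='SLSQP', tol=1e-16)
print(solution.x)  # each variable driven toward its upper bound
```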
Another way of sidestepping this issue is to multiply your objective function by a large factor such as 1e9. As an exercise, think about why that makes sense ;)
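For reference, a minimal sketch of that workaround (the hint being that scaling the objective also scales its finite-difference changes, so they are no longer below the default tolerance):

```python
import numpy as np
from scipy.optimize import minimize

def Optimization2_scaled(INPUT):
    # Scaling by 1e9 makes the objective's variations large enough
    # for the default tolerance to register; the minimizer of
    # c*f(x) is the same point as the minimizer of f(x) for c > 0.
    return 1e9 / sum(INPUT)

bnds = ((1, 150), (1, 93), (900, 1150), (0.1, 4.5), (0.01, 0.2))
INPUT_0 = [100, 93, 1050, 2.3, 0.105]

solution = minimize(Optimization2_scaled, x0=INPUT_0,
                    bounds=bnds, method='SLSQP')
print(solution.x)
```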