
Absurd solution using gurobi python in regression

So I am new to Gurobi and I decided to start working with it on a well-known problem, linear regression. I found this official notebook, where an L0-penalized regression model was solved, and I took just the regression part out of it. However, when I solve this problem with Gurobi, I get a really strange solution, totally different from the correct regression solution.

The code I am running is:

import gurobipy as gp
from gurobipy import GRB
import numpy as np
from sklearn.datasets import load_boston
from itertools import product
boston = load_boston()
x = boston.data
x = x[:, [0, 2, 4, 5, 6, 7, 10, 11, 12]] # select non-categorical variables
response = boston.target

samples, dim = x.shape

regressor = gp.Model()

# Append a column of ones to the feature matrix to account for the y-intercept
x = np.concatenate([x, np.ones((samples, 1))], axis=1)

# Decision variables
beta = regressor.addVars(dim + 1, name="beta") # Beta

# Objective Function (OF): minimize 1/2 * RSS, using the fact that
# if x* is a minimizer of f(x), it is also a minimizer of k*f(x) for any k > 0
Quad = np.dot(x.T, x)
lin = np.dot(response.T, x)
obj = sum(0.5 * Quad[i, j] * beta[i] * beta[j] for i, j in product(range(dim + 1), repeat=2))
obj -= sum(lin[i] * beta[i] for i in range(dim + 1))
obj += 0.5 * np.dot(response, response)

regressor.setObjective(obj, GRB.MINIMIZE)

regressor.optimize()
beta_sol_gurobi = np.array([beta[i].X for i in range(dim+1)])
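As a sanity check on the objective itself, the quadratic expansion used above (0.5·βᵀXᵀXβ − yᵀXβ + 0.5·yᵀy) really is 0.5·RSS; this can be verified with plain NumPy on a small synthetic problem (all names and data below are illustrative, not from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # synthetic features
y = rng.normal(size=50)        # synthetic response
b = rng.normal(size=3)         # an arbitrary coefficient vector

# Direct computation: 0.5 * residual sum of squares
rss_half = 0.5 * np.sum((X @ b - y) ** 2)

# Expanded quadratic form: 0.5*b'X'Xb - (y'X)b + 0.5*y'y
Quad = X.T @ X
lin = y @ X
expanded = 0.5 * b @ Quad @ b - lin @ b + 0.5 * y @ y

assert np.isclose(rss_half, expanded)
```

So the objective passed to Gurobi is algebraically correct; the problem must lie elsewhere.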

The solution provided by this code is:

array([1.22933632e-14, 2.40073891e-15, 1.10109084e-13, 2.93142174e+00,
       6.14486489e-16, 3.93021623e-01, 5.52707727e-15, 8.61271603e-03,
       1.55963041e-15, 3.19117429e-13])

While the true linear regression solution should come from:

from sklearn import linear_model
lr = linear_model.LinearRegression()
lr.fit(x, response)
lr.coef_
lr.intercept_

which yields:

array([-5.23730841e-02, -3.35655253e-02, -1.39501039e+01,  4.40955833e+00,
       -7.33680982e-03, -1.24312668e+00, -9.59615262e-01,  8.60275557e-03,
       -5.17452533e-01])
29.531492975441015

So the Gurobi solution is completely different. Any guess or suggestion on what's happening? Am I doing something wrong here?

PS: I know that this problem can be solved using other packages, or even other optimization frameworks, but I am especially interested in solving it with Gurobi in Python, since I want to start using Gurobi on some more complex problems.


Answer

The wrong result is due to your decision variables. Gurobi assumes a default lower bound of 0 for all variables, so your betas are implicitly constrained to be nonnegative — which is why almost all of them come out at (numerically) zero. You need to explicitly set the lower bound:

beta = regressor.addVars(dim + 1, lb = -GRB.INFINITY, name="beta") # Beta
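With the bounds freed, the unconstrained minimizer of 0.5·‖Xβ − y‖² is exactly the ordinary least-squares solution, so the QP result should now match scikit-learn. A quick way to see what value Gurobi should converge to, without a solver license, is NumPy's least-squares routine (synthetic data below, since `load_boston` has been removed from recent scikit-learn releases; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
samples, dim = 100, 4
X = rng.normal(size=(samples, dim))
X = np.concatenate([X, np.ones((samples, 1))], axis=1)  # intercept column
y = X @ rng.normal(size=dim + 1) + 0.1 * rng.normal(size=samples)

# Unconstrained minimizer of 0.5*||X beta - y||^2; this is the solution
# Gurobi returns once beta is free (lb = -GRB.INFINITY)
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# At the optimum the gradient X'(X beta - y) of the objective vanishes
grad = X.T @ (X @ beta_ls - y)
assert np.allclose(grad, 0, atol=1e-8)
```

If some of those least-squares coefficients are negative, the default `lb=0` model cannot reach them, which is exactly the symptom in the question.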