I’m trying to plot 2 arrays, but I’m getting this error when I pass them to the function. I’m not really sure what is causing it.
def plotModel(x, y, w):
    plt.plot(x[:,1], y, "x")
    plt.plot(x[:,1], [i+j for i, j in x * w], "r-")
    plt.show()
I’m calling the function like this: plotModel(x, y, theta), but it looks like the error comes from the operation between x and theta.
Also, these are my 2 arrays:
x.shape = (31, 2)

array([[1.00000000e+00, 1.22526205e+06], [1.00000000e+00, 1.21287065e+06],
       [1.00000000e+00, 1.13999016e+06], [1.00000000e+00, 1.10700077e+06],
       [1.00000000e+00, 1.13774633e+06], [1.00000000e+00, 1.07849762e+06],
       [1.00000000e+00, 1.03450001e+06], [1.00000000e+00, 1.01952399e+06],
       [1.00000000e+00, 1.00634526e+06], [1.00000000e+00, 9.77835760e+05],
       [1.00000000e+00, 1.07499451e+06], [1.00000000e+00, 1.10382333e+06],
       [1.00000000e+00, 1.09192311e+06], [1.00000000e+00, 1.07565154e+06],
       [1.00000000e+00, 1.17271256e+06], [1.00000000e+00, 1.17740430e+06],
       [1.00000000e+00, 1.14566030e+06], [1.00000000e+00, 1.15863935e+06],
       [1.00000000e+00, 1.08100175e+06], [1.00000000e+00, 1.16659760e+06],
       [1.00000000e+00, 1.13621559e+06], [1.00000000e+00, 1.15223072e+06],
       [1.00000000e+00, 1.17947384e+06], [1.00000000e+00, 1.16438919e+06],
       [1.00000000e+00, 1.13504714e+06], [1.00000000e+00, 1.13989375e+06],
       [1.00000000e+00, 1.02480001e+06], [1.00000000e+00, 1.00015121e+06],
       [1.00000000e+00, 1.00000281e+06], [1.00000000e+00, 9.38166140e+05],
       [0.00000000e+00, 9.40500380e+05]])

theta.shape = (2, 31)

[[-6.40870567e+70 -5.76372638e+70 -1.97025622e+70 -2.53140061e+69 -1.85346361e+70
   1.23046480e+70  3.52056495e+70  4.30007514e+70  4.98603539e+70  6.46997078e+70
   1.41280363e+70 -8.77525366e+68  5.31660554e+69  1.37860485e+70 -3.67347539e+70
  -3.91768308e+70 -2.26539017e+70 -2.94095696e+70  1.10012344e+70 -3.35518831e+70
  -1.77378774e+70 -2.60738419e+70 -4.02540379e+70 -3.24023934e+70 -1.71296927e+70
  -1.96523802e+70  4.02545536e+70  5.30843915e+70  5.31616345e+70  8.53479663e+70
   8.41329814e+70]
 [-7.26421600e+76 -6.53313721e+76 -2.23326947e+76 -2.86932211e+75 -2.10088600e+76
   1.39472189e+76  3.99053187e+76  4.87410037e+76  5.65163076e+76  7.33365951e+76
   1.60140148e+76 -9.94667900e+74  6.02632935e+75  1.56263744e+76 -4.16385463e+76
  -4.44066206e+76 -2.56780142e+76 -3.33355091e+76  1.24698101e+76 -3.80307880e+76
  -2.01057405e+76 -2.95544888e+76 -4.56276262e+76 -3.67278507e+76 -1.94163680e+76
  -2.22758139e+76  4.56282107e+76  6.01707281e+76  6.02582825e+76  9.67412290e+76
   9.53640534e+76]]
How can I solve this problem?
Whole error message:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-125-48f3d06cf419> in <module>
      2 theta = gradientDescent(x, y, theta, alpha, m, numIterations)
      3 print(theta)
----> 4 plotModel(x, y, theta)

<ipython-input-123-ff3e0fb990f7> in plotModel(x, y, w)
      1 def plotModel(x, y, w):
      2     plt.plot(x[:,1], y, "x")
----> 3     plt.plot(x[:,1], [i+j for i, j in x * w], "r-")
      4     plt.show()

ValueError: operands could not be broadcast together with shapes (31,2) (2,31)
This is the gradientDescent function that is being used.
def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    cost = None
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        cost = np.sum(loss ** 2) / (2 * m)
        gradient = np.dot(xTrans, loss) / m
        theta = theta - alpha * gradient
    print("Iteration %d | Cost: %f" % (numIterations, cost))
    return theta
Answer
There seem to be a couple of problems here. However, the problem you are asking about has to do with the difference between broadcasted multiplication and the dot product.
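A quick way to see that difference with the exact shapes from your traceback (the arrays below are just stand-ins filled with ones; only the shapes matter):

import numpy as np

a = np.ones((31, 2))   # same shape as x
b = np.ones((2, 31))   # same shape as your theta

print(np.dot(a, b).shape)   # (31, 31): the matrix product only needs the inner dimensions to match
try:
    a * b                   # elementwise multiplication needs broadcast-compatible shapes
except ValueError as err:
    print(err)              # operands could not be broadcast together with shapes (31,2) (2,31)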
In your gradient descent function, the hypothesis is calculated like so:
hypothesis = np.dot(x, theta)
But in the code that plots the hypothesis, comparing it to the ground truth (y), the hypothesis is calculated as
[i+j for i, j in x * w]
If theta were the right shape (it should be (2, 1)) then this would sort of be correct. You’d get a broadcasted multiplication, which performs the first step of the dot product, and then in the list comprehension, the sum completes the second step.
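As a sanity check of that equivalence, here is a toy example with a hypothetical flat w of shape (2,) so the broadcast lines up:

import numpy as np

x_demo = np.array([[1.0, 3.0],
                   [1.0, 5.0],
                   [1.0, 7.0]])   # toy design matrix: bias column plus one feature
w_demo = np.array([2.0, 0.5])     # hypothetical weights: bias and slope

broadcast_then_sum = [i + j for i, j in x_demo * w_demo]          # multiply, then sum each row
print(np.allclose(broadcast_then_sum, np.dot(x_demo, w_demo)))    # True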
But really there’s no point in doing it that way. You might as well just do the same thing as in the gradient descent function (where theta is now named w, presumably short for “weight”).
plt.plot(x[:,1], np.dot(x, w), "r-")
However, all this will break anyway if the shape of your theta matrix is (2, 31). Unless I’ve horribly misunderstood the intent behind this code, that’s way off. For single-variable linear regression with a bias term, the weight matrix needs just two values: the bias term and the slope of the line.
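For reference, a minimal sketch of the shapes this code seems to expect; the zero initialisation of theta and the random y are just stand-ins for illustration:

import numpy as np

m = 31
x = np.hstack([np.ones((m, 1)), np.random.rand(m, 1) * 1e6])  # (31, 2): bias column plus the feature
y = np.random.rand(m, 1)                                      # (31, 1): one target per example
theta = np.zeros((2, 1))                                      # (2, 1): bias term and slope

hypothesis = np.dot(x, theta)   # (31, 2) dot (2, 1) -> (31, 1), same shape as y
print(hypothesis.shape)

With theta shaped like that, np.dot(x, theta) lines up with y both in gradientDescent and in the corrected plotting call, and the broadcasting error goes away.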