
ModuleNotFoundError: No module named ‘grad’

I am trying to run this neural network script (for a regression model). Two classes are defined in the script below: a Standardizer class and a NeuralNet class. The Standardizer class normalizes all the values, and the NeuralNet class builds the neural network that learns the data through feed-forward and back propagation.

This function takes the number of inputs, hidden units, and outputs as its three parameters.
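For example, a network with 5 inputs, one hidden layer of 10 units, and 1 output could be created like this (the layer sizes here are only illustrative):

nnet = NeuralNet([5, 10, 1])   # [number of inputs, hidden units, outputs]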

The set_hunit function is used to either update or initialize the weights. It takes the weights as its parameter.
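For example, using the illustrative nnet above, previously saved hidden-layer weights could be loaded like this (the zero matrix is only a placeholder):

new_hidden = [np.zeros((5 + 1, 10))]   # one matrix per hidden layer, bias row included
nnet.set_hunit(new_hidden)             # shapes must match the existing weight matrices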

The pack function packs the weights of each layer into one vector, and the unpack function does the reverse.
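The idea in plain NumPy looks roughly like this (the matrix shapes are made up for illustration):

import numpy as np
V = np.random.uniform(-0.1, 0.1, (4, 10))    # hidden-layer weights (bias row included)
W = np.random.uniform(-0.1, 0.1, (11, 1))    # output-layer weights (bias row included)
packed = np.hstack((V.ravel(), W.ravel()))   # pack: flatten both layers into one vector
V2 = packed[:V.size].reshape(V.shape)        # unpack: slice and reshape each layer back
W2 = packed[V.size:].reshape(W.shape)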

The forward pass in the neural network propagates as shown below:

π‘π‘Œ=β„Ž(𝑋𝑙⋅𝑉)=π‘π‘™β‹…π‘Š

An activation function is used to make the network non-linear. We may use tanh, RBF, etc.

In the backward pass the function takes the Z values, the target values, and the error as input. Based on the delta value, the weights and the bias are updated accordingly. This method returns the packed weight vector of that particular layer. Below are the updates that are executed during the backward pass.

π‘‰π‘Šβ†π‘‰+π›Όβ„Ž1𝑁1πΎπ‘‹π‘™βŠ€((π‘‡βˆ’π‘Œ)π‘ŠβŠ€βŠ™(1βˆ’π‘2))β†π‘Š+π›Όπ‘œ1𝑁1πΎπ‘π‘™βŠ€(π‘‡βˆ’π‘Œ)

The train function takes the features and the target as input. The gradientf function unpacks the weights and proceeds with the forward pass by calling the forward function. The error is then calculated from the results of the forward pass, and back propagation proceeds by calling the backward function with the error, Z, T (target), and _lambda as parameters.
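A training call would look roughly like this (X_train and T_train are hypothetical NumPy arrays of features and targets; the keyword values are only examples):

result = nnet.train(X_train, T_train, niter=1000, Lambda=0.0, optim='scg', verbose=True)
# X_train must have as many columns as the network has inputs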

The optimtargetf function tries to reduce the error by using the objective function and updates the weights accordingly.
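In terms of the sketch above, the quantity being minimized is roughly:

_lambda = 0.01                                                        # made-up penalty strength
wpenalty = _lambda * (np.sum(V[1:, :] ** 2) + np.sum(W[1:, :] ** 2))  # penalty on non-bias weights
objective = 0.5 * np.mean((T - Y) ** 2) + wpenalty                    # MSE plus the weight penalty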

The use method is applied to the test data after training the model. The test data is passed as a parameter and the method standardizes it, then applies forward to it, which returns the predictions.
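After training, prediction would look roughly like this (X_test is a hypothetical test matrix with the same columns as the training features):

predictions = nnet.use(X_test)   # standardizes X_test, runs forward, then un-standardizes the output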

This shows a module not found error, but I have already installed the grad module with pip.


#Importing required libraries
import pandas as pd
import numpy as np
import seaborn as sns
import grad
import matplotlib.pyplot as plt
# Reading data using pandas library
vehicle_data=pd.read_csv('processed_Data.csv')
# Overall idea about distribution of data
vehicle_data.hist(bins=40, figsize=(20,15))
plt.show()

# Count plot of Electric Range
sns.countplot(x='Electric Range',data=vehicle_data)

# Joint plot between Base MSRP on x axis and Legislative District on y axis
sns.jointplot(x=vehicle_data.BaseMSRP.values,y=vehicle_data.LegislativeDistrict.values,height=10)
plt.xlabel("Base MSRP",fontsize=10)
plt.ylabel("Legislative District",fontsize=10)
# Drop the rows that have null or missing values
vehicle_data=vehicle_data.dropna()
# Data is already clean and has no missing values
vehicle_data.shape
#Dropping unwanted columns
vehicle_data=vehicle_data.drop(['VIN (1-10)','County', 'City', 'State', 'ZIP Code', 'DOL Vehicle ID'],axis=1)
vehicle_data.shape

# Separating the target variable
t=pd.DataFrame(vehicle_data.iloc[:,8])
vehicle_data=vehicle_data.drop(['Electric Range'],axis=1)
t
vehicle_data.head()
#NeuralNet class for regression
# standardization class
class Standardizer: 
    """ class version of standardization """
    def __init__(self, X, explore=False):
        self._mu = np.mean(X, 0)     # column-wise mean
        self._sigma = np.std(X, 0)   # column-wise standard deviation
        if explore:
            print ("mean: ", self._mu)
            print ("sigma: ", self._sigma)
            print ("min: ", np.min(X, 0))
            print ("max: ", np.max(X, 0))

    def set_sigma(self, s):
        self._sigma[:] = s

    def standardize(self,X):
        return (X - self._mu) / self._sigma 

    def unstandardize(self,X):
        return (X * self._sigma) + self._mu 

def add_ones(w):
    return np.hstack((np.ones((w.shape[0], 1)), w))



from grad import scg, steepest
from copy import copy


class NeuralNet:

    def __init__(self, nunits):

        self._nLayers=len(nunits)-1
        self.rho = [1] * self._nLayers
        self._W = []
        wdims = []
        lenweights = 0
        for i in range(self._nLayers):
            nwr = nunits[i] + 1
            nwc = nunits[i+1]
            wdims.append((nwr, nwc))
            lenweights = lenweights + nwr * nwc

        self._weights = np.random.uniform(-0.1,0.1, lenweights) 
        start = 0  # fixed index error 20110107
        for i in range(self._nLayers):
            end = start + wdims[i][0] * wdims[i][1] 
            self._W.append(self._weights[start:end])
            self._W[i].resize(wdims[i])
            start = end

        self.stdX = None
        self.stdT = None
        self.stdTarget = True



    def add_ones(self, w):
        return np.hstack((np.ones((w.shape[0], 1)), w))

    def get_nlayers(self):
        return self._nLayers

    def set_hunit(self, w):
        for i in range(self._nLayers-1):
            if w[i].shape != self._W[i].shape:
                print("set_hunit: shapes do not match!")
                break
            else:
                self._W[i][:] = w[i][:]

    def pack(self, w):
        return np.hstack(map(np.ravel, w))

    def unpack(self, weights):
        self._weights[:] = weights[:]  # unpack

    def cp_weight(self):
        return copy(self._weights)

    def RBF(self, X, m=None,s=None):
        if m is None: m = np.mean(X)
        if s is None: s = 2 #np.std(X)
        r = 1. / (np.sqrt(2*np.pi)* s)  
        return r * np.exp(-(X - m) ** 2 / (2 * s ** 2))

    def forward(self,X):
        t = X 
        Z = []

        for i in range(self._nLayers):
            Z.append(t) 
            if i == self._nLayers - 1:
                t = np.dot(self.add_ones(t), self._W[i])
            else:
                t = np.tanh(np.dot(self.add_ones(t), self._W[i]))
                #t = self.RBF(np.dot(np.hstack((np.ones((t.shape[0],1)),t)),self._W[i]))
        return (t, Z)
        
    def backward(self, error, Z, T, lmb=0):
        delta = error
        N = T.size
        dws = []
        for i in range(self._nLayers - 1, -1, -1):
            rh = float(self.rho[i]) / N
            if i==0:
                lmbterm = 0
            else:
                lmbterm = lmb * np.vstack((np.zeros((1, self._W[i].shape[1])),
                            self._W[i][1:,]))
            dws.insert(0,(-rh * np.dot(self.add_ones(Z[i]).T, delta) + lmbterm))
            if i != 0:
                delta = np.dot(delta, self._W[i][1:, :].T) * (1 - Z[i]**2)
        return self.pack(dws)

    def _errorf(self, T, Y):
        return T - Y
        
    def _objectf(self, T, Y, wpenalty):
        return 0.5 * np.mean(np.square(T - Y)) + wpenalty

    def train(self, X, T, **params):

        verbose = params.pop('verbose', False)
        # training parameters
        _lambda = params.pop('Lambda', 0.)

        #parameters for scg
        niter = params.pop('niter', 1000)
        wprecision = params.pop('wprecision', 1e-10)
        fprecision = params.pop('fprecision', 1e-10)
        wtracep = params.pop('wtracep', False)
        ftracep = params.pop('ftracep', False)

        # optimization
        optim = params.pop('optim', 'scg')

        if self.stdX == None:
            explore = params.pop('explore', False)
            self.stdX = Standardizer(X, explore)
        Xs = self.stdX.standardize(X)
        if self.stdT == None and self.stdTarget:
            self.stdT = Standardizer(T)
            T = self.stdT.standardize(T)
        
        def gradientf(weights):
            self.unpack(weights)
            Y,Z = self.forward(Xs)
            error = self._errorf(T, Y)
            return self.backward(error, Z, T, _lambda)
            
        def optimtargetf(weights):
            """ optimization target function : MSE 
            """
            self.unpack(weights)
            #self._weights[:] = weights[:]  # unpack
            Y,_ = self.forward(Xs)
            Wnb=np.array([])
            for i in range(self._nLayers):
                if len(Wnb)==0: Wnb=self._W[i][1:,].reshape(self._W[i].size-self._W[i][0,].size,1)
                else: Wnb = np.vstack((Wnb,self._W[i][1:,].reshape(self._W[i].size-self._W[i][0,].size,1)))
            wpenalty = _lambda * np.dot(Wnb.flat ,Wnb.flat)
            return self._objectf(T, Y, wpenalty)

        if optim == 'scg':
            result = scg(self.cp_weight(), gradientf, optimtargetf,
                                        wPrecision=wprecision, fPrecision=fprecision, 
                                        nIterations=niter,
                                        wtracep=wtracep, ftracep=ftracep,
                                        verbose=False)
            self.unpack(result['w'][:])
            self.f = result['f']
        elif optim == 'steepest':
            result = steepest(self.cp_weight(), gradientf, optimtargetf,
                                nIterations=niter,
                                xPrecision=wprecision, fPrecision=fprecision,
                                xtracep=wtracep, ftracep=ftracep )
            self.unpack(result['w'][:])
        if ftracep:
            self.ftrace = result['ftrace']
        if 'reason' in result.keys() and verbose:
            print(result['reason'])

        return result

    def use(self, X, retZ=False):
        if self.stdX:
            Xs = self.stdX.standardize(X)
        else:
            Xs = X
        Y, Z = self.forward(Xs)
        if self.stdT is not None:
            Y = self.stdT.unstandardize(Y)
        if retZ:
            return Y, Z
        return Y


Answer

Try opening a command prompt and typing pip install grad, or if you are using a Jupyter notebook, make a new code cell and run !pip install grad before importing it.

Hope that solves your problem
