Neural Network loss is significantly changing for same set of weights – Keras


from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import mean_squared_error

# Build the model with pre-initialized weights and biases
model = keras.Sequential([
    layers.Dense(10, activation='relu', weights=[zero_weights, zero_bias]),
    layers.Dense(24, activation='relu', weights=[one_weights, one_bias]),
    layers.Dense(12, activation='relu', weights=[two_weights, two_bias]),
    layers.Dense(1, weights=[three_weights, three_bias])
])

# Compile model
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(learning_rate=0.1),
              metrics=['mse'])

model.fit(inputs, targets,
          batch_size=5,
          epochs=100)

prediction_test = model.predict(X_test)
mse = mean_squared_error(y_test, prediction_test)
print(mse)

I use pre-initialized weights as the initial weights of the neural network, but the loss keeps changing every time I train the model. If the initial weights are the same, the model should produce exactly the same predictions on every training run, yet the MSE differs each time. Is there anything I am missing?

Answer

All of your layers are initialized to fixed weights (presumably the weights you obtained from a previous training session), but the data passed in for training is shuffled differently every time you start the training process, because the model.fit() API has its shuffle parameter set to True by default. If you set it to False and the training data is the same, the weight updates will be identical across runs.

model.fit(inputs, targets,
          batch_size=5,
          epochs=100,
          shuffle=False)
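To see why sample order alone changes the outcome, here is a minimal NumPy-only sketch (independent of Keras; the data, learning rate, and single-weight model are made up for illustration). It shows that per-sample gradient descent from the same initial weight gives bitwise-identical results when the sample order is fixed, but different results once the order changes:

```python
import numpy as np

def train_linear(x, y, order, lr=0.01, epochs=5):
    """Train a one-weight linear model y ~ w*x with per-sample SGD,
    visiting samples in the given order each epoch."""
    w = 0.5  # same fixed initial weight for every run
    for _ in range(epochs):
        for i in order:
            grad = 2 * (w * x[i] - y[i]) * x[i]  # d/dw of squared error
            w -= lr * grad
    return w

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.2])

fixed    = train_linear(x, y, order=[0, 1, 2, 3])
same     = train_linear(x, y, order=[0, 1, 2, 3])  # identical order
shuffled = train_linear(x, y, order=[3, 1, 0, 2])  # different order

print(fixed == same)      # identical order gives an identical final weight
print(fixed == shuffled)  # a different order gives a different final weight
```

Gradient updates are not commutative in floating point, so even though every ordering sees the same samples each epoch, the final weights (and hence the loss) differ. Fixing `shuffle=False` pins the order and removes this source of run-to-run variation.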


Source: Stack Overflow