What is the meaning of the separate ‘bias’ weights stored in a Keras model?

Post-edit: Turns out I got confused while constantly playing with the three functions below.

model.weights
model.get_weights()
model.layers[i].get_weights()
  1. model.layers[i].get_weights() returns a plain list of NumPy arrays without any names attached: the layer's kernel and, if the layer has one, its bias.
  2. model.get_weights() returns all of the model's weight arrays in one flat list, again without any names.
  3. model.weights returns the underlying weight variables, which also carry extra information such as the name of the layer each one belongs to and its shape. This is the one I was using for the experiment in the question.

What confused me was simply the difference between 1 and 3 above, as the sketch below shows.
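
Here is a small sketch of all three calls side by side (the model below is just an assumed toy example; any model with a Conv2D layer behaves the same way, and exact variable names can vary with the Keras version):

import tensorflow as tf

# Assumed toy model for illustration; the original model is not shown
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8, 1)),
    tf.keras.layers.Conv2D(4, 3, name="conv2d"),
])

# 1. Per layer: a plain [kernel, bias] list of NumPy arrays, no names attached
kernel, bias = model.layers[0].get_weights()
print(kernel.shape, bias.shape)        # (3, 3, 1, 4) (4,)

# 2. Whole model: every weight array in one flat list, still without names
print(len(model.get_weights()))        # 2

# 3. The underlying variables, which do carry name and shape information
for v in model.weights:
    print(v.name, v.shape)             # conv2d/kernel:0 (3, 3, 1, 4), conv2d/bias:0 (4,)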

Note: I’ve decided not to delete the question because it received an answer, and with this post-edit it may still help someone.


The question was…

After saving a Keras model, when I check the weights, I notice two separate biases.

Below is part of the list of weights, shown by name.

conv2d/kernel:0
conv2d/bias:0

The kernel entries store a bias array as their second NumPy array element, which I understood to be the layer's original bias. Then there are also the separate bias entries.

Which one serves what purpose? What is the difference between them?
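
(With hindsight from the post-edit above, the two are the same values seen through different interfaces. The sketch below, using an assumed toy model rather than the original one, shows that the second array from the per-layer get_weights() call and the bias:0 variable hold identical values.)

import numpy as np
import tensorflow as tf

# Assumed toy model; the original model from the question is not shown
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8, 1)),
    tf.keras.layers.Conv2D(4, 3, name="conv2d"),
])
conv = model.get_layer("conv2d")

# The "bias stored as the 2nd array" from the per-layer call ...
bias_from_get_weights = conv.get_weights()[1]

# ... and the separate variable listed as conv2d/bias:0
bias_variable = [v for v in conv.weights if "bias" in v.name][0]

# They hold the same values; only the way they are exposed differs
print(np.allclose(bias_from_get_weights, bias_variable.numpy()))   # True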


Answer

The convolution layer (conv2d) has a kernel and a bias term, and the dense layer (dense) also has a kernel and a bias term. The bias terms give each layer an extra degree of freedom, making the neural net more powerful at prediction.
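
A quick way to see that extra degree of freedom is to compare a layer built with and without a bias (a minimal sketch; the layer sizes and the non-zero bias_initializer are arbitrary choices made here purely for illustration). Without a bias, the output for an all-zero input is forced to zero; the bias lets the layer shift its output away from the origin, and it also shows up as the second array in get_weights().

import numpy as np
import tensorflow as tf

# Two dense layers on the same input, one with a bias and one without
# (bias_initializer="ones" only so the effect is visible on a zero input)
with_bias = tf.keras.layers.Dense(1, use_bias=True, bias_initializer="ones")
without_bias = tf.keras.layers.Dense(1, use_bias=False)

x = np.zeros((1, 3), dtype="float32")
print(with_bias(x).numpy())        # [[1.]] -> y = x @ W + b = b
print(without_bias(x).numpy())     # [[0.]] -> y = x @ W is stuck at zero

print(len(with_bias.get_weights()))      # 2 -> [kernel, bias]
print(len(without_bias.get_weights()))   # 1 -> [kernel]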
