I have defined the model as in the code below, and I used batch-normalization merging to fold 3 layers into 1 linear layer. The first layer of the model is a linear layer with no bias. The second layer is a batch normalization with no weight and no bias (affine is False).
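The code itself is cut off in this excerpt (the third layer's description is missing), so the following is only a minimal sketch of the folding step for the two layers that are described: a bias-free `nn.Linear` followed by `nn.BatchNorm1d(affine=False)`. With `affine=False` and fixed inference-mode statistics μ, σ², we have BN(Wx) = (Wx − μ) / √(σ² + ε), so the merged layer's weight is W scaled row-wise by 1/√(σ² + ε) and its bias is −μ/√(σ² + ε). All sizes and statistics below are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the actual code is cut off in the excerpt.
in_features, out_features = 8, 4

linear = nn.Linear(in_features, out_features, bias=False)
bn = nn.BatchNorm1d(out_features, affine=False)
bn.eval()  # folding assumes fixed (running) statistics, i.e. inference mode

# Pretend the model was trained: give the BN layer nontrivial running stats.
bn.running_mean = torch.randn(out_features)
bn.running_var = torch.rand(out_features) + 0.5

# Fold BN into the linear layer:
#   BN(Wx) = (Wx - mu) / sqrt(var + eps) = (D @ W) x + (-D @ mu),
#   where D = diag(1 / sqrt(var + eps)).
inv_std = 1.0 / torch.sqrt(bn.running_var + bn.eps)
merged = nn.Linear(in_features, out_features, bias=True)
with torch.no_grad():
    merged.weight.copy_(linear.weight * inv_std.unsqueeze(1))
    merged.bias.copy_(-bn.running_mean * inv_std)

# The single merged layer reproduces Linear -> BatchNorm in eval mode.
x = torch.randn(16, in_features)
print(torch.allclose(merged(x), bn(linear(x)), atol=1e-5))  # True
```

With `affine=False`, the scale γ and shift β drop out; with `affine=True` they would fold into the merged weight and bias in the same way (W scaled by γ/√(σ² + ε), bias β − γμ/√(σ² + ε)).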
Tag: batch-normalization
tf.keras.BatchNormalization giving unexpected output
The output of the above code (in TensorFlow 1.15) is:

My problem is why the same function gives completely different outputs. I also played with some of the functions' parameters, but the result was the same. For me, the second output is what I want. PyTorch's batch norm also gives the same output as the second one. So
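The code and its two outputs are not included in this excerpt, so the actual cause can't be confirmed from the text alone. One common way, though, for the same `tf.keras.layers.BatchNormalization` layer to produce two completely different outputs on the same input is the `training` argument. The sketch below (input values made up, TF 1.x graph mode as in the question) shows the contrast:

```python
import numpy as np
import tensorflow as tf  # written against TensorFlow 1.15

x = tf.constant(np.arange(12, dtype=np.float32).reshape(3, 4))
bn = tf.keras.layers.BatchNormalization()

# Same layer, same input, different flag:
y_train = bn(x, training=True)   # normalizes with the batch's own mean/variance
y_infer = bn(x, training=False)  # normalizes with the moving averages (init: mean 0, var 1)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y_train))  # roughly zero mean / unit variance per column
    print(sess.run(y_infer))  # nearly identical to the raw input
```

When the flag is omitted, Keras falls back to its global learning phase, which in TF 1.x defaults to inference; mixing calls that do and don't set `training` is a frequent source of exactly this kind of mismatch, and PyTorch's `BatchNorm` in train mode corresponds to the `training=True` branch here.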