I am working on a neural network problem: classifying data as 1 or 0. I am using binary cross-entropy loss for this. The loss looks fine, but the accuracy is very low and isn't improving; I assume I made a mistake in the accuracy calculation. After every epoch, I count the correct predictions after thresholding the output and divide that number by the total size of the dataset. Is there anything wrong with my accuracy calculation? And why is it not improving, but actually getting worse? This is my code:
net = Model()
criterion = torch.nn.BCELoss(size_average=True)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
num_epochs = 100

for epoch in range(num_epochs):
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = Variable(inputs.float())
        labels = Variable(labels.float())
        output = net(inputs)
        optimizer.zero_grad()
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

    # Accuracy
    output = (output > 0.5).float()
    correct = (output == labels).float().sum()
    print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(
        epoch + 1, num_epochs, loss.data[0], correct / x.shape[0]))
And this is the strange output I get:
Epoch 1/100, Loss: 0.389, Accuracy: 0.035
Epoch 2/100, Loss: 0.370, Accuracy: 0.036
Epoch 3/100, Loss: 0.514, Accuracy: 0.030
Epoch 4/100, Loss: 0.539, Accuracy: 0.030
Epoch 5/100, Loss: 0.583, Accuracy: 0.029
Epoch 6/100, Loss: 0.439, Accuracy: 0.031
Epoch 7/100, Loss: 0.429, Accuracy: 0.034
Epoch 8/100, Loss: 0.408, Accuracy: 0.035
Epoch 9/100, Loss: 0.316, Accuracy: 0.035
Epoch 10/100, Loss: 0.436, Accuracy: 0.035
Epoch 11/100, Loss: 0.365, Accuracy: 0.034
Epoch 12/100, Loss: 0.485, Accuracy: 0.031
Epoch 13/100, Loss: 0.392, Accuracy: 0.033
Epoch 14/100, Loss: 0.494, Accuracy: 0.030
Epoch 15/100, Loss: 0.369, Accuracy: 0.035
Epoch 16/100, Loss: 0.495, Accuracy: 0.029
Epoch 17/100, Loss: 0.415, Accuracy: 0.034
Epoch 18/100, Loss: 0.410, Accuracy: 0.035
Epoch 19/100, Loss: 0.282, Accuracy: 0.038
Epoch 20/100, Loss: 0.499, Accuracy: 0.031
Epoch 21/100, Loss: 0.446, Accuracy: 0.030
Epoch 22/100, Loss: 0.585, Accuracy: 0.026
Epoch 23/100, Loss: 0.419, Accuracy: 0.035
Epoch 24/100, Loss: 0.492, Accuracy: 0.031
Epoch 25/100, Loss: 0.537, Accuracy: 0.031
Epoch 26/100, Loss: 0.439, Accuracy: 0.033
Epoch 27/100, Loss: 0.421, Accuracy: 0.035
Epoch 28/100, Loss: 0.532, Accuracy: 0.034
Epoch 29/100, Loss: 0.234, Accuracy: 0.038
Epoch 30/100, Loss: 0.492, Accuracy: 0.027
Epoch 31/100, Loss: 0.407, Accuracy: 0.035
Epoch 32/100, Loss: 0.305, Accuracy: 0.038
Epoch 33/100, Loss: 0.663, Accuracy: 0.025
Epoch 34/100, Loss: 0.588, Accuracy: 0.031
Epoch 35/100, Loss: 0.329, Accuracy: 0.035
Epoch 36/100, Loss: 0.474, Accuracy: 0.033
Epoch 37/100, Loss: 0.535, Accuracy: 0.031
Epoch 38/100, Loss: 0.406, Accuracy: 0.033
Epoch 39/100, Loss: 0.513, Accuracy: 0.030
Epoch 40/100, Loss: 0.593, Accuracy: 0.030
Epoch 41/100, Loss: 0.265, Accuracy: 0.036
Epoch 42/100, Loss: 0.576, Accuracy: 0.031
Epoch 43/100, Loss: 0.565, Accuracy: 0.027
Epoch 44/100, Loss: 0.576, Accuracy: 0.030
Epoch 45/100, Loss: 0.396, Accuracy: 0.035
Epoch 46/100, Loss: 0.423, Accuracy: 0.034
Epoch 47/100, Loss: 0.489, Accuracy: 0.033
Epoch 48/100, Loss: 0.591, Accuracy: 0.029
Epoch 49/100, Loss: 0.415, Accuracy: 0.034
Epoch 50/100, Loss: 0.291, Accuracy: 0.039
Epoch 51/100, Loss: 0.395, Accuracy: 0.033
Epoch 52/100, Loss: 0.540, Accuracy: 0.026
Epoch 53/100, Loss: 0.436, Accuracy: 0.033
Epoch 54/100, Loss: 0.346, Accuracy: 0.036
Epoch 55/100, Loss: 0.519, Accuracy: 0.029
Epoch 56/100, Loss: 0.456, Accuracy: 0.031
Epoch 57/100, Loss: 0.425, Accuracy: 0.035
Epoch 58/100, Loss: 0.311, Accuracy: 0.039
Epoch 59/100, Loss: 0.406, Accuracy: 0.034
Epoch 60/100, Loss: 0.360, Accuracy: 0.035
Epoch 61/100, Loss: 0.476, Accuracy: 0.030
Epoch 62/100, Loss: 0.404, Accuracy: 0.034
Epoch 63/100, Loss: 0.382, Accuracy: 0.036
Epoch 64/100, Loss: 0.538, Accuracy: 0.031
Epoch 65/100, Loss: 0.392, Accuracy: 0.034
Epoch 66/100, Loss: 0.434, Accuracy: 0.033
Epoch 67/100, Loss: 0.479, Accuracy: 0.031
Epoch 68/100, Loss: 0.494, Accuracy: 0.031
Epoch 69/100, Loss: 0.415, Accuracy: 0.034
Epoch 70/100, Loss: 0.390, Accuracy: 0.036
Epoch 71/100, Loss: 0.330, Accuracy: 0.038
Epoch 72/100, Loss: 0.449, Accuracy: 0.030
Epoch 73/100, Loss: 0.315, Accuracy: 0.039
Epoch 74/100, Loss: 0.450, Accuracy: 0.031
Epoch 75/100, Loss: 0.562, Accuracy: 0.030
Epoch 76/100, Loss: 0.447, Accuracy: 0.031
Epoch 77/100, Loss: 0.408, Accuracy: 0.038
Epoch 78/100, Loss: 0.359, Accuracy: 0.034
Epoch 79/100, Loss: 0.372, Accuracy: 0.035
Epoch 80/100, Loss: 0.452, Accuracy: 0.034
Epoch 81/100, Loss: 0.360, Accuracy: 0.035
Epoch 82/100, Loss: 0.453, Accuracy: 0.031
Epoch 83/100, Loss: 0.578, Accuracy: 0.030
Epoch 84/100, Loss: 0.537, Accuracy: 0.030
Epoch 85/100, Loss: 0.483, Accuracy: 0.035
Epoch 86/100, Loss: 0.343, Accuracy: 0.036
Epoch 87/100, Loss: 0.439, Accuracy: 0.034
Epoch 88/100, Loss: 0.686, Accuracy: 0.023
Epoch 89/100, Loss: 0.265, Accuracy: 0.039
Epoch 90/100, Loss: 0.369, Accuracy: 0.035
Epoch 91/100, Loss: 0.521, Accuracy: 0.027
Epoch 92/100, Loss: 0.662, Accuracy: 0.027
Epoch 93/100, Loss: 0.581, Accuracy: 0.029
Epoch 94/100, Loss: 0.322, Accuracy: 0.034
Epoch 95/100, Loss: 0.375, Accuracy: 0.035
Epoch 96/100, Loss: 0.575, Accuracy: 0.031
Epoch 97/100, Loss: 0.489, Accuracy: 0.030
Epoch 98/100, Loss: 0.435, Accuracy: 0.033
Epoch 99/100, Loss: 0.440, Accuracy: 0.031
Epoch 100/100, Loss: 0.444, Accuracy: 0.033
Answer
Is x the entire input dataset? If so, you might be dividing by the size of the entire input dataset in correct/x.shape[0] (as opposed to the size of the mini-batch). Try changing this to correct/output.shape[0].
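For illustration, here is a minimal sketch of the loop with that change applied (correct / output.shape[0]). The TinyModel class and the random placeholder data are stand-ins I added so the snippet runs on its own; substitute your real Model and train_loader. I have also replaced the deprecated Variable wrapper, size_average argument, and loss.data[0] with their modern PyTorch equivalents, which is an assumption beyond the original code.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    class TinyModel(torch.nn.Module):
        # Placeholder for the questioner's Model: one linear layer plus a sigmoid,
        # so the output is a per-sample probability in (0, 1).
        def __init__(self, in_features=10):
            super().__init__()
            self.linear = torch.nn.Linear(in_features, 1)

        def forward(self, x):
            return torch.sigmoid(self.linear(x)).squeeze(1)

    # Placeholder data: 256 random samples with random 0/1 labels.
    inputs_all = torch.randn(256, 10)
    labels_all = torch.randint(0, 2, (256,)).float()
    train_loader = DataLoader(TensorDataset(inputs_all, labels_all),
                              batch_size=32, shuffle=True)

    net = TinyModel()
    criterion = torch.nn.BCELoss()  # size_average is deprecated; the default reduction is 'mean'
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
    num_epochs = 100

    for epoch in range(num_epochs):
        for inputs, labels in train_loader:
            output = net(inputs)
            optimizer.zero_grad()
            loss = criterion(output, labels)
            loss.backward()
            optimizer.step()

        # Accuracy of the last mini-batch of the epoch:
        # divide by the batch size (output.shape[0]), not the dataset size.
        predictions = (output > 0.5).float()
        correct = (predictions == labels).float().sum()
        print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(
            epoch + 1, num_epochs, loss.item(),
            correct.item() / output.shape[0]))

If you want accuracy over the whole dataset per epoch (as described in the question), you could instead accumulate the correct counts across all batches during the epoch and divide by len(train_loader.dataset) at the end of the epoch.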