Tag: neural-network

Keras: apply threshold for loss function

I am developing a Keras model. My dataset is badly imbalanced, so I want to set a threshold for training and testing. If I’m not mistaken, during backward propagation the neural network compares the predicted values with the original ones, calculates the error, and, based on that error, sets new weights for the neurons. As I know, Keras uses
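A minimal sketch of one way to handle this: keep the loss threshold-free, apply the threshold only in a custom metric, and let class_weight compensate for the imbalance. The helper name thresholded_accuracy, the 0.7 cutoff, the class weights, and the toy model shape are illustrative assumptions, not from the question.

import tensorflow as tf

# Hypothetical helper: binary accuracy with an adjustable decision threshold.
def thresholded_accuracy(threshold=0.7):
    def metric(y_true, y_pred):
        y_hat = tf.cast(y_pred > threshold, tf.float32)
        return tf.reduce_mean(tf.cast(tf.equal(y_true, y_hat), tf.float32))
    return metric

# Toy binary classifier; the real architecture and input size come from the question.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The loss stays binary cross-entropy; the threshold only affects the reported metric,
# while class_weight re-balances the gradient contribution of the rare class.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[thresholded_accuracy(0.7)])
# model.fit(x_train, y_train, class_weight={0: 1.0, 1: 10.0}, epochs=10)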

Keras: Adding MDN Layer to LSTM Network

My question in brief: Is the Long Short-Term Memory (LSTM) network detailed below appropriately designed to generate new dance sequences, given dance sequence training data? Context: I am working with a dancer who wishes to use a neural network to generate new dance sequences. She sent me the 2016 chor-rnn paper that accomplished this task using an LSTM network with
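For context, a minimal sketch of the kind of sequence model involved: stacked LSTMs that predict the next pose vector from a window of previous poses. The pose dimension, sequence length, and layer sizes are made-up assumptions, and the plain Dense output with an MSE loss stands in for the mixture density network head that chor-rnn actually uses.

import tensorflow as tf

POSE_DIM = 69    # assumed: e.g. 23 joints x 3 coordinates
SEQ_LEN = 100    # assumed window of previous frames

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(512, return_sequences=True, input_shape=(SEQ_LEN, POSE_DIM)),
    tf.keras.layers.LSTM(512),
    tf.keras.layers.Dense(POSE_DIM),   # a real MDN head would output mixture parameters instead
])
model.compile(optimizer="adam", loss="mse")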

Calculate the accuracy every epoch in PyTorch

I am working on a neural network problem, classifying data as 1 or 0, and I am using binary cross-entropy loss to do this. The loss looks fine; however, the accuracy is very low and isn’t improving. I am assuming I made a mistake in the accuracy calculation. After every epoch, I am calculating the correct predictions after thresholding
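A common pattern is to threshold the sigmoid outputs at 0.5 each epoch and count matches against the targets. The sketch below assumes the model already ends in a sigmoid and that `loader` yields (inputs, targets) with targets in {0, 1}; the function name and shapes are illustrative.

import torch

def evaluate(model, loader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            probs = model(inputs)                        # shape (batch, 1), values in [0, 1]
            preds = (probs > 0.5).float()                # apply the decision threshold
            correct += (preds == targets.view_as(preds)).sum().item()
            total += targets.size(0)
    return correct / total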

How do you use Keras LeakyReLU in Python?

I am trying to produce a CNN using Keras, and wrote the following code: I want to use Keras’s LeakyReLU activation layer instead of Activation('relu'). However, when I tried using LeakyReLU(alpha=0.1) in its place, I got an error about using an activation layer rather than an activation function, because LeakyReLU is an activation layer in Keras. How can I use
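The usual fix is to leave the activation argument off the preceding layer and add LeakyReLU as its own layer immediately after it. A minimal sketch, with an assumed 28x28x1 input and made-up layer sizes:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),  # no activation= here
    layers.LeakyReLU(alpha=0.1),                         # applied as a separate layer
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])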

Why do we need to call zero_grad() in PyTorch?

Why does zero_grad() need to be called during training? Answer In PyTorch, for every mini-batch during the training phase, we typically want to explicitly set the gradients to zero before starting to do backpropagation (i.e., before updating the weights and biases), because PyTorch accumulates the gradients on subsequent backward passes. This accumulating behaviour is convenient while training RNNs or when we
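A small self-contained training loop illustrating where the call goes; the linear model, loss, optimizer, and random data are placeholders standing in for the real ones.

import torch
import torch.nn as nn

model = torch.nn.Linear(10, 1)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32, 1)).float()

for epoch in range(5):
    optimizer.zero_grad()                    # reset gradients left over from the previous backward()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                          # accumulates gradients into each parameter's .grad
    optimizer.step()                         # update the weights using the freshly computed gradients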
