
Neural Network Results always the same

Edit: For anyone interested, I made it slightly better. I used an L2 regularizer of 0.0001, added two more dense layers with 3 and 5 nodes and no activation functions, added dropout=0.1 to the 2nd and 3rd GRU layers, reduced the batch size to 1000, and set the loss function to MAE.

Important note: I discovered that my TEST dataframe was extremely small compared to the train one, and that is the main reason it gave me very bad results.

I have a GRU model that takes 12 features as input, and I am trying to predict output power. What I really do not understand is that no matter whether I choose:

  • 1 layer or 5 layers
  • 50 neurons or 512 neurons
  • 10 epochs with a small batch size or 100 epochs with a large batch size
  • different optimizers and activation functions
  • dropout and L2 regularization
  • adding more dense layers
  • increasing or decreasing the learning rate

my results are always the same and do not make any sense: the loss and val_loss drop steeply in the first 2 epochs and then stay essentially constant for the rest of training, with only small fluctuations in val_loss.

Here is my code, a figure of the losses, and my dataframes if needed:

Dataframe1: https://drive.google.com/file/d/1I6QAU47S5360IyIdH2hpczQeRo9Q1Gcg/view
Dataframe2: https://drive.google.com/file/d/1EzG4TVck_vlh0zO7XovxmqFhp2uDGmSM/view

[Figure: training and validation loss curves]
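The original code block did not survive in this copy of the page, so here is a minimal sketch of the kind of model being described: stacked GRU layers over 12 input features with a single regression output for power. The window length, layer widths, and optimizer are my assumptions, not the asker's actual values.

```python
# Minimal sketch, NOT the original code: a stacked-GRU regressor.
# Only "12 input features, one output (power)" comes from the question;
# TIMESTEPS, the layer widths, and the optimizer are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS = 24   # assumed input window length
N_FEATURES = 12  # from the question

model = keras.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.GRU(64, return_sequences=True),
    layers.GRU(64),
    layers.Dense(1),  # single regression target: output power
])
model.compile(optimizer="adam", loss="mse")
```

A loss curve that flattens after the first couple of epochs in a setup like this often means the network has collapsed to predicting something close to the mean of the target, which is consistent with the symptoms described above.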


Answer

I made it slightly better. I used an L2 regularizer of 0.0001, added two more dense layers with 3 and 5 nodes and no activation functions, added dropout=0.1 to the 2nd and 3rd GRU layers, reduced the batch size to 1000, and set the loss function to MAE.
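As a rough sketch of those changes in Keras: the L2 factor, the 3- and 5-node linear dense layers, dropout on the 2nd and 3rd GRU layers, the MAE loss, and the batch size of 1000 come from the answer; the GRU widths, the dense-layer order, the window length, and the optimizer are assumptions.

```python
# Sketch of the described changes; GRU widths, window length, and
# optimizer are assumptions, not the asker's actual values.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

TIMESTEPS = 24   # assumed
N_FEATURES = 12  # from the question

l2 = regularizers.l2(0.0001)  # L2 factor from the answer

model = keras.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.GRU(64, return_sequences=True, kernel_regularizer=l2),
    layers.GRU(64, return_sequences=True, dropout=0.1,  # dropout on the 2nd GRU layer
               kernel_regularizer=l2),
    layers.GRU(64, dropout=0.1,                         # dropout on the 3rd GRU layer
               kernel_regularizer=l2),
    layers.Dense(5),  # extra dense layer, no activation (linear)
    layers.Dense(3),  # extra dense layer, no activation (linear)
    layers.Dense(1),  # output power
])
model.compile(optimizer="adam", loss="mae")  # loss switched to MAE

# Reduced batch size from the answer (X_train / y_train are hypothetical names):
# model.fit(X_train, y_train, epochs=50, batch_size=1000, validation_split=0.2)
```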

Important note: I discovered that my TEST dataframe was extremely small compared to the train one, and that is the main reason it gave me very bad results.
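A quick way to catch this kind of imbalance is to print the split sizes before training; the dataframe names below are hypothetical, and the 20% test fraction is just a common default.

```python
# Sanity-check the train/test proportions (df_train and df_test are
# hypothetical names for the two dataframes).
print(len(df_train), len(df_test),
      len(df_test) / (len(df_train) + len(df_test)))

# One common fix: re-split the combined data with a fixed test fraction.
# shuffle=False keeps time order, which matters for sequence data.
import pandas as pd
from sklearn.model_selection import train_test_split

df_all = pd.concat([df_train, df_test])
df_train, df_test = train_test_split(df_all, test_size=0.2, shuffle=False)
```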
