
Fitting LSTM model

I am trying to fit an LSTM model, but it fails with a shape error.

My dataset has 218 rows and 16 features, including the target. I split the data 80% for training and 20% for testing. After compiling the model and running it, I got this error:

InvalidArgumentError:    Specified a list with shape [160,1] from a tensor with shape [14,1]
     [[{{node TensorArrayUnstack/TensorListFromTensor}}]]
     [[functional_7/lstm_6/PartitionedCall]] [Op:__inference_train_function_21740]
Function call stack:
train_function -> train_function -> train_function

Variable definitions: batch_size = 160, epochs = 20, timesteps = 15

Here are the shapes of the training and test sets after reshaping: x_train = (174, 15, 1), y_train = (174, 1, 1), x_test = (44, 15, 1), y_test = (44, 1, 1).
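For reference, the 80/20 split of 218 rows works out to 174 training and 44 test samples; a minimal sketch of that arithmetic (variable names are illustrative, not my original code):

n_rows = 218
n_train = int(0.8 * n_rows)   # 174 training samples
n_test = n_rows - n_train     # 44 test samples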

My model:

[screenshot of the model definition]

The problem happens in this code, when I fit the model:

[screenshot of the model.fit call that raises the error]


Answer

Two things: first, you have to change the shape of y_train, because the input and the output of your model have the same shape (check your model summary). Second, since the LSTM layers are stateful and the batch shape is fixed, the number of samples (174 in your case) must be evenly divisible by the batch size, so you can only use 1, 2, 3, 6, 29, 58, 87, or 174 as your batch size (both points are sketched in short examples after the output below). Here is a working example:

import tensorflow as tf

batch_size = 2   # must divide the number of samples (174) evenly
epochs = 20
timesteps = 15

# Stateful LSTMs need a fixed batch size, hence batch_shape instead of shape.
inputs_1_mae = tf.keras.layers.Input(batch_shape=(batch_size, timesteps, 1))
lstm_1_mae = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(inputs_1_mae)
lstm_2_mae = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(lstm_1_mae)
output_1_mae = tf.keras.layers.Dense(units=1)(lstm_2_mae)   # one prediction per timestep
regressor_mae = tf.keras.Model(inputs=inputs_1_mae, outputs=output_1_mae)
regressor_mae.compile(optimizer="adam", loss="mae")
regressor_mae.summary()

# Dummy data with matching shapes: the targets have one value per timestep.
x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 15, 1))

regressor_mae.fit(x_train, y_train, batch_size=batch_size, epochs=2)
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(2, 15, 1)]              0         
                                                                 
 lstm (LSTM)                 (2, 15, 100)              40800     
                                                                 
 lstm_1 (LSTM)               (2, 15, 100)              80400     
                                                                 
 dense (Dense)               (2, 15, 1)                101       
                                                                 
=================================================================
Total params: 121,301
Trainable params: 121,301
Non-trainable params: 0
_________________________________________________________________
Epoch 1/2
87/87 [==============================] - 4s 5ms/step - loss: 0.8092
Epoch 2/2
87/87 [==============================] - 0s 5ms/step - loss: 0.8089
<keras.callbacks.History at 0x7f5820061250>
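Alternatively, if you want to keep a single target per sample (your y_train is shaped (174, 1, 1)), you can make the model emit one value per sequence by setting return_sequences=False on the last LSTM instead of reshaping y_train to (174, 15, 1). A minimal sketch, assuming the rest of the setup stays the same (names here are illustrative):

import tensorflow as tf

batch_size = 2
timesteps = 15

# Same architecture, but the second LSTM returns only its last output,
# so the model predicts one value per sequence instead of one per timestep.
inputs = tf.keras.layers.Input(batch_shape=(batch_size, timesteps, 1))
x = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(inputs)
x = tf.keras.layers.LSTM(100, stateful=True, return_sequences=False)(x)
outputs = tf.keras.layers.Dense(units=1)(x)          # output shape: (batch_size, 1)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mae")

x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 1))                 # one target per sample
model.fit(x_train, y_train, batch_size=batch_size, epochs=2)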
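And, for completeness, a quick way to check which batch sizes divide the 174 samples evenly (again just a sketch with illustrative names):

n_samples = 174
valid_batch_sizes = [b for b in range(1, n_samples + 1) if n_samples % b == 0]
print(valid_batch_sizes)  # [1, 2, 3, 6, 29, 58, 87, 174]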

Update 1: To plot the mean absolute error of your training and test data, try something like this:

import matplotlib.pyplot as plt

x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 15, 1))

x_test = tf.random.normal((174, 15, 1))
y_test = tf.random.normal((174, 15, 1))

history = regressor_mae.fit(x_train, y_train, batch_size=batch_size, epochs=25, validation_data=(x_test, y_test))

# The model is compiled with loss="mae", so the loss curves are the mean absolute error.
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model mean absolute error')
plt.ylabel('mean_absolute_error')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('accuracy.png')
plt.show()

[plot of training and validation mean absolute error over epochs]
