
Error while running CNN for 1 dimensional data in R

I am trying to build a one-dimensional Convolutional Neural Network (CNN) in R using the keras package, with the following architecture:


library(keras)
library(deepviz)

#create a neural network with a convolutional layer and train the model
model <- keras_model_sequential() %>%
  layer_conv_1d(filters=32, kernel_size=4, activation="relu", input_shape=c(100, 10)) %>%
  layer_max_pooling_1d(pool_size=2) %>%
  layer_conv_1d(filters=64, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=5) %>%
  layer_conv_1d(filters=128, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=5) %>%
  layer_conv_1d(filters=256, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=5) %>%
  layer_dropout(rate=0.4) %>%
  layer_flatten() %>%
  layer_dense(units=100, activation="relu") %>%
  layer_dropout(rate=0.2) %>%
  layer_dense(units=1, activation="linear")

But it gives me the following error:

Error in py_call_impl(callable, dots$args, dots$keywords) : ValueError: Negative dimension size caused by subtracting 4 from 1 for 'conv1d_20/conv1d' (op: 'Conv2D') with input shapes: [?,1,1,128], [1,4,128,256].

How can I solve this error?

Another question: how should I choose filters, kernel_size, pool_size, rate, and units? In my example, input_shape=c(100, 10) is an arbitrary value. How do I decide on the input shape?


Answer

You have too many max-pooling layers: each max-pooling layer divides the length of its input by pool_size, and each convolution with "valid" padding (the keras default) shortens it further by kernel_size - 1. By the last convolution, your sequence length has already collapsed to 1, so subtracting 4 from it fails.
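You can see this by tracing the length dimension through the original architecture by hand. A sketch in base R (the formulas assume "valid" padding, which is the default for layer_conv_1d):

```r
# Length after Conv1D with "valid" padding: L_out = L_in - kernel_size + 1
conv_len <- function(len, kernel_size) len - kernel_size + 1
# Length after MaxPooling1D:               L_out = floor(L_in / pool_size)
pool_len <- function(len, pool_size) floor(len / pool_size)

len <- 100                 # input_shape = c(100, 10)
len <- conv_len(len, 4)    # after conv1d (32 filters):  97
len <- pool_len(len, 2)    # after max_pooling (2):      48
len <- conv_len(len, 4)    # after conv1d (64 filters):  45
len <- pool_len(len, 5)    # after max_pooling (5):       9
len <- conv_len(len, 4)    # after conv1d (128 filters):  6
len <- pool_len(len, 5)    # after max_pooling (5):       1
conv_len(len, 4)           # last conv1d: 1 - 4 + 1 = -2 -> the error
```

The final value is negative, which is exactly the "Negative dimension size caused by subtracting 4 from 1" in the traceback.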

Try reducing the pool_size parameters, or alternatively remove the last two max-pooling layers. A value you can try is pool_size=2 for all layers.
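With pool_size=2 everywhere, the length stays positive all the way through (100 → 97 → 48 → 45 → 22 → 19 → 9 → 6 → 3). A sketch of the corrected model, identical to the question's code except for the pool sizes (building it requires a working TensorFlow backend):

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_conv_1d(filters=32, kernel_size=4, activation="relu", input_shape=c(100, 10)) %>%
  layer_max_pooling_1d(pool_size=2) %>%   # was 2: unchanged
  layer_conv_1d(filters=64, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=2) %>%   # was 5
  layer_conv_1d(filters=128, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=2) %>%   # was 5
  layer_conv_1d(filters=256, kernel_size=4, activation="relu") %>%
  layer_max_pooling_1d(pool_size=2) %>%   # was 5
  layer_dropout(rate=0.4) %>%
  layer_flatten() %>%
  layer_dense(units=100, activation="relu") %>%
  layer_dropout(rate=0.2) %>%
  layer_dense(units=1, activation="linear")

summary(model)  # check that no layer's output length has collapsed to zero
</imports>
```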

As for the parameters, you should learn what each of them means. Here you can find an explanation of the convolution-layer and max-pooling-layer parameters such as filters, kernel_size, and pool_size: Convolutional layer

The dropout layer is a regularization technique that reduces overfitting: at each training step it zeroes a random fraction (the rate parameter) of the layer's activations. The larger the rate, the less overfitting, but training takes longer to converge. Learn about it here: Dropout layer

The units parameter is the number of neurons in the fully connected (dense) layer. Fully Connected layer

The input_shape is the shape of a single sample; the number of records (the batch dimension) is not included. For 1-D data it is (N, C), where N is the sequence length and C is the number of channels; if you have one channel it is (N, 1). For 2-D data it is (height, width, channels).
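For example, with 32 records (a hypothetical count) of 100 time steps and 10 channels, matching the question's input_shape=c(100, 10), the training array would be built like this:

```r
# Each sample has shape (100, 10): 100 time steps, 10 channels.
# Keras expects the data array to carry an extra leading batch dimension.
n_samples <- 32
x_train <- array(rnorm(n_samples * 100 * 10), dim = c(n_samples, 100, 10))
dim(x_train)  # 32 100 10  -> pass input_shape = c(100, 10) to the first layer
```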

User contributions licensed under: CC BY-SA