
TensorFlow returns ValueError: Cannot create a tensor proto whose content is larger than 2GB

def loadData():
    images_dir = os.path.join(current_dir, 'image_data')
    images = []
    for each in os.listdir(images_dir):
        images.append(os.path.join(images_dir, each))
    all_images = tf.convert_to_tensor(images, dtype=tf.string)
    images_batch = tf.train.shuffle_batch(
        [all_images], batch_size=BATCH_SIZE)
    return images_batch

returns

ValueError: Cannot create a tensor proto whose content is larger than 2GB.

I’m trying to load about 11GB of images. How can I overcome this limitation?

Edit: Possible duplicate: splitting the output classes into multiple operations and concatenating them at the end is suggested, but I do not have multiple classes I can split.

Edit 2: Solutions to this problem suggest using placeholders. But now I’m not sure how to use placeholders in this case, or where I can feed the array of images to TensorFlow.

Here’s a minimal version of my train function to show how I initialize the session.

def train():
    images_batch = loadData()
    sess = tf.Session()
    saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    for i in range(EPOCH):
        train_image = sess.run(images_batch)


Answer

Using convert_to_tensor has the unexpected effect of adding your images to the computational graph, which has a hard limit of 2GB. If you hit this limit, you should reconsider how to feed images for the training process.

TensorFlow already has a simple solution for this: use placeholders (tf.placeholder) and feed_dict in session.run. The only disadvantage is that you have to produce batches of your data manually.
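A minimal sketch of that approach, assuming TensorFlow 1.x. The image size, batch size, epoch count, the load_image helper, and the mean_pixel op standing in for a real model are all illustrative assumptions; in practice you would build your own model on top of images_placeholder and run your training op inside the feed_dict call.

import os
import numpy as np
import tensorflow as tf
from PIL import Image  # assumed decoding library; cv2 etc. would also work

BATCH_SIZE = 32        # assumed values -- use whatever your
IMAGE_SIZE = (64, 64)  # training setup actually requires
EPOCH = 10

# The pixel data never becomes part of the graph definition, so the
# 2GB GraphDef limit no longer applies.
images_placeholder = tf.placeholder(
    tf.float32, shape=[None, IMAGE_SIZE[0], IMAGE_SIZE[1], 3])
# Stand-in for a real model built on top of the placeholder.
mean_pixel = tf.reduce_mean(images_placeholder)

def load_image(path):
    # Hypothetical helper: read and decode one image into a float32 array.
    img = Image.open(path).convert('RGB').resize(IMAGE_SIZE)
    return np.asarray(img, dtype=np.float32) / 255.0

def batches(image_paths, batch_size):
    # Manual batching: shuffle the file list and yield decoded batches.
    np.random.shuffle(image_paths)
    for i in range(0, len(image_paths), batch_size):
        yield np.stack([load_image(p) for p in image_paths[i:i + batch_size]])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    image_paths = [os.path.join('image_data', f)
                   for f in os.listdir('image_data')]
    for epoch in range(EPOCH):
        for batch in batches(image_paths, BATCH_SIZE):
            # Feed the batch at run time instead of baking it into the graph.
            sess.run(mean_pixel, feed_dict={images_placeholder: batch})

Since the batches are produced in plain Python, the graph stays small no matter how large the dataset on disk is; only one batch of decoded images has to fit in memory at a time.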
