I am doing segmentation and my dataset is fairly small (1,840 images), so I would like to use data augmentation. I am using the generator provided in the Keras documentation, which yields a tuple with a batch of images and the corresponding masks, both augmented the same way. I am then training my model with this generator: But by using
Tag: tensorflow
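A minimal sketch of the pattern from the Keras documentation that the question refers to: two identically configured ImageDataGenerators seeded the same way, so images and masks receive the same random transforms. The directory paths, augmentation parameters, and the `model` variable are illustrative assumptions.

```python
from keras.preprocessing.image import ImageDataGenerator

# Identical augmentation settings for both generators
data_gen_args = dict(rotation_range=20, horizontal_flip=True, zoom_range=0.1)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

seed = 1  # the same seed keeps image and mask transforms in sync
image_generator = image_datagen.flow_from_directory(
    'data/images', class_mode=None, seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    'data/masks', class_mode=None, seed=seed)

# Yields (image_batch, mask_batch) tuples
train_generator = zip(image_generator, mask_generator)

# `model` is assumed to be a compiled segmentation model built elsewhere
model.fit_generator(train_generator, steps_per_epoch=1840 // 32, epochs=50)
```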
Tensorflow returns ValueError: Cannot create a tensor proto whose content is larger than 2GB
I’m trying to load about 11GB of images. How can I overcome this limitation? Edit: Possible duplicate: splitting the output classes into multiple operations and concatenating them at the end is suggested, but I do not have multiple classes I can split. Edit2: Solutions to this problem suggest using placeholders. So now I’m not sure how to
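The placeholder approach mentioned in the edit, sketched under the assumption of a TF 1.x graph session: the large array is fed at run time through tf.data rather than baked into the graph as a constant, which is what triggers the 2GB protobuf limit. The array here is a small stand-in for the real 11GB of images.

```python
import numpy as np
import tensorflow as tf

images = np.zeros((100, 64, 64, 3), dtype=np.float32)  # stand-in for the real data

# The placeholder keeps the array out of the serialized graph
images_ph = tf.placeholder(images.dtype, images.shape)
dataset = tf.data.Dataset.from_tensor_slices(images_ph).batch(32)
iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    # The array is fed once, at initialization, not embedded in the graph proto
    sess.run(iterator.initializer, feed_dict={images_ph: images})
    batch = sess.run(next_batch)
```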
How to train keras models consecutively
I’m trying to train different models consecutively without needing to re-run my program or change my code all the time, so that I can let my PC train different models. I use a for loop, feeding different information from a dictionary to build a different model each time, and so I can train a new model each time de
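A minimal sketch of the loop described here, with a hypothetical build_model() helper and stand-in data: the one detail worth highlighting is tf.keras.backend.clear_session(), which releases each finished model's graph so successive models do not accumulate state.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# Hypothetical stand-ins for the question's data and configuration dictionary
x_train = np.random.rand(100, 10).astype('float32')
y_train = np.random.randint(0, 2, size=(100,))
configs = {'small': 32, 'large': 128}

def build_model(units):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

for name, units in configs.items():
    K.clear_session()              # drop the previous model's graph/state
    model = build_model(units)
    model.fit(x_train, y_train, epochs=2, verbose=0)
    model.save(f'{name}.h5')       # persist each model before the next run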
Tensorflow/keras: “logits and labels must have the same first dimension” How to squeeze logits or expand labels?
I’m trying to make a simple CNN classifier model. For my training images (BATCH_SIZEx227x227x1) and labels (BATCH_SIZEx7) datasets, I’m using numpy ndarrays that are fed to the model in batches via ImageDataGenerator. The loss function I’m using is tf.nn.sparse_categorical_crossentropy. The problem arises when the model tries to train; the model (batch size here is 1 for my simplified experiments) outputs
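The mismatch usually comes from feeding one-hot labels of shape (BATCH_SIZE, 7) to a sparse loss that expects integer class indices of shape (BATCH_SIZE,). A small sketch of the two standard fixes, assuming the labels are indeed one-hot and that `model` is built elsewhere:

```python
import numpy as np

num_classes = 7
labels_onehot = np.eye(num_classes)[np.random.randint(0, num_classes, 32)]

# Option 1: keep the sparse loss, squeeze the labels down to class indices
labels_sparse = np.argmax(labels_onehot, axis=-1)   # shape (32,)
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Option 2: keep the one-hot labels, switch to the non-sparse loss
# model.compile(optimizer='adam', loss='categorical_crossentropy')
```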
Tensorboard not found as magic function in jupyter
I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0, with tensorboard version 1.13.1 and Python 3.6. Using %tensorboard --logdir {logs_base_dir} I get the error: UsageError: Line magic function %tensorboard not found. Do you have an idea what the problem could be? It seems that all versions are up to date and the command seems
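One likely cause, sketched as notebook cells: the %tensorboard line magic only exists after the TensorBoard notebook extension has been loaded. In the 1.13.x series the extension was named tensorboard.notebook; it became plain tensorboard in later releases.

```python
# These are Jupyter cells, not a script
%load_ext tensorboard.notebook   # TensorBoard 1.13.x
# %load_ext tensorboard          # TensorBoard >= 1.14

%tensorboard --logdir {logs_base_dir}
```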
How to tie weights between transposed layers?
I have tried to tie weights in TensorFlow 2.0 Keras with the code below, but it shows these errors. Does anyone know how to write a tied-weights dense layer? Errors Answer It took much of my time to figure out, but I think this is the way to do tied weights by subclassing the Keras Dense layer. Hope it can help someone
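One way to realize the subclassing idea the answer describes, sketched here rather than reproducing the asker's exact code: a layer that reuses another Dense layer's kernel, transposed, and trains only its own bias. The layer and model names are illustrative.

```python
import tensorflow as tf
from tensorflow import keras

class DenseTranspose(keras.layers.Layer):
    def __init__(self, dense, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.dense = dense                      # the layer whose kernel we tie to
        self.activation = keras.activations.get(activation)

    def build(self, input_shape):
        # Only the bias is a new variable; the kernel is shared (tied)
        self.bias = self.add_weight(
            name='bias',
            shape=(self.dense.input_shape[-1],),
            initializer='zeros')
        super().build(input_shape)

    def call(self, inputs):
        z = tf.matmul(inputs, self.dense.weights[0], transpose_b=True)
        return self.activation(z + self.bias)

# Usage: a toy tied-weight autoencoder
encoder = keras.layers.Dense(64, activation='relu')
decoder = DenseTranspose(encoder, activation='sigmoid')
inputs = keras.Input(shape=(784,))
outputs = decoder(encoder(inputs))
model = keras.Model(inputs, outputs)
```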
Training a simple model in Tensorflow GPU slower than CPU
I have set up a simple linear regression problem in Tensorflow, and have created simple conda environments using Tensorflow CPU and GPU, both at 1.13.1 (using CUDA 10.0 on the backend on an NVIDIA Quadro P600). However, it looks like the GPU environment always takes longer than the CPU environment. The code I’m running is below. Here are some
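A rough benchmark sketch of the usual explanation: for tiny ops, host-device transfers and kernel-launch overhead dominate on the GPU, so the CPU wins until the matrices get large. This sketch assumes TF 2.x eager execution rather than the question's 1.13 graph setup, and the sizes are illustrative.

```python
import time
import tensorflow as tf

def time_matmul(device, n, reps=50):
    with tf.device(device):
        a = tf.random.uniform((n, n))
        r = tf.matmul(a, a)          # warm-up (also triggers any transfer)
        start = time.time()
        for _ in range(reps):
            r = tf.matmul(a, a)
        _ = r.numpy()                # force pending async GPU work to finish
        return time.time() - start

for n in (10, 100, 2000):
    print(f'n={n}  CPU: {time_matmul("/CPU:0", n):.4f}s  '
          f'GPU: {time_matmul("/GPU:0", n):.4f}s')
```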
Is tf.GradientTape in TF 2.0 equivalent to tf.gradients?
I am migrating my training loop to the Tensorflow 2.0 API. In eager execution mode, tf.GradientTape replaces tf.gradients. The question is: do they have the same functionality? Specifically: In the function gradient(): Is the parameter output_gradients equivalent to grad_ys in the old API? What about the parameters colocate_gradients_with_ops, aggregation_method, and gate_gradients of tf.gradients? Are they deprecated due to lack of use? Can they be
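A small sketch suggesting that output_gradients does play the role grad_ys played: it weights the upstream gradient before backpropagation.

```python
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
with tf.GradientTape() as tape:
    y = x * x            # elementwise, so dy/dx = 2x -> [2., 4.]

# Weighting with output_gradients scales each component,
# exactly like grad_ys in the old tf.gradients API
grad = tape.gradient(y, x, output_gradients=tf.constant([10.0, 1.0]))
print(grad.numpy())      # [20.  4.]
```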
Tensorflow._api.v2.train has no attribute ‘AdamOptimizer’
When using it in my Jupyter notebook, the following error pops up: module ‘tensorflow._api.v2.train’ has no attribute ‘AdamOptimizer’. Tensorflow version: 2.0.0-alpha0. Do you think the only possibility is to downgrade the TF version? Answer From https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers
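Per the linked API docs, the v1 optimizers were removed from tf.train in 2.0 and live under the Keras optimizer namespace instead:

```python
import tensorflow as tf

# TF 1.x (no longer available under tf.train in 2.0):
# optimizer = tf.train.AdamOptimizer(learning_rate=0.001)

# TF 2.x equivalent:
optimizer = tf.optimizers.Adam(learning_rate=0.001)
# or, equivalently: tf.keras.optimizers.Adam(learning_rate=0.001)
```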
Convert a variable sized numpy array to Tensorflow Tensors
I am trying the Tensorflow 2.0 alpha preview and was testing eager execution. My doubt is: if you have a numpy array of variable size in the middle, like … and so on for the rest of the array, how does one eagerly convert them to tensors? When I try … I get ValueError: Failed to convert numpy ndarray
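One way to handle rows of different lengths, sketched with tf.ragged (available in the 2.0 preview): a dense tf.constant cannot represent them, but a RaggedTensor can. The data here is illustrative.

```python
import tensorflow as tf

data = [[1, 2, 3],
        [4, 5],
        [6, 7, 8, 9]]          # variable-length rows

# tf.constant(data)            # fails: rows can't form a dense tensor
rt = tf.ragged.constant(data)  # works: a RaggedTensor
print(rt.shape)                # (3, None)
```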