
Tag: tensorflow

Tensorflow/keras: “logits and labels must have the same first dimension” How to squeeze logits or expand labels?

I’m trying to make a simple CNN classifier model. For my training images (BATCH_SIZEx227x227x1) and labels (BATCH_SIZEx7) datasets, I’m using numpy ndarrays that are fed to the model in batches via ImageDataGenerator. The loss function I’m using is tf.nn.sparse_categorical_crossentropy. The problem arises when the model tries to train; with a batch size of 1 for my simplified experiments, the model outputs
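The usual cause of this error is that a *sparse* categorical loss expects integer class indices of shape (batch,), while labels of shape (BATCH_SIZE, 7) are one-hot encoded. A minimal sketch of the mismatch and one common fix (converting one-hot labels to indices with argmax), using plain numpy for illustration:

```python
import numpy as np

# One-hot labels as the generator produces them: shape (batch, 7)
one_hot = np.array([[0, 0, 1, 0, 0, 0, 0]])

# A sparse categorical loss expects integer class indices, shape (batch,)
sparse = np.argmax(one_hot, axis=-1)

print(one_hot.shape)  # (1, 7) -- what the loss is rejecting
print(sparse.shape)   # (1,)   -- what a sparse loss expects
print(sparse)         # [2]    -- the class index
```

The alternative fix is to keep the one-hot labels and switch to a non-sparse loss such as categorical_crossentropy.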

Training a simple model in Tensorflow GPU slower than CPU

I have set up a simple linear regression problem in Tensorflow, and have created simple conda environments using Tensorflow CPU and GPU, both version 1.13.1 (using CUDA 10.0 on an NVIDIA Quadro P600). However, the GPU environment consistently takes longer than the CPU environment. The code I’m running is below. Here are some
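This is typically a per-step overhead effect rather than a configuration bug: each GPU step pays a fixed cost for kernel launches and host-to-device transfers, which dominates when the actual computation is tiny, as in a simple linear regression. A back-of-the-envelope sketch with hypothetical, illustrative cost numbers (not measurements of any real hardware):

```python
# Hypothetical per-step costs, for illustration only
cpu_overhead_s = 1e-6   # negligible dispatch cost on CPU
gpu_overhead_s = 50e-6  # kernel launch + host-device transfer per step
cpu_flops = 1e10        # assumed CPU throughput (FLOP/s)
gpu_flops = 1e12        # assumed GPU throughput (FLOP/s)

def step_time(n_flops, overhead_s, flops):
    """Fixed per-step overhead plus compute time."""
    return overhead_s + n_flops / flops

# A tiny linear-regression step (~1e4 FLOPs) vs a large model step (~1e9 FLOPs)
for n in (1e4, 1e9):
    cpu = step_time(n, cpu_overhead_s, cpu_flops)
    gpu = step_time(n, gpu_overhead_s, gpu_flops)
    print(f"{n:.0e} FLOPs/step:", "GPU faster" if gpu < cpu else "CPU faster")
```

Under these assumed numbers the CPU wins on the tiny problem and the GPU wins on the large one, which matches the behaviour described in the question: the fix is usually to increase the batch size or model size, not to change the environment.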

Is tf.GradientTape in TF 2.0 equivalent to tf.gradients?

I am migrating my training loop to the Tensorflow 2.0 API. In eager execution mode, tf.GradientTape replaces tf.gradients. The question is, do they have the same functionality? Specifically: In the gradient() function: Is the parameter output_gradients equivalent to grad_ys in the old API? What about the parameters colocate_gradients_with_ops, aggregation_method, and gate_gradients of tf.gradients? Are they deprecated due to lack of use? Can they be
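On the first point, output_gradients does play the same role as grad_ys: it supplies initial gradients that weight the target before backpropagation. A minimal sketch in TF 2.x eager mode:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # dy/dx = 2x = 6 at x = 3

# output_gradients (like grad_ys in tf.gradients) scales dy before
# backpropagating, so the result is 2 * (2x) = 12 rather than 6
g = tape.gradient(y, x, output_gradients=tf.constant(2.0))
print(float(g))  # 12.0
```

The other three parameters have no counterpart on tape.gradient(); graph-level concerns like op colocation are handled differently under eager execution and tf.function.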

Tensorflow._api.v2.train has no attribute ‘AdamOptimizer’

When I run my code in a Jupyter Notebook, the following error pops up: module ‘tensorflow._api.v2.train’ has no attribute ‘AdamOptimizer’. Tensorflow version: 2.0.0-alpha0. Do you think the only possibility is to downgrade the TF version? Answer From https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers
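No downgrade is needed: in TF 2.x the tf.train optimizers were replaced by the Keras optimizer classes, so tf.train.AdamOptimizer becomes tf.keras.optimizers.Adam (also reachable as tf.optimizers.Adam). A minimal sketch of the replacement:

```python
import tensorflow as tf

# TF 1.x: opt = tf.train.AdamOptimizer(learning_rate=0.001)
# TF 2.x equivalent:
opt = tf.keras.optimizers.Adam(learning_rate=0.001)

# The new class exposes the same core training methods
print(type(opt).__name__)          # Adam
print(callable(opt.apply_gradients))  # True
```

The Keras optimizer can be passed directly to model.compile(optimizer=opt, ...) or used manually with apply_gradients in a custom training loop.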
