Why is TensorFlow 2 much slower than TensorFlow 1?

It’s been cited by many users as the reason for switching to PyTorch, but I’ve yet to find a justification / explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2, with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? I’m looking for a detailed answer; I am already familiar with the broad concepts. Relevant specs: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10, GTX 1070. Benchmark results: UPDATE: Disabling Eager Execution
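Not the asker’s benchmark, but a minimal sketch of the comparison at issue, with a toy dense model whose sizes are assumptions of mine: wrapping the same call in tf.function makes TF2 trace and run it as a graph, which is the usual first step when chasing the eager-mode slowdown.

```python
import time
import numpy as np
import tensorflow as tf

# Toy model and data; sizes are illustrative, not the question's benchmark.
x = np.random.random((1024, 512)).astype('float32')
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(10),
])

def step():
    return model(x)              # eager: every op dispatched one by one

graph_step = tf.function(step)   # traced once, then executed as a graph
graph_step()                     # warm-up call triggers the tracing

for name, fn in (('eager', step), ('graph', graph_step)):
    start = time.time()
    for _ in range(100):
        fn()
    print(name, round(time.time() - start, 3))
```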

How to load models trained on GPU into CPU (system) memory?

I trained a model on a GPU and now I am trying to evaluate it on the CPU (the GPU is being used for a different training run). However, when I try to load it, I get a CUDA_ERROR_OUT_OF_MEMORY (I also tried setting compile=True, with the same result). It seems that the model is being loaded onto the GPU, which is already used by another instance. How can I force keras/tensorflow to load it into system memory and execute it on the CPU? Answer Can you define everything inside `with tf.device('/cpu:0'):`, except the library import part, and test? If that doesn’t work
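One hedged alternative to the tf.device approach: hiding the GPU from the process before TensorFlow is imported, so both loading and inference land on the CPU. The path 'model.h5' is a placeholder, not from the question.

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # hide the GPU; must run before TF import

import numpy as np
import tensorflow as tf

# 'model.h5' is a placeholder path for the trained model file.
model = tf.keras.models.load_model('model.h5', compile=False)

# Sanity check: run a dummy batch entirely on the CPU.
dummy = np.zeros((1,) + model.input_shape[1:], dtype='float32')
print(model.predict(dummy).shape)
```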

How to generate CNN heatmaps using built-in Keras in TF2.0 (tf.keras)

I used to generate heatmaps for my Convolutional Neural Networks using the stand-alone Keras library on top of TensorFlow 1. That worked fine; however, after my switch to TF2.0 and the built-in tf….
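For reference, a minimal Grad-CAM sketch using tf.GradientTape, one common way to produce such heatmaps with tf.keras in TF2; it assumes a functional model, and conv_layer_name stands in for your last convolutional layer.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Heatmap for the top predicted class; assumes a functional tf.keras model."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_idx = tf.argmax(preds[0])
        top_score = tf.gather(preds, class_idx, axis=1)
    grads = tape.gradient(top_score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # one weight per channel
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1) # weighted sum of maps
    return tf.nn.relu(cam).numpy()                      # keep positive evidence only
```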

Keras: apply a threshold for the loss function

I am developing a Keras model. My dataset is badly imbalanced, so I want to set a threshold for training and testing. If I’m not mistaken, when doing backpropagation, the neural network checks the …
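The question’s code isn’t shown, so this is only a sketch of one common pattern for an imbalanced dataset: a weighted binary cross-entropy for training plus thresholded metrics for evaluation. The pos_weight of 10.0 and the 0.7 threshold are assumed values, not from the question.

```python
import tensorflow as tf

def weighted_bce(pos_weight):
    """Binary cross-entropy that up-weights the rare positive class."""
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
        weights = y_true * pos_weight + (1.0 - y_true)   # per-label weights
        return bce * tf.reduce_mean(weights, axis=-1)
    return loss

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam',
              loss=weighted_bce(10.0),                   # 10.0 is an assumed ratio
              metrics=[tf.keras.metrics.Precision(thresholds=0.7),
                       tf.keras.metrics.Recall(thresholds=0.7)])
```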

Concatenate two models with tensorflow.keras

I’m currently studying neural network models for image analysis, with the MNIST dataset. I first used only the image to build a first model. Then I created an additional variable, which is 0 when the digit is actually between 0 and 4, and 1 when it’s greater than or equal to 5. Therefore, I want to build a model that can take these two inputs: the image of the digit, and the additional variable I just created. I created the first two models, one for the image and one for the exogenous variable, as follows: Then I would
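A minimal sketch of the usual way to combine the two branches with the functional API and tf.keras.layers.concatenate; the layer sizes are illustrative, not the asker’s.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Image branch (sizes are illustrative for MNIST).
image_in = tf.keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation='relu')(image_in)
x = layers.Flatten()(x)

# Exogenous branch: the 0/1 "digit >= 5" indicator.
flag_in = tf.keras.Input(shape=(1,))

merged = layers.concatenate([x, flag_in])      # join the two branches
out = layers.Dense(10, activation='softmax')(merged)

model = tf.keras.Model(inputs=[image_in, flag_in], outputs=out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit([images, flags], labels, ...) feeds both inputs together.
```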

Creating a VAE model throws the exception “you should implement a `call` method”

I want to create a VAE (variational autoencoder). While creating the model it throws an exception: “When subclassing the Model class, you should implement a call method.” I am using TensorFlow 2.0 and defining models with names; I want to get the model. Answer The problem is here: you are passing three positional arguments to the constructor, where only two are expected (inputs and outputs). The third positional argument breaks the detection of a network (functional) model versus a sub-classed model in the Keras source code; if the model needs a name, pass it as the name keyword argument instead. So just replace the code with:
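The excerpt cuts off before the replacement code, but a minimal sketch of the fix described, with hypothetical encoder inputs and outputs, looks like:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
outputs = tf.keras.layers.Dense(32)(inputs)

# Breaks functional-model detection in TF 2.0 (three positional arguments):
# encoder = tf.keras.Model(inputs, outputs, 'encoder')

# Works: inputs and outputs positionally or by keyword, name as a keyword.
encoder = tf.keras.Model(inputs=inputs, outputs=outputs, name='encoder')
```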

Model not training and negative loss when whitening input data

I am doing segmentation and my dataset is fairly small (1840 images), so I would like to use data augmentation. I am using the generator provided in the Keras documentation, which yields a tuple with a batch of images and the corresponding masks, augmented the same way. I am then training my model with this generator. But by using this I get a negative loss and the model is not training. I also want to add that the model does train if I don’t use featurewise_center and featurewise_std_normalization. But I am using a model with batch normalization that performs way
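A hedged sketch of the usual repair: fit the image generator so the featurewise statistics exist, and keep whitening off the mask generator so the masks stay 0/1 (whitened masks are what typically drives a cross-entropy loss negative). Array shapes and the batch size are assumptions of mine.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder arrays standing in for the 1840-image dataset and its masks.
images = np.random.random((1840, 128, 128, 3)).astype('float32')
masks = (np.random.random((1840, 128, 128, 1)) > 0.5).astype('float32')

# Whitening only on the images; masks get geometric transforms only.
img_gen = ImageDataGenerator(featurewise_center=True,
                             featurewise_std_normalization=True,
                             horizontal_flip=True)
mask_gen = ImageDataGenerator(horizontal_flip=True)

img_gen.fit(images)                       # required: computes mean and std

seed = 1                                  # same seed keeps pairs aligned
image_flow = img_gen.flow(images, batch_size=8, seed=seed)
mask_flow = mask_gen.flow(masks, batch_size=8, seed=seed)
train_flow = zip(image_flow, mask_flow)   # yields (image_batch, mask_batch)
# model.fit(train_flow, steps_per_epoch=len(images) // 8, ...)
```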

Tensorflow/keras: “logits and labels must have the same first dimension” How to squeeze logits or expand labels?

I’m trying to make a simple CNN classifier model. For my training images (BATCH_SIZEx227x227x1) and labels (BATCH_SIZEx7) datasets, I’m using numpy ndarrays that are fed to the model in batches via …
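This error typically means one-hot labels of shape (BATCH_SIZE, 7) were paired with sparse_categorical_crossentropy, which expects integer labels of shape (BATCH_SIZE,). A minimal sketch of both ways out, using a toy model of my own:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(7, activation='softmax')])

features = np.random.random((32, 227)).astype('float32')
labels_onehot = np.eye(7)[np.random.randint(0, 7, size=32)]  # shape (32, 7)

# Option 1: keep the one-hot labels, use the non-sparse loss.
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(features, labels_onehot, epochs=1, verbose=0)

# Option 2: keep the sparse loss, convert labels to class indices.
labels_sparse = np.argmax(labels_onehot, axis=-1)            # shape (32,)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(features, labels_sparse, epochs=1, verbose=0)
```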

Tensorflow 2.0 – AttributeError: module ‘tensorflow’ has no attribute ‘Session’

When I execute the command sess = tf.Session() in a Tensorflow 2.0 environment, I get the error message below. System Information: OS Platform and Distribution: Windows 10; Python Version: 3.7.1; Tensorflow Version: 2.0.0-alpha0 (installed with pip). Steps to reproduce: Installation: pip install --upgrade pip; pip install tensorflow==2.0.0-alpha0; pip install keras; pip install numpy==1.16.2. Execution: run import tensorflow as tf, then sess = tf.Session(). Answer According to the TF 1.x / 2.x symbols map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session(): https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0 To get TF 1.x-like behaviour in TF 2.0 one can run but then one
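A minimal sketch of the compat route the answer names; disable_eager_execution restores TF1-style graph mode so the session actually has something to run:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()    # opt back into TF1-style graph mode

a = tf.constant(2)
b = tf.constant(3)
with tf.compat.v1.Session() as sess:
    print(sess.run(a + b))                # prints 5
```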

InvalidArgumentError: cannot compute MatMul as input #0 (zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]

Can somebody explain how TensorFlow’s eager mode works? I am trying to build a simple regression as follows: Gradient output: [None, None, None, None, None, None] The error is the following: Edit: I updated my code. Now the problem comes in the gradient calculation; it is returning zero. I have checked the loss value, which is non-zero. Answer Part 1: The problem is indeed the datatype of your input. By default your Keras model expects float32, but you are passing float64. You can either change the dtype of the model or change the input to float32. To change your model:
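The answer’s snippet is cut off above; as a hedged sketch of the two fixes it names (not the original code):

```python
import numpy as np
import tensorflow as tf

x = np.random.random((8, 4))                     # NumPy defaults to float64

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Fix 1: cast the input down to the model's default float32.
x32 = x.astype(np.float32)                       # or tf.cast(x, tf.float32)
print(model(x32).dtype)                          # float32, no MatMul clash

# Fix 2 (alternative): switch Keras to float64 *before* building the model:
# tf.keras.backend.set_floatx('float64')
```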