I’ve been trying to generate a custom dataset from two arrays: one with shape (128, 128, 6) (satellite data with 6 channels) and the other with shape (128, 128, 1) (a binary mask). I have been using the function tf.data.Dataset.from_tensor_slices. What I get is this: However, when I try to run this through my model I get this error: (None, 2) since my
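The usual pattern for pairing two such arrays is to stack them along a leading sample axis and let `from_tensor_slices` slice them element-wise. A minimal sketch, assuming the arrays are batched this way (the array names and sample count are illustrative, not from the question):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins: N samples of 6-channel imagery and binary masks.
images = np.random.rand(10, 128, 128, 6).astype("float32")
masks = np.random.randint(0, 2, (10, 128, 128, 1)).astype("float32")

# from_tensor_slices pairs the arrays element-wise along the first axis,
# yielding (image, mask) tuples of shape (128, 128, 6) and (128, 128, 1).
dataset = tf.data.Dataset.from_tensor_slices((images, masks))
dataset = dataset.batch(2)

for image_batch, mask_batch in dataset.take(1):
    print(image_batch.shape, mask_batch.shape)  # (2, 128, 128, 6) (2, 128, 128, 1)
```

Passing a tuple to `from_tensor_slices` is what makes `model.fit(dataset)` see the first element as input and the second as target.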
Proper input and output shape of a Keras Sequential model
I am trying to run a Keras Sequential model but can’t get the right shape for the model to train on. I reshaped x and y to: Currently, both the input shape and the output shape are: The dataset consists of 9766 inputs and 9766 outputs, respectively. Each input is a single array of 500 values and each output is also
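The usual fix for this kind of shape mismatch is to make the `Input` match the per-sample shape and the final layer match the per-sample output. A minimal sketch, assuming a 500-in/500-out setup (the hidden layer size and loss are assumptions, not the asker's architecture):

```python
import numpy as np
import tensorflow as tf

# Illustrative data: 9766 samples, 500 values in, 500 values out.
x = np.random.rand(9766, 500).astype("float32")
y = np.random.rand(9766, 500).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(500,)),       # one 500-value array per sample
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(500),         # output width matches y's last dimension
])
model.compile(optimizer="adam", loss="mse")
model.fit(x[:32], y[:32], epochs=1, verbose=0)  # smoke test on a small slice
```

The batch dimension is never part of `Input(shape=...)`; Keras adds it as the leading `None`.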
Image processing in TensorFlow TFX pipelines
I am trying to get a TensorFlow TFX pipeline up and running using the MNIST dataset. Set up the pipeline paths, then write the data in TFRecord format and save it in the eval and train dirs. Note that the MNIST data starts as a 28×28 NumPy array and is converted to a bytestring so it can be encoded as part of the TFRecord.
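A hedged sketch of that conversion step — a 28×28 NumPy array serialized as a bytestring feature of a `tf.train.Example`, then parsed back. The feature names here are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

def serialize_example(image: np.ndarray, label: int) -> bytes:
    """Encode a 28x28 uint8 array and its label as a tf.train.Example bytestring."""
    feature = {
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image.tobytes()])),
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

image = np.zeros((28, 28), dtype=np.uint8)
record = serialize_example(image, 7)

# Round-trip: parse the bytestring back into the original array.
parsed = tf.io.parse_single_example(record, {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
})
restored = tf.reshape(tf.io.decode_raw(parsed["image"], tf.uint8), (28, 28))
```

The serialized records would then be written out with `tf.io.TFRecordWriter` into the eval and train dirs.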
How do you add a dimension to the datapoints without using a Lambda layer?
I am trying to classify the fashion_mnist dataset using the Conv2D layer, and as far as I know it can easily be done with the following code: However, I am required not to use a Lambda layer, so the above solution is not correct. I am wondering: how can I classify the fashion_mnist dataset without using a Lambda layer? Update: When I add
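One common Lambda-free approach is a `Reshape` layer, which adds the channel axis inside the model. A minimal sketch (the conv/dense sizes are illustrative, and a random batch stands in for fashion_mnist):

```python
import numpy as np
import tensorflow as tf

# A Reshape layer adds the channel axis inside the model, so no Lambda is needed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),          # raw 28x28 images
    tf.keras.layers.Reshape((28, 28, 1)),    # -> (28, 28, 1) for Conv2D
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Stand-in batch shaped like fashion_mnist samples.
x = np.random.rand(4, 28, 28).astype("float32")
print(model(x).shape)  # (4, 10)
```

Alternatively, the axis can be added in NumPy before training (`x_train[..., None]`) and the `Input` declared as `(28, 28, 1)`.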
Evaluate model on Testing Set after each epoch of training
I’m training a TensorFlow model on an image dataset for a classification task. We usually provide the training set and validation set to the model.fit method, and we can later plot the model’s convergence for training and validation. I want to do the same with the testing set; in other words, I want to get the accuracy and loss of my model
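A common way to do this is a custom callback that runs `model.evaluate` on the test set at the end of each epoch. A minimal sketch with a toy model and random data standing in for the image dataset:

```python
import numpy as np
import tensorflow as tf

class TestSetEvaluation(tf.keras.callbacks.Callback):
    """Evaluate on a held-out test set at the end of every epoch."""

    def __init__(self, x_test, y_test):
        super().__init__()
        self.x_test, self.y_test = x_test, y_test
        self.history = []

    def on_epoch_end(self, epoch, logs=None):
        loss, acc = self.model.evaluate(self.x_test, self.y_test, verbose=0)
        self.history.append({"test_loss": loss, "test_accuracy": acc})

# Illustrative toy model and data.
x = np.random.rand(64, 10).astype("float32")
y = np.random.randint(0, 2, 64)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

cb = TestSetEvaluation(x, y)
model.fit(x, y, epochs=2, verbose=0, callbacks=[cb])
# cb.history now holds one test loss/accuracy entry per epoch
```

The recorded `cb.history` can then be plotted alongside the training/validation curves from `model.fit`'s own history.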
Reading in file names from a tensor in Tensorflow
Context: I am trying to make a GAN to generate images from a large dataset and have been running into OOM issues when loading the training data. In an effort to solve this, I am trying to pass in a list of file paths and read them in as images only when needed. Issue: I do not know how
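The standard lazy-loading pattern is to slice the list of paths into a `tf.data.Dataset` and `map` a decode function over it, so files are only read when a batch is consumed. A sketch, assuming PNG files and a fixed target size (both assumptions):

```python
import tensorflow as tf

def load_image(path):
    """Read and decode one image file only when the element is consumed."""
    data = tf.io.read_file(path)
    image = tf.io.decode_png(data, channels=3)
    return tf.image.resize(image, (64, 64)) / 255.0

# file_paths is a hypothetical list of image paths on disk.
file_paths = ["img_0.png", "img_1.png"]
dataset = (tf.data.Dataset.from_tensor_slices(file_paths)
           .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(2)
           .prefetch(tf.data.AUTOTUNE))
```

Nothing is read from disk until the dataset is iterated, which keeps only the current batches in memory instead of the whole training set.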
How to save Keras encoder and decoder separately?
I have created an autoencoder using a separate encoder and decoder as described in this link: Split autoencoder on encoder and decoder keras. I am checkpointing my autoencoder as follows. How do I save the encoder and decoder separately, corresponding to the autoencoder? Alternatively, can I extract the deep encoder and decoder from my saved autoencoder? Answer You could try to
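If the encoder and decoder are built as standalone `Model`s and then composed, each can be saved on its own. A minimal sketch, assuming a flat 784-dimensional input and a 32-dimensional latent space (both illustrative):

```python
import tensorflow as tf

# Build encoder and decoder as separate Models, then compose the autoencoder.
inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)
encoder = tf.keras.Model(inputs, encoded, name="encoder")

latent = tf.keras.Input(shape=(32,))
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(latent)
decoder = tf.keras.Model(latent, decoded, name="decoder")

autoencoder = tf.keras.Model(inputs, decoder(encoder(inputs)))

# Because encoder and decoder are standalone Models, each saves independently,
# and training the composed autoencoder updates the weights inside both.
encoder.save("encoder.keras")
decoder.save("decoder.keras")
```

Checkpointing the composed `autoencoder` and separately calling `save` on the two sub-models gives three interchangeable artifacts sharing the same weights.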
TypeError: plotImages() got an unexpected keyword argument ‘n_images’ Python
The error I got: TypeError: plotImages() got an unexpected keyword argument ‘n_images’. Please let me know if you have an idea. This is the code: Answer You defined the argument as nx_images in the function definition, but the call passes n_images; the keyword names must match, so change one of them.
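A minimal reproduction of the mismatch (the function body here is a placeholder, not the asker's plotting code):

```python
# The call site must use the same keyword name as the function definition.
def plotImages(images, nx_images=5):   # parameter is named nx_images...
    return images[:nx_images]

# plotImages(data, n_images=3)  # TypeError: unexpected keyword argument 'n_images'
plotImages(list(range(10)), nx_images=3)  # matches the definition, so it works
```

Renaming either the parameter or the keyword at the call site resolves the TypeError.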
Keras CNN Model Typevalue errors when using predict method
I have a Keras model that is supposed to take a (150, 150, 1) grayscale image as its input and output an array of length 8. Here is my model code: When I try to use the .predict() method, I get this error: I had an ANN (non-CNN) model running earlier that was working fine. When I did some
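A frequent cause of predict-time shape errors with CNNs is a missing batch axis: `.predict()` expects `(batch, 150, 150, 1)`, not a bare `(150, 150, 1)` image. A sketch with an illustrative model (the layers are assumptions, not the asker's code):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 1)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(8),
])

image = np.random.rand(150, 150, 1).astype("float32")
# predict expects a batch axis: (1, 150, 150, 1), not (150, 150, 1).
batch = np.expand_dims(image, axis=0)
out = model.predict(batch, verbose=0)
print(out.shape)  # (1, 8)
```

A dense-only ANN can sometimes mask this because a single 2-D array already looks like a batch of vectors, which would explain the earlier model working fine.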
MultiHeadAttention giving very different values between versions (PyTorch/TensorFlow)
I’m trying to recreate a transformer that was written in PyTorch and make it work in TensorFlow. Everything was going pretty well until the two versions of MultiHeadAttention started giving extremely different outputs. Both are implementations of multi-headed attention as described in the paper “Attention Is All You Need”, so they should be able to achieve the same output. I’m converting
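For reference, the TensorFlow side of such a comparison looks like this. Note that the two layers initialize randomly and pack their projection weights differently, so identical inputs only produce matching outputs after explicitly copying weights from one framework to the other (the dimensions below are illustrative):

```python
import numpy as np
import tensorflow as tf

# TensorFlow's layer; PyTorch's nn.MultiheadAttention stores its projection
# weights in a different layout, so a fair comparison requires copying weights
# across frameworks, not just feeding both layers the same input.
mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=8)

x = np.random.rand(1, 5, 16).astype("float32")  # (batch, seq_len, model_dim)
out = mha(query=x, value=x, key=x)              # self-attention
print(out.shape)  # (1, 5, 16)
```

Also worth checking when porting: PyTorch's layer historically defaults to `(seq, batch, dim)` ordering unless `batch_first=True`, while Keras always uses `(batch, seq, dim)`.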