I have been trying to stack a single LSTM layer on top of BERT embeddings, but whilst my model starts to train, it fails on the last batch and throws the following error message: This is how I build the model, and I honestly cannot figure out what is going wrong here: This is the full output: The code runs
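The model-building code and the error text are not reproduced above, so the cause is unclear; for orientation, here is a minimal sketch of stacking a single LSTM on top of BERT embeddings, assuming the HuggingFace TFBertModel and an illustrative binary classification head:

```python
import tensorflow as tf
from transformers import TFBertModel  # assumes HuggingFace transformers is installed

MAX_LEN = 128  # hypothetical sequence length; depends on the tokenizer settings

bert = TFBertModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# BERT returns per-token embeddings in last_hidden_state: (batch, MAX_LEN, 768).
embeddings = bert(input_ids, attention_mask=attention_mask).last_hidden_state

# A single LSTM layer on top of the token embeddings, then a binary head.
x = tf.keras.layers.LSTM(64)(embeddings)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Failures that appear only on the final batch often come from a smaller last batch; batching the input pipeline with drop_remainder=True is one way to rule that possibility out.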
Tag: keras
Tensorflow Keras Tensor Multiplication with None as First Dimension
I’m using the TensorFlow Keras backend and I have two tensors a, b of the same shape: (None, 4, 7), where None represents the batch dimension. I want to do matrix multiplication, and I’m expecting a result of (None, 4, 4), i.e. for each batch, do one matmul: (4,7)·(7,4) = (4,4). Here’s my code: This code gives a tensor of
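One way to get the (None, 4, 4) result is to let tf.matmul broadcast over the batch dimension and transpose the second operand; a short sketch, reusing the tensor names a and b from the question:

```python
import tensorflow as tf

# Two tensors with an unknown batch dimension, as in the question.
a = tf.keras.Input(shape=(4, 7))   # (None, 4, 7)
b = tf.keras.Input(shape=(4, 7))   # (None, 4, 7)

# tf.matmul broadcasts over the leading batch dimension, so for each
# batch element it computes (4, 7) @ (7, 4) = (4, 4).
c = tf.matmul(a, b, transpose_b=True)   # (None, 4, 4)
```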
AttributeError: module 'keras.api._v2.keras.utils' has no attribute 'Sequential'. I have just started with neural networks, so help would be appreciated.
Answer You should be using tf.keras.Sequential() or tf.keras.models.Sequential(). Also, you need to define a valid loss function. Here is a working example:
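The answer's example did not survive the excerpt; a minimal sketch along the same lines, assuming a toy dense classifier and dummy data (layer sizes and shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

# Use tf.keras.Sequential (or tf.keras.models.Sequential), not keras.utils.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# A valid loss function must be passed to compile().
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data just to show that training runs.
x = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=2, batch_size=16)
```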
How to fix failed assertion `output channels should be divisible by group’ when trying to fit the model in Keras?
I’m trying to use ImageDataGenerator() for my image datasets. Here is my image augmentation code: Then I plug that into my model: I use EarlyStopping: I compile and fit the model: That is when the code crashes and gives this error message. I tried changing the number of output neurons, but that doesn’t work. I don’t know what to do anymore. Please help.
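The augmentation and training code is not reproduced above; a sketch of the workflow the question describes, assuming a directory-based dataset and a small CNN (the path, image size, and layer sizes are illustrative, and this sketch does not by itself address the grouped-convolution assertion):

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Image augmentation with a held-out validation split.
datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                             horizontal_flip=True, validation_split=0.2)

train_gen = datagen.flow_from_directory("data/", target_size=(128, 128),
                                        batch_size=32, class_mode="categorical",
                                        subset="training")
val_gen = datagen.flow_from_directory("data/", target_size=(128, 128),
                                      batch_size=32, class_mode="categorical",
                                      subset="validation")

# A small CNN; the output layer must match the number of classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(train_gen.num_classes, activation="softmax"),
])

early_stop = tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=20, callbacks=[early_stop])
```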
Chronologically Propagating Data into a Keras LSTM
I have a question about using LSTMs for processing data over time: how can I feed data one-by-one into an LSTM without the LSTM forgetting my previous inputs? I have looked through the Keras “stateful” argument a bit, but it only made me more confused, and I’m not sure whether it’s relevant for my purposes.
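For reference, a short sketch of what stateful=True does, assuming data is fed one timestep at a time with a fixed batch_input_shape (shapes and layer sizes are illustrative):

```python
import numpy as np
import tensorflow as tf

# With stateful=True, Keras keeps the hidden/cell state between successive
# calls instead of resetting it after every batch, so data can be fed one
# timestep at a time. A fixed batch size is required.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, stateful=True, batch_input_shape=(1, 1, 3)),
    tf.keras.layers.Dense(1),
])

# Feed a sequence one timestep at a time; the LSTM remembers earlier steps.
for step in np.random.rand(10, 1, 1, 3):
    prediction = model(step)   # step has shape (1, 1, 3); output has shape (1, 1)

# Reset the state explicitly when a new, unrelated sequence starts.
model.reset_states()
```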
Convert tfrecords to image
I found a training dataset which is a set of tfrecords files. I'm trying to convert them into images, but with no results. Is it possible to convert them to images?
Answer To find out what is inside a tf.record, use tf.data.TFRecordDataset and tf.train.Example: To parse the records, use tf.data.TFRecordDataset with tf.io.parse_single_example and tf.io.parse_tensor: Also check the source code of Satellite
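A sketch of the two steps the answer names, first inspecting a record with tf.train.Example and then parsing with tf.io.parse_single_example; the filename, feature names, and dtypes below are assumptions, since every TFRecord file has its own schema:

```python
import tensorflow as tf

dataset = tf.data.TFRecordDataset("train.tfrecord")  # illustrative filename

# 1) Inspect what is inside one record.
for raw_record in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    print(example)  # prints the feature names and types stored in the file

# 2) Parse the records once the schema is known (assumed feature names below).
feature_description = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(raw_record):
    parsed = tf.io.parse_single_example(raw_record, feature_description)
    # Decode the image bytes; tf.io.parse_tensor applies instead if the image
    # was serialized with tf.io.serialize_tensor.
    image = tf.io.decode_jpeg(parsed["image"])
    return image, parsed["label"]

images = dataset.map(parse)
```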
Rotate image for data augmentation using tf keras only in specific angles
In tf keras, it is possible to have a data augmentation layer that performs rotation on each given image during training, in the following way, as the docs say: The factor argument indicates the maximum rotation if a float is given, and the lower and upper limits if a tuple is given. For my specific application, only specific
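For reference, a sketch of the built-in layer the question is about, assuming TF 2.6+ where it lives at tf.keras.layers.RandomRotation (the factor values are illustrative and are fractions of a full turn, not degrees):

```python
import tensorflow as tf

# A single float rotates uniformly in [-0.1, 0.1] * 2*pi radians.
augment_float = tf.keras.layers.RandomRotation(factor=0.1)

# A tuple gives explicit lower and upper limits, here [-0.25, 0.25] * 2*pi,
# i.e. up to a quarter turn in either direction.
augment_range = tf.keras.layers.RandomRotation(factor=(-0.25, 0.25))

# Used as a layer inside a model, the augmentation is only active during training.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    augment_range,
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
])
```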
Tensorflow error: Failed to serialize message. For multi-modal dataset
I am trying to train a model, using a TPU on Colab, which will take two np.ndarray inputs: one for an image of shape (150, 150, 3), and the other for an audio spectrogram image of shape (259, 128, 1). Now I have created my dataset using NumPy arrays as follows: here the shape of each is as follows: I
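A sketch of packaging two NumPy inputs into one tf.data.Dataset for a two-input model (the array names, label array, and input-layer names are illustrative):

```python
import numpy as np
import tensorflow as tf

# Illustrative arrays matching the shapes in the question.
images = np.zeros((1000, 150, 150, 3), dtype=np.float32)
spectrograms = np.zeros((1000, 259, 128, 1), dtype=np.float32)
labels = np.zeros((1000,), dtype=np.int32)

# Pair the two inputs as a dict keyed by the model's input names.
dataset = (
    tf.data.Dataset.from_tensor_slices(
        ({"image_input": images, "audio_input": spectrograms}, labels)
    )
    .shuffle(1000)
    .batch(32, drop_remainder=True)  # TPUs prefer fixed batch sizes
    .prefetch(tf.data.AUTOTUNE)
)
```

Note that from_tensor_slices embeds the arrays as graph constants, so very large arrays can hit the 2 GB protobuf limit and trigger "Failed to serialize message"; writing the data to TFRecords or loading it lazily is the usual remedy.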
Import onnx models to tensorflow2.x?
I created a modified LeNet model using TensorFlow that looks like this: When I finish training, I save the model using tf.keras.models.save_model: Then I transform this model into ONNX format using the “tf2onnx” module: I want a method that can load the same model back into TensorFlow 2.x. I tried to use “onnx_tf” to transform the ONNX model into a TensorFlow .pb model:
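A sketch of the round trip the question describes, assuming the tf2onnx command-line converter and the onnx_tf backend (the paths are illustrative):

```python
# Export the trained Keras model as a SavedModel, then convert it to ONNX
# with tf2onnx (run from the shell):
#
#   python -m tf2onnx.convert --saved-model saved_lenet --output lenet.onnx
#
# To bring the ONNX model back into TensorFlow 2.x, onnx_tf can export a
# SavedModel that tf.saved_model.load can read:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

onnx_model = onnx.load("lenet.onnx")
tf_rep = prepare(onnx_model)         # wraps the ONNX graph in a TF backend
tf_rep.export_graph("lenet_tf")      # writes a TF2 SavedModel directory

restored = tf.saved_model.load("lenet_tf")
```

The result of tf.saved_model.load is a generic SavedModel object rather than a tf.keras.Model, so the Keras training API is not automatically available on it.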
How to draw the precision-recall curve for a segmentation model?
I am using a U-Net for segmenting my data of interest. The masks are grayscale and of size (256, 256, 1). There are 80 images in the test set. The test images (X_ts) and their respective ground-truth masks (Y_ts) are constructed, saved, and loaded like this: The shape of Y_ts (ground truth) is therefore (80, 256, 256, 1) and these are of type “Array of
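One common way to draw the curve is to flatten the per-pixel probabilities and masks and hand them to sklearn.metrics.precision_recall_curve; a sketch using the question's X_ts and Y_ts, with the trained U-Net assumed to be available as model:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# Per-pixel probabilities for the 80 test images: (80, 256, 256, 1).
# `model` is the trained U-Net, assumed to exist already.
Y_pred = model.predict(X_ts)

# Flatten masks and predictions so every pixel is one binary decision.
y_true = (Y_ts.ravel() > 0.5).astype(np.uint8)   # binarize ground truth if needed
y_score = Y_pred.ravel()

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Pixel-wise precision-recall curve")
plt.show()
```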