I'm trying to recreate this work using my own dataset: https://www.kaggle.com/code/amyjang/tensorflow-pneumonia-classification-on-x-rays/notebook I've made some slight tweaks to the code to accommodate my data, but I don't think that is what is causing the issue here, though of course it could be. My code: And the error: I can gather from the error that I have a mismatch in resizing, I
Tag: tensorflow
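Since the poster's code isn't shown, here is only a hedged sketch of the usual fix for this kind of resize mismatch: decode every image and resize it to one fixed shape before batching, as the linked notebook does. IMAGE_SIZE, the file glob, and the batch size below are assumptions, not the poster's actual values.

```python
import tensorflow as tf

IMAGE_SIZE = [180, 180]  # assumed target size; use whatever the model expects

def decode_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)          # force a fixed channel count
    img = tf.image.convert_image_dtype(img, tf.float32)  # scale to [0, 1]
    return tf.image.resize(img, IMAGE_SIZE)              # fixed spatial size

paths = tf.data.Dataset.list_files("images/*.jpeg")      # placeholder glob
ds = paths.map(decode_image, num_parallel_calls=tf.data.AUTOTUNE).batch(32)
```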
Access denied to subclassed Keras model when loaded from a .py script
I have the following subclassed Keras model, which I have already trained. I want to be able to call all the methods of B_frame_CNN (e.g., get_embedding()) on the loaded model. The following code works perfectly and does what I need when run in an IPython notebook. However, when I run it in a Python script (.py), I get the following
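Without the actual script, a minimal sketch of one common workaround: a restored SavedModel only exposes its traced signatures, so custom Python methods such as get_embedding() stay available only if the .py script can import the class itself and load weights into a fresh instance. The class body, input shape, and checkpoint path below are placeholders, not the real B_frame_CNN.

```python
import tensorflow as tf

class B_frame_CNN(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.backbone = tf.keras.layers.Dense(32, activation="relu")
        self.head = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.head(self.backbone(x))

    def get_embedding(self, x):
        return self.backbone(x)          # the custom method we want to keep calling

model = B_frame_CNN()
model.build((None, 128))                 # hypothetical input shape
model.load_weights("b_frame_cnn_ckpt")   # hypothetical TF-checkpoint prefix
embedding = model.get_embedding(tf.zeros((1, 128)))
```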
Model was constructed with shape (None, 65536) but it was called on an input with incompatible shape (None, 65536, None)
For reference, the full error is here: I am using kymatio to classify audio signals. Before constructing the model, I use TensorFlow's tf.keras.utils.audio_dataset_from_directory to create the training and testing sets. The audio samples are of shape (65536,) before the sets are created. To create the sets I use the following code: The element_spec of the train_dataset is (TensorSpec(shape=(None, 65536, None),
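A hedged guess at what the trailing None is: audio_dataset_from_directory adds a channel axis, so elements come out as (batch, 65536, channels) while the model was built for (batch, 65536). A sketch of squeezing that axis back out (the directory name and batch size are assumptions):

```python
import tensorflow as tf

train_dataset = tf.keras.utils.audio_dataset_from_directory(
    "audio/",                        # assumed layout with one sub-folder per class
    batch_size=16,
    output_sequence_length=65536,
)

def drop_channel_axis(audio, labels):
    # (batch, 65536, channels) -> (batch, 65536), matching the model's input
    return tf.squeeze(audio, axis=-1), labels

train_dataset = train_dataset.map(drop_channel_axis)
```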
Masking layer vs attention_mask parameter in MultiHeadAttention
I use the MultiHeadAttention layer in my transformer model (my model is very similar to named entity recognition models). Because my sequences have different lengths, I use padding and the attention_mask parameter of MultiHeadAttention to mask the padding. If I used a Masking layer before MultiHeadAttention, would it have the same effect as the attention_mask parameter? Or should I use both:
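For reference, a sketch of the implicit-mask route, assuming integer token inputs padded with 0: an Embedding layer with mask_zero=True (or a Masking layer) emits a Keras mask, and in recent TF/Keras versions MultiHeadAttention picks that propagated mask up automatically and combines it with any explicit attention_mask, so one of the two is normally enough. Layer sizes and the tag count below are made up.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(10_000, 64, mask_zero=True)(inputs)   # emits a padding mask
x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=64)(query=x, value=x)
outputs = tf.keras.layers.Dense(9, activation="softmax")(x)          # e.g. one unit per NER tag
model = tf.keras.Model(inputs, outputs)
```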
How do I crop an image based on a custom mask
I run my model and get a mask prediction of the area I would like to crop. This code is the closest I have gotten to the desired image; the problem with the image it produces is that it doesn't crop out the black background. My desired image should look like this.
Answer
You could try this:
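One possible approach (not necessarily what the truncated answer goes on to show): crop the image to the tight bounding box of the predicted mask so the surrounding black background disappears; blanking pixels outside the mask is left optional, since the desired output image isn't shown. This assumes a binary (H, W) mask and an (H, W, 3) image.

```python
import numpy as np

def crop_to_mask(image, mask, blank_outside=False):
    """image: (H, W, 3) array, mask: (H, W) binary array from the model."""
    ys, xs = np.where(mask > 0)
    y0, y1 = ys.min(), ys.max() + 1        # tight bounding box of the mask
    x0, x1 = xs.min(), xs.max() + 1
    cropped = image[y0:y1, x0:x1].copy()   # removes the surrounding background
    if blank_outside:
        cropped[mask[y0:y1, x0:x1] == 0] = 0   # optionally blank non-mask pixels
    return cropped
```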
Tensorflow accuracy from model.predict does not match final epoch val_accuracy of model.fit
I am trying to match the accuracy of a model.predict call to the final val_accuracy of model.fit(). I am using a tf.data dataset. The dataset setup for train_ds is similar. I prefetch both… Then I get the labels for the val_ds so I can use them later. My model compiles fine and seems to fit fine. The last epoch output, Epoch 10:
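Since the setup is truncated, only a guess at the usual culprit: collecting labels from a dataset that reshuffles on every iteration, so they no longer line up with the order model.predict sees. A sketch that computes accuracy from predictions and labels gathered in the same pass (it assumes softmax class outputs and integer labels):

```python
import numpy as np

def evaluate_consistently(model, val_ds):
    y_true, y_pred = [], []
    for batch_x, batch_y in val_ds:               # one pass, one fixed order
        probs = model(batch_x, training=False)
        y_pred.append(np.argmax(probs, axis=-1))  # assumes softmax class outputs
        y_true.append(batch_y.numpy())
    y_true = np.concatenate(y_true)
    y_pred = np.concatenate(y_pred)
    return float(np.mean(y_true == y_pred))       # comparable to evaluate()'s accuracy
```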
Tensorflow: tf.cond, how to return multi dim tensors instead of simple values?
When I run my code: I get: run(bool_tensor): [False True False] ValueError: Shape must be rank 0 but is rank 1 for 'cond/Switch' (op: 'Switch') with input shapes: [3], [3]. But I want the second print to show a tensor that evaluates to: [-999 999 -999]. I have looked into other posts but could not find a solution. Thank you
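For the elementwise case, a sketch of the alternative usually suggested: tf.cond requires a scalar (rank-0) predicate, so for a boolean tensor like [False True False] tf.where gives the per-element result directly.

```python
import tensorflow as tf

bool_tensor = tf.constant([False, True, False])
result = tf.where(bool_tensor,
                  tf.fill([3], 999),     # value where the condition is True
                  tf.fill([3], -999))    # value where it is False
print(result.numpy())                    # [-999  999 -999]
```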
Is there any way to implement an early stopping callback for TensorFlow 2 model_main_tf2.py?
Hello, I'm working on object detection using the TensorFlow 2 Object Detection API and its model_main_tf2.py file. Normally we can use an early stopping callback with model.fit(), but when I tried training with model_main_tf2.py and a pipeline .config file I was not able to implement it, because I'm unable to locate model.fit() in the main file. So please, is
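There is no model.fit() to hook into, so any early stopping has to wrap the Object Detection API's own training loop from the outside. Purely as a hedged sketch, one could watch the eval event files the training job writes and stop the process when the monitored scalar stops improving; the metric tag and directory below are assumptions, not an official API hook.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def latest_eval_metric(eval_dir, tag="Loss/total_loss"):
    """Read the most recent value of a scalar from the eval event files."""
    acc = EventAccumulator(eval_dir)
    acc.Reload()
    if tag not in acc.Tags().get("scalars", []):
        return None
    return acc.Scalars(tag)[-1].value
```

A small loop around this helper, re-checking after each evaluation and terminating the training process once the value has not improved for a chosen number of evaluations, would give a crude external form of early stopping.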
Input 0 is incompatible with layer model_2
I have a generator rev_generator that yields a tuple of two elements (a NumPy array of shape (1279, 300, 1) and an int value, 0 or 1). I then pass it to: and then a simple model, but when I call fit it throws me an error:
Answer
If you are using the tf.data.Dataset API, you should set the batch size explicitly and not in
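A sketch of how such a generator is usually wrapped, in line with the truncated answer's advice: declare the per-sample element signature without a batch dimension and batch on the Dataset itself. The dummy generator below just stands in for the real rev_generator.

```python
import numpy as np
import tensorflow as tf

def rev_generator():                      # dummy stand-in for the real generator
    for _ in range(8):
        yield np.zeros((1279, 300, 1), dtype=np.float32), np.int32(0)

ds = tf.data.Dataset.from_generator(
    rev_generator,
    output_signature=(
        tf.TensorSpec(shape=(1279, 300, 1), dtype=tf.float32),   # one sample, no batch dim
        tf.TensorSpec(shape=(), dtype=tf.int32),                 # its 0/1 label
    ),
).batch(4)                                # batch on the Dataset, not elsewhere
```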
Dropping a dimension of a Tensor whose size is bigger than one, in TensorFlow
First I split the original tensor, and after some operations I want to combine the pieces back into the original shape of the original tensor before the split. I'm not sure I can just reuse the old tensor in graph mode in TensorFlow. Each of the four dimensions of tensor_a has a size of at least 2.
Answer
The
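A sketch of the split-then-recombine pattern with made-up shapes (every dimension at least 2, as stated): tf.split along an axis followed by tf.concat along the same axis restores the original shape, and the same code works under @tf.function, i.e. in graph mode.

```python
import tensorflow as tf

tensor_a = tf.random.uniform((2, 4, 6, 8))    # placeholder; every dim >= 2 as stated

parts = tf.split(tensor_a, num_or_size_splits=2, axis=1)   # two (2, 2, 6, 8) pieces
parts = [p * 2.0 for p in parts]              # placeholder for the per-part operations
recombined = tf.concat(parts, axis=1)         # back to the original (2, 4, 6, 8) shape
print(recombined.shape)
```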