I want to use the following code, from a traditional image classification problem, for my regression problem. The code can be found here: GeeksforGeeks – Training Neural Networks with Validation using Pytorch. I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can’t get why the validation loss …
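For context, here is a minimal sketch of the loop pattern being asked about (not the tutorial’s exact code): both losses are accumulated per batch and normalized afterwards, and the identical loop carries over to regression once the criterion is something like nn.MSELoss.

```python
import torch

def run_epoch(model, train_loader, valid_loader, optimizer, criterion):
    model.train()
    train_loss = 0.0
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()

    model.eval()
    valid_loss = 0.0
    with torch.no_grad():
        for x, y in valid_loader:
            valid_loss += criterion(model(x), y).item()

    # Dividing by len(loader) averages the per-batch losses; dividing by
    # len(loader.dataset) gives a per-sample average. Both are valid and
    # differ only in scale, which is why tutorials mix the two conventions.
    return train_loss / len(train_loader), valid_loss / len(valid_loader)
```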
Tag: deep-learning
Incompatibility between input and final Dense layer (ValueError)
I’m following this tutorial from Nabeel Ahmed to create my own emotion detector using Keras (I’m a noob) and I’ve found a strange behaviour that I’d like to understand. The input data is a bunch of 48×48 images, each one with an integer value between 0 and 6 (each number stands for an emotion label), which represents the emotion present …
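The usual culprit in this setup is the label encoding versus the final Dense layer. A hedged sketch (assumed shapes, not the tutorial’s exact model):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: 48x48 grayscale inputs, integer labels 0..6.
num_classes = 7
model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(num_classes, activation="softmax"),  # must match the 7 labels
])
# Integer labels pair with sparse_categorical_crossentropy; plain
# categorical_crossentropy expects one-hot rows of length num_classes,
# and mixing the two is a classic source of this kind of ValueError.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```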
Unable to convert tensorflow.python.framework.ops.Tensor object to numpy array for passing it to the sklearn.metrics.cohen_kappa_score function
I thought of implementing a kappa-score metric using sklearn.metrics.cohen_kappa_score. Here is the error I get when I try to run this code: The y_true and y_pred arguments are required to be a list or a NumPy array, but the types of y_true and y_pred are … When I try to print them directly (i.e., without the type() function), it shows this: I am unable to use y_true.numpy() (Convert …
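Inside a compiled (graph-mode) Keras metric, y_true and y_pred are symbolic tensors with no .numpy() method, which is what the error is complaining about. One common workaround (a sketch, assuming y_true holds integer labels and y_pred holds per-class probabilities) is to wrap the sklearn call in tf.py_function, whose arguments are concrete eager tensors that do support .numpy():

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import cohen_kappa_score

def kappa_metric(y_true, y_pred):
    def _kappa(yt, yp):
        # yt/yp arrive here as eager tensors, so .numpy() works.
        return np.float32(cohen_kappa_score(yt.numpy().ravel(),
                                            np.argmax(yp.numpy(), axis=-1)))
    return tf.py_function(_kappa, [y_true, y_pred], tf.float32)
```

Note that tf.py_function runs outside the graph, so it costs performance and does not work in all deployment settings; computing the score on predictions after training is the simpler alternative.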
How to use tf.repeat() to replicate a specific column/row/slice?
This thread explains the use of tf.repeat() as a TensorFlow alternative to np.repeat() well. One piece of functionality I was unable to figure out: in np.repeat(), a specific column/row/slice can be replicated by supplying the index, e.g. … Is there any TensorFlow alternative to this functionality of np.repeat()? Answer: You could use the repeats parameter of tf.repeat: … where you get the first …
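Filling in the idea from the answer with a small sketch: the repeats argument may be a vector with one entry per element along the chosen axis, so only the indexed row gets replicated, analogous to np.repeat(x, [1, 3, 1], axis=0).

```python
import tensorflow as tf

x = tf.constant([[1, 2],
                 [3, 4],
                 [5, 6]])

# Repeat row 1 three times and every other row once.
out = tf.repeat(x, repeats=[1, 3, 1], axis=0)
# out -> [[1, 2], [3, 4], [3, 4], [3, 4], [5, 6]]
```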
TensorFlow TextVectorization producing Ragged Tensor with no padding after loading it from pickle
I have a TensorFlow TextVectorization layer named “eng_vectorization”, and I saved it in a pickle file using this code: Then I load that pickle file properly as new_eng_vectorization: Now I expect both the previous vectorization, eng_vectorization, and the newly loaded vectorization, new_eng_vectorization, to work the same, but they do not. The output of the original vectorization, eng_vectorization([‘Hello people’]), is a Tensor: And …
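A sketch of the config/weights round trip presumably being used (assumed to mirror the setup above, with illustrative parameters): the key detail is that the rebuilt layer must be built before its weights are restored, or it can fall back to ragged, unpadded output.

```python
import pickle
import tensorflow as tf

eng_vectorization = tf.keras.layers.TextVectorization(
    max_tokens=5000, output_mode="int", output_sequence_length=10)
eng_vectorization.adapt(["Hello people", "How are you"])

# Pickle the config and weights rather than the layer object itself.
with open("eng_vectorization.pkl", "wb") as f:
    pickle.dump({"config": eng_vectorization.get_config(),
                 "weights": eng_vectorization.get_weights()}, f)

with open("eng_vectorization.pkl", "rb") as f:
    saved = pickle.load(f)

new_eng_vectorization = tf.keras.layers.TextVectorization.from_config(saved["config"])
# Adapt on a dummy sample first so the layer is built; only then restore
# the real vocabulary weights. Skipping this step is a common cause of the
# ragged, unpadded output described above.
new_eng_vectorization.adapt(tf.data.Dataset.from_tensor_slices(["dummy"]))
new_eng_vectorization.set_weights(saved["weights"])
```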
Giving the output of one neural network as an input to another in PyTorch
I have a pretrained convolutional neural network which produces an output of shape (X, 164), where X is the number of test examples, so the output layer has 164 nodes. I want to take this output and feed it to another network, which is simply a fully connected neural network whose first layer has 64 nodes and whose output layer has 1 …
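Chaining the two is just a matter of calling one module on the other’s output. A sketch under the stated shapes, with a hypothetical stand-in for the pretrained CNN:

```python
import torch
from torch import nn

pretrained_model = nn.Linear(32, 164)   # stand-in that also emits (X, 164)
head = nn.Sequential(
    nn.Linear(164, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

inputs = torch.randn(8, 32)             # X = 8 dummy examples
features = pretrained_model(inputs)     # shape (8, 164)
features = features.detach()            # remove to fine-tune end to end
predictions = head(features)            # shape (8, 1)
```

The detach() call freezes the pretrained network by cutting the gradient flow; dropping it (and leaving its parameters unfrozen) trains both networks jointly.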
ValueError: Dimensions must be equal, but are 96 and 256, on TPU in TensorFlow
I am trying to create an MNIST GAN which will use a TPU. I copied the GAN code from here. Then I made some of my own modifications to run the code on a TPU. To make the changes I followed this tutorial on the TensorFlow website, which shows how to use a TPU with TensorFlow. But that’s not working and raises an error. Here is …
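For reference, this is the standard TPU bootstrapping sequence (a sketch, not the poster’s code). One frequent cause of shape mismatches like “96 and 256” under a distribution strategy, though only an assumption here, is a batch or feature size hard-coded in the model that no longer matches the per-replica shapes once the global batch is split across TPU cores:

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Derive the global batch from the per-replica batch instead of hard-coding it.
per_replica_batch = 32
global_batch = per_replica_batch * strategy.num_replicas_in_sync

with strategy.scope():
    pass  # build the generator and discriminator here so variables replicate
```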
Resize feature vector from neural network
I am trying to perform a task of approximating two embeddings (textual and visual). For the visual embedding, I am using VGG as the encoder; its output is a 1×1000 embedding. For the textual encoder, I am using a Transformer whose output is shaped 1×712. What I want is to convert both these vectors to the same dimension …
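The standard move is a learned linear projection from each encoder into a shared space. A sketch in PyTorch for concreteness (the shared size of 512 is an arbitrary assumption), comparing the projected embeddings with cosine similarity:

```python
import torch
from torch import nn

common_dim = 512
visual_proj = nn.Linear(1000, common_dim)   # VGG emits 1x1000
text_proj = nn.Linear(712, common_dim)      # Transformer emits 1x712

visual = torch.randn(1, 1000)
text = torch.randn(1, 712)
similarity = nn.functional.cosine_similarity(
    visual_proj(visual), text_proj(text))   # shape (1,)
```

Both projection layers are trainable, so they can be optimized jointly with whatever alignment loss (cosine, contrastive, MSE) the task uses.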
A bug in tf.keras.layers.TextVectorization when built from saved configs and weights
I have tried writing a Python program to save tf.keras.layers.TextVectorization to disk and load it, following the answer to How to save TextVectorization to disk in tensorflow?. The TextVectorization layer built from the saved config outputs a vector of the wrong length when the argument output_sequence_length is not None and output_mode=’int’. For example, if I set output_sequence_length=10 and output_mode=’int’, it is …
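A minimal sketch of the behaviour being reported: after rebuilding from the saved config, the layer should still pad or truncate to output_sequence_length=10, and the claim is that it does not.

```python
import tensorflow as tf

layer = tf.keras.layers.TextVectorization(output_mode="int",
                                          output_sequence_length=10)
layer.adapt(["a short sample sentence"])

# Rebuild from config and restore the vocabulary explicitly.
restored = tf.keras.layers.TextVectorization.from_config(layer.get_config())
restored.set_vocabulary(layer.get_vocabulary())

print(layer(["a short sample"]).shape)     # (1, 10), as configured
print(restored(["a short sample"]).shape)  # reportedly wrong when the bug bites
```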
TypeError: multiple values for argument ‘weight_decay’
I am using an AdamW optimizer with a cosine-decay learning-rate scheduler with warmup. I have written the custom scheduler from scratch and am using the AdamW optimizer provided by the TensorFlow Addons library. I get the following error, which says that weight_decay received multiple values: What is causing the problem and how do I resolve it? Answer: The …
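The likely cause (an assumption, since the poster’s call site is not shown): weight_decay is the first positional parameter of tfa.optimizers.AdamW, so calling AdamW(lr_schedule, weight_decay=1e-4) binds the schedule to weight_decay positionally and the keyword then collides with it, producing “got multiple values for argument ‘weight_decay’”. Passing everything by keyword avoids the clash:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Stand-in schedule (the poster wrote a custom warmup + cosine one).
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10_000)

# Wrong (sketch of the suspected bug):
#   optimizer = tfa.optimizers.AdamW(lr_schedule, weight_decay=1e-4)
# Right: name every argument explicitly.
optimizer = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=lr_schedule)
```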