
Tag: deep-learning

Difference between the calculation of the training loss and validation loss using PyTorch

I want to use the code from this traditional image classification example for my regression problem. The code can be found here: GeeksforGeeks – Training Neural Networks with Validation using PyTorch. I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can't see why the validation loss …
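
As a rough illustration of the convention the question is about, here is a minimal PyTorch sketch (assuming a model, an optimizer, and train_loader/valid_loader DataLoaders already exist, and using an MSE loss for the regression case) in which both the training loss and the validation loss are accumulated per sample and divided by the size of the respective dataset:

    import torch
    import torch.nn as nn

    criterion = nn.MSELoss()  # regression loss; the linked tutorial uses a classification loss

    def run_epoch(model, optimizer, train_loader, valid_loader):
        model.train()
        train_loss = 0.0
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            # loss.item() is the mean over the batch, so weight it by the batch size
            train_loss += loss.item() * x.size(0)
        train_loss /= len(train_loader.dataset)  # average over all training samples

        model.eval()
        valid_loss = 0.0
        with torch.no_grad():
            for x, y in valid_loader:
                valid_loss += criterion(model(x), y).item() * x.size(0)
        valid_loss /= len(valid_loader.dataset)  # same convention for the validation set
        return train_loss, valid_loss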

Unable to convert tensorflow.python.framework.ops.Tensor object to numpy array for passing it to the sklearn.metrics.cohen_kappa_score function

I thought of implementing a kappa-score metric using sklearn.metrics.cohen_kappa_score. This is the error I get when I try to run this code: Here, y_true and y_pred need to be lists or numpy arrays, but the types of y_true and y_pred are: When I try to print them directly (i.e., without the type() function), they look like this: Unable to use y_true.numpy() (Convert …
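
For reference, an eager TensorFlow tensor can be handed to scikit-learn after a .numpy() call; the labels below are made up for illustration. If the metric runs inside a compiled Keras graph, the tensors are symbolic and .numpy() is not available, which is the usual cause of this kind of error:

    import tensorflow as tf
    from sklearn.metrics import cohen_kappa_score

    y_true = tf.constant([0, 1, 2, 1, 0])  # illustrative labels
    y_pred = tf.constant([0, 2, 2, 1, 0])

    # Works only on eager tensors; inside a tf.function / compiled metric this fails
    kappa = cohen_kappa_score(y_true.numpy(), y_pred.numpy())
    print(kappa)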

How to use tf.repeat() to replicate a specific column/row/slice?

This thread explains well the use of tf.repeat() as a TensorFlow alternative to np.repeat(). One piece of functionality I was unable to figure out: in np.repeat(), a specific column/row/slice can be replicated by supplying the index, e.g. Is there any TensorFlow alternative to this functionality of np.repeat()? Answer You could use the repeats parameter of tf.repeat: where you get the first …
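
The teaser cuts off, but the repeats parameter it mentions accepts one count per element along the chosen axis, so a single column can be duplicated by giving it a larger count. A small sketch with made-up data:

    import tensorflow as tf

    x = tf.constant([[1, 2, 3],
                     [4, 5, 6]])

    # One repeat count per column: the first column appears twice, the others once,
    # mirroring np.repeat(x, [2, 1, 1], axis=1)
    y = tf.repeat(x, repeats=[2, 1, 1], axis=1)
    # y -> [[1, 1, 2, 3],
    #       [4, 4, 5, 6]]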

TensorFlow TextVectorization producing Ragged Tensor with no padding after loading it from pickle

I have a TensorFlow TextVectorization layer named "eng_vectorization": and I saved it to a pickle file using this code: Then I load that pickle file properly as new_eng_vectorization: Now I expect both the original vectorization, eng_vectorization, and the newly loaded vectorization, new_eng_vectorization, to work the same, but they do not. The output of the original vectorization, eng_vectorization(['Hello people']), is a Tensor: And …
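
The exact save/load code is not shown in the excerpt, but one common variant of that pattern pickles the layer's config together with its vocabulary and rebuilds the layer from them. A minimal sketch, with a placeholder corpus and file name:

    import pickle
    import tensorflow as tf

    eng_vectorization = tf.keras.layers.TextVectorization(output_mode='int', output_sequence_length=10)
    eng_vectorization.adapt(['Hello people', 'How are you'])  # placeholder corpus

    # Persist the layer's config and vocabulary rather than the layer object itself
    with open('eng_vectorization.pkl', 'wb') as f:
        pickle.dump({'config': eng_vectorization.get_config(),
                     'vocabulary': eng_vectorization.get_vocabulary()}, f)

    with open('eng_vectorization.pkl', 'rb') as f:
        saved = pickle.load(f)

    new_eng_vectorization = tf.keras.layers.TextVectorization.from_config(saved['config'])
    new_eng_vectorization.set_vocabulary(saved['vocabulary'])

    print(eng_vectorization(['Hello people']))      # dense tensor, padded to length 10
    print(new_eng_vectorization(['Hello people']))  # the question reports a ragged, unpadded tensor here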

Resize feature vector from neural network

I am trying to perform a task of approximating two embeddings (textual and visual). For the visual embedding, I am using VGG as the encoder; its output is a 1×1000 embedding. For the textual encoder, I am using a Transformer whose output is shaped 1×712. What I want is to convert both these vectors to the same dimension …
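
One common way to do that, sketched here in PyTorch with an assumed shared size of 512 and random tensors standing in for the real embeddings, is to learn a separate linear projection per modality into a shared space:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    visual_emb = torch.randn(1, 1000)  # stand-in for the 1x1000 VGG output
    text_emb = torch.randn(1, 712)     # stand-in for the 1x712 Transformer output

    shared_dim = 512  # assumed target size; any common dimension works

    visual_proj = nn.Linear(1000, shared_dim)
    text_proj = nn.Linear(712, shared_dim)

    v = visual_proj(visual_emb)  # (1, 512)
    t = text_proj(text_emb)      # (1, 512)

    # Both embeddings now live in the same space, e.g. for a cosine-similarity loss
    similarity = F.cosine_similarity(v, t)
    print(similarity.shape)  # torch.Size([1])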

A bug in tf.keras.layers.TextVectorization when built from saved configs and weights

I have tried writing a Python program to save tf.keras.layers.TextVectorization to disk and load it, following the answer to How to save TextVectorization to disk in tensorflow?. The TextVectorization layer built from the saved configs outputs a vector with the wrong length when the argument output_sequence_length is not None and output_mode='int'. For example, if I set output_sequence_length=10 and output_mode='int', it is …
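
The report boils down to comparing the output shape of the original layer with that of the layer rebuilt from its config and weights. A minimal check in the spirit of the referenced answer might look like the sketch below; the corpus is made up, and the extra adapt() on dummy data is the step that answer uses to build the new layer before restoring its weights:

    import tensorflow as tf

    original = tf.keras.layers.TextVectorization(output_mode='int', output_sequence_length=10)
    original.adapt(['a short example sentence', 'another example sentence'])

    # Rebuild from the saved config and weights, as in the referenced answer
    rebuilt = tf.keras.layers.TextVectorization.from_config(original.get_config())
    rebuilt.adapt(tf.data.Dataset.from_tensor_slices(['xyz']))  # dummy adapt, only to build the layer
    rebuilt.set_weights(original.get_weights())

    print(original(['a short example sentence']).shape)  # expected: (1, 10)
    print(rebuilt(['a short example sentence']).shape)   # the question reports a different length here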
