
Tag: deep-learning

How does the Tokenizer in TensorFlow deal with out-of-vocabulary tokens if I don't provide oov_token?

I didn't get any error with that code, even though I didn't provide the oov_token argument. I expected to get an error at test_tweets = tokenizer.texts_to_sequences(X_test). How does TensorFlow deal with out-of-vocabulary words at test time when you don't provide the oov_token? Answer OOV words will be ignored / discarded by default, if oov_token is None: …
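A minimal sketch of that default behavior (the sentences and variable names here are illustrative, not from the original question):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# No oov_token supplied, so unseen words are silently dropped
tokenizer = Tokenizer()
tokenizer.fit_on_texts(["the cat sat on the mat"])

# "dog" was never seen during fitting and simply disappears
print(tokenizer.texts_to_sequences(["the dog sat"]))  # [[1, 3]]

# With an oov_token, unseen words map to a reserved index instead
tokenizer_oov = Tokenizer(oov_token="<OOV>")
tokenizer_oov.fit_on_texts(["the cat sat on the mat"])
print(tokenizer_oov.texts_to_sequences(["the dog sat"]))  # [[2, 1, 4]]
```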

Trying to replace the NaN values with pandas, but Error: Columns must be same length as key

It is a simple project on Kaggle, just imitating one blog post, but it failed. train_inf['Age'] = train_inf.fillna(train_inf['Age'].median()) raises ValueError: Columns must be same length as key. Just this code. I have been searching the net for a long time, but to no avail. Please help, or try to give some ideas on how to achieve this. Thanks in advance. Answer You are close, need …
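The error comes from calling fillna on the whole DataFrame, which returns a multi-column frame that cannot be assigned to a single column. A minimal sketch of the failure and the fix, with hypothetical data standing in for train_inf:

```python
import numpy as np
import pandas as pd

train_inf = pd.DataFrame({"Age": [22.0, np.nan, 38.0],
                          "Fare": [7.25, 71.28, 8.05]})

# Broken: train_inf.fillna(...) returns the whole (multi-column) frame,
# which cannot be assigned to the single "Age" column:
# train_inf["Age"] = train_inf.fillna(train_inf["Age"].median())
# -> ValueError: Columns must be same length as key

# Fix: call fillna on the "Age" Series itself
train_inf["Age"] = train_inf["Age"].fillna(train_inf["Age"].median())
print(train_inf)
```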

Difference between the calculation of the training loss and validation loss using PyTorch

I want to use the following code from this traditional image classification problem for my regression problem. The code can be found here: GeeksforGeeks-Training Neural Networks with Validation using PyTorch. I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can't get why the validation loss …
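For context, a minimal sketch of the usual bookkeeping, where both phases accumulate per-batch losses and divide by the number of batches at the end (the function and loader names are illustrative, not from the linked tutorial):

```python
import torch

def run_epoch(model, loader, loss_fn, optimizer=None):
    """Average the per-batch loss over the whole loader.

    Training and validation use the same arithmetic: accumulate the
    batch losses and divide by the number of batches; validation just
    skips the gradient step.
    """
    training = optimizer is not None
    model.train(training)
    total_loss = 0.0
    with torch.set_grad_enabled(training):
        for inputs, targets in loader:
            outputs = model(inputs)
            loss = loss_fn(outputs, targets)
            if training:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            total_loss += loss.item()
    return total_loss / len(loader)
```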

Unable to convert tensorflow.python.framework.ops.Tensor object to numpy array for passing it to the sklearn.metrics.cohen_kappa_score function

I thought of implementing a kappa-score metric using sklearn.metrics.cohen_kappa_score. This is the error I get when I try to run the code: Here the types of y_true and y_pred need to be a list or a NumPy array, but the types of y_true and y_pred are: When I try to print them directly (i.e., without the type() function), it shows like this: Unable to use y_true.numpy() (Convert …
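One commonly suggested workaround (a sketch, not the asker's code): inside a compiled Keras graph, y_true and y_pred are symbolic tensors with no .numpy() method, so the sklearn call can be deferred with tf.py_function, which receives eager tensors at run time. The argmax reduction from one-hot outputs to integer labels is an assumption here:

```python
import tensorflow as tf
from sklearn.metrics import cohen_kappa_score

def kappa_metric(y_true, y_pred):
    # Symbolic (graph-mode) tensors cannot be converted with .numpy();
    # tf.py_function runs the inner function eagerly during execution.
    def _kappa(yt, yp):
        # Assumption: one-hot / probability outputs, reduced to class labels
        return cohen_kappa_score(yt.numpy().argmax(axis=-1),
                                 yp.numpy().argmax(axis=-1))
    return tf.py_function(_kappa, [y_true, y_pred], tf.double)
```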

How to use tf.repeat() to replicate a specific column/row/slice?

This thread explains well the use of tf.repeat() as a TensorFlow alternative to np.repeat(). One functionality I was unable to figure out: in np.repeat(), a specific column/row/slice can be replicated by supplying the index, e.g. … Is there any TensorFlow alternative to this functionality of np.repeat()? Answer You could use the repeats parameter of tf.repeat: where you get the first …
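A sketch of the repeats-vector idea: a per-element repeats list along the chosen axis replicates only the selected row while leaving the others alone (the tensor values are illustrative):

```python
import tensorflow as tf

x = tf.constant([[1, 2],
                 [3, 4],
                 [5, 6]])

# Repeat only the second row three times, keeping the others once;
# the per-row repeats vector plays the role of np.repeat's index trick.
y = tf.repeat(x, repeats=[1, 3, 1], axis=0)
# y -> [[1, 2], [3, 4], [3, 4], [3, 4], [5, 6]]
```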

TensorFlow TextVectorization producing Ragged Tensor with no padding after loading it from pickle

I have a TensorFlow TextVectorization layer named "eng_vectorization": and I saved it in a pickle file using this code: Then I load that pickle file properly as new_eng_vectorization: Now I am expecting both the previous vectorization eng_vectorization and the newly loaded vectorization new_eng_vectorization to work the same, but they do not. The output of the original vectorization, eng_vectorization(['Hello people']), is a Tensor: And …
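A commonly suggested pattern for persisting the layer (a sketch, under the assumption that the ragged output means padding-related config such as output_sequence_length was lost in pickling): save the layer's config and vocabulary weights, then rebuild with from_config and set_weights:

```python
import pickle
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization

eng_vectorization = TextVectorization(output_mode="int",
                                      output_sequence_length=5)
eng_vectorization.adapt(["Hello people", "How are you"])

# Persist config + vocabulary weights rather than the layer object itself;
# output_sequence_length lives in the config, and losing it is what makes
# a reloaded layer emit a ragged (unpadded) tensor.
with open("eng_vectorization.pkl", "wb") as f:
    pickle.dump({"config": eng_vectorization.get_config(),
                 "weights": eng_vectorization.get_weights()}, f)

with open("eng_vectorization.pkl", "rb") as f:
    saved = pickle.load(f)

new_eng_vectorization = TextVectorization.from_config(saved["config"])
# Some TF versions need the layer built before set_weights; adapting on a
# placeholder string is a common workaround.
new_eng_vectorization.adapt(["placeholder"])
new_eng_vectorization.set_weights(saved["weights"])

# Both layers should now emit a dense, padded int tensor of shape (1, 5)
print(eng_vectorization(["Hello people"]))
print(new_eng_vectorization(["Hello people"]))
```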