I have a trained model whose embedding layer uses GloVe as a pre-trained embedding dictionary. I first build the tokenizer and text sequences with t = Tokenizer() and t.fit_on_texts(all_text), and then calculate the embedding matrix. Now I’m using a new dataset for prediction, and this leads to an error: Node: ‘model/synopsis_embedd/embedding_lookup’ indices[38666,63] = 136482
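The post’s own code wasn’t preserved; below is a minimal sketch of how such an embedding matrix is typically built from a Keras Tokenizer and a GloVe lookup (glove_vectors, embedding_dim, and all_text are placeholder names, not from the original post):

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer

t = Tokenizer()
t.fit_on_texts(all_text)  # fit on the original training text only

vocab_size = len(t.word_index) + 1  # index 0 is reserved for padding
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, idx in t.word_index.items():
    vector = glove_vectors.get(word)  # hypothetical dict: word -> GloVe vector
    if vector is not None:
        embedding_matrix[idx] = vector
```

The error message itself says that index 136482 falls outside the embedding table, which typically happens when the new dataset is tokenized with a different (or refitted) tokenizer than the one used to size the Embedding layer’s input_dim.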
How can I plot the training and validation accuracy graphs, and the training and validation loss graphs?
I need to plot the training and validation accuracy, and the training and validation loss, for my model. Answer: The history object contains both accuracy and loss for the training as well as the validation set. We can use matplotlib to plot from it. In these plots the x-axis is the number of epochs and the y-axis is the accuracy or loss value. Below is one example:
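A minimal sketch, assuming history is the object returned by model.fit(...) with validation_data supplied (model, x_train, y_train, x_val, and y_val are placeholders, and the 'accuracy'/'val_accuracy' keys depend on the metrics passed to compile):

```python
import matplotlib.pyplot as plt

history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)

# accuracy curves
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()

# loss curves
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```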
ImportError: dlopen(…): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
I am a beginner at machine learning. I tried to use the LSTM algorithm, but when I write from keras.models import Sequential it shows the error below. How can I fix this? Thank you so much! Full error message: Answer: Problem solved. I installed TensorFlow again and changed the import.
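The exact commands from the answer weren’t preserved; a sketch of the usual fix, assuming the problem was importing the standalone keras package instead of the one bundled with TensorFlow:

```python
# first reinstall TensorFlow, e.g. with:  pip install --upgrade tensorflow
# then import Keras through the tensorflow package rather than standalone keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([LSTM(32, input_shape=(10, 8)), Dense(1)])
```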
Behavior of Dataset.map in Tensorflow
I’m trying to take variable-length tensors and split them up into tensors of length 4, discarding any extra elements (if the length is not divisible by four). I’ve therefore written the following function. This produces the output [<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 2, 3, 4], dtype=int32)>], as expected. If I now run the same function using Dataset.map, I instead get a different result.
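The original function wasn’t preserved; here is one version consistent with the described output, along with the eager-versus-graph difference that Dataset.map introduces (the sample input is an assumption):

```python
import tensorflow as tf

def split_into_fours(x):
    n = (tf.size(x) // 4) * 4            # drop elements beyond a multiple of 4
    chunks = tf.reshape(x[:n], (-1, 4))
    return tf.unstack(chunks)            # Python list of length-4 tensors

# eager call: the leading dimension is concrete, so unstack works
print(split_into_fours(tf.constant([1, 2, 3, 4, 5], dtype=tf.int32)))
# -> [<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 2, 3, 4], dtype=int32)>]

# under Dataset.map the function is traced with symbolic tensors, the sliced
# length is unknown, and tf.unstack cannot infer how many tensors to return
ds = tf.data.Dataset.from_tensors(tf.constant([1, 2, 3, 4, 5]))
ds = ds.map(split_into_fours)  # ValueError: Cannot infer argument `num` from shape
```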
Transfer Learning with Quantization Aware Training using Functional API
I have a model that uses transfer learning from MobileNetV2, and I’d like to quantize it and compare the accuracy difference against a non-quantized transfer-learning model. However, the TensorFlow Model Optimization Toolkit does not entirely support recursive quantization, but according to this comment, this method should quantize my model: https://github.com/tensorflow/model-optimization/issues/377#issuecomment-820948555 What I tried is below, but it is still giving me the error.
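The attempted code wasn’t preserved; a sketch of the annotate-then-apply workaround from that issue thread (input shape, head layers, and class count are assumptions, and some MobileNetV2 layers may still be unsupported):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

def annotate(layer):
    # InputLayers cannot be wrapped, so pass them through unchanged
    if isinstance(layer, tf.keras.layers.InputLayer):
        return layer
    return tfmot.quantization.keras.quantize_annotate_layer(layer)

# clone the inner model with every layer annotated, then apply quantization
annotated = tf.keras.models.clone_model(base, clone_function=annotate)
quantized_base = tfmot.quantization.keras.quantize_apply(annotated)

# attach a classification head with the functional API
inputs = tf.keras.Input(shape=(224, 224, 3))
x = quantized_base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```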
Changing graph dataset matrices from sparse format to dense
I am trying to use the CoRA dataset to train a graph neural network on tensorflow for the first time. The feature and adjacency matrices provided by the dataset come in a sparse representation, but I don’t need that here. Thus, I wanted to use numpy’s todense(), but it turns out it doesn’t exist. For your reference, here is the relevant code:
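Note that todense() is a scipy.sparse method rather than a numpy one; below is a sketch of the usual conversions, assuming the matrices arrive as scipy sparse matrices or tf.SparseTensors (variable names are placeholders):

```python
import scipy.sparse as sp
import tensorflow as tf

adj = sp.random(5, 5, density=0.2, format="csr")  # stand-in for the adjacency matrix

# scipy sparse -> dense numpy array
adj_dense = adj.toarray()  # .todense() also exists here, but returns np.matrix

# tf.SparseTensor -> dense tf.Tensor
st = tf.sparse.from_dense(tf.constant(adj_dense, dtype=tf.float32))
adj_dense_tf = tf.sparse.to_dense(st)
```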
How to create a tensor from another tensor and a number, like tf.constant?
I want to use the value in a tensor to create another tensor, but I got the following error: How can I use the value in tensor a? Answer: You can use tf.stack. Check the function:
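A minimal sketch (the original tensors and error weren’t preserved; a is assumed to be a scalar tensor):

```python
import tensorflow as tf

a = tf.constant(3.0)

# tf.constant([a, 1.0]) would fail, since tf.constant cannot embed an existing
# tensor; tf.stack happily mixes tensors and Python numbers
b = tf.stack([a, 1.0])
print(b)  # tf.Tensor([3. 1.], shape=(2,), dtype=float32)
```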
`torch.gather` without unbroadcasting
I have some batched input x of shape [batch, time, feature], and some batched indices i of shape [batch, new_time] which I want to gather into the time dim of x. As output of this operation I want a tensor y of shape [batch, new_time, feature] with values like this: y[b, t, f] = x[b, i[b, t], f]. In Tensorflow, I can accomplish this by using the batch_dims argument of tf.gather:
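A sketch of both sides, assuming the shapes above (the TensorFlow call is the standard batch_dims gather; the PyTorch line is the usual expand-then-gather workaround the title alludes to):

```python
import tensorflow as tf
import torch

# TensorFlow: batch_dims=1 gathers i[b, t] along the time axis per batch
x_tf = tf.random.normal([2, 5, 3])          # [batch, time, feature]
i_tf = tf.constant([[0, 2], [1, 4]])        # [batch, new_time]
y_tf = tf.gather(x_tf, i_tf, batch_dims=1)  # [batch, new_time, feature]

# PyTorch: torch.gather requires the index to have the same rank as the output,
# so the index is expanded ("unbroadcast") over the feature dimension first
x = torch.randn(2, 5, 3)
i = torch.tensor([[0, 2], [1, 4]])
y = torch.gather(x, 1, i.unsqueeze(-1).expand(-1, -1, x.size(-1)))
```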
Why does the MSE loss have a sudden jump?
I’m working on a regression problem using a neural network. The MSE loss decreases at the beginning of training and the accuracy is satisfactory, yet as training goes on the loss takes a huge jump and then stays at a certain value, like the curve in the picture. I don’t know why this happens, and I want to know how to fix it.
Tensorflow – Dense and Convolutional layers connection
I’m new to Deep Learning and I can’t find anywhere how to build the bottleneck of my AE with convolutional and dense layers. The code below is the specific part where I’m struggling. I tried some solutions, like flattening and reshaping, but nothing seems to work here. The point is that I need the latent space to be a dense layer.
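That code wasn’t preserved; a sketch of the usual conv → Flatten → Dense bottleneck → Dense → Reshape → transposed-conv wiring (all sizes are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(28, 28, 1))

# encoder: conv stack down to a small spatial map
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)  # (7, 7, 64)

# bottleneck: flatten the conv output so the latent space is a Dense layer
x = layers.Flatten()(x)
latent = layers.Dense(16, activation="relu", name="latent")(x)

# decoder: Dense back to the flattened size, Reshape, then transposed convs
x = layers.Dense(7 * 7 * 64, activation="relu")(latent)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = tf.keras.Model(inputs, outputs)
```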