
Tag: tensorflow

Prediction with keras embedding leads to indices not in list

I have a model trained with Keras. For the embedding I use GloVe as a pre-trained embedding dictionary. I first build the tokenizer and text sequences with `t = Tokenizer()` and `t.fit_on_texts(all_text)`, and then calculate the embedding matrix. Now I'm using a new dataset for prediction, and this leads to an error: `Node: 'model/synopsis_embedd/embedding_lookup' indices[38666,63] = 136482`
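This error typically means the new dataset contains words the tokenizer never saw during training, so their indices fall outside the embedding matrix. A minimal sketch of the problem and the usual fix (a reserved out-of-vocabulary index), using plain Python rather than Keras so the mechanics are visible — the helper names here are illustrative, not Keras API:

```python
# Sketch of why prediction can hit indices outside the embedding matrix,
# and the usual fix: map any token unseen during training to a reserved
# OOV index (Keras Tokenizer does this via the oov_token argument).
def build_vocab(texts, oov_token="<OOV>"):
    vocab = {oov_token: 1}  # Keras convention: index 0 is reserved for padding
    for text in texts:
        for word in text.split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab

def texts_to_sequences(texts, vocab, oov_token="<OOV>"):
    oov = vocab[oov_token]
    return [[vocab.get(w, oov) for w in t.split()] for t in texts]

train = ["the cat sat", "the dog ran"]
vocab = build_vocab(train)
embedding_rows = len(vocab) + 1  # embedding matrix must cover every index, plus padding 0

# "fox" was never seen in training; without OOV handling its index would
# be >= embedding_rows, which is exactly the embedding_lookup error above.
seqs = texts_to_sequences(["the fox ran"], vocab)
assert all(i < embedding_rows for s in seqs for i in s)
```

The key point: the tokenizer fitted on the training text must be reused, unchanged, for the prediction data, and every index it can emit must be smaller than the embedding matrix's row count.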

How can I print the training and validation graphs, and training and validation loss graphs?

I need to plot the training and validation accuracy graphs, and the training and validation loss graphs, for my model. Answer: the history object returned by model.fit() contains both accuracy and loss for the training set as well as the validation set, and we can use matplotlib to plot from it. In these plots the x-axis is the number of epochs and the y-axis is the accuracy or loss value. Below is one example.
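A sketch of such a plotting helper. The `fake` dict below stands in for `history.history` from a real `model.fit()` call, which has the same keys (`accuracy`, `val_accuracy`, `loss`, `val_loss` with default metric names):

```python
# Plot accuracy and loss curves from a Keras-style history dict.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

def plot_history(hist):
    epochs = range(1, len(hist["loss"]) + 1)
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))
    # Left panel: training vs validation accuracy per epoch
    ax_acc.plot(epochs, hist["accuracy"], label="train acc")
    ax_acc.plot(epochs, hist["val_accuracy"], label="val acc")
    ax_acc.set_xlabel("epoch"); ax_acc.set_ylabel("accuracy"); ax_acc.legend()
    # Right panel: training vs validation loss per epoch
    ax_loss.plot(epochs, hist["loss"], label="train loss")
    ax_loss.plot(epochs, hist["val_loss"], label="val loss")
    ax_loss.set_xlabel("epoch"); ax_loss.set_ylabel("loss"); ax_loss.legend()
    return fig

# Stand-in for history.history; with a real model use:
#   history = model.fit(...); plot_history(history.history)
fake = {"accuracy": [0.6, 0.7, 0.8], "val_accuracy": [0.55, 0.65, 0.7],
        "loss": [0.9, 0.6, 0.4], "val_loss": [1.0, 0.8, 0.7]}
fig = plot_history(fake)
```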

Behavior of Dataset.map in Tensorflow

I’m trying to take variable-length tensors and split them up into tensors of length 4, discarding any extra elements (if the length is not divisible by four). I’ve therefore written the following function: This produces the output `[<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 2, 3, 4], dtype=int32)>]`, as expected. If I now run the same function using Dataset.map: I instead get
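The chunking logic itself, sketched in plain Python so the intended behavior is unambiguous. The likely source of the discrepancy is that `Dataset.map` traces the function in graph mode, where eager-only operations (such as iterating a tensor in a Python loop) behave differently than they do when the function is called directly:

```python
# Truncate to a multiple of 4, then split into rows of 4,
# discarding any remainder — the behavior the question describes.
def split_into_fours(xs):
    n = (len(xs) // 4) * 4   # drop elements past the last full chunk
    return [xs[i:i + 4] for i in range(0, n, 4)]

assert split_into_fours([1, 2, 3, 4]) == [[1, 2, 3, 4]]
assert split_into_fours([1, 2, 3, 4, 5, 6]) == [[1, 2, 3, 4]]  # 5, 6 discarded
```

Inside `Dataset.map`, the graph-mode equivalent would typically be a truncating `tf.reshape` (e.g. reshape the first `(len // 4) * 4` elements to shape `[-1, 4]`) rather than a Python loop.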

Transfer Learning with Quantization Aware Training using Functional API

I have a model built with transfer learning from MobileNetV2, and I’d like to quantize it and compare the accuracy difference against a non-quantized model with transfer learning. However, recursive quantization is not entirely supported, but according to this comment, this method should quantize my model: https://github.com/tensorflow/model-optimization/issues/377#issuecomment-820948555 What I tried doing was: It is still giving me the

`torch.gather` without unbroadcasting

I have some batched input x of shape [batch, time, feature], and some batched indices i of shape [batch, new_time] which I want to gather into the time dim of x. As output of this operation I want a tensor y of shape [batch, new_time, feature] with values like this: In Tensorflow, I can accomplish this by using the batch_dims:
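A sketch of that batched gather in NumPy, which makes the indexing rule `y[b, t, f] = x[b, i[b, t], f]` explicit. In TensorFlow this is `tf.gather(x, i, batch_dims=1)`; in PyTorch, `torch.gather` needs the index expanded over the feature dimension first:

```python
import numpy as np

x = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # [batch=2, time=3, feature=4]
i = np.array([[2, 0], [1, 1]])              # [batch=2, new_time=2]

# take_along_axis gathers along axis=1 (time); the index is given a
# trailing singleton dim so it broadcasts across the feature axis —
# the same expand step torch.gather requires.
y = np.take_along_axis(x, i[:, :, None], axis=1)

assert y.shape == (2, 2, 4)                 # [batch, new_time, feature]
assert (y[0, 0] == x[0, 2]).all()           # y[b, t] picks row i[b, t] of x[b]
assert (y[1, 1] == x[1, 1]).all()
```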

Why does the MSE loss have a sudden jump?

I’m working on a regression problem using a neural network. The MSE loss decreases at the beginning of training and the accuracy is satisfactory, yet as training goes on the loss takes a huge jump and then stays at a certain value, like the curve in the picture. I don’t know why this happens or how to fix it, and I want
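One common cause of this pattern is a single exploding-gradient step that throws the weights far from the minimum; typical remedies are lowering the learning rate or clipping gradients. A minimal NumPy sketch of clipping a gradient by its norm (in Keras the equivalent is the `clipnorm` argument to the optimizer):

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    # Rescale the gradient so its L2 norm never exceeds max_norm;
    # gradients already within the budget pass through unchanged.
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

g = np.array([30.0, 40.0])               # norm 50: would blow up an SGD step
clipped = clip_by_norm(g, max_norm=1.0)  # same direction, norm capped at 1
assert np.isclose(np.linalg.norm(clipped), 1.0)
```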
