
Tag: word-embedding

Prediction with keras embedding leads to indices not in list

I have a model that I trained. For the embedding I use GloVe as a pre-trained embedding dictionary. I first build the tokenizer and the text sequences with: t = Tokenizer(); t.fit_on_texts(all_text), and then I calculate the embedding matrix. Now I’m using a new dataset for prediction, and this leads to an error: Node: ‘model/synopsis_embedd/embedding_lookup’ indices[38666,63] = 136482
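The error typically means the new data was tokenized into indices larger than the embedding matrix built from the training vocabulary. Below is a minimal sketch (not the asker's exact code; corpus contents, maxlen, and the GloVe loading step are placeholders) showing the usual fix: fit the Tokenizer once on the training text and reuse that same tokenizer, with an OOV token, when preparing the prediction data.

```python
# Minimal sketch: keep every token index below the embedding matrix size.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

all_text = ["training sentences go here"]        # training corpus (placeholder)
new_text = ["unseen sentences for prediction"]   # prediction corpus (placeholder)

# Fit the tokenizer ONCE on the training text and reuse it everywhere;
# unknown words in new data then map to the OOV index instead of a new one.
t = Tokenizer(oov_token="<OOV>")
t.fit_on_texts(all_text)
vocab_size = len(t.word_index) + 1               # rows in the embedding matrix

# embedding_matrix would be filled from the GloVe file; shape (vocab_size, 300).
embedding_matrix = np.zeros((vocab_size, 300))

# Tokenize the new data with the SAME tokenizer, so every index < vocab_size.
new_seq = pad_sequences(t.texts_to_sequences(new_text), maxlen=64)
assert new_seq.max() < vocab_size   # fails if a second tokenizer was fitted on new_text
```

Fitting a fresh tokenizer on the prediction data (or skipping the OOV token) is what produces indices such as 136482 that fall outside the embedding lookup table.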

How to get feature names for GloVe vectors

CountVectorizer has feature names. What would the feature names be for a GloVe vector, and how do I get them? I have a GloVe vector file with 300 dimensions, as shown above. What are the names of the 300 dimensions of the GloVe vectors? Answer: There are no names for the GloVe features. CountVectorizer counts word occurrences, so each column corresponds to a vocabulary word; the 300 GloVe dimensions are latent components and have no such names.
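A minimal sketch of the contrast (the corpus and the GloVe file path are placeholders): CountVectorizer exposes one name per column, while a GloVe file just lists a word followed by 300 anonymous numbers.

```python
# CountVectorizer: columns are named after vocabulary words.
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat", "the dog ran"]
cv = CountVectorizer()
X = cv.fit_transform(corpus)
print(cv.get_feature_names_out())   # e.g. ['cat' 'dog' 'ran' 'sat' 'the']
# (older scikit-learn versions use cv.get_feature_names() instead)

# GloVe: each line is "<word> v1 v2 ... v300"; the 300 columns are latent
# dimensions learned from co-occurrence statistics and carry no names.
with open("glove.6B.300d.txt", encoding="utf-8") as f:
    word, *values = next(f).split()
    vector = [float(v) for v in values]          # 300 unnamed components
print(word, len(vector))
```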
