
Tag: word2vec

Gensim Word2Vec exhausting iterable

I’m getting the following message when calling model.train() from gensim Word2Vec. The only solutions I found while searching for an answer point to the iterable vs. iterator difference, and at this point I have tried everything I could to solve this on my own. Currently, my code looks like this: The corpus variable is a list containing sentences, and each
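The usual fix is sketched below, assuming corpus holds plain sentence strings: a generator is an iterator and is exhausted after one pass (which triggers gensim’s warning), while an object with an __iter__ method can be re-read for every epoch. The class name and the whitespace tokenisation are illustrative only.

from gensim.models import Word2Vec


class RestartableCorpus:
    """Yields a fresh iterator of tokenised sentences on every pass."""

    def __init__(self, sentences):
        self.sentences = sentences  # e.g. a list of raw sentence strings

    def __iter__(self):
        for sentence in self.sentences:
            yield sentence.lower().split()  # simple whitespace tokenisation


corpus = RestartableCorpus(["I am studying word2vec", "word2vec builds embeddings"])

model = Word2Vec(vector_size=100, window=5, min_count=1)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)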

Word2Vec + LSTM Good Training and Validation but Poor on Test

Currently I’m training my Word2Vec + LSTM for Twitter sentiment analysis. I use the pre-trained GoogleNewsVectorNegative300 word embedding; the reason I used the pre-trained GoogleNewsVectorNegative300 is that the performance was much worse when I trained my own Word2Vec on my own dataset. The problem is that my training process had validation accuracy and loss stuck at 0.88 and 0.34 respectively. Then, my confusion
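One common way to wire the pre-trained GoogleNews vectors into a Keras LSTM is sketched below. The file name, the placeholder word_index (which would normally come from a Tokenizer fitted on the tweets), and the layer sizes are assumptions, not taken from the question.

import numpy as np
from gensim.models import KeyedVectors
from tensorflow.keras import Sequential, initializers
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout

EMBED_DIM = 300

# Load the pre-trained GoogleNews vectors (300-dimensional).
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Placeholder vocabulary; in practice this comes from a fitted Keras Tokenizer.
word_index = {"good": 1, "bad": 2}

# Copy the pre-trained vector for every known word into an embedding matrix.
embedding_matrix = np.zeros((len(word_index) + 1, EMBED_DIM))
for word, i in word_index.items():
    if word in w2v:
        embedding_matrix[i] = w2v[word]

model = Sequential([
    Embedding(
        input_dim=len(word_index) + 1,
        output_dim=EMBED_DIM,
        embeddings_initializer=initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pre-trained vectors frozen
    ),
    LSTM(128),
    Dropout(0.5),                    # regularisation against train/test gaps
    Dense(1, activation="sigmoid"),  # binary sentiment output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])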

Retrieve n-grams with word2vec

I have a list of texts. I turn each text into a token list. For example, if one of the texts is ‘I am studying word2vec’, the respective token list will be (assuming I consider n-grams with n = 1, 2, 3) [‘I’, ‘am’, ‘studying’, ‘word2vec’, ‘I am’, ‘am studying’, ‘studying word2vec’, ‘I am studying’, ‘am studying word2vec’]. Is
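A minimal way to produce that token list is sketched below; the helper name ngram_tokens is made up for illustration. Note that Word2Vec then treats each joined n-gram string as a single vocabulary item.

def ngram_tokens(text, max_n=3):
    """Return all n-grams of the text for n = 1..max_n, joined with spaces."""
    words = text.split()
    tokens = []
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            tokens.append(" ".join(words[i:i + n]))
    return tokens


print(ngram_tokens("I am studying word2vec"))
# ['I', 'am', 'studying', 'word2vec', 'I am', 'am studying',
#  'studying word2vec', 'I am studying', 'am studying word2vec']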

Sort dictionary python by value (word2vec)

I want to sort my dict by value, but if I apply this code it doesn’t work (it prints only my key-value pairs without any kind of sorting). If I change key=lambda x: x[1] to x[0] it correctly sorts by key, so I don’t understand what I’m doing wrong. My code: Answer You’re trying to sort sets, and Python isn’t
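For reference, a small sketch of sorting a dict by value with sorted() and a key function; the similarity scores below are invented. The answer excerpt suggests the original code was sorting sets rather than the dict’s key-value pairs, which is why passing items() matters.

# word2vec-style similarity scores keyed by word (example data only)
similarities = {"king": 0.72, "banana": 0.12, "queen": 0.65}

# sorted() returns a list of (key, value) pairs ordered by the value (x[1]);
# it does not modify the dict in place, so the result must be kept.
by_value = sorted(similarities.items(), key=lambda x: x[1], reverse=True)
print(by_value)  # [('king', 0.72), ('queen', 0.65), ('banana', 0.12)]

# To get a dict again (insertion-ordered in Python 3.7+):
sorted_dict = dict(by_value)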
