Currently I’m training a Word2Vec + LSTM model for Twitter sentiment analysis. I use the pre-trained GoogleNewsVectorNegative300 word embedding, because performance was much worse when I trained my own Word2Vec on my own dataset. The problem is that during training the validation accuracy and loss are stuck at 0.88 and 0.34 respectively. Then, my confusion
I have a list of texts. I turn each text into a token list. For example, if one of the texts is ‘I am studying word2vec’, the respective token list will be (assuming I consider n-grams with n = 1, 2, 3) [‘I’, ‘am’, ‘studying’, ‘word2vec’, ‘I am’, ‘am studying’, ‘studying word2vec’, ‘I am studying’, ‘am studying word2vec’]. Is
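The snippet cuts off before showing any code, but the n-gram tokenization it describes can be sketched in plain Python (the function name `ngram_tokens` is my own, not from the question):

```python
def ngram_tokens(text, max_n=3):
    """Return all n-grams (space-joined) for n = 1..max_n, shortest first."""
    words = text.split()
    grams = []
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            grams.append(" ".join(words[i:i + n]))
    return grams

print(ngram_tokens("I am studying word2vec"))
# → ['I', 'am', 'studying', 'word2vec', 'I am', 'am studying',
#    'studying word2vec', 'I am studying', 'am studying word2vec']
```

This reproduces exactly the token list in the example above: all unigrams, then all bigrams, then all trigrams.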
I want to sort my dict by value, but if I apply this code it doesn’t work (it prints only my key-value pairs without any kind of sorting). If I change key=lambda x: x to x, it correctly sorts by …
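The code in question isn’t shown, but the usual cause of this symptom is sorting the `(key, value)` pairs by the whole tuple rather than by the value. A minimal sketch (dict contents are illustrative):

```python
d = {"apple": 3, "banana": 1, "cherry": 2}

# key=lambda x: x compares the whole (key, value) tuple, so it sorts by key.
# To sort by value, select the second element of each pair:
by_value = sorted(d.items(), key=lambda item: item[1])
print(by_value)  # → [('banana', 1), ('cherry', 2), ('apple', 3)]
```

Note that `sorted()` returns a new list; it does not reorder the dict in place, which is why merely calling it before printing the dict appears to “do nothing”.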
I am working on a project to find similarity among products. The model splits the Excel data sheet into 90% training / 10% validation. When I check the validation set manually, the model works pretty well, but I am having trouble with the evaluation process. How should I find the accuracy, precision, recall and F1 score to […]
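The question is cut off, but the four metrics it asks about can be computed directly from binary true/predicted labels. A dependency-free sketch (in practice `sklearn.metrics` offers the same via `accuracy_score`, `precision_score`, `recall_score`, and `f1_score`; the example labels below are made up):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(acc, prec, rec, f1)
```

For a similarity model, the predictions would first be binarized, e.g. by thresholding the similarity score against the held-out 10% validation labels.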
I’m new to TensorFlow and I’m running a deep-learning assignment from Udacity in an IPython notebook. link And it has an error. Please help! How can I fix this? Thank you. Answer In older TensorFlow versions this op was called tf.initialize_all_variables; it was later renamed to tf.global_variables_initializer().
I have trained a word2vec model on my corpus using Python and gensim. Then I calculated the mean word2vec vector for each sentence (averaging the vectors of all the words in the sentence) and stored it in a pandas DataFrame. The columns of the DataFrame df are: sentence, Book title (the book where the sentence comes
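The snippet ends before showing how the averaging was done. A minimal sketch of mean sentence vectors, using a toy 2-dimensional vocabulary in place of a real gensim model (with an actual model one would look words up in `model.wv` instead of a plain dict):

```python
def sentence_vector(sentence, vectors, dim):
    """Average the vectors of all in-vocabulary words; zeros if none match."""
    vecs = [vectors[w] for w in sentence.split() if w in vectors]
    if not vecs:
        return [0.0] * dim
    return [sum(component) / len(vecs) for component in zip(*vecs)]

# Toy stand-in for a trained word2vec vocabulary (dim = 2).
toy = {"good": [1.0, 0.0], "book": [0.0, 1.0]}
print(sentence_vector("good book", toy, 2))  # → [0.5, 0.5]
```

Each such vector can then be stored as one row alongside the sentence and book-title columns of the DataFrame.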