
How to find accuracy, precision, recall, f1 score for my word2vec model?

I am working on a project to find similarity among products. The Excel data sheet is split into 90% training and 10% validation data. When I check the validation results manually, the model seems to work pretty well, but I am having trouble with the evaluation process. How should I find accuracy, precision, recall and F1 score to understand how well my model works?

I am very new to machine learning and still learning, so please give me some clues on where to start.


Answer

Word2vec is an ‘unsupervised’ algorithm: it is not trained against specified ‘correct’ answers, but rather learns from the patterns in whatever data it is given. As a result, there is no native-to-word2vec notion of ‘accuracy’, ‘precision’, etc. – those concepts only have meaning in relation to a set of desired answers.

So to calculate those values, you have to use the word-vectors in some other downstream task, and devise your own evaluation for that downstream task. Then you can calculate accuracy and the other metrics for that whole system (including the word2vec step). This may include applying your own judgement, or that of other reviewers, about what the result ‘should’ be in certain cases.
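
As a concrete (if toy) illustration of that idea, here is a minimal sketch assuming gensim 4.x and scikit-learn. The product tokens, category labels, and the logistic-regression classifier are all hypothetical stand-ins for whatever your real labelled downstream task would be; the metrics at the end describe the whole pipeline (word vectors plus classifier), not Word2Vec on its own.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical data: each "document" is a list of tokens, each label a made-up category.
docs = [
    ["red", "cotton", "shirt"], ["blue", "denim", "jeans"],
    ["leather", "wallet"], ["cotton", "t", "shirt"],
    ["denim", "jacket"], ["leather", "belt"],
] * 20  # repeat so the train/test split has enough samples
labels = ["clothing", "clothing", "accessory", "clothing", "clothing", "accessory"] * 20

# Unsupervised step: train Word2Vec on the token lists.
w2v = Word2Vec(sentences=docs, vector_size=50, window=3, min_count=1, epochs=50)

def doc_vector(tokens):
    """Average the word vectors of a document's tokens."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.array([doc_vector(d) for d in docs])
y = np.array(labels)

# 90/10 split, mirroring the setup in the question.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

# Supervised downstream task: the metrics below apply to this classifier.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_test, pred, average="weighted")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```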

Without any examples of your data, it's not yet clear what your Word2Vec model is doing, or how products are represented in it. (What are the individual items in the customers_train list you've created? Where do product names/identifiers come in? What kinds of similarity questions or end-user operations do you need to perform?)
