I generate text via transformer models and I am looking for a way to measure the grammatical quality of the generated text.
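A minimal sketch of one possible proxy for grammatical quality: counting how many grammar/spelling issues LanguageTool flags per word of generated text. The language_tool_python package, the "errors per word" heuristic, and the sample sentence are assumptions for illustration, not an established metric.

```python
# Sketch: score generated text by LanguageTool matches per word (lower is better).
import language_tool_python

def grammar_error_rate(text: str, tool: language_tool_python.LanguageTool) -> float:
    """Return the number of grammar/spelling matches per word."""
    matches = tool.check(text)          # list of detected issues
    n_words = max(len(text.split()), 1) # avoid division by zero on empty text
    return len(matches) / n_words

if __name__ == "__main__":
    tool = language_tool_python.LanguageTool("en-US")
    generated = "She go to the market yesterday and buyed three apple."  # toy example
    print(f"errors per word: {grammar_error_rate(generated, tool):.2f}")
    tool.close()
```

This only captures surface-level grammar; fluency or coherence would need a different measure (e.g. perplexity under a language model).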
Tag: evaluation
How to find accuracy, precision, recall, F1 score for my word2vec model?
I am working on a project to find similarity among products. The model splits the Excel data sheet into 90% training / 10% validation. When I check the validation examples manually, the model works pretty well, but I am having trouble with the evaluation process. How should I find accuracy, precision, recall and F1 score to understand how well my model performs?
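A minimal sketch of one way to get these metrics, assuming you can label a set of validation product pairs as similar (1) or not similar (0): threshold the cosine similarity of the word2vec vectors and compare the thresholded predictions against the labels with scikit-learn. The 0.7 threshold, the pair format, and the get_vector helper are hypothetical choices for illustration.

```python
# Sketch only: evaluating a similarity model as a binary classifier over
# labelled product pairs. Labels, threshold, and get_vector are assumptions,
# not part of word2vec itself.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_pairs(pairs, labels, get_vector, threshold=0.7):
    """pairs: list of (product_a, product_b) name tuples.
    labels: ground-truth 1 (similar) / 0 (not similar) for each pair.
    get_vector: maps a product name to its embedding, e.g. model.wv[name]
    for a trained gensim Word2Vec model (hypothetical)."""
    preds = [int(cosine(get_vector(a), get_vector(b)) >= threshold)
             for a, b in pairs]
    accuracy = accuracy_score(labels, preds)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary")
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

The key point is that precision/recall/F1 need a ground-truth label per pair; word2vec alone only gives similarity scores, so the threshold (and ideally the labelled pairs themselves) should be chosen on a held-out split.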