I wanted to compare my manual computation of precision and recall with the scikit-learn functions. However, recall_score() and precision_score() gave me different results, and I am not sure why. Could you please advise why I am getting different results? Thanks!
My confusion matrix:
from sklearn.metrics import confusion_matrix, precision_score, recall_score

tp, fn, fp, tn = confusion_matrix(y_test, y_test_pred).ravel()
print('Outcome values :\n', tp, fn, fp, tn)
Outcome values :
3636933 34156 127 151
recall = tp / (tp + fn)  # TPR/Recall/Sensitivity
print('Recall: %.3f' % recall)
Recall: 0.991
precision = tp / (tp + fp)
print('Precision: %.3f' % precision)
Precision: 1.000
precision = precision_score(y_test, y_test_pred)
print('Precision: %f' % precision)
recall = recall_score(y_test, y_test_pred)
print('Recall: %f' % recall)
Precision: 0.004401
Recall: 0.543165
Answer
The unpacking order is wrong. For a binary problem, ravel() on scikit-learn's confusion matrix returns the counts in the order tn, fp, fn, tp, so it should be:
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred).ravel()
Please refer to the scikit-learn documentation for confusion_matrix.
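To illustrate, here is a minimal sketch with made-up labels (not the asker's data) showing that, with the corrected unpacking order, the manual precision and recall agree with precision_score() and recall_score():

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy binary labels, invented purely for this demonstration.
y_test      = [0, 0, 0, 0, 1, 1, 1, 0]
y_test_pred = [0, 1, 0, 0, 1, 0, 1, 0]

# Correct unpacking order for a binary confusion matrix: tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred).ravel()

# Manual computation now matches scikit-learn's metric functions.
precision = tp / (tp + fp)
recall = tp / (tp + fn)

print('Manual   - Precision: %.3f  Recall: %.3f' % (precision, recall))
print('sklearn  - Precision: %.3f  Recall: %.3f'
      % (precision_score(y_test, y_test_pred),
         recall_score(y_test, y_test_pred)))
```

With the swapped (wrong) order, tp in the asker's code was actually tn, which is why the manual "recall" and "precision" came out near 1.0 on a heavily imbalanced dataset.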