I’m currently using sklearn’s RidgeClassifier, and I’m looking to ensemble it with classifiers from sklearn and other libraries. To do this, it would be ideal to extract the probability that a given input belongs to each class in a list of classes. Currently, I’m zipping the classes with the output of clf.decision_function(x), but this returns each class’s signed distance from the hyperplane rather than a straightforward probability. These distance values range from around -1 to around 1.
distances = dict(zip(clf.classes_, clf.decision_function(x)[0]))
How can I convert these distances into a concrete set of probabilities (a series of positive values that sum to 1)? I’m looking for something like the clf.predict_proba() that is implemented for sklearn’s SVC.
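For context, a minimal runnable version of the setup described above might look like the following (iris is used here purely as placeholder multiclass data; the real problem uses a different dataset):

from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifier

# Placeholder data; any multiclass dataset behaves the same way
X, y = load_iris(return_X_y=True)
clf = RidgeClassifier().fit(X, y)

x = X[:1]  # a single sample, shape (1, n_features)
distances = dict(zip(clf.classes_, clf.decision_function(x)[0]))
print(distances)  # signed distances per class; they do not sum to 1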
Answer
Further exploration led to the softmax function.
import numpy as np

d = clf.decision_function(x)[0]
probs = np.exp(d) / np.sum(np.exp(d))  # softmax over the decision values
This yields positive values between 0 and 1 that sum to 1, matching the form of a predict_proba output (though softmax values are not calibrated probability estimates).
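A self-contained sketch of the full conversion (again with iris standing in as placeholder data, and with the usual max-subtraction added for numerical stability, which doesn’t change the result since softmax is shift-invariant):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifier

X, y = load_iris(return_X_y=True)
clf = RidgeClassifier().fit(X, y)

d = clf.decision_function(X[:1])[0]   # signed distances for one sample
d = d - np.max(d)                     # shift by the max to avoid overflow in exp
probs = np.exp(d) / np.sum(np.exp(d)) # softmax: exponentiate, then normalize
print(dict(zip(clf.classes_, probs))) # positive values that sum to 1

If SciPy is available, scipy.special.softmax(d) computes the same quantity.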