Cross validation with grid search returns worse results than default

I’m using scikit-learn in Python to run some basic machine learning models. Using the built-in GridSearchCV() function, I determined the “best” parameters for different techniques, yet many of these perform worse than the defaults. I include the default parameters as an option, so I’m surprised this would happen. For example: the selected parameters are the same as the defaults, except max_depth is 3. When I use these parameters, I get an accuracy of 72%, compared to 78% from the defaults. One thing I did that I will admit is suspicious is that I used my entire dataset for the cross-validation.
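A likely source of the discrepancy is comparing the cross-validated score of the tuned model against an accuracy measured on the same data the default model was fit on. A cleaner comparison holds out a test set that neither the grid search nor the default model ever sees. A minimal sketch, assuming a DecisionTreeClassifier and a synthetic dataset (both stand-ins, not the asker's actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical data standing in for the asker's dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set that the grid search never sees.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Search over max_depth on the training set only;
# None is the default (grow until leaves are pure).
param_grid = {"max_depth": [None, 3, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

# Score both the tuned model and the defaults on the same unseen test set,
# so the two accuracies are directly comparable.
tuned_acc = accuracy_score(y_test, search.predict(X_test))
default_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
default_acc = accuracy_score(y_test, default_model.predict(X_test))
print(tuned_acc, default_acc)
```

With this setup, the cross-validated score reported by the search and the test-set accuracy of the defaults are no longer measured on different data, which removes one common reason the tuned model appears to lose.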

List of all classification algorithms

I have a classification problem and I would like to try all the available algorithms to see how well each performs on it. If you know of any classification algorithms other than those listed below, please add them here. Your help is highly appreciated. Answer The answers did not provide the full list of classifiers, so I have listed them below
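The answer's list itself is not reproduced here. One way to enumerate every classifier that an installed scikit-learn version ships, rather than maintaining a list by hand, is the library's all_estimators utility; a short sketch:

```python
from sklearn.utils import all_estimators

# List every classifier class exposed by the installed scikit-learn version.
# The exact contents depend on which version is installed.
classifiers = all_estimators(type_filter="classifier")
for name, cls in classifiers:
    print(name)
```

This only covers scikit-learn's own estimators; algorithms from other libraries (e.g. xgboost) would still need to be added separately.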

What does the value of ‘leaf’ in the following xgboost model tree diagram mean?

I am guessing that it is the conditional probability given that the above (tree branch) condition holds, but I am not clear on it. If you want to read more about the data used or how this diagram was generated, see: http://machinelearningmastery.com/visualize-gradient-boosting-decision-trees-xgboost-python/ Answer The leaf attribute is the predicted value. In other words, if the evaluation of a tree model ends at that terminal node (also known as a leaf node), then this is the value that is returned. In pseudocode (the left-most branch of your tree model):
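The original pseudocode for that branch is not reproduced here; a hypothetical Python-style sketch, with placeholder feature names, thresholds, and leaf score standing in for the values shown in the diagram, might look like this:

```python
# Hypothetical values -- the real feature names, thresholds, and leaf score
# come from the tree diagram, not from this sketch.
def predict_leftmost_branch(row):
    if row["f0"] < 0.5:          # first split at the root
        if row["f1"] < 2.0:      # second split down the left side
            return 0.12          # 'leaf': the value this tree returns for such rows
    # ... the other branches of the tree would be handled here
    return None
```

Note that for a classification booster the leaf value is a raw margin contribution, not a probability by itself: xgboost sums the leaf values of all trees and passes the total through the link function (e.g. the logistic function for binary classification) to obtain the predicted probability.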