Tag: grid-search

Error while doing SVR for multiple outputs

I am trying to do SVR for multiple outputs. I started with hyper-parameter tuning, which worked for me. Now I want to create the model using the optimum parameters, but I am getting an error. How do I fix this? Output: Trying to create a model using the output: Error: Answer: Please consult the MultiOutputRegressor docs. The regressor you got back is the model.
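A minimal sketch of the idea behind that answer, assuming the asker wraps an SVR in MultiOutputRegressor and tunes it with GridSearchCV (the data and grid below are illustrative stand-ins, not the original code):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical data: X is (n_samples, n_features), y has two output columns.
X = np.random.rand(100, 5)
y = np.random.rand(100, 2)

# Parameters of the wrapped SVR are addressed via the estimator__ prefix.
param_grid = {"estimator__C": [1, 10], "estimator__gamma": ["scale", 0.1]}
search = GridSearchCV(MultiOutputRegressor(SVR()), param_grid, cv=3)
search.fit(X, y)

# The fitted search (or search.best_estimator_) already *is* the model;
# there is no separate construction step to perform afterwards.
model = search.best_estimator_
print(model.predict(X[:3]))
```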

Tuning the hyperparameter with gridsearch results in overfitting

Tuning the hyperparameters with grid search results in overfitting. The train error is definitely low, but the test error is high. Can't I adjust the hyperparameters to lower the test error? Before tuning: train_error: 0.386055, test_error: 0.674069. After tuning: train_error: 0.070645, test_error: 0.708254. Answer: It all depends on the data you are training on. If the data you are using for training
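A hedged sketch of how to see that gap during the search itself: the question does not show the estimator or data, so a GradientBoostingRegressor on synthetic data stands in, and return_train_score=True exposes the train/validation gap for every candidate parameter set:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"max_depth": [2, 3, 5], "n_estimators": [50, 200]}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",
    return_train_score=True,   # expose train scores so the gap is visible
    cv=5,
)
search.fit(X_train, y_train)

# A large gap between mean_train_score and mean_test_score for the chosen
# parameters is the overfitting signal described in the question.
results = search.cv_results_
best = search.best_index_
print("train MSE:   ", -results["mean_train_score"][best])
print("cv MSE:      ", -results["mean_test_score"][best])
print("held-out MSE:", -search.score(X_test, y_test))
```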

Random Forest tuning with RandomizedSearchCV

I have a few questions concerning randomized grid search in a random forest regression model. My parameter grid looks like this: and my code for the RandomizedSearchCV like this: Is there any way to calculate the root mean squared error at each parameter set? That would be more interesting to me than the R^2 score. If I now want to get
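A sketch of one way to do that, assuming a RandomForestRegressor and an illustrative parameter grid in place of the asker's (not shown) grid: setting scoring="neg_root_mean_squared_error" makes cv_results_ report RMSE for every sampled parameter set instead of R^2:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

param_distributions = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions,
    n_iter=10,
    scoring="neg_root_mean_squared_error",
    cv=5,
    random_state=0,
)
search.fit(X, y)

# One row per sampled parameter set; negate the score to get ordinary RMSE.
results = pd.DataFrame(search.cv_results_)
print(results[["params", "mean_test_score"]].assign(rmse=lambda d: -d.mean_test_score))
```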

GridSearchCV.best_score not same as cross_val_score(GridSearchCV.best_estimator_)

Consider the following grid search: grid = GridSearchCV(clf, parameters, n_jobs=-1, iid=True, cv=5) grid_fit = grid.fit(X_train1, y_train1) According to scikit-learn's documentation, grid_fit.best_score_ returns "the mean cross-validated score of the best_estimator". To me that would mean that the average of cross_val_score(grid_fit.best_estimator_, X_train1, y_train1, cv=5) should be exactly the same as grid_fit.best_score_. However, I am getting a 10% difference
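A hedged sketch of why the two numbers can differ: best_score_ averages over the search's own CV splits, while a later cross_val_score call generates new splits. Pinning the splitter (and any randomness in the estimator) makes the comparison fair. Here clf, parameters, X_train1 and y_train1 are illustrative stand-ins; the iid argument from the original snippet is omitted because it has been removed from recent scikit-learn versions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X_train1, y_train1 = make_classification(n_samples=300, random_state=0)
clf = SVC(random_state=0)
parameters = {"C": [0.1, 1, 10]}

# Use the same explicit splitter for both the search and the later check.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(clf, parameters, n_jobs=-1, cv=cv)
grid_fit = grid.fit(X_train1, y_train1)

scores = cross_val_score(grid_fit.best_estimator_, X_train1, y_train1, cv=cv)
print(grid_fit.best_score_, scores.mean())  # with identical folds these agree
```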

Cross validation with grid search returns worse results than default

I’m using scikit-learn in Python to run some basic machine learning models. Using the built-in GridSearchCV() function, I determined the “best” parameters for different techniques, yet many of these perform worse than the defaults. I include the default parameters as an option, so I’m surprised this would happen. For example: This is the same as the defaults, except max_depth
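A minimal sketch of the setup being described, assuming a DecisionTreeClassifier in place of the unspecified model: the defaults appear explicitly as one grid point, yet because GridSearchCV selects on cross-validated folds of the training data, the chosen parameters can still score below the defaults on a particular held-out split:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"max_depth": [None, 3, 5, 10]}  # None is the library default
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

default = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("tuned params:         ", search.best_params_)
print("tuned test accuracy:  ", search.score(X_test, y_test))
print("default test accuracy:", default.score(X_test, y_test))
```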
