I would like to retrieve the best model from an Optuna study so I can use it later in the notebook to predict on a different test batch.
Reproducible example (taken from the Optuna GitHub):
import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split

import optuna


# FYI: Objective functions can take additional arguments
# (https://optuna.readthedocs.io/en/stable/faq.html#objective-func-additional-args).
def objective(trial):
    data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)
    dvalid = lgb.Dataset(valid_x, label=valid_y)

    param = {
        "objective": "binary",
        "metric": "auc",
        "verbosity": -1,
        "boosting_type": "gbdt",
        "lambda_l1": trial.suggest_loguniform("lambda_l1", 1e-8, 10.0),
        "lambda_l2": trial.suggest_loguniform("lambda_l2", 1e-8, 10.0),
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        "feature_fraction": trial.suggest_uniform("feature_fraction", 0.4, 1.0),
        "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.4, 1.0),
        "bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }

    # Add a callback for pruning.
    pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
    gbm = lgb.train(
        param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
    )

    preds = gbm.predict(valid_x)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
    return accuracy
My understanding is that the study below will tune for accuracy. I would like to somehow retrieve the best model from the study (not just its parameters) without saving it as a pickle; I just want to use the model somewhere else in my notebook.
if __name__ == "__main__":
    study = optuna.create_study(
        pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize"
    )
    study.optimize(objective, n_trials=100)

    print("Best trial:")
    trial = study.best_trial

    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))
The desired output would be:
best_model = ~model from above~
new_target_pred = best_model.predict(new_data_test)
metrics.accuracy_score(new_target_test, new_target_pred)
Answer
I think you can use the callbacks argument of Study.optimize to save the best model. In the following code example, the callback checks whether a given trial corresponds to the best trial and, if so, saves the model in a global variable best_booster.
best_booster = None
gbm = None


def objective(trial):
    global gbm
    # ...


def callback(study, trial):
    global best_booster
    if study.best_trial == trial:
        best_booster = gbm


if __name__ == "__main__":
    study = optuna.create_study(
        pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize"
    )
    study.optimize(objective, n_trials=100, callbacks=[callback])
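After study.optimize returns, best_booster holds the trained lightgbm.Booster from the best trial, so you can use it directly elsewhere in the notebook. A minimal usage sketch continuing from the code above, where new_data_test and new_target_test are placeholder names for the separate test batch mentioned in the question:

# Continuing from the snippet above: `best_booster` was set by `callback`.
# `new_data_test` / `new_target_test` are placeholders for the new test batch.
new_preds = best_booster.predict(new_data_test)    # probabilities, since the objective is "binary"
new_pred_labels = np.rint(new_preds)               # round to 0/1 class labels
print(sklearn.metrics.accuracy_score(new_target_test, new_pred_labels))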
If you define your objective function as a class, you can remove the global variables. I created a notebook as a code example. Please take a look at it: https://colab.research.google.com/drive/1ssjXp74bJ8bCAbvXFOC4EIycBto_ONp_?usp=sharing
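A minimal sketch of that class-based variant (not necessarily identical to the notebook's code, and with the search space trimmed for brevity): the objective stores the booster of the trial that just finished on the instance, and a bound-method callback copies it into self.best_booster whenever that trial is the best so far.

import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split

import optuna


class Objective:
    def __init__(self):
        self.best_booster = None
        self._booster = None

    def __call__(self, trial):
        data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
        train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
        dtrain = lgb.Dataset(train_x, label=train_y)
        dvalid = lgb.Dataset(valid_x, label=valid_y)
        param = {
            "objective": "binary",
            "metric": "auc",
            "verbosity": -1,
            "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        }
        gbm = lgb.train(param, dtrain, valid_sets=[dvalid])
        self._booster = gbm  # remember the booster of the trial that just finished
        preds = gbm.predict(valid_x)
        return sklearn.metrics.accuracy_score(valid_y, np.rint(preds))

    def callback(self, study, trial):
        # Keep the booster only if this trial is the best one seen so far.
        if study.best_trial.number == trial.number:
            self.best_booster = self._booster


objective = Objective()
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20, callbacks=[objective.callback])
best_booster = objective.best_booster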
I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a pickle
FYI, if you can pickle the boosters, I think you can simplify the code by following this FAQ.
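A sketch of that pickle-based idea (my wording, not the FAQ's exact code, with the search space trimmed): each trial dumps its booster to a file named after trial.number, and the best booster is re-loaded via study.best_trial.number after optimization.

import pickle

import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split

import optuna


def objective(trial):
    data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)
    dvalid = lgb.Dataset(valid_x, label=valid_y)
    param = {
        "objective": "binary",
        "metric": "auc",
        "verbosity": -1,
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
    }
    gbm = lgb.train(param, dtrain, valid_sets=[dvalid])

    # Persist this trial's booster so the best one can be recovered by trial number.
    with open("booster_{}.pkl".format(trial.number), "wb") as f:
        pickle.dump(gbm, f)

    preds = gbm.predict(valid_x)
    return sklearn.metrics.accuracy_score(valid_y, np.rint(preds))


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)

# Re-load the booster produced by the best trial.
with open("booster_{}.pkl".format(study.best_trial.number), "rb") as f:
    best_booster = pickle.load(f)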