I'm creating a model using the Optuna LightGBM integration. My training set has some categorical features, and I pass those features to the model using the lgb.Dataset class. Here is the code I'm using (NOTE: X_train, X_val, y_train, y_val are all pandas DataFrames). Every time the lgb.train function is called, I get the following user warning. I believe that
Tag: lightgbm
`sklearn` asking for eval dataset when there is one
I am working with sklearn's StackingRegressor and used LightGBM to train my model. My LightGBM model has an early-stopping option, and I used an eval dataset and metric for it. When it feeds into the StackingRegressor, I see this error: ValueError: For early stopping, at least one dataset and eval metric is required for evaluation. Which
LightGBM does not accept the dtypes of my data
I'm trying to use LGBMClassifier, and for some reason it does not accept the dtypes of my data (none of the features are accepted; I tested them all). Looking at my data (pd.DataFrame.info()), we can clearly see that all dtypes are either category, float, or int. When I eventually try to train my LGBMClassifier, I get the following error: Has
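A frequent cause of this class of error (a guess, since the exact message is cut off): string-like columns are actually stored as object dtype, which LGBMClassifier rejects even when they look categorical. Casting object columns to pandas' category dtype before fitting usually fixes it; the frame below is invented for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["NY", "LA", "NY", "SF"],   # object dtype -> rejected by LightGBM
    "price": [1.0, 2.5, 3.0, 4.2],
    "rooms": [2, 3, 1, 2],
})

# cast every object-dtype column to category before passing df to the model
obj_cols = df.select_dtypes(include="object").columns
df[obj_cols] = df[obj_cols].astype("category")
```

After the cast, `df.info()` should show only category, float, and int dtypes, which LightGBM accepts natively.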
Python: How to retrieve the best model from an Optuna LightGBM study?
I would like to get the best model to use later in the notebook to predict on a different test batch. Reproducible example (taken from the Optuna GitHub): my understanding is that the study below tunes for accuracy. I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a
Get LightGBM/LGBM to run with GPU on Google Colaboratory
I often run LGBM on Google Colaboratory, and I just found this page saying that LGBM is set to CPU by default, so you need to set it up first: https://medium.com/@am.sharma/lgbm-on-colab-with-gpu-c1c09e83f2af So I executed the code recommended on that page, and some other code recommended on Stack Overflow, as follows: !git clone --recursive https://github.com/Microsoft/LightGBM %cd LightGBM !mkdir build %cd build !cmake
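For reference, the usual from-source GPU build looks roughly like the sketch below (note `--recursive` with two ASCII hyphens: the en dash that survives copy-paste from the article breaks git). Flag names and paths follow LightGBM's historical build docs and may differ across versions, so treat this as a setup sketch rather than a guaranteed recipe.

```shell
git clone --recursive https://github.com/Microsoft/LightGBM
cd LightGBM
mkdir build && cd build
cmake -DUSE_GPU=1 ..      # enable the OpenCL GPU backend
make -j4
```

After compiling, install the Python package from the source tree and pass `device="gpu"` (or `"device": "gpu"` in params) when constructing the model.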
LightGBMError “Check failed: num_data > 0” with Sklearn RandomizedSearchCV
I'm trying LightGBMRegressor parameter tuning with sklearn's RandomizedSearchCV and got the error message below. I cannot tell why, or which specific parameters caused this error. Was any entry of the params_dist below unsuitable for train_x.shape: (1630, 1565)? Please give me any hints or solutions. Thank you. LightGBM version: '2.0.12'. The function that caused this error: Too long to put the full stack trace,
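A frequent cause of "Check failed: num_data > 0" (a guess without the full stack trace or the actual params_dist): the random search samples a bagging/feature fraction near zero, or a minimum-data threshold larger than a CV fold, so LightGBM ends up training on an empty subset. Bounding the sampled distributions away from those extremes is a reasonable first step; the ranges below are illustrative, not tuned.

```python
from scipy.stats import randint, uniform

# keep fractions in [0.5, 1.0] and leaf-size minimums small relative to
# the ~1300-row CV training folds, so no sampled subset can be empty
param_dist = {
    "subsample":         uniform(0.5, 0.5),   # loc=0.5, scale=0.5 -> [0.5, 1.0]
    "colsample_bytree":  uniform(0.5, 0.5),
    "num_leaves":        randint(8, 64),
    "min_child_samples": randint(1, 30),
}
```

This dict can be passed directly as `param_distributions` to RandomizedSearchCV around an LGBMRegressor.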