📝 Exercise M6.01
The aim of this notebook is to investigate whether we can tune the hyperparameters of a bagging regressor and evaluate the gain obtained.
We will load the California housing dataset and split it into a training and a testing set.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5
)
Note
If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
Create a BaggingRegressor and provide a DecisionTreeRegressor to its parameter estimator. Train the regressor and evaluate its generalization performance on the testing set using the mean absolute error.
# Write your code here.
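As a hint, here is one possible sketch, not the reference solution. It assumes a recent scikit-learn version (1.2 or later), where the bagging parameter is named estimator; the variable names are arbitrary choices.

# One possible sketch (assumes scikit-learn >= 1.2; variable names are illustrative).
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Bag decision trees with the default bagging hyperparameters.
bagged_trees = BaggingRegressor(estimator=DecisionTreeRegressor())
bagged_trees.fit(data_train, target_train)

# Evaluate the generalization performance on the testing set.
target_predicted = bagged_trees.predict(data_test)
print(
    "Mean absolute error of the default bagging regressor: "
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)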
Now, create a RandomizedSearchCV instance using the previous model and tune the important parameters of the bagging regressor. Find the best parameters and check if you are able to find a set of parameters that improves on the default regressor, still using the mean absolute error as the metric.
Tip
You can list the bagging regressor's parameters using the get_params method.
# Write your code here.
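Again as a non-authoritative sketch: the parameters searched below and their distributions are illustrative choices, not the only reasonable ones. Nested parameters of the underlying tree are reached with the estimator__ prefix, and the search uses the negated mean absolute error as its scoring.

# One possible sketch of the randomized search (parameter ranges are illustrative).
from scipy.stats import randint
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": randint(10, 30),
    "max_samples": [0.5, 0.8, 1.0],
    "max_features": [0.5, 0.8, 1.0],
    # Nested parameter of the inner DecisionTreeRegressor.
    "estimator__max_depth": randint(3, 10),
}
search = RandomizedSearchCV(
    BaggingRegressor(estimator=DecisionTreeRegressor()),
    param_distributions=param_distributions,
    n_iter=20,
    scoring="neg_mean_absolute_error",
    random_state=0,
)
search.fit(data_train, target_train)

# Refit with the best parameters is done automatically; evaluate on the test set.
target_predicted = search.predict(data_test)
print(
    "Mean absolute error after tuning of the bagging regressor: "
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)
print(f"Best parameters found: {search.best_params_}")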