πŸ“ƒ Solution for Exercise M6.01

The aim of this notebook is to investigate whether we can tune the hyperparameters of a bagging regressor and to evaluate the gain obtained.

We will load the California housing dataset and split it into a training and a testing set.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5
)

Note

If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

Create a BaggingRegressor and provide a DecisionTreeRegressor to its parameter estimator. Train the regressor and evaluate its generalization performance on the testing set using the mean absolute error.

# solution
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

tree = DecisionTreeRegressor()
bagging = BaggingRegressor(estimator=tree, n_jobs=2)
bagging.fit(data_train, target_train)
target_predicted = bagging.predict(data_test)
print(
    "Basic mean absolute error of the bagging regressor:\n"
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)
Basic mean absolute error of the bagging regressor:
36.21 k$

Now, create a RandomizedSearchCV instance using the previous model and tune the important parameters of the bagging regressor. Find the best parameters and check whether you are able to find a set of parameters that improves the default regressor, still using the mean absolute error as the metric.

Tip

You can list the bagging regressor’s parameters using the get_params method.

# solution
for param in bagging.get_params().keys():
    print(param)
bootstrap
bootstrap_features
estimator__ccp_alpha
estimator__criterion
estimator__max_depth
estimator__max_features
estimator__max_leaf_nodes
estimator__min_impurity_decrease
estimator__min_samples_leaf
estimator__min_samples_split
estimator__min_weight_fraction_leaf
estimator__monotonic_cst
estimator__random_state
estimator__splitter
estimator
max_features
max_samples
n_estimators
n_jobs
oob_score
random_state
verbose
warm_start
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

param_grid = {
    "n_estimators": randint(10, 30),
    "max_samples": [0.5, 0.8, 1.0],
    "max_features": [0.5, 0.8, 1.0],
    "estimator__max_depth": randint(3, 10),
}
search = RandomizedSearchCV(
    bagging, param_grid, n_iter=20, scoring="neg_mean_absolute_error"
)
_ = search.fit(data_train, target_train)
import pandas as pd

columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
    param_n_estimators  param_max_samples  param_max_features  param_estimator__max_depth  mean_test_error  std_test_error
9                   29                0.8                 0.8                           9        39.027908        0.900353
4                   11                0.8                 1.0                           8        41.262699        0.916873
18                  10                0.8                 1.0                           7        43.096963        1.181136
0                   24                0.8                 0.8                           6        45.056855        0.880207
11                  22                0.5                 0.5                           9        45.564202        1.890224
12                  24                0.5                 0.5                           8        45.755153        1.189748
16                  12                0.5                 0.5                           9        47.211473        2.829229
14                  22                0.8                 0.5                           7        47.449320        1.999803
3                   12                1.0                 0.5                           9        47.823189        2.706381
5                   13                0.8                 0.8                           5        48.651834        1.263648
19                  29                0.5                 0.5                           7        48.792582        1.521836
7                   12                1.0                 0.8                           5        48.794662        1.686286
1                   18                0.5                 0.8                           5        48.827834        1.222117
10                  27                0.8                 0.5                           6        49.666456        1.711296
15                  24                0.8                 0.5                           6        50.172054        2.245384
2                   15                0.5                 0.8                           4        52.027693        1.074525
6                   20                0.8                 0.8                           4        52.547914        0.426858
13                  14                0.5                 0.5                           4        56.471330        2.282587
17                  12                1.0                 0.5                           3        60.396124        2.053090
8                   10                1.0                 0.5                           3        62.021787        1.771020
target_predicted = search.predict(data_test)
print(
    "Mean absolute error after tuning of the bagging regressor:\n"
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)
Mean absolute error after tuning of the bagging regressor:
37.91 k$

We see that the predictor provided by the bagging regressor does not need much hyperparameter tuning compared to a single decision tree: here the tuned model does not even beat the default one (37.91 k$ vs. 36.21 k$), likely because the search constrains the depth of the trees whereas the default bagging grows them fully.