πŸ“ƒ Solution for Exercise M6.01

The aim of this notebook is to investigate whether we can tune the hyperparameters of a bagging regressor and to evaluate the gain obtained by doing so.

We will load the California housing dataset and split it into a training and a testing set.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5
)

Note

If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

Create a BaggingRegressor and pass a DecisionTreeRegressor to its estimator parameter. Then, train the regressor and evaluate its generalization performance on the testing set using the mean absolute error.

# solution
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

tree = DecisionTreeRegressor()
bagging = BaggingRegressor(estimator=tree, n_jobs=2)
bagging.fit(data_train, target_train)
target_predicted = bagging.predict(data_test)
print(
    "Basic mean absolute error of the bagging regressor:\n"
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)
Basic mean absolute error of the bagging regressor:
36.89 k$

Now, create a RandomizedSearchCV instance using the previous model and tune the important parameters of the bagging regressor. Find the best parameters and check whether you can find a set of parameters that improves the default regressor, still using the mean absolute error as the metric.

Tip

You can list the bagging regressor’s parameters using the get_params method.

# solution
for param in bagging.get_params().keys():
    print(param)
bootstrap
bootstrap_features
estimator__ccp_alpha
estimator__criterion
estimator__max_depth
estimator__max_features
estimator__max_leaf_nodes
estimator__min_impurity_decrease
estimator__min_samples_leaf
estimator__min_samples_split
estimator__min_weight_fraction_leaf
estimator__monotonic_cst
estimator__random_state
estimator__splitter
estimator
max_features
max_samples
n_estimators
n_jobs
oob_score
random_state
verbose
warm_start
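
Note that the parameters prefixed with estimator__ belong to the inner DecisionTreeRegressor: this double-underscore syntax is how nested parameters are addressed, both with set_params and in the randomized search below. A minimal, non-mutating sketch:

# read a nested parameter of the inner tree via the "estimator__" prefix
print(bagging.get_params()["estimator__max_depth"])
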
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

param_grid = {
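    # randint(a, b) draws integers uniformly from the half-open interval [a, b)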
    "n_estimators": randint(10, 30),
    "max_samples": [0.5, 0.8, 1.0],
    "max_features": [0.5, 0.8, 1.0],
    "estimator__max_depth": randint(3, 10),
}
search = RandomizedSearchCV(
    bagging, param_grid, n_iter=20, scoring="neg_mean_absolute_error"
)
_ = search.fit(data_train, target_train)
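
Once the search is fitted, the best combination of parameters can be inspected, for instance with this minimal sketch:

# best parameter combination found by the randomized search
print(search.best_params_)
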
import pandas as pd

columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
param_n_estimators param_max_samples param_max_features param_estimator__max_depth mean_test_error std_test_error
6 26 0.8 0.8 9 38.744822 0.820730
1 25 1.0 1.0 9 39.199503 1.262572
16 17 0.8 1.0 9 39.516545 1.206744
11 16 0.5 0.8 9 40.312757 1.155289
14 18 0.5 0.8 7 42.267199 0.964222
17 12 1.0 1.0 6 45.278605 1.337975
0 22 1.0 0.5 8 45.590140 1.924398
12 10 1.0 1.0 6 45.612139 1.437821
19 20 1.0 0.5 9 46.228952 2.377679
9 21 0.5 1.0 5 47.843261 1.139545
5 12 0.8 0.5 8 48.053938 2.231738
4 18 0.5 0.5 6 50.683869 2.880660
2 17 0.8 0.8 4 52.376981 1.506407
7 21 1.0 0.8 4 52.738827 0.935594
10 23 1.0 0.5 4 55.681169 1.181115
18 23 1.0 1.0 3 56.662008 1.020801
8 12 0.8 1.0 3 57.160400 1.155378
13 27 0.5 0.8 3 58.060659 1.051121
15 26 1.0 0.5 3 60.376695 1.263127
3 11 0.5 0.5 3 60.706643 1.903773
target_predicted = search.predict(data_test)
print(
    "Mean absolute error after tuning of the bagging regressor:\n"
    f"{mean_absolute_error(target_test, target_predicted):.2f} k$"
)
Mean absolute error after tuning of the bagging regressor:
38.07 k$

We see that the predictor provided by the bagging regressor requires little hyperparameter tuning compared to a single decision tree: here, the tuned model does not even improve on the default one.
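
To make the comparison concrete, one could evaluate a single, untuned decision tree on the same split; a minimal sketch reusing the variables defined above (the random_state value is an arbitrary choice for reproducibility):

# a single, untuned decision tree on the same train/test split, for comparison
single_tree = DecisionTreeRegressor(random_state=0)
single_tree.fit(data_train, target_train)
tree_error = mean_absolute_error(target_test, single_tree.predict(data_test))
print(f"Mean absolute error of a single decision tree: {tree_error:.2f} k$")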