# 📃 Solution for Exercise M6.01

The aim of this notebook is to investigate whether we can tune the hyperparameters of a bagging regressor and evaluate the gain obtained.

We will load the California housing dataset and split it into a training and a testing set.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100  # rescale the target in k$

data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5)

Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

Create a BaggingRegressor and provide a DecisionTreeRegressor to its parameter base_estimator. Train the regressor and evaluate its generalization performance on the testing set using the mean absolute error.

# solution
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

tree = DecisionTreeRegressor()
bagging = BaggingRegressor(base_estimator=tree, n_jobs=2)
bagging.fit(data_train, target_train)
target_predicted = bagging.predict(data_test)
print(f"Basic mean absolute error of the bagging regressor:\n"
      f"{mean_absolute_error(target_test, target_predicted):.2f} k$")

Basic mean absolute error of the bagging regressor:
36.65 k$

Now, create a RandomizedSearchCV instance using the previous model and tune the important parameters of the bagging regressor. Find the best parameters and check if you are able to find a set of parameters that improve on the default regressor, still using the mean absolute error as a metric.

Tip: You can list the bagging regressor's parameters using the get_params method.

# solution
for param in bagging.get_params().keys():
    print(param)

base_estimator__ccp_alpha
base_estimator__criterion
base_estimator__max_depth
base_estimator__max_features
base_estimator__max_leaf_nodes
base_estimator__min_impurity_decrease
base_estimator__min_samples_leaf
base_estimator__min_samples_split
base_estimator__min_weight_fraction_leaf
base_estimator__random_state
base_estimator__splitter
base_estimator
bootstrap
bootstrap_features
max_features
max_samples
n_estimators
n_jobs
oob_score
random_state
verbose
warm_start

from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

param_grid = {
    "n_estimators": randint(10, 30),
    "max_samples": [0.5, 0.8, 1.0],
    "max_features": [0.5, 0.8, 1.0],
    "base_estimator__max_depth": randint(3, 10),
}
search = RandomizedSearchCV(
    bagging, param_grid, n_iter=20, scoring="neg_mean_absolute_error"
)
_ = search.fit(data_train, target_train)

import pandas as pd

columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")

|    | param_n_estimators | param_max_samples | param_max_features | param_base_estimator__max_depth | mean_test_error | std_test_error |
|----|--------------------|-------------------|--------------------|---------------------------------|-----------------|----------------|
| 6  | 24 | 1.0 | 0.8 | 9 | 38.821404 | 0.993483 |
| 9  | 28 | 0.8 | 0.8 | 9 | 38.977671 | 0.802114 |
| 14 | 22 | 0.8 | 1.0 | 8 | 41.045770 | 1.433283 |
| 11 | 26 | 0.8 | 0.8 | 8 | 41.060876 | 0.826254 |
| 5  | 17 | 0.5 | 1.0 | 8 | 41.342266 | 0.808588 |
| 12 | 12 | 0.5 | 1.0 | 7 | 43.365408 | 1.095448 |
| 4  | 28 | 0.8 | 0.5 | 8 | 45.001969 | 1.651051 |
| 2  | 18 | 1.0 | 0.8 | 6 | 45.355027 | 0.969291 |
| 17 | 10 | 0.5 | 0.5 | 9 | 46.902308 | 3.349448 |
| 3  | 29 | 1.0 | 1.0 | 5 | 48.062688 | 1.185177 |
| 8  | 24 | 0.8 | 1.0 | 5 | 48.071241 | 0.992439 |
| 16 | 25 | 0.8 | 1.0 | 5 | 48.203451 | 1.016045 |
| 19 | 20 | 0.8 | 0.5 | 6 | 50.212784 | 1.839077 |
| 13 | 19 | 0.5 | 0.5 | 5 | 51.245127 | 0.973437 |
| 15 | 13 | 1.0 | 0.5 | 6 | 51.268143 | 1.866806 |
| 7  | 12 | 0.5 | 1.0 | 4 | 51.662910 | 0.711680 |
| 10 | 29 | 1.0 | 0.8 | 4 | 51.839849 | 0.692287 |
| 18 | 10 | 0.5 | 0.8 | 3 | 56.935565 | 0.925057 |
| 0  | 19 | 1.0 | 0.5 | 3 | 60.290096 | 2.405193 |
| 1  | 11 | 0.8 | 0.5 | 3 | 61.957899 | 1.831614 |

target_predicted = search.predict(data_test)
print(f"Mean absolute error after tuning of the bagging regressor:\n"
      f"{mean_absolute_error(target_test, target_predicted):.2f} k$")

Mean absolute error after tuning of the bagging regressor:
39.43 k$
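As a side note, the best parameter combination can be read directly from the fitted search object via its `best_params_` and `best_score_` attributes, without building the full `cv_results_` table. The sketch below illustrates this; it uses synthetic data from `make_regression` as a self-contained stand-in for the housing dataset (so the numbers will differ from those above), and tunes only ensemble-level parameters so it works whether the nested prefix is `base_estimator__` (before scikit-learn 1.2) or `estimator__` (1.2 and later).

```python
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic regression data standing in for the California housing set.
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Only ensemble-level parameters are tuned here; a decision tree is the
# default base model of BaggingRegressor.
param_distributions = {
    "n_estimators": randint(10, 30),
    "max_samples": [0.5, 0.8, 1.0],
}
search = RandomizedSearchCV(
    BaggingRegressor(random_state=0),
    param_distributions,
    n_iter=5,
    scoring="neg_mean_absolute_error",
    random_state=0,
).fit(X_train, y_train)

print(search.best_params_)  # the sampled combination with the lowest CV error
print(-search.best_score_)  # its mean cross-validated MAE (sign flipped back)
```

Because the scoring is `neg_mean_absolute_error`, `best_score_` is negative; negating it recovers the MAE in the target's units.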


We see that the predictor provided by the bagging regressor does not require much hyperparameter tuning compared to a single decision tree: the default bagging regressor already reaches a test error (36.65 k$) on par with, and here even slightly better than, the tuned one (39.43 k$).
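The comparison against a single decision tree can be sketched directly. The example below is self-contained and uses synthetic `make_regression` data in place of the housing set, so the exact errors differ from the notebook's; the base tree is passed positionally because the keyword was renamed from `base_estimator` to `estimator` in scikit-learn 1.2, and the positional form works with either version.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data standing in for the California housing set.
X, y = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a single, fully grown decision tree (high variance).
tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# Untuned bagged ensemble of the same tree: averaging many bootstrapped
# trees reduces the variance, so the test error drops without any tuning.
bagging = BaggingRegressor(
    DecisionTreeRegressor(random_state=0), n_estimators=20, random_state=0
).fit(X_train, y_train)

mae_tree = mean_absolute_error(y_test, tree.predict(X_test))
mae_bagging = mean_absolute_error(y_test, bagging.predict(X_test))
print(f"single tree MAE: {mae_tree:.1f}")
print(f"bagging MAE:     {mae_bagging:.1f}")
```

Even with default tree settings, the bagged ensemble should come out ahead of the lone tree, which is the point of the conclusion above.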