Hyperparameter tuning#

In the previous section, we did not discuss the parameters of random forest and gradient-boosting. However, there are a couple of things to keep in mind when setting these.

This notebook gives crucial information regarding how to set the hyperparameters of both random forest and gradient boosting decision tree models.

Caution

For the sake of clarity, no cross-validation will be used to estimate the variability of the testing error. We are only showing the effect of the parameters on the validation set of what should be the inner loop of a nested cross-validation.

We will start by loading the California housing dataset.

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0
)

Random forest#

The main parameter to select for random forest is n_estimators. In general, the more trees in the forest, the better the generalization performance. However, more trees slow down the fitting and prediction time. The goal is to balance computing time and generalization performance when setting the number of estimators. Here, we fix n_estimators=100, which is already the default value.

Caution

Tuning n_estimators for random forests generally results in a waste of computing power. We just need to ensure that it is large enough that doubling its value does not lead to a significant improvement of the validation error.
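
As a rough check (a minimal sketch assuming the data_train/target_train split defined above; the values and resulting scores are purely illustrative), we can hold out a validation set and verify that doubling the number of trees barely changes the validation error:

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hold out a validation set from the training data for this quick check.
data_subtrain, data_valid, target_subtrain, target_valid = train_test_split(
    data_train, target_train, random_state=0
)
for n_estimators in (100, 200):
    forest = RandomForestRegressor(
        n_estimators=n_estimators, n_jobs=2, random_state=0
    )
    forest.fit(data_subtrain, target_subtrain)
    error = mean_absolute_error(target_valid, forest.predict(data_valid))
    print(f"n_estimators={n_estimators}: validation MAE = {error:.2f} k$")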

Instead, we can tune the hyperparameter max_features, which controls the size of the random subset of features to consider when looking for the best split when growing the trees: smaller values of max_features lead to more random trees with hopefully more uncorrelated prediction errors. However, if max_features is too small, predictions can be too random, even after averaging over the trees in the ensemble.

If max_features is set to None, then this is equivalent to setting max_features=n_features which means that the only source of randomness in the random forest is the bagging procedure.

print(f"In this case, n_features={len(data.columns)}")
In this case, n_features=8
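
As a quick check (a minimal sketch; the forest below is an arbitrary illustration), we can confirm that with max_features=None each tree in the forest considers all 8 features at every split:

from sklearn.ensemble import RandomForestRegressor

# With max_features=None, the inferred max_features of each fitted tree
# equals n_features, so feature subsampling adds no extra randomness.
forest = RandomForestRegressor(n_estimators=5, max_features=None, random_state=0)
forest.fit(data_train, target_train)
print(forest.estimators_[0].max_features_)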

We can also tune the different parameters that control the depth of each tree in the forest. Two parameters are important for this: max_depth and max_leaf_nodes. They differ in the way they control the tree structure. Indeed, max_depth enforces a more symmetric tree, while max_leaf_nodes does not impose such a constraint. If max_leaf_nodes=None then the number of leaf nodes is unlimited.

The hyperparameter min_samples_leaf controls the minimum number of samples required to be at a leaf node. This means that a split point (at any depth) is only considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. A small value for min_samples_leaf means that some samples can become isolated when a tree is deep, promoting overfitting. A large value would prevent deep trees, which can lead to underfitting.

Be aware that with random forest, trees are expected to be deep since we are seeking to overfit each tree on its bootstrap sample. This overfitting is mitigated when combining the trees in the ensemble, whereas assembling underfitted trees (i.e. shallow trees) might lead to an underfitted forest.
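
To illustrate this (a small sketch with arbitrary hyperparameter values, using the training data defined above), we can compare the training score of a forest of heavily constrained trees with that of a forest of fully grown trees; the constrained forest cannot fit the training data closely and therefore risks underfitting:

from sklearn.ensemble import RandomForestRegressor

# Heavily constrained, shallow trees versus unconstrained, fully grown trees
# (the constraint values below are arbitrary, for illustration only).
shallow_forest = RandomForestRegressor(
    max_leaf_nodes=10, min_samples_leaf=100, n_jobs=2, random_state=0
)
deep_forest = RandomForestRegressor(n_jobs=2, random_state=0)
for name, forest in [("shallow trees", shallow_forest), ("deep trees", deep_forest)]:
    forest.fit(data_train, target_train)
    print(f"{name}: training R^2 = {forest.score(data_train, target_train):.3f}")

We now tune these hyperparameters jointly with a randomized search: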

import pandas as pd
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestRegressor

param_distributions = {
    "max_features": [1, 2, 3, 5, None],
    "max_leaf_nodes": [10, 100, 1000, None],
    "min_samples_leaf": [1, 2, 5, 10, 20, 50, 100],
}
search_cv = RandomizedSearchCV(
    RandomForestRegressor(n_jobs=2),
    param_distributions=param_distributions,
    scoring="neg_mean_absolute_error",
    n_iter=10,
    random_state=0,
    n_jobs=2,
)
search_cv.fit(data_train, target_train)

columns = [f"param_{name}" for name in param_distributions.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search_cv.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
|    | param_max_features | param_max_leaf_nodes | param_min_samples_leaf | mean_test_error | std_test_error |
|----|--------------------|----------------------|------------------------|-----------------|----------------|
| 3  | 2                  | None                 | 2                      | 33.994711       | 0.538897       |
| 0  | 2                  | 1000                 | 10                     | 36.828591       | 0.432680       |
| 7  | None               | None                 | 20                     | 37.321178       | 0.441737       |
| 4  | 5                  | 100                  | 2                      | 40.041925       | 0.647538       |
| 8  | None               | 100                  | 10                     | 40.364790       | 0.743403       |
| 6  | None               | 1000                 | 50                     | 40.744152       | 0.443388       |
| 9  | 1                  | 100                  | 2                      | 49.719213       | 0.670644       |
| 2  | 1                  | 100                  | 1                      | 50.106131       | 0.638342       |
| 5  | 1                  | None                 | 100                    | 54.807478       | 0.910029       |
| 1  | 3                  | 10                   | 10                     | 54.932381       | 0.846035       |

We can observe in our search that a large value of max_leaf_nodes, and thus deep trees, is required. This parameter seems particularly impactful compared with the other tuning parameters, while large values of min_samples_leaf seem to reduce the performance of the model.

In practice, more iterations of random search would be necessary to precisely assess the role of each parameter. Using n_iter=10 is good enough to quickly inspect hyperparameter combinations that yield models that work well enough, without spending too many computational resources. Feel free to try more iterations on your own.

Once the RandomizedSearchCV has found the best set of hyperparameters, it uses them to refit the model using the full training set. To estimate the generalization performance of the best model it suffices to call .score on the unseen data.

error = -search_cv.score(data_test, target_test)
print(
    f"On average, our random forest regressor makes an error of {error:.2f} k$"
)
On average, our random forest regressor makes an error of 33.97 k$
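
We can also inspect which hyperparameter combination was retained for this refit, together with its cross-validated score (the exact values depend on the search above):

# Hyperparameters retained by the search and used for the final refit.
print(f"Best hyperparameters: {search_cv.best_params_}")
print(f"Best cross-validated MAE: {-search_cv.best_score_:.2f} k$")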

Gradient-boosting decision trees#

For gradient-boosting, the parameters are coupled, so we cannot set them one after the other anymore. The important parameters are n_estimators, learning_rate, and max_depth or max_leaf_nodes (as previously discussed for random forest).

Let’s first discuss the max_depth (or max_leaf_nodes) parameter. We saw in the section on gradient-boosting that the algorithm fits the errors of the previous tree in the ensemble. Thus, fitting fully grown trees would be detrimental. Indeed, the first tree of the ensemble would perfectly fit (overfit) the data and thus no subsequent tree would be required, since there would be no residuals left. Therefore, the trees used in gradient-boosting should have a low depth, typically between 3 and 8 levels, or few leaves (\(2^3=8\) to \(2^8=256\)). Having very weak learners at each step helps reduce overfitting.
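
To make the argument concrete, here is a minimal sketch (using the training data defined above) that fits a single fully grown decision tree and checks how little residual error would be left for subsequent trees to correct:

from sklearn.tree import DecisionTreeRegressor

# A fully grown tree essentially memorizes the training set: the residuals it
# leaves behind are close to zero, so boosting would have nothing left to fit.
fully_grown_tree = DecisionTreeRegressor(random_state=0)
fully_grown_tree.fit(data_train, target_train)
residuals = target_train - fully_grown_tree.predict(data_train)
print(f"Mean absolute residual on the training set: {residuals.abs().mean():.4f} k$")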

With this consideration in mind, the deeper the trees, the faster the residuals are corrected and the fewer learners are required. Therefore, n_estimators should be increased if max_depth is lower.

Finally, we have overlooked the impact of the learning_rate parameter until now. When fitting the residuals, we would like the tree to correct all possible errors or only a fraction of them. The learning_rate allows us to control this behaviour. A small learning_rate means that each tree only corrects a small fraction of the residual error at each step, while a large learning_rate (e.g., 1) fits the full residuals of all samples at each step. So, with a very low learning_rate, we need more estimators to correct the overall error. However, a too large learning_rate tends to produce an overfitted ensemble, similar to having a too large tree depth.
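
As a small illustration before the full search (the split and the hyperparameter values below are arbitrary, untuned choices), we can compare a few learning rates with a fixed, small number of estimators and observe the validation error:

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Illustrative split and hyperparameter values, not tuned.
data_subtrain, data_valid, target_subtrain, target_valid = train_test_split(
    data_train, target_train, random_state=0
)
for learning_rate in (0.01, 0.1, 1.0):
    gbdt = GradientBoostingRegressor(
        n_estimators=50, max_depth=3, learning_rate=learning_rate, random_state=0
    )
    gbdt.fit(data_subtrain, target_subtrain)
    error = mean_absolute_error(target_valid, gbdt.predict(data_valid))
    print(f"learning_rate={learning_rate}: validation MAE = {error:.2f} k$")

We now search jointly over n_estimators, max_leaf_nodes and learning_rate with a randomized search: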

from scipy.stats import loguniform
from sklearn.ensemble import GradientBoostingRegressor

param_distributions = {
    "n_estimators": [1, 2, 5, 10, 20, 50, 100, 200, 500],
    "max_leaf_nodes": [2, 5, 10, 20, 50, 100],
    "learning_rate": loguniform(0.01, 1),
}
search_cv = RandomizedSearchCV(
    GradientBoostingRegressor(),
    param_distributions=param_distributions,
    scoring="neg_mean_absolute_error",
    n_iter=20,
    random_state=0,
    n_jobs=2,
)
search_cv.fit(data_train, target_train)

columns = [f"param_{name}" for name in param_distributions.keys()]
columns += ["mean_test_error", "std_test_error"]
cv_results = pd.DataFrame(search_cv.cv_results_)
cv_results["mean_test_error"] = -cv_results["mean_test_score"]
cv_results["std_test_error"] = cv_results["std_test_score"]
cv_results[columns].sort_values(by="mean_test_error")
|    | param_n_estimators | param_max_leaf_nodes | param_learning_rate | mean_test_error | std_test_error |
|----|--------------------|----------------------|---------------------|-----------------|----------------|
| 1  | 200                | 20                   | 0.160519            | 33.905197       | 0.433738       |
| 12 | 200                | 50                   | 0.110585            | 34.793012       | 0.292925       |
| 17 | 500                | 5                    | 0.771785            | 34.822201       | 0.517495       |
| 10 | 200                | 20                   | 0.109889            | 35.004927       | 0.376778       |
| 6  | 500                | 100                  | 0.709894            | 35.465560       | 0.306254       |
| 18 | 10                 | 5                    | 0.637819            | 42.535730       | 0.338066       |
| 3  | 500                | 2                    | 0.07502             | 43.457866       | 0.704599       |
| 4  | 100                | 5                    | 0.0351              | 46.558900       | 0.578629       |
| 19 | 5                  | 20                   | 0.202432            | 61.387176       | 0.610988       |
| 8  | 5                  | 2                    | 0.462636            | 65.114017       | 0.846987       |
| 9  | 10                 | 5                    | 0.088556            | 66.243538       | 0.720131       |
| 15 | 50                 | 100                  | 0.010904            | 71.847050       | 0.683326       |
| 5  | 2                  | 2                    | 0.421054            | 74.384704       | 0.791104       |
| 2  | 5                  | 100                  | 0.070357            | 77.007841       | 0.789595       |
| 16 | 2                  | 50                   | 0.167568            | 77.131005       | 0.850380       |
| 11 | 1                  | 5                    | 0.190477            | 82.819015       | 0.976351       |
| 13 | 5                  | 20                   | 0.033815            | 83.765509       | 0.974672       |
| 0  | 1                  | 100                  | 0.125207            | 85.363288       | 1.040982       |
| 14 | 1                  | 10                   | 0.081715            | 87.373374       | 1.071555       |
| 7  | 1                  | 20                   | 0.014937            | 90.531295       | 1.113892       |

Caution

Here, we tune n_estimators, but be aware that it is better to use early stopping instead, as done in Exercise M6.04.
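
As a rough sketch of what such early stopping looks like with GradientBoostingRegressor (the parameter values here are arbitrary), setting n_iter_no_change makes the model hold out a fraction of the training data internally and stop adding trees once the validation score stops improving:

from sklearn.ensemble import GradientBoostingRegressor

# n_estimators only acts as an upper bound once early stopping is enabled.
gbdt_early_stopping = GradientBoostingRegressor(
    n_estimators=1_000,
    n_iter_no_change=5,
    validation_fraction=0.1,
    random_state=0,
)
gbdt_early_stopping.fit(data_train, target_train)
print(f"Number of trees actually fitted: {gbdt_early_stopping.n_estimators_}")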

In this search, we see that the learning_rate is required to be large enough, i.e. > 0.1. We also observe that for the best-ranked models, a smaller learning_rate requires more trees or a larger number of leaves per tree. However, it is particularly difficult to draw more detailed conclusions since the best value of one hyperparameter depends on the other hyperparameter values.

Now we estimate the generalization performance of the best model using the test set.

error = -search_cv.score(data_test, target_test)
print(f"On average, our GBDT regressor makes an error of {error:.2f} k$")
On average, our GBDT regressor makes an error of 33.00 k$

The mean test score on the held-out test set is slightly better than the score of the best model. The reason is that the final model is refitted on the whole training set and therefore, on more data than the cross-validated models of the random search procedure.