In this module, we discussed ensemble learners, a type of learner that combines simpler learners together. We saw two strategies:
one based on bootstrap samples, allowing learners to be fit in parallel;
the other, called boosting, which fits learners sequentially.
From these two families, we mainly focused on giving intuitions regarding the internal machinery of random forest and gradient-boosting models, which are state-of-the-art methods.
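As a quick recap, the two families above can be contrasted in a few lines of scikit-learn code: a random forest fits its trees independently on bootstrap samples, while gradient boosting fits its trees one after another. This is only an illustrative sketch; the synthetic dataset and hyperparameter values are assumptions, not taken from the module.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative synthetic dataset (an assumption for this sketch).
X, y = make_classification(n_samples=500, random_state=0)

# Bootstrap-based family: each tree is fit independently on a bootstrap
# sample, so the trees can be trained in parallel.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# Boosting family: trees are fit sequentially, each one correcting the
# errors of the current ensemble.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

for name, model in [("Random forest", forest), ("Gradient boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Note that both models expose the same scikit-learn estimator API, so they can be compared with the exact same cross-validation code.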
## To go further
You can refer to the following scikit-learn examples, which are related to the concepts covered in this module: