Ch02. Homogeneous Parallel Ensembles: Bagging and Random Forests

This chapter covers

  • Training homogeneous parallel ensembles
  • Implementing and understanding how Bagging works
  • Implementing and understanding how Random Forest works
  • Training variants with pasting, random subspaces, random patches and ExtraTrees
  • Using bagging and random forests in practice

To recap, an ensemble method relies on the notion of “wisdom of the crowd”: the combined answer of many diverse models is often better than any one individual answer.

Parallel ensemble methods, as the name suggests, train each component base estimator independently of the others, which means that they can be trained in parallel. As we will see, parallel ensemble methods can be further distinguished as homogeneous and heterogeneous parallel ensembles depending on the kind of learning algorithms they use.

The class of homogeneous parallel ensemble methods includes two popular machine-learning methods, one or both of which you might have come across and even used before: bagging and random forest.

Recall that the two key components of an ensemble method are ensemble diversity and model aggregation. Since homogeneous ensemble methods use the same learning algorithm on the same data set, how can they generate a set of diverse base estimators? They do this through random sampling of the training examples (as bagging does), of the features (as some variants of bagging do), or of both (as random forest does).

2.1 Parallel Ensembles

Figure 2.1 Dr. Randy Forrest’s diagnostic process is an analogy for a parallel ensemble method.

Recall Dr. Randy Forrest, our ensemble diagnostician from Chapter 1. Every time Dr. Forrest gets a new case, he solicits the opinions of all his residents. He then decides the final diagnosis from among those proposed by his residents (Figure 2.1, top). Dr. Forrest’s diagnostic technique is successful for two reasons.

  1. He has assembled a diverse set of residents, with different medical specializations, which means they think differently about each case. This works out well for Dr. Forrest, as it puts several different perspectives on the table for him to consider.
  2. He aggregates the independent opinions of his residents into one final diagnosis. Here, he is democratic and selects the majority opinion. However, he can also aggregate his residents’ opinions in other ways. For instance, he can weight the opinions of his more experienced residents more highly. This reflects the fact that he trusts some residents more than others because, owing to factors such as experience or skill, they are right more often than the other residents on the team.

Dr. Forrest and his residents are a parallel ensemble (Figure 2.1, bottom). Each resident in our example above is a component base estimator (or base learner) that we have to train. Base estimators can be trained using different base algorithms (leading to heterogeneous ensembles) or the same base algorithm (leading to homogeneous ensembles).

2.2 Bagging: Bootstrap Aggregating

Bagging, short for bootstrap aggregating, was introduced by Leo Breiman in 1996. The name refers to how bagging achieves ensemble diversity (through bootstrap sampling) and performs ensemble prediction (through model aggregation).

Bagging uses the same base machine-learning algorithm to train every base estimator. So how can we get multiple base estimators from a single data set and a single learning algorithm, let alone ensure that they are diverse? By training the base estimators on replicates of the data set.

Figure 2.2 Bagging, illustrated. Bagging uses bootstrap sampling to generate similar but not exactly identical subsets (observe the replicates above) from a single data set.

Bagging consists of two steps, as illustrated in Figure 2.2:

  1. during training, bootstrap sampling, or sampling with replacement, is used to generate replicates of the training data set that are different from each other but drawn from the original data set; this ensures that base learners trained on each of the replicates are also different from each other;
  2. during prediction, model aggregation is used to combine the predictions of the individual base learners into one ensemble prediction. For classification tasks, the final ensemble prediction is determined by majority voting; for regression tasks, by averaging.

2.2.1 Intuition: Resampling and Model Aggregation

Bootstrap Sampling: Sampling with Replacement

Figure 2.3 Bootstrap sampling illustrated on a data set of 6 examples.

When sampling with replacement, some objects that were already sampled have a chance to be sampled a second time (or even a third, or fourth, and so on) because they were replaced. In fact, some objects may be sampled many times, while some objects may never be sampled.

Thus, bootstrap sampling naturally partitions a data set into two sets: a bootstrap sample (with training examples that were sampled at least once) and an out-of-bag (oob) sample (with training examples that were never sampled even once).

The Out-of-Bag Sample

The oob sample is effectively a held-out set and can be used to evaluate the ensemble without the need for a separate validation set or even a cross-validation procedure.

The error estimate computed using out-of-bag instances is called the out-of-bag error or the oob score.

import numpy as np

np.random.seed(42)
# Bootstrap sample: draw 50 indices with replacement from {0, 1, ..., 49}
bag = np.random.choice(range(0, 50), size=50, replace=True)
np.sort(bag)
array([ 1,  1,  2,  2,  3,  6,  7,  8,  8, 10, 10, 11, 13, 14, 14, 15, 17,
       18, 20, 20, 20, 21, 21, 22, 23, 23, 23, 24, 24, 25, 26, 27, 28, 29,
       32, 35, 36, 37, 38, 38, 38, 39, 41, 42, 43, 43, 43, 46, 48, 49])
# Out-of-bag (oob) sample: the indices that were never drawn
oob = np.setdiff1d(range(0, 50), bag)
oob
array([ 0,  4,  5,  9, 12, 16, 19, 30, 31, 33, 34, 40, 44, 45, 47])

To summarize: after one round of bootstrap sampling, we get one bootstrap sample (for training a base estimator) and a corresponding oob sample (to evaluate that base estimator).
When we repeat this step many times, we will have trained several base estimators and will also have estimated their individual generalization performances through individual oob errors. The averaged oob error is a good estimate of the performance of the overall ensemble.
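In practice, we rarely need to compute oob errors by hand. For instance, scikit-learn’s BaggingClassifier reports this estimate when we pass oob_score=True. The sketch below illustrates the idea on a synthetic data set (make_moons is used here purely as a convenient stand-in; any classification data set would do):

from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# A small synthetic data set, purely for illustration
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)

# Bag 100 shallow decision trees; oob_score=True asks scikit-learn to
# evaluate each tree on its own out-of-bag examples
# (older scikit-learn versions use base_estimator instead of estimator)
ensemble = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=5),
                             n_estimators=100, oob_score=True, random_state=42)
ensemble.fit(X, y)

# Approximates the ensemble's generalization performance
# without a separate validation set
print(ensemble.oob_score_)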

0.632 Bootstrap
When sampling with replacement, the bootstrap sample will contain roughly 63.2% of the data set, while the oob sample will contain the other 36.8% of the data set.
We can show this by computing the probability that a particular data point is sampled. If our data set has n training examples, the probability of picking one particular data point x on any single draw is 1/n; the probability of not picking x on that draw is 1 − (1/n).
A bootstrap sample of size n is made up of n such independent draws, so the probability that x is never picked at all (that is, that x ends up in the oob sample) is (1 − 1/n)^n ≈ e^(−1) ≈ 0.368 (for sufficiently large n).
Thus, each oob sample will contain (approximately) 36.8% of the training examples, and the corresponding bootstrap sample will contain (approximately) the remaining 63.2% of the instances.
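We can confirm this numerically with a quick simulation (the sample size of 100,000 below is an arbitrary choice):

import numpy as np

np.random.seed(42)
n = 100000
# Draw a bootstrap sample of size n from a data set of n examples
bag = np.random.choice(n, size=n, replace=True)

# Fraction of distinct examples that made it into the bootstrap sample
frac_in_bag = len(np.unique(bag)) / n
print(frac_in_bag)      # roughly 0.632
print(1 - frac_in_bag)  # roughly 0.368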

Model Aggregation

For classification tasks, majority voting is used to aggregate predictions of individual base learners. The majority vote is also known as the statistical mode. The mode is simply the most frequently occurring element and is a statistic similar to the mean or the median.

We can think of model aggregation as a form of averaging: it smooths out the individual voices in the chorus and produces a single answer that reflects the majority. If we have a set of reasonably robust base estimators, model aggregation will smooth out mistakes made by individual estimators.
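As a concrete illustration, suppose three base estimators predict labels for four test examples; a minimal sketch of majority voting with NumPy looks like this:

import numpy as np

# Each row holds one base estimator's predictions for four test examples
predictions = np.array([[1, 0, 1, 1],
                        [1, 0, 0, 1],
                        [0, 0, 1, 1]])

# Majority vote per column: count label occurrences and keep the most frequent
y_pred = np.apply_along_axis(lambda votes: np.bincount(votes).argmax(),
                             axis=0, arr=predictions)
print(y_pred)  # [1 0 1 1]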

2.2.2 Implementing Bagging

We can implement our own version of bagging easily. Each base estimator in our bagging ensemble is trained independently using the following steps:

  1. Generate a bootstrap sample from the original data set
  2. Fit a base estimator to the bootstrap sample

Independently here means that the training stage of each individual base estimator takes place without consideration of what is going on with the other base estimators.

  • We use decision trees as base estimators;
    • the maximum depth can be set using the parameter max_depth.
  • We will need two other parameters:
    • n_estimators, the ensemble size and
    • max_samples, the size of the bootstrap subset, that is, the number of training examples to sample (with replacement) per estimator.
  • Our naïve implementation trains each base decision tree sequentially.
    • If it takes 10 seconds to train a single decision tree, and we are training an ensemble of 100 trees, it will take our implementation 10 sec. x 100 = 1000 seconds of total training time.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def bagging_fit(X, y, n_estimators, max_depth=5, max_samples=200):
    n_examples = len(y)
    # Create a list of untrained base estimators
    estimators = [
        DecisionTreeClassifier(max_depth=max_depth)
        for _ in range(n_estimators)
    ]

    for tree in estimators:
        # Generate a bootstrap sample
        bag = np.random.choice(n_examples, max_samples, replace=True)
        # Fit a tree to the bootstrap sample
        tree.fit(X[bag, :], y[bag])

    return estimators
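
During prediction, we aggregate the individual trees’ predictions by majority voting, as described earlier. A minimal sketch of such a prediction function (the name bagging_predict is ours, not part of any library) could look like this:

def bagging_predict(X, estimators):
    # Collect each base estimator's predictions: one row per estimator
    all_predictions = np.array([tree.predict(X) for tree in estimators])

    # Majority vote across estimators for each test example
    y_pred = np.apply_along_axis(
        lambda votes: np.bincount(votes.astype(int)).argmax(),
        axis=0, arr=all_predictions)
    return y_pred

Putting the two together, estimators = bagging_fit(X_train, y_train, n_estimators=100) followed by y_pred = bagging_predict(X_test, estimators) trains and applies the ensemble (X_train, y_train, and X_test stand in for whatever training and test splits you have).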