Chapter 3 - Splitting datasets one feature at a time: decision trees

Tree construction

Decision trees
Pros: Computationally cheap to use, easy for humans to understand learned results, missing values OK, can deal with irrelevant features
Cons: Prone to overfitting
Works with: Numeric values, nominal values

Before we write the function createBranch() in Python, we need to split the dataset. If we split on an attribute and it has four possible values, then we'll split the data four ways and create four separate branches. We'll follow the ID3 algorithm, which tells us how to split the data and when to stop splitting it.
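
Roughly, createBranch() is a recursive procedure; here is a sketch of the logic in pseudocode (comments only, not runnable code), paraphrasing the ID3 outline:

    # check if every item in the dataset is in the same class:
    #     if so, return the class label
    #     else:
    #         find the best feature to split the data
    #         split the dataset
    #         create a branch node
    #         for each split:
    #             call createBranch() and add the result to the branch node
    #     return the branch node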

Information gain

We choose to split our dataset in a way that makes our unorganized data more organized. One way to do this is to measure the information. Using information theory, you can measure the information before and after the split.

The change in the information before and after the split is known as the information gain. We can split the dataset across every feature to see which split gives the highest information gain. The split with the highest information gain is the best option. The measure of information of a set is known as the Shannon entropy, or just entropy.

Entropy is defined as the expected value of the information. If you're classifying something that can take on multiple values, the information for symbol xi is defined as l(xi) = -log2(p(xi)), where p(xi) is the probability of choosing that class.

To calculate the entropy, you take the expected value of the information over all possible values of the class: H = -sum(p(xi) * log2(p(xi))), where the sum runs over all classes.
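For example, a set split evenly between two classes has entropy -(0.5*log2(0.5) + 0.5*log2(0.5)) = 1 bit, while a set containing only one class has entropy 0.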

  • Function calc_shannon_ent()

    from math import log

    def calc_shannon_ent(data_set):
        """Calculate the Shannon entropy of a dataset whose last column is the class label."""
        num_entries = len(data_set)
        label_counts = {}
        # count the number of occurrences of each class label
        for feat_vec in data_set:
            current_label = feat_vec[-1]
            if current_label not in label_counts:
                label_counts[current_label] = 0
            label_counts[current_label] += 1
        shannon_ent = 0.0
        # H = -sum(p(xi) * log2(p(xi))) over all class labels
        for key in label_counts:
            prob = float(label_counts[key]) / num_entries
            shannon_ent -= prob * log(prob, 2)
        return shannon_ent
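
As a quick sanity check, here is the entropy of a small made-up dataset (modeled on the book's fish example; the last column is the class label):

    >>> my_dat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    >>> calc_shannon_ent(my_dat)
    0.9709505944546686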
    

The higher the entropy, the more mixed up the data is. We will split the dataset in the way that gives us the largest information gain.

Another common measure of disorder in a set is the Gini impurity: the probability that an item chosen at random from the set would be misclassified if it were labeled at random according to the class distribution in the set.
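Equivalently, the Gini impurity can be written as G = 1 - sum(p(xi)^2); a two-class set split 50/50 has G = 1 - (0.25 + 0.25) = 0.5, while a pure set has G = 0.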

Splitting the dataset

  • Function split_data_set()

    def split_data_set(data_set, axis, value):
        """Return the subset of data_set whose feature at position axis equals value, with that feature removed."""
        ret_data_set = []
        for feat_vec in data_set:
            if feat_vec[axis] == value:
                # cut out the feature we split on
                reduced_feat_vec = feat_vec[:axis]
                reduced_feat_vec.extend(feat_vec[axis+1:])
                ret_data_set.append(reduced_feat_vec)
        return ret_data_set
    
  • Function choose_best_feature_to_split(data_set)

    def choose_best_feature_to_split(data_set):
        """Return the index of the feature that yields the highest information gain when split on."""
        num_features = len(data_set[0]) - 1      # the last column is the class label
        base_entropy = calc_shannon_ent(data_set)
        best_info_gain = 0.0
        best_feature = -1
        for i in range(num_features):
            # collect the unique values taken by feature i
            feat_list = [example[i] for example in data_set]
            unique_vals = set(feat_list)
            new_entropy = 0.0
            # expected entropy after splitting on feature i
            for value in unique_vals:
                sub_data_set = split_data_set(data_set, i, value)
                prob = len(sub_data_set) / float(len(data_set))
                new_entropy += prob * calc_shannon_ent(sub_data_set)
            info_gain = base_entropy - new_entropy
            if info_gain > best_info_gain:
                best_info_gain = info_gain
                best_feature = i
        return best_feature
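
Using the same made-up dataset as before, we can check that the splitting functions behave as expected (the exact values assume that toy data):

    >>> split_data_set(my_dat, 0, 1)
    [[1, 'yes'], [1, 'yes'], [0, 'no']]
    >>> split_data_set(my_dat, 0, 0)
    [[1, 'no'], [1, 'no']]
    >>> choose_best_feature_to_split(my_dat)
    0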
    

Recursively building the tree

If our dataset has run out of attributes but the class labels are not all the same, we must decide what to call that leaf node. In this situation, we'll take a majority vote.

  • Function majority_cnt()

    import operator

    def majority_cnt(class_list):
        """Return the class label that appears most often in class_list."""
        class_count = {}
        for vote in class_list:
            if vote not in class_count:
                class_count[vote] = 0
            class_count[vote] += 1
        # sort the (label, count) pairs by count, descending, and return the top label
        sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
        return sorted_class_count[0][0]
    
  • Function create_tree()

    def create_tree(data_set, labels):
        class_list = [example[-1] for example in data_set]
        # stop when all classes are equal
        if class_list.count(class_list[0]) == len(class_list):
            return class_list[0]
        # when no more features, return majority
        if len(data_set[0]) == 1:
            return majority_cnt(class_list)
        best_feat = choose_best_feature_to_split(data_set)
        best_feat_label = labels[best_feat]
        my_tree = {best_feat_label:{}}
        # note: this removes the chosen label from the caller's list as well (labels is mutated)
        del(labels[best_feat])
        feat_values = [example[best_feat] for example in data_set]
        unique_vals = set(feat_values)
        for value in unique_vals:
            sub_labels = labels[:]
            my_tree[best_feat_label][value] = create_tree(split_data_set(data_set, best_feat, value), sub_labels)
        return my_tree
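
With the same toy dataset, and passing in a copy of the labels (create_tree() deletes entries from the list it is given), we get a nested-dictionary tree like the following (the label names here are just illustrative):

    >>> labels = ['no surfacing', 'flippers']
    >>> my_tree = create_tree(my_dat, labels[:])
    >>> my_tree
    {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}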
    

Plotting trees in Python with Matplotlib annotations

  • Plot tree nodes with text annotations

    import matplotlib.pyplot as plt
    
    decision_node = dict(boxstyle="sawtooth", fc="0.8")
    leaf_node = dict(boxstyle="round4", fc="0.8")
    arrow_args = dict(arrowstyle="<-")
    
    def plot_node(node_txt, center_pt, parent_pt, node_type):
        # draw node_txt in a box at center_pt, with an arrow pointing from parent_pt
        create_plot.ax1.annotate(node_txt, xy=parent_pt, xycoords='axes fraction',
                                 xytext=center_pt, textcoords='axes fraction',
                                 bbox=node_type, arrowprops=arrow_args)
    
    def create_plot():
        fig = plt.figure(1, facecolor='white')
        fig.clf()
        create_plot.ax1 = plt.subplot(111, frameon=False)
        plot_node('a decision node', (0.5, 0.1), (0.1, 0.5), decision_node)
        plot_node('a leaf node', (0.8, 0.1), (0.3, 0.8), leaf_node)
        plt.show()
    

I need to modify this code to work with Python 3.x, so for now I'll skip this section.

Testing and storing the classifier

Test: using the tree for classification

  • Function classify()

    def classify(input_tree, feat_labels, test_vec):
        # in Python 3, dict.keys() returns a view object, so convert it to a list before indexing
        first_str = list(input_tree.keys())[0]
        second_dict = input_tree[first_str]
        feat_index = feat_labels.index(first_str)
        for key in second_dict.keys():
            if test_vec[feat_index] == key:
                if type(second_dict[key]).__name__ == 'dict':
                    class_label = classify(second_dict[key], feat_labels, test_vec)
                else:
                    class_label = second_dict[key]
        return class_label
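
With the tree and labels from the toy example above, classify() walks the dictionary until it reaches a leaf:

    >>> classify(my_tree, ['no surfacing', 'flippers'], [1, 0])
    'no'
    >>> classify(my_tree, ['no surfacing', 'flippers'], [1, 1])
    'yes'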
    

When classifying the data, note that the 'labels' list is modified by create_tree() (it deletes the feature it splits on), so we need to retrieve the labels again from create_data() or keep a copy before building the tree.

Use: persisting the decision tree

Building the tree can take a long time on a large dataset, and it would be a waste of time to rebuild it every time we want to classify something. We're going to use the Python module pickle to serialize objects; serializing an object allows us to store it for later use.

def store_tree(input_tree, filename):
    import pickle
    fw = open(filename, 'wb')
    pickle.dump(input_tree, fw)
    fw.close()

def grab_tree(filename):
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)
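
For example, storing and reloading the toy tree built above (the filename is just a placeholder):

>>> store_tree(my_tree, 'classifier_storage.txt')
>>> grab_tree('classifier_storage.txt')
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}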

In this way, we can distill the dataset into some knowledge, and we use that knowledge only when we want to classify something.

Example: using decision trees to predict contact lens type

The Lenses dataset contains observations of patients' eye conditions and the type of contact lenses the doctor prescribed. The classes are hard, soft, and no contact lenses. The data comes from the UCI database repository and is modified slightly so that it can be displayed more easily.

To load and classify the data:

>>> fr = open('lenses.txt')
>>> lenses = [inst.strip().split('\t') for inst in fr.readlines()]
>>> lenses_labels = ['age', 'prescript', 'astigmatic', 'tear_rate']
>>> lenses_tree = trees.create_tree(lenses, lenses_labels)

However, our tree matches our data too well. This problem is known as overfitting. To reduce overfitting, we can prune the tree: if a leaf node adds only a little information, it is cut off and merged with another leaf. We will investigate this further when we revisit decision trees in chapter 9.

In chapter 9 we'll also investigate another decision tree algorithm called CART. The algorithm we used in this chapter, ID3, is good but not the best. ID3 can't handle numeric values. We could use continuous values by quantizing them into discrete bins, but ID3 suffers from other problems if we have too many splits.

Summary

There are other decision tree-generating algorithms. The most popular are C4.5 and CART. CART will be addressed in chapter 9 when we use it for regression.
