Parsing the sklearn API — CART

CART (Classification and Regression Trees) builds a binary tree and can be used for both regression and classification.

Below is the classification tree.


class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
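
Before walking through the parameters, a minimal usage sketch (assuming scikit-learn is installed; the bundled iris dataset is used purely for illustration):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(criterion='gini', random_state=0)  # defaults spelled out
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out split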


Parameters:

1)criterion : string, optional (default=”gini”)

The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.

The function that measures the quality (purity) of a split: either the Gini impurity 'gini' or the information entropy 'entropy'.

The Gini impurity and entropy of a node t are computed as:

                 Gini(t) = 1 - \sum_{i} p_{i}^2

                 Entropy(t) = -\sum_{i} p_{i} \log p_{i}

where p_{i} is the proportion of class-i samples in node t.
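
As a sanity check, both measures can be computed by hand; the snippet below is a sketch assuming numpy, using base-2 logs (the base only rescales the entropy):

import numpy as np

def gini(labels):
    p = np.bincount(labels) / len(labels)   # class proportions p_i in the node
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]                             # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

node = np.array([0, 0, 0, 1, 1, 2])          # a hypothetical node's labels
print(gini(node))     # 1 - (1/2)^2 - (1/3)^2 - (1/6)^2 ≈ 0.611
print(entropy(node))  # ≈ 1.459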

2)splitter : string, optional (default=”best”)

The strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.

The strategy for choosing the split point at each node: 'best' or 'random'.

With 'best', every candidate threshold of the inspected features is evaluated and the best one is taken; with 'random', a threshold is drawn at random for each candidate feature and the best of those random cuts is used, which is cheaper and adds extra randomization.
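
A quick sketch of the effect (get_depth/get_n_leaves need scikit-learn 0.21+):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for s in ('best', 'random'):
    clf = DecisionTreeClassifier(splitter=s, random_state=0).fit(X, y)
    print(s, clf.get_depth(), clf.get_n_leaves())  # 'random' usually grows a different, often deeper tree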

3)max_depth : int or None, optional (default=None)

The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.

Sets the maximum depth of the tree. If None, growth stops only when:

1) every leaf is pure (impurity 0), or

2) every leaf holds fewer than 'min_samples_split' samples.
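
A small sketch of the cap as pre-pruning (tree_.max_depth is available on any version):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
full = DecisionTreeClassifier(random_state=0).fit(X, y)          # max_depth=None
capped = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(full.tree_.max_depth, capped.tree_.max_depth)              # e.g. 5 vs 2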

4)min_samples_split : int, float, optional (default=2)

The minimum number of samples required to split an internal node:

If int, then consider min_samples_split as the minimum number.

If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.

Changed in version 0.18: Added float values for fractions.

The minimum number of samples an internal node must hold to be eligible for splitting; a node with fewer samples is not split further and becomes a leaf. Accepts an int or a float:

If an int d is given, min_samples_split = d;

If a float f is given, min_samples_split = ceil(f * N), where N is the total number of samples.

5)min_samples_leaf : int, float, optional (default=1)

The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

If int, then consider min_samples_leaf as the minimum number.

If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.

Changed in version 0.18: Added float values for fractions.

The minimum number of samples a leaf node must hold.

A split is only considered if both the left and right children it produces keep at least min_samples_leaf samples; a combined sketch follows below.
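
A sketch of how the fractional form resolves and of both constraints together (the numbers are arbitrary):

import math
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)            # n_samples = 150
print(math.ceil(0.1 * len(X)))               # min_samples_split=0.1 resolves to 15

clf = DecisionTreeClassifier(min_samples_split=15,  # a node needs >= 15 samples to split
                             min_samples_leaf=5,    # each child must keep >= 5 samples
                             random_state=0).fit(X, y)
print(clf.tree_.node_count)                  # fewer nodes than the unconstrained tree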

6)min_weight_fraction_leaf : float, optional (default=0.)

The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

The minimum weighted fraction of the total sample weight that a leaf node must hold; without sample_weight, all samples weigh equally.

7)max_features : int, float, string or None, optional (default=None)

The number of features to consider when looking for the best split:

If int, then consider max_features features at each split.

If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.

If “auto”, then max_features=sqrt(n_features).

If “sqrt”, then max_features=sqrt(n_features).

If “log2”, then max_features=log2(n_features).

If None, then max_features=n_features.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.

The maximum number of features considered when searching for a split; the string forms resolve as sketched below.
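
A sketch of how the string form resolves (iris has 4 features):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_features='sqrt', random_state=0).fit(X, y)
print(clf.max_features_)                     # int(sqrt(4)) = 2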

8)random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

The random seed / random number generator.

9)max_leaf_nodes : int or None, optional (default=None)

Grow a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.

The maximum number of leaf nodes. When set, the tree is grown best-first, expanding the nodes with the largest relative impurity reduction; None means no limit.

10)min_impurity_decrease : float, optional (default=0.)

A node will be split if this split induces a decrease of the impurity greater than or equal to this value.

The weighted impurity decrease equation is the following:

N_t / N * (impurity - N_t_R / N_t * right_impurity
                    - N_t_L / N_t * left_impurity)

where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.

N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.

New in version 0.19.

The minimum impurity decrease: a node is split only if the split reduces the weighted impurity by at least this value.

If sample_weight is passed, N, N_t, N_t_R and N_t_L are weighted sums; otherwise they are plain sample counts. A worked instance follows.
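
A worked instance of the formula, with made-up counts (an unweighted node of 40 samples out of N=100, split 30/10):

N, N_t, N_t_L, N_t_R = 100, 40, 30, 10
impurity, left_impurity, right_impurity = 0.5, 0.3, 0.0

decrease = N_t / N * (impurity
                      - N_t_R / N_t * right_impurity
                      - N_t_L / N_t * left_impurity)
print(decrease)   # 0.4 * (0.5 - 0.25*0.0 - 0.75*0.3) = 0.11

The split is performed only if this value is >= min_impurity_decrease.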

11)min_impurity_split : float, (default=1e-7)

Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.

Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19. The default value of min_impurity_split will change from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use min_impurity_decrease instead.

An impurity threshold for early stopping: a node splits only if its impurity exceeds this value, otherwise it becomes a leaf. It serves the same purpose as the previous parameter and is removed in version 0.25.

12)class_weight : dict, list of dicts, “balanced” or None, default=None

Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.

Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}].

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))

For multi-output, the weights of each column of y will be multiplied.

Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

Sets the class weights; accepts a dict, a list of dicts, 'balanced', or None.

With a dict (or a list of dicts for multi-output), class weights are set by hand.

With 'balanced', the model uses n_samples / (n_classes * np.bincount(y)) to adjust each class weight inversely to its frequency: a class with few samples gets a large per-sample weight and a frequent class a small one, so every class ends up with the same total weight. This is useful when the class distribution is imbalanced; a sketch of the computation follows.

With None, all classes get weight one.
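
A sketch of what 'balanced' computes, on a deliberately skewed toy label vector:

import numpy as np

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])     # 8 vs 2 samples
n_samples, n_classes = len(y), len(np.unique(y))
print(n_samples / (n_classes * np.bincount(y)))  # [0.625 2.5]: the rare class weighs more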

13)presort : bool, optional (default=False)

Whether to presort the data to speed up the finding of best splits in fitting. For the default settings of a decision tree on large datasets, setting this to true may slow down the training process. When using either a smaller dataset or a restricted depth, this may speed up the training.

Whether to presort the samples.

That is, before each node searches for its best split point, the data is pre-sorted to speed up the search. Since this adds an extra step, it can speed up training on small datasets (or with a restricted depth) but tends to slow it down on large ones.


Attributes:

1)classes_ : array of shape = [n_classes] or a list of such arrays

The classes labels (single output problem), or a list of arrays of class labels (multi-output problem).

The list of class labels.

2)feature_importances_ : array of shape = [n_features]

Return the feature importances.

The feature importance scores; a reading sketch follows.
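
A sketch of reading them off a fitted tree (iris again, for illustration):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)
for name, imp in zip(data.feature_names, clf.feature_importances_):
    print(name, round(imp, 3))   # importances sum to 1.0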

3)max_features_ : int,

The inferred value of max_features.

The inferred (resolved) value of max_features.

4)n_classes_ : int or list

The number of classes (for single output problems), or a list containing the number of classes for each output (for multi-output problems).

5)n_features_ : int

The number of features when fit is performed.

The number of input features seen during fit.

(Not to be confused with max_features_: n_features_ is the width of the training X, while max_features_ is the resolved limit on features inspected per split.)

6)n_outputs_ : int

The number of outputs when fit is performed.

7)tree_ : Tree object

The underlying Tree object. Please refer to help(sklearn.tree._tree.Tree) for attributes of Tree object and Understanding the decision tree structure for basic usage of these attributes.

The underlying tree object (sklearn.tree._tree.Tree); a sketch of its low-level arrays follows.
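
A sketch of the arrays the Tree object exposes (see the "Understanding the decision tree structure" example in the docs):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
t = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y).tree_
print(t.node_count)             # total number of nodes
print(t.children_left)          # left child per node (-1 marks a leaf)
print(t.feature)                # split feature index per node (-2 for leaves)
print(t.threshold)              # split threshold per node
print(t.impurity)               # impurity at each node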
