Meaning of the LogisticRegression parameters in scikit-learn

Link: http://70b86a48.wiz03.com/share/s/1MK6F81-vQ1i2DFlsT0ux-iU2qccii0xCkjZ2Si7Lw1pfOQ3

Original text:

class sklearn.linear_model.LogisticRegression(penalty=’l2’, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver=’liblinear’, max_iter=100, multi_class=’ovr’, verbose=0, warm_start=False, n_jobs=1)

Parameters:

penalty : str, ‘l1’ or ‘l2’, default: ‘l2’

Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties.

New in version 0.19: l1 penalty with SAGA solver (allowing ‘multinomial’ + L1)
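A minimal sketch (not from the original post; synthetic data and parameter values chosen only for illustration) showing that the L1 penalty, which requires the ‘liblinear’ or ‘saga’ solver, drives some coefficients exactly to zero:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data where only a few of the 20 features are informative.
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=3, random_state=0)

# L1 penalty is only supported by the 'liblinear' and 'saga' solvers.
clf = LogisticRegression(penalty='l1', solver='saga', C=0.1, max_iter=5000)
clf.fit(X, y)

# With L1 regularization, many coefficients are exactly zero (sparse model).
n_zero = np.sum(clf.coef_ == 0)
print(n_zero)
```

This sparsity is why L1 is often used for implicit feature selection.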

dual : bool, default: False

Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.

tol : float, default: 1e-4

Tolerance for stopping criteria.

C : float, default: 1.0

Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
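To make the “inverse of regularization strength” point concrete, a quick sketch on synthetic data (the particular C values are arbitrary): smaller C means stronger shrinkage, so the coefficient norm grows as C grows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Smaller C = stronger regularization = smaller coefficients.
norms = {}
for C in (0.01, 1.0, 100.0):
    clf = LogisticRegression(C=C, solver='lbfgs', max_iter=1000).fit(X, y)
    norms[C] = np.linalg.norm(clf.coef_)

print(norms)
```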

fit_intercept : bool, default: True

Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.

intercept_scaling : float, default 1.

Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight.

Note! The synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.

class_weight : dict or ‘balanced’, default: None

Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).

Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

New in version 0.17: class_weight=’balanced’
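The n_samples / (n_classes * np.bincount(y)) formula can be checked by hand against scikit-learn’s compute_class_weight helper; the imbalanced label array below is made up for illustration:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced labels: 8 samples of class 0, 2 of class 1.
y = np.array([0] * 8 + [1] * 2)

# The 'balanced' heuristic: n_samples / (n_classes * np.bincount(y))
manual = len(y) / (2 * np.bincount(y))
print(manual)  # minority class gets the larger weight

# sklearn's helper computes the same values.
auto = compute_class_weight('balanced', classes=np.array([0, 1]), y=y)
print(auto)
```

The rare class (frequency 2/10) is up-weighted, which counteracts the majority class dominating the loss.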

random_state : int, RandomState instance or None, optional, default: None

The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when solver == ‘sag’ or ‘liblinear’.

solver : {‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default: ‘liblinear’

Algorithm to use in the optimization problem.

For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty, whereas ‘liblinear’ and ‘saga’ handle L1 penalty.

Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.

New in version 0.17: Stochastic Average Gradient descent solver.

New in version 0.19: SAGA solver.
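Since ‘sag’/‘saga’ convergence depends on features sharing a common scale, the usual pattern is to standardize first. A sketch (synthetic data; the pipeline composition is one reasonable way to do this, not the only one):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Standardizing puts every feature on the same scale, which is what
# gives 'sag'/'saga' their fast-convergence guarantee.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(solver='saga', max_iter=1000))
model.fit(X, y)
acc = model.score(X, y)
print(acc)
```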

max_iter : int, default: 100

Useful only for the newton-cg, sag and lbfgs solvers. Maximum number of iterations taken for the solvers to converge.

multi_class : str, {‘ovr’, ‘multinomial’}, default: ‘ovr’

Multiclass option can be either ‘ovr’ or ‘multinomial’. If the option chosen is ‘ovr’, then a binary problem is fit for each label. Otherwise the loss minimised is the multinomial loss fit across the entire probability distribution. The ‘multinomial’ option is unavailable with the liblinear solver.

New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case.
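In the version documented here the default is multi_class=’ovr’, so ‘multinomial’ had to be requested explicitly; in recent scikit-learn releases the multinomial loss is used automatically for multiclass data and the multi_class argument is deprecated. A sketch of the multinomial behaviour on a 3-class dataset (using the current defaults): the model produces one joint probability distribution over all classes per sample.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 3 classes

# With the multinomial loss, predict_proba returns one probability
# distribution over all 3 classes for each sample.
clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X[:5])
print(proba.sum(axis=1))  # each row sums to 1
```

Under ‘ovr’, by contrast, three independent binary problems are fit and the per-class scores are only normalized after the fact.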

verbose : int, default: 0

For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.

warm_start : bool, default: False

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver.

New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.

n_jobs : int, default: 1

Number of CPU cores used when parallelizing over classes if multi_class=’ovr’. This parameter is ignored when the solver is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. If given a value of -1, all cores are used.
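Pulling several of the documented parameters together, an end-to-end sketch (synthetic data; the specific values simply mirror the defaults discussed above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# penalty, C, fit_intercept, class_weight, solver, max_iter and
# random_state are all parameters described in this post.
clf = LogisticRegression(penalty='l2', C=1.0, fit_intercept=True,
                         class_weight='balanced', solver='liblinear',
                         max_iter=100, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(acc)
```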
