Overfitting and Regularization

  • What should we do if our model is too complicated?
    • Fundamental causes of overfitting:
      an overly complicated model (usually meaning its variance is too high); limited training data/labels
    • increase the training data size
    • avoid over-training on your dataset
      • filter out features:
        feature reduction
        principal component analysis (PCA)
      • regularization:
        ridge regression (L2 penalty)
        least absolute shrinkage and selection operator (LASSO, L1 penalty); likewise L2-/L1-regularized logistic regression
Model Error
  • Error of regression models:
    Error = bias^2 + variance + irreducible error
  • Bias measures how far off, in general, the model's predictions are from the correct value.
    Variance measures how much the predictions for a given point vary between different realizations of the model.
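The decomposition can be checked numerically. The sketch below uses a deliberately biased estimator (0.8 times the sample mean of Gaussian data, an illustrative choice, not from the notes) so that both bias and variance contribute to the error:

```python
# Monte-Carlo check of MSE = bias^2 + variance for a biased
# shrinkage estimator (0.8 * sample mean; numbers are illustrative).
import random
import statistics

random.seed(0)
theta, n, trials = 2.0, 10, 50_000

estimates = []
for _ in range(trials):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    estimates.append(0.8 * statistics.fmean(sample))  # shrink toward 0

mse = statistics.fmean((e - theta) ** 2 for e in estimates)
bias = statistics.fmean(estimates) - theta
variance = statistics.pvariance(estimates)

# Analytically: bias = -0.4, variance = 0.64 / 10 = 0.064,
# so the MSE should be close to 0.16 + 0.064 = 0.224.
print(round(mse, 3), round(bias ** 2 + variance, 3))
```

The first assertion below is an exact algebraic identity; the second checks agreement with the analytic value up to Monte-Carlo noise.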


    (figure: the bias-variance tradeoff)
Ridge Regression (L2 penalty); LASSO (L1 penalty)
  • choose model -> calculate loss function -> numerical optimization
  • ridge regression modifies the loss function of linear regression by adding a penalty term that shrinks the coefficients, trading a little bias for lower variance
  • L2 penalty: \sum_{i=1}^n (y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij})^2 + \lambda \sum_{j=1}^p \beta_j^2
    equivalent to a constrained optimization problem (the penalty enters via a Lagrange multiplier)
  • Hyperparameter Optimization
    \lambda \uparrow: variance \downarrow, bias \uparrow
    its value is usually chosen empirically, e.g. by cross validation
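The shrinkage effect is easiest to see in the one-feature, no-intercept case, where ridge has the closed form beta(lambda) = sum(x*y) / (sum(x^2) + lambda). The toy data below is made up for illustration:

```python
# Closed-form ridge estimate for a single-feature, no-intercept model.
# Larger lambda shrinks beta toward zero (more bias, less variance).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x

def ridge_beta(x, y, lam):
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

for lam in (0.0, 1.0, 10.0, 100.0):
    print(lam, round(ridge_beta(x, y, lam), 4))
```

With lambda = 0 this is ordinary least squares; as lambda grows, the estimate monotonically shrinks toward zero.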
L1 vs. L2 Penalties (regression and classification)
  • L1 penalty in the loss function. Linear: \sum_{i=1}^n (y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij})^2 + \lambda \sum_{j=1}^p |\beta_j|; Logistic: minimize \sum_{i=1}^n [-y_i \log(h_\beta(x_i)) - (1 - y_i)\log(1 - h_\beta(x_i))] + \lambda \|\beta\|_1
  • L2 penalty in the loss function. Linear: \sum_{i=1}^n (y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij})^2 + \lambda \sum_{j=1}^p \beta_j^2; Logistic: minimize \sum_{i=1}^n [-y_i \log(h_\beta(x_i)) - (1 - y_i)\log(1 - h_\beta(x_i))] + \lambda \|\beta\|_2^2
  • Norm1 = |x1| + |x2|
    Norm2 = sqrt(x1^2 + x2^2)
    Norm3 = (|x1|^3 + |x2|^3)^(1/3)
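These are the p = 1, 2, 3 cases of the general p-norm ||v||_p = (sum |v_i|^p)^(1/p); a minimal sketch:

```python
# p-norm of a vector: ||v||_p = (sum of |component|^p)^(1/p)
def p_norm(v, p):
    return sum(abs(c) ** p for c in v) ** (1.0 / p)

v = (3.0, -4.0)
print(p_norm(v, 1))  # |3| + |-4| = 7.0
print(p_norm(v, 2))  # sqrt(9 + 16) = 5.0
print(p_norm(v, 3))  # cube root of 27 + 64 = cube root of 91
```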

Cross Validation

  • what it is:
    a way to assess how your model's results will generalize to another independent dataset
  • K-fold Cross Validation
  • Split the dataset into three parts: a training set, a validation set, and a test set. Train a model on the training set, try (say) 3 different lambdas to get three candidate models, compute the 3 errors on the validation set, and pick the lambda with the smallest error. Then use this lambda and the combined training and validation data to fit the final model. This method can be biased because the training set and validation set may have different distributions, so instead we split the data into k folds, run the validation step once for each choice of held-out fold, and average the errors.
  • model selection with cross validation:
    use cross validation to do hyperparameter tuning
    cross validation can only validate your model selection; it does not by itself produce the final model
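The k-fold procedure above can be sketched in a few lines. The data, the candidate lambdas, and the closed-form one-feature ridge fit below are all illustrative assumptions, not from the notes:

```python
# K-fold cross-validation to pick the ridge penalty lambda,
# using a one-feature, no-intercept closed-form ridge fit.
def ridge_fit(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def kfold_mse(xs, ys, lam, k=5):
    n = len(xs)
    fold_errors = []
    for f in range(k):
        val_idx = set(range(f, n, k))  # every k-th point forms the validation fold
        train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i not in val_idx]
        val = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i in val_idx]
        beta = ridge_fit([x for x, _ in train], [y for _, y in train], lam)
        fold_errors.append(sum((y - beta * x) ** 2 for x, y in val) / len(val))
    return sum(fold_errors) / k  # average validation error over folds

xs = [0.5 * i for i in range(1, 21)]
ys = [2.0 * x + ((-1) ** i) * 0.3 for i, x in enumerate(xs)]  # y ~ 2x + noise
candidates = (0.0, 0.1, 1.0, 10.0)
best_lam = min(candidates, key=lambda lam: kfold_mse(xs, ys, lam))
print(best_lam)
```

Because the toy data is strongly linear, the heavily shrunk model (lambda = 10) has clearly higher validation error than the best candidate.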

Confusion Matrix

                    actual positive    actual negative
predict positive    true positive      false positive
predict negative    false negative     true negative
  • in "true positive":
    "true" means your prediction was correct
    "positive" means what you predicted
  • Different metrics for model evaluation:
    precision = tp / (tp + fp)  (e.g. what spam filtering prioritizes)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / all
    cyber security: high recall is necessary; improve precision as much as possible
    if the data is really imbalanced (a huge difference between the numbers of negatives and positives), look at precision and recall rather than accuracy, because the negatives can be so numerous that accuracy is high by default
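The metrics above can be computed directly from predicted and actual labels; the toy labels below are illustrative:

```python
# Precision, recall, and accuracy from a confusion matrix
# (1 = positive, 0 = negative; toy labels for illustration).
actual    = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))

precision = tp / (tp + fp)          # 2 / 3
recall    = tp / (tp + fn)          # 2 / 3
accuracy  = (tp + tn) / len(actual) # 8 / 10
print(precision, recall, accuracy)

# Imbalance caveat: always predicting negative would already score
# accuracy 7/10 on this data, while recall would drop to 0 -- hence
# the advice to check precision/recall, not just accuracy.
```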

Result Evaluation Metric - ROC curve

  • receiver operating characteristic curve


    image.png
  • false positive rate = number of false positives / number of real negatives
    true positive rate = number of true positives / number of real positives
  • with the same model and the same data, different thresholds yield different (FPR, TPR) pairs
    the more the classifier's curve bows toward the top-left (i.e. the larger the area between it and the random-chance diagonal), the better; the area under the curve lies in [0.5, 1] for a useful classifier, and a larger area means better classification
  • special points in ROC space
    best case: (0, 1)
    worst case: (1, 0)
  • Area under the ROC Curve (AUC)
    AUC value: [0, 1]
    The larger the value, the better the classification performance of your classifier.
    ROC AUC is the probability that a randomly chosen positive example is ranked more highly than a randomly chosen negative example (compared to the ground truth).
    • example: items a-e with ground-truth labels 0, 0, 1, 1, 0
    • a ranking c, a, b, d, e reads off the labels 1, 0, 0, 1, 0; 4 of the 6 positive-negative pairs are ordered correctly, so AUC = 4/6 ≈ 0.67
    • a ranking d, c, a, b, e reads off 1, 1, 0, 0, 0, so AUC = 1
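The ranking interpretation of AUC can be computed directly by counting positive-negative pairs; the function and scores below are a sketch (ties counted as half a correct pair), with made-up scores that reproduce the ranking c, a, b, d, e:

```python
# AUC as the fraction of positive-negative pairs in which the
# positive example receives the higher score.
def pairwise_auc(labels, scores):
    pairs = correct = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                correct += 1
            elif si == sj:
                correct += 0.5   # ties count half
    return correct / pairs

# Items a..e, true labels 0,0,1,1,0; scores rank them c, a, b, d, e.
labels = [0, 0, 1, 1, 0]
scores = [0.7, 0.6, 0.9, 0.4, 0.2]   # a, b, c, d, e
print(pairwise_auc(labels, scores))  # 4 of the 6 pairs ordered correctly
```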