Linear Models
3.0 Generative vs. Discriminative Classifiers
Basic goal: learn a mapping $f: X \to Y$, where $X$ denotes the input features and $Y$ the class label.
-
Bayes optimal classifier:
What is the most probable classification of the new instance given the training data?
Generative classifier: models the joint distribution $P(X, Y)$, typically through $P(X \mid Y)$ and $P(Y)$
e.g., Naive Bayes
LDA:
Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality-reduction technique in the pre-processing step for pattern-classification and machine-learning applications. The goal is to project a dataset onto a lower-dimensional space with good class separability, in order to avoid overfitting ("curse of dimensionality") and also to reduce computational costs. As a classifier, LDA is generative: it models each class-conditional density $P(X \mid Y = k)$ as a Gaussian with a shared covariance matrix, which yields linear decision boundaries.
QDA:
Quadratic Discriminant Analysis (QDA) is the closely related generative classifier in which each class is allowed its own covariance matrix, so the resulting decision boundaries are quadratic rather than linear.
Non-par: nonparametric statistics
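As a concrete illustration, here is a minimal sketch (assuming scikit-learn is available; the two-Gaussian data are invented for the demo) fitting both discriminant analyses as classifiers:

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)
# Two Gaussian classes: LDA assumes a shared covariance, QDA fits one per class.
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
X1 = rng.multivariate_normal([2, 2], [[1.5, -0.4], [-0.4, 0.5]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)     # linear decision boundary
qda = QuadraticDiscriminantAnalysis().fit(X, y)  # quadratic decision boundary
print("LDA training accuracy:", lda.score(X, y))
print("QDA training accuracy:", qda.score(X, y))
```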
-
Explanation:
A generative classifier tries to learn the model that generates the data behind the scenes by estimating the assumptions and distributions of the model. It then uses this to predict unseen data, because it assumes the learned model captures the real one. As we will see, this is often not true. An example is the Naive Bayes classifier.
-
Steps:
- Assume some functional form for $P(Y)$ and $P(X \mid Y)$
- Estimate the parameters of $P(X \mid Y)$ and $P(Y)$ directly from the training data
- Apply Bayes' formula to calculate $P(Y \mid X) = \dfrac{P(X \mid Y)\,P(Y)}{P(X)}$
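A minimal hand-rolled Gaussian Naive Bayes sketch of these steps (illustrative only: it assumes each feature is conditionally Gaussian and independent given the class; the data and names are invented):

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Steps 1-2: assume P(X|Y) is Gaussian per feature and estimate its
    parameters, plus the class prior P(Y), from the training data."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, x):
    """Step 3: apply Bayes' rule, P(Y|X) proportional to P(X|Y) P(Y), in log domain."""
    scores = {}
    for c, (mu, var, prior) in params.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + log_lik
    return max(scores, key=scores.get)

X = np.array([[1.0, 2.0], [1.2, 1.9], [3.0, 4.1], [3.2, 3.8]])
y = np.array([0, 0, 1, 1])
print(predict(fit_gaussian_nb(X, y), np.array([3.1, 4.0])))  # -> 1
```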
-
Strengths:
- Models the input patterns, i.e., $P(X \mid Y)$ and $P(Y)$
- Usually converges quickly
- Cheap computation
- Robust to noisy data
- Can generate a sample of the data via $P(X) = \sum_{y} P(X \mid y)\,P(y)$
Weakness:
- Usually performs worse, since the assumed model rarely captures the true data-generating process
-
Discriminative classifier
-
Explanation:
A discriminative classifier tries to model $P(Y \mid X)$ directly from the observed data. It makes fewer assumptions about the distributions, but depends heavily on the quality of the data (Is it representative? Is there enough of it?). An example is logistic regression.
-
Steps:
- Assume some functional form for $P(Y \mid X)$
- Estimate the parameters of $P(Y \mid X)$ directly from the training data, i.e., directly learn $P(Y \mid X)$ without modeling $P(X)$ or $P(X \mid Y)$
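A matching sketch of the discriminative route: logistic regression fit by gradient descent, with the assumed functional form $P(Y=1 \mid X) = \sigma(\mathbf{w}^\top\mathbf{x} + b)$ (the learning rate, iteration count, and data are arbitrary illustrative choices):

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, n_iter=2000):
    """Directly learn P(Y=1|X) = sigmoid(w.x + b); no model of P(X) is built."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(Y=1|X)
        w -= lr * X.T @ (p - y) / len(y)        # gradient of the log loss
        b -= lr * np.mean(p - y)
    return w, b

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logreg(X, y)
print(1.0 / (1.0 + np.exp(-(X @ w + b))))  # increasing P(Y=1|X) along x
```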
-
Strengths:
- Models $P(Y \mid X)$ directly
- Models the decision boundary between classes
- Usually good predictive performance
-
Weakness:
- Cannot obtain a sample of the data, since $P(X)$ is not available
- Slow convergence
- Expensive computation
- Sensitive to noisy data
-
Generative classifier vs. Discriminative classifier
- In Bayes' theorem, we have the formula below, which connects the two approaches:
$$P(Y \mid X) = \frac{P(X \mid Y)\,P(Y)}{P(X)}$$
-
Concrete Example:
Given that a software engineer lives in Silicon Valley ($X$), what is the probability that he makes an over-100K salary, i.e., $P(Y \mid X)$?
-
Generative classifier:
To continue with our example, a generative classifier first estimates from the data the probability that a software engineer lives in Silicon Valley ($X$) given that he makes over 100K ($Y$), i.e., $P(X \mid Y)$.
Then it estimates how many software engineers make over 100K, regardless of whether they live in Silicon Valley, i.e., $P(Y)$. Bayes' theorem (together with $P(X)$) then converts these estimates into the desired $P(Y \mid X)$.
-
Discriminative classifier:
As with our example, a discriminative classifier estimates the probability that a software engineer makes an over-100K salary given that he lives in Silicon Valley, i.e., it estimates $P(Y \mid X)$ directly.
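To make the arithmetic concrete (the numbers below are purely hypothetical, chosen only to illustrate Bayes' rule): suppose the generative estimates were $P(X \mid Y) = 0.4$, $P(Y) = 0.3$, and $P(X) = 0.2$. Then

$$P(Y \mid X) = \frac{P(X \mid Y)\,P(Y)}{P(X)} = \frac{0.4 \times 0.3}{0.2} = 0.6,$$

which is the quantity a discriminative classifier would have estimated in one step.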
-
3.1 The Basic Form of a Linear Model
- Given an example described by $d$ attributes, $\mathbf{x} = (x_1; x_2; \dots; x_d)$, a linear model tries to predict by building a function that is a linear combination of the attributes:
$$f(\mathbf{x}) = w_1 x_1 + w_2 x_2 + \cdots + w_d x_d + b = \mathbf{w}^\top \mathbf{x} + b$$
-
Here the $x_j$ may be:
- Raw features (predictor variables): continuous or categorical
- Transformed predictors: e.g., $\log x$, $\sqrt{x}$
- Basis expansions: e.g., $x_2 = x_1^2$, $x_3 = x_1^3$
- Interactions: e.g., $x_3 = x_1 \cdot x_2$
-
Characteristics of linear models:
- Simple and easy to model; many powerful nonlinear models are obtained from linear models by introducing hierarchical structures or high-dimensional mappings
- They reflect the importance of each attribute in the prediction: when the coefficient estimates have equal standard errors, their magnitudes and signs indicate the size and direction of each attribute's influence on the prediction
3.2 Linear Regression
- Suppose we have observations $\{(x_i, y_i)\}_{i=1}^{n}$, and we want to fit a linear model on $x$ to predict $y$.
3.2.1 Without an Intercept
- The model is $y = wx$. Suppose $\hat{w}$ is the parameter that minimizes the squared loss $\sum_{i=1}^{n}(y_i - w x_i)^2$; setting the derivative with respect to $w$ to zero gives
$$\hat{w} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}$$
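A quick numerical check of this closed form (a sketch; the data are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

w_hat = np.sum(x * y) / np.sum(x * x)                       # closed form, no intercept
w_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]  # library solution
print(w_hat, w_lstsq)  # the two agree
```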
3.2.2 With an Intercept
The model is $y = wx + b$.
Suppose $(\hat{w}, \hat{b})$ is the pair that minimizes the squared loss $\sum_{i=1}^{n}(y_i - w x_i - b)^2$.
Setting the partial derivatives to zero, we therefore have
$$\hat{w} = \frac{\sum_{i=1}^{n} y_i (x_i - \bar{x})}{\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2}, \qquad \hat{b} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{w} x_i) = \bar{y} - \hat{w}\bar{x}$$
Note that $\hat{w}$ can be rewritten as $\dfrac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i}(x_i - \bar{x})^2}$, i.e., the sample covariance of $x$ and $y$ divided by the sample variance of $x$.
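The same sanity check applies here (arbitrary data; `np.polyfit` with degree 1 fits the same model):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.8, 3.1, 4.9, 6.2, 6.8])
xb, yb = x.mean(), y.mean()

w_hat = np.sum((x - xb) * (y - yb)) / np.sum((x - xb) ** 2)  # Cov(x, y) / Var(x)
b_hat = yb - w_hat * xb
print(w_hat, b_hat)
print(np.polyfit(x, y, 1))  # same (slope, intercept)
```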
3.2.3 Absorbing the Intercept into the Coefficient Vector (Multivariate Linear Regression)
-
We try to learn the mapping $f(\mathbf{x}_i) = \mathbf{w}^\top \mathbf{x}_i + b$ such that $f(\mathbf{x}_i) \simeq y_i$.
Absorb $\mathbf{w}$ and $b$ into the vector form $\hat{\mathbf{w}} = (\mathbf{w}; b)$, i.e., treat the intercept as a factor that is always equal to 1.
Represent the dataset $D$ as the following matrix:
$$\mathbf{X} = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1d} & 1 \\ x_{21} & x_{22} & \cdots & x_{2d} & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{md} & 1 \end{pmatrix} \in \mathbb{R}^{m \times (d+1)}$$
Rewrite the labels as $\mathbf{y} = (y_1; y_2; \dots; y_m)$.
The optimal parameter is
$$\hat{\mathbf{w}}^{*} = (\mathbf{X}^\top \mathbf{X})^{-1}\mathbf{X}^\top \mathbf{y}$$
when $\mathbf{X}^\top \mathbf{X}$ is full rank (positive definite).
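A minimal NumPy sketch of this absorbed-intercept solution (random synthetic data; in practice `np.linalg.lstsq` is preferred over explicitly inverting $\mathbf{X}^\top\mathbf{X}$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 3
X_raw = rng.normal(size=(m, d))
y = X_raw @ np.array([1.0, -2.0, 0.5]) + 3.0 + 0.1 * rng.normal(size=m)

X = np.hstack([X_raw, np.ones((m, 1))])    # append the constant-1 intercept column
w_hat = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
print(w_hat)  # approximately [1.0, -2.0, 0.5, 3.0]; the last entry is b
```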
-
hat matrix
Let $\mathbf{H} = \mathbf{X}(\mathbf{X}^\top \mathbf{X})^{-1}\mathbf{X}^\top$, so that $\hat{\mathbf{y}} = \mathbf{H}\mathbf{y}$.
Then $\hat{\mathbf{y}}$ is exactly the projection of $\mathbf{y}$ onto the $(d+1)$-dimensional subspace spanned by the columns of $\mathbf{X}$ (i.e., the column space of $\mathbf{X}$).
The projection matrix $\mathbf{H}$ has the following properties:
- Symmetric: $\mathbf{H}^\top = \mathbf{H}$
- Idempotent: $\mathbf{H}^2 = \mathbf{H}$
- $\mathbf{H}\mathbf{v} = \mathbf{v}$ for all $\mathbf{v}$ in the column space of $\mathbf{X}$, and $\mathbf{H}\mathbf{v} = \mathbf{0}$ for all $\mathbf{v}$ orthogonal to it
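These properties are easy to verify numerically (a sketch with a random design matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(20, 3)), np.ones((20, 1))])
H = X @ np.linalg.inv(X.T @ X) @ X.T   # the hat matrix

print(np.allclose(H, H.T))             # symmetric
print(np.allclose(H @ H, H))           # idempotent
v = X @ rng.normal(size=4)             # a vector inside the column space of X
print(np.allclose(H @ v, v))           # H acts as the identity on col(X)
```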
3.2.4 Orthogonalizing the Factors in a Multi-Factor (Multivariate) Regression Model
-
Motivation for orthogonalization
For the univariate regression model $\mathbf{y} = \mathbf{x}\beta + \boldsymbol{\epsilon}$, the solution for the coefficient $\beta$ is:
$$\hat{\beta} = \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle}$$
Comparing this with the multivariate solution $\hat{\boldsymbol{\beta}} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$, we see that if all explanatory variables (the column vectors of $\mathbf{X}$) are pairwise orthogonal, then $\mathbf{X}^\top\mathbf{X}$ is diagonal and each component of the vector $\hat{\boldsymbol{\beta}}$ is exactly
$$\hat{\beta}_j = \frac{\langle \mathbf{x}_j, \mathbf{y} \rangle}{\langle \mathbf{x}_j, \mathbf{x}_j \rangle}$$
This shows that when all explanatory variables are mutually orthogonal, the factors have no influence whatsoever on one another's parameter estimates.
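A quick numerical illustration of this decoupling (a sketch; the orthogonal design is built via a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(30, 3)))  # columns of Q are orthonormal
y = rng.normal(size=30)

beta_multi = np.linalg.solve(Q.T @ Q, Q.T @ y)  # full multivariate solution
beta_uni = np.array([Q[:, j] @ y / (Q[:, j] @ Q[:, j]) for j in range(3)])
print(np.allclose(beta_multi, beta_uni))  # True: the estimates decouple
```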
-
Geometric meaning of regression
Substituting the expression for $\hat{\boldsymbol{\beta}}$ into the regression model gives the expression for the residual $\hat{\boldsymbol{\epsilon}} = \mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}$; computing the inner product of $\hat{\boldsymbol{\epsilon}}$ and $\mathbf{X}$, we get
$$\mathbf{X}^\top\hat{\boldsymbol{\epsilon}} = \mathbf{X}^\top\mathbf{y} - \mathbf{X}^\top\mathbf{X}(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y} = \mathbf{0}$$
which shows that the residual is orthogonal to the explanatory variables $\mathbf{X}$.
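This orthogonality can be checked directly (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(25, 4))
y = rng.normal(size=25)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
print(np.allclose(X.T @ resid, 0, atol=1e-10))  # residual is orthogonal to every column of X
```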
(figure omitted) For univariate regression $\mathbf{y} = \mathbf{x}\beta + \boldsymbol{\epsilon}$, regression in fact orthogonally projects $\mathbf{y}$ onto $\mathbf{x}$, making the norm of the residual $\hat{\boldsymbol{\epsilon}}$ as small as possible.
(figure omitted) For bivariate regression $\mathbf{y} = \mathbf{x}_1\beta_1 + \mathbf{x}_2\beta_2 + \boldsymbol{\epsilon}$, suppose $\mathbf{x}_1$ and $\mathbf{x}_2$ are orthogonal. The geometric picture is that $\mathbf{y}$ is orthogonally projected onto the plane spanned by $\mathbf{x}_1$ and $\mathbf{x}_2$, after which the projection $\hat{\mathbf{y}}$ can in turn be projected onto $\mathbf{x}_1$ and onto $\mathbf{x}_2$. Since $\mathbf{x}_1$ and $\mathbf{x}_2$ are orthogonal, $\hat{\mathbf{y}}$ is exactly the sum of these two projections, and $\hat{\beta}_1$ and $\hat{\beta}_2$ are exactly the coefficients of the projections of $\mathbf{y}$ onto the normalized $\mathbf{x}_1$ and $\mathbf{x}_2$.
When $\mathbf{x}_1$ and $\mathbf{x}_2$ are not orthogonal, $\hat{\mathbf{y}}$ is no longer the sum of the two separate projections, and the explanatory variables have different effects on each other's regression coefficients.
-
Solving multivariate regression via orthogonalization
Gram-Schmidt orthogonalization: successively regress each $\mathbf{x}_j$ on the already-orthogonalized vectors $\mathbf{z}_1, \dots, \mathbf{z}_{j-1}$ and keep the residual as $\mathbf{z}_j$:
$$\mathbf{z}_j = \mathbf{x}_j - \sum_{k=1}^{j-1} \frac{\langle \mathbf{z}_k, \mathbf{x}_j \rangle}{\langle \mathbf{z}_k, \mathbf{z}_k \rangle}\,\mathbf{z}_k$$
Since the $\mathbf{z}_j$ are mutually orthogonal, the corresponding regression coefficients are
$$\hat{\gamma}_j = \frac{\langle \mathbf{z}_j, \mathbf{y} \rangle}{\langle \mathbf{z}_j, \mathbf{z}_j \rangle}$$
Under an i.i.d. noise assumption, i.e., for the model $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^2\mathbf{I})$, it can be shown that the variance of a regression coefficient is inversely proportional to the squared norm of its orthogonalized factor:
$$\mathrm{Var}(\hat{\beta}_j) = \frac{\sigma^2}{\|\mathbf{z}_j\|^2}$$
Drygas: back-solving for the regression coefficients of the non-orthogonalized factors. Writing $\mathbf{X} = \mathbf{Z}\boldsymbol{\Gamma}$ with $\boldsymbol{\Gamma}$ upper triangular, the coefficients of the original (un-orthogonalized) factors are recovered by back substitution: $\hat{\boldsymbol{\beta}} = \boldsymbol{\Gamma}^{-1}\hat{\boldsymbol{\gamma}}$.
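A sketch of regression by successive orthogonalization with the back-substitution step (function and variable names are mine; the result is verified against the library least-squares solution):

```python
import numpy as np

def successive_orthogonalization(X, y):
    """Least squares via Gram-Schmidt: X = Z @ Gamma, then beta = Gamma^{-1} gamma."""
    n, p = X.shape
    Z = np.zeros((n, p))
    Gamma = np.eye(p)                       # upper triangular, unit diagonal
    for j in range(p):
        Z[:, j] = X[:, j]
        for k in range(j):
            Gamma[k, j] = Z[:, k] @ X[:, j] / (Z[:, k] @ Z[:, k])
            Z[:, j] -= Gamma[k, j] * Z[:, k]
    gamma = np.array([Z[:, j] @ y / (Z[:, j] @ Z[:, j]) for j in range(p)])
    return np.linalg.solve(Gamma, gamma)    # back-substitute for beta

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)
print(np.allclose(successive_orthogonalization(X, y),
                  np.linalg.lstsq(X, y, rcond=None)[0]))  # True
```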
3.3 Variants of Linear Regression
- Log-linear regression: $\ln y = \mathbf{w}^\top\mathbf{x} + b$, which corresponds to an output label that varies on an exponential scale
- Generalized linear model: $y = g^{-1}(\mathbf{w}^\top\mathbf{x} + b)$, where $g(\cdot)$ is called the link function and must be monotonic and differentiable
- Logistic (log-odds) regression: if we view $y$ as the possibility of $\mathbf{x}$ being a positive example and $1 - y$ as the possibility of it being a negative example, their ratio $\frac{y}{1-y}$ is called the odds, and $\ln\frac{y}{1-y}$ is called the log odds (logit); logistic regression models the logit as a linear function: $\ln\frac{y}{1-y} = \mathbf{w}^\top\mathbf{x} + b$
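A tiny sketch of the odds/logit relationship (the probability value is an arbitrary example):

```python
import numpy as np

y = 0.8                          # P(positive | x), an arbitrary example value
odds = y / (1 - y)               # 4.0
logit = np.log(odds)             # log odds, the quantity modeled linearly
print(odds, logit)
print(1 / (1 + np.exp(-logit)))  # inverting the logit recovers y = 0.8
```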
References
- https://zhuanlan.zhihu.com/p/41993542
- Drygas, H. (2011). On the relationship between the method of least squares and Gram-Schmidt orthogonalization. Acta et Commentationes Universitatis Tartuensis de Mathematica, 15(1), 3–13.
- Hastie, T., Tibshirani, R., & Friedman, J. (2016). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer.