When doing machine learning, people often take a quick look at the correlations in their data; after all, in pandas it is a one-liner:
dataframe.corr()
But have you ever stopped to ask why we do this? And does every kind of model need it?
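For concreteness, a minimal sketch of what that one-liner gives you on a made-up frame (the column names here are purely illustrative):

```python
import numpy as np
import pandas as pd

# A toy frame; "age"/"income"/"spend" are made-up columns for illustration.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(40, 10, 500),
    "income": rng.normal(50_000, 15_000, 500),
})
df["spend"] = 0.3 * df["income"] + rng.normal(0, 2_000, 500)

# Pairwise Pearson correlation of all numeric columns.
print(df.corr())

# Rank correlation picks up monotone but non-linear relationships too.
print(df.corr(method="spearman"))
```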
Why do it?
Correlated features don't, in general, improve models (although it depends on the specifics of the problem, such as the number of variables and the degree of correlation), but they affect different models in different ways and to varying extents:
For linear models (e.g., linear regression or logistic regression), multicollinearity can yield solutions that vary wildly and may be numerically unstable (see the sketch after this list).
Random forests can be good at detecting interactions between different features, but highly correlated features can mask these interactions.
More generally, this can be viewed as a special case of Occam's razor. A simpler model is preferable, and, in some sense, a model with fewer features is simpler. The concept of minimum description length makes this more precise.
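A small sketch of the first point, on synthetic data: when two features are nearly identical, OLS cannot identify their individual coefficients, so they swing wildly from run to run even though their sum stays stable.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_once(seed):
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=1e-3, size=200)  # near-duplicate of x1
    y = 3 * x1 + rng.normal(scale=0.1, size=200)
    X = np.column_stack([x1, x2])
    return LinearRegression().fit(X, y).coef_

# The two coefficients jump around (often with opposite signs) across
# seeds, while coef_[0] + coef_[1] stays close to the true value 3.
for seed in range(3):
    print(fit_once(seed))
```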
Do tree-based ensembles need it?
No. In xgboost, for example, two identical features are equally good split candidates, so when the gini index is computed each gets a 50%/50% chance of being chosen; the predictions are unaffected, and only the reported feature importance gets split between the copies.
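A rough illustration of that splitting effect, using scikit-learn's RandomForestClassifier as a stand-in (xgboost's importances show the same dilution); the data is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4,
                           n_informative=3, n_redundant=0, random_state=0)

# Importances before duplicating anything.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(rf.feature_importances_)

# Append an exact copy of feature 0: its importance gets split roughly
# in half between the two copies, while accuracy is essentially unchanged.
X_dup = np.column_stack([X, X[:, 0]])
rf_dup = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_dup, y)
print(rf_dup.feature_importances_)
```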
Handling missing values, by contrast, does have a large impact on the model.
So should you do it or not?
In the linked discussion, someone mentions that correlations can be used to recover missing values. Intuitively this should work, though only experiments can confirm it, and the correlation must not be too strong: if it were very strong, the feature would be nearly redundant and of little help to your model anyway. (Which makes me wonder: why not just treat the missing values as a prediction target and predict them from the other variables? See the sketch below.)
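That "predict the missing column from the other variables" idea is, in fact, what scikit-learn's IterativeImputer does: it regresses each incomplete column on the rest. A sketch on synthetic data:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
a = rng.normal(size=300)
b = 2 * a + rng.normal(scale=0.5, size=300)  # b is correlated with a
X = np.column_stack([a, b])

# Knock out 20% of column b.
mask = rng.random(300) < 0.2
X_missing = X.copy()
X_missing[mask, 1] = np.nan

# Each column with NaNs is modelled as a regression target on the others.
imputed = IterativeImputer(random_state=0).fit_transform(X_missing)
print(np.abs(imputed[mask, 1] - X[mask, 1]).mean())  # mean absolute error
```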
In fact, whether you use correlation (a linear relationship, e.g., the Pearson product-moment coefficient) or some fancier interpolation method, you are in essence assuming a particular kind of relationship between the variables.
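A tiny example of that assumption biting: Pearson correlation is blind to a dependence that is perfect but not linear.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x ** 2  # y is fully determined by x, but not linearly

# Close to 0, despite perfect dependence: Pearson correlation only
# measures the linear component of a relationship.
print(np.corrcoef(x, y)[0, 1])
```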