Data and features set the upper bound on what machine learning can achieve; models and algorithms merely approximate that bound.
In the early stages of modeling, building good features is a crucial step. The most striking case I have seen was a multi-class problem where adding a single feature raised accuracy from 78% to 85%.
After data preprocessing we usually construct many candidate features as well, whether from business logic or by brute-force feature crossing (e.g. with sklearn's PolynomialFeatures; see the sketch after this list). In the end we still have to select the meaningful features for training. Broadly speaking, feature selection considers two aspects:
1. Whether the feature diverges: if a feature does not diverge, e.g. its variance is close to 0, the samples are essentially identical on that feature and it is of no use for distinguishing them.
2. Correlation between the feature and the target: this is fairly intuitive; features highly correlated with the target should be preferred. Apart from the variance criterion, every method introduced in this article works from the correlation angle.
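As a quick illustration of both points, here is a minimal sketch (the toy array and the 0.0 threshold are arbitrary choices for demonstration, not from the original): PolynomialFeatures generates crossed features, and VarianceThreshold then drops the non-divergent ones.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import VarianceThreshold

x = np.array([[1.0, 2.0, 5.0],
              [2.0, 3.0, 5.0],
              [3.0, 1.0, 5.0]])   # third column never varies

# Brute-force feature crossing: adds the pairwise products x1*x2, x1*x3, x2*x3
crossed = PolynomialFeatures(degree=2, interaction_only=True,
                             include_bias=False).fit_transform(x)

# Drop features whose variance is zero, i.e. the non-divergent ones
selector = VarianceThreshold(threshold=0.0)
filtered = selector.fit_transform(crossed)
print(crossed.shape, '->', filtered.shape)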
1. L1-based feature selection, ranked by absolute coefficient value
import pandas as pd
from sklearn.svm import LinearSVC

def l1_feature(x, y1, feat_labels):
    # An L1-penalized linear SVM drives the coefficients of uninformative features to zero
    lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(x, y1.astype('int'))
    # For multiclass problems coef_ is (n_classes, n_features); sum across classes
    # to get one value per feature (note: signed coefficients may partially cancel)
    coef_lsvc = lsvc.coef_.T.sum(axis=1)
    l1_result = pd.DataFrame(abs(coef_lsvc), columns=['SCORE'])
    l1_result['RANK'] = l1_result['SCORE'].rank(ascending=False)
    l1_result['NAME'] = feat_labels
    return l1_result
2. Train a random forest and rank by feature importance
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def forest_feature(x, y1, feat_labels):
    x_train, x_test, y_train, y_test = train_test_split(x, y1, test_size=0.3, random_state=0)
    forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
    forest.fit(x_train, y_train.astype('int'))
    # Impurity-based feature importance scores (non-negative, summing to 1)
    importances = pd.DataFrame(forest.feature_importances_, columns=['SCORE'])
    importances['RANK'] = importances['SCORE'].rank(ascending=False)
    importances['NAME'] = feat_labels
    return importances
3. Pearson correlation coefficient (larger is better) / p-value (smaller is better)
from scipy.stats import pearsonr

def pear_feature(x, y1, feat_labels):
    pear_result = []
    P_result = []
    for i in range(x.shape[1]):
        # The original used an unshown helper data_pro_pear(x, i) here;
        # plain column indexing of a NumPy array is assumed instead
        r, p = pearsonr(x[:, i], y1)
        pear_result.append(abs(r))
        P_result.append(p)
    pear_result_1 = pd.DataFrame(pear_result, columns=['SCORE'])
    P_result_1 = pd.DataFrame(P_result, columns=['SCORE'])
    # Larger |r| is better, so rank descending
    pear_result_1['RANK'] = pear_result_1['SCORE'].rank(ascending=False)
    pear_result_1['NAME'] = feat_labels
    # Smaller p-values are more significant, so rank ascending
    P_result_1['RANK'] = P_result_1['SCORE'].rank(ascending=True)
    P_result_1['NAME'] = feat_labels
    return pear_result_1, P_result_1
4. Chi-squared test
from sklearn.feature_selection import SelectKBest, chi2

def chi2_feature(x, y1, feat_labels):
    # chi2 requires non-negative feature values; k='all' keeps a score for every
    # feature (the original used k=2, but only the scores are needed here)
    model1 = SelectKBest(chi2, k='all')
    model1.fit(x, y1.astype('int'))
    chi2_result = pd.DataFrame(model1.scores_, columns=['SCORE'])
    chi2_result['RANK'] = chi2_result['SCORE'].rank(ascending=False)
    chi2_result['NAME'] = feat_labels
    return chi2_result
5. GBDT
from sklearn.ensemble import GradientBoostingRegressor

def gbdt_feature(x, y1, feat_labels):
    # The arguments below are the scikit-learn defaults, spelled out explicitly;
    # loss='squared_error' replaces the 'ls' alias removed in newer versions.
    # For a classification target, GradientBoostingClassifier works the same way.
    gbdt = GradientBoostingRegressor(
        loss='squared_error',
        learning_rate=0.1,
        n_estimators=100,
        subsample=1.0,
        min_samples_split=2,
        min_samples_leaf=1,
        max_depth=3,
        init=None,
        random_state=None,
        max_features=None,
        alpha=0.9,
        verbose=0,
        max_leaf_nodes=None,
        warm_start=False,
    )
    gbdt.fit(x, y1)
    score = gbdt.feature_importances_
    gbdt_result = pd.DataFrame(score, columns=['SCORE'])
    gbdt_result['RANK'] = gbdt_result['SCORE'].rank(ascending=False)
    gbdt_result['NAME'] = feat_labels
    return gbdt_result
Finally, take each method's rank for every feature and combine them into an overall ranking, as sketched below.
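Here is a minimal sketch of that last step, assuming the five functions above are defined. It uses scikit-learn's built-in breast-cancer dataset purely for illustration (its features are all non-negative, which the chi-squared test requires), and averaging the ranks is just one simple aggregation rule among several.
import pandas as pd
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
x, y1, feat_labels = data.data, data.target, list(data.feature_names)

results = {
    'L1': l1_feature(x, y1, feat_labels),
    'RF': forest_feature(x, y1, feat_labels),
    'PEAR': pear_feature(x, y1, feat_labels)[0],  # correlation ranks only
    'CHI2': chi2_feature(x, y1, feat_labels),
    'GBDT': gbdt_feature(x, y1, feat_labels),
}

# Collect each method's RANK column, keyed by feature name
ranks = pd.DataFrame({name: df.set_index('NAME')['RANK'] for name, df in results.items()})
ranks['TOTAL'] = ranks.mean(axis=1)          # average rank across methods
print(ranks.sort_values('TOTAL').head(10))   # most consistently important features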