This post describes implementing the prediction with LR, Random Forest, KNN, SVM, and a neural network, and compares the methods.
KNN
sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
Parameters to choose:
- n_neighbors (i.e. K)
- weights ('distance' can be used to down-weight far neighbors when K is large)
- algorithm (only affects prediction time, not accuracy)
- p (p=2 is Euclidean distance, p=1 is Manhattan distance; see the sketch below)
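As a quick check of what p changes, a minimal NumPy sketch (the two vectors are made up purely for illustration):

import numpy as np
a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
print(np.sum(np.abs(a - b)))          # p=1, Manhattan: |3| + |4| = 7
print(np.sqrt(np.sum((a - b) ** 2)))  # p=2, Euclidean: sqrt(3^2 + 4^2) = 5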
Open question:
- Why not binarize the images first?
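If one wanted to try that, a minimal sketch of thresholding the pixels (the threshold 128 is an arbitrary choice, not something from the original code):

import pandas as pd
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
X_train_bin = (X_train > 128).astype(int)  # pixels above the (arbitrary) threshold become 1, the rest 0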
Full code:
KNN_choosePara.py
# -*- coding: UTF-8 -*-
import pandas as pd
import numpy as np
import time
from sklearn.cross_validation import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
#read data
print "reading data"
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
#for fast evaluation
X_train_small = X_train[:10000, :]
y_train_small = y_train[:10000]
X_test = pd.read_csv("../test.csv").values
#knn
#----------------- small-scale run to choose the best parameters -----------------#
#begin time
start = time.clock()
#progressing
print "selecting best paramater range"
knn_clf=KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree', weights='distance', p=3)
score = cross_val_score(knn_clf, X_train_small, y_train_small, cv=3)
print( score.mean() )
#end time
elapsed = (time.clock() - start)
print("Time used:",int(elapsed), "s")
#k=3
#0.942300738697
#0.946100822903 weights='distance'
#0.950799888775 p=3
#k=5
#0.939899237556
#0.94259888029
#k=7
#0.935395994386
#0.938997377902
#k=9
#0.933897851978
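The comment block above records the mean CV scores from re-running the script by hand with different settings; a small loop in the same spirit (a hypothetical addition, reusing the objects defined above) would automate the sweep over k:

for k in [3, 5, 7, 9]:
    clf = KNeighborsClassifier(n_neighbors=k, algorithm='kd_tree', weights='distance', p=3)
    print(k, cross_val_score(clf, X_train_small, y_train_small, cv=3).mean())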
KNN_predict.py:
# -*- coding: UTF-8 -*-
from KNN.KNN_choosePara import *
clf=KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree', weights='distance', p=3)
start=time.clock()
#read data
print "reading data"
dataset = pd.read_csv("train.csv")
clf.fit(X_train,y_train) #train the classifier on the full training set
elapsed = (time.clock() - start)
print("Training Time used:",int(elapsed/60) , "min")
print "predicting"
result=clf.predict(X_test)
result=np.c_[range(1,len(result)+1),result.astype(int)] #cast to int and prepend a 1-based ImageID column
df_result=pd.DataFrame(result,columns=['ImageID','Label'])
df_result.to_csv('./results.knn.csv',index=False)
#end time
elapsed = (time.clock() - start)
print("Test Time used:",int(elapsed/60) , "min")
# choosing parameters
# 0.947298365455 score
# ('Time used:', 983, 's')
# reading data
# ('Training Time used:', 0, 'min')
# predicting
# ('Test Time used:', 244, 'min')
# 0.97214 final score
LR
sklearn.model_selection.GridSearchCV
(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1,
iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs',
error_score='raise', return_train_score=True)
Parameters to choose:
- estimator : lr_clf
- param_grid : parameter grid, given as a dict
- n_jobs : number of jobs to run in parallel (see the minimal sketch below)
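A minimal sketch of the call pattern, assuming the X_train_small / y_train_small arrays from KNN_choosePara.py; best_params_ and best_score_ are standard GridSearchCV attributes that the full code below does not print:

from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer scikit-learn

lr_clf = LogisticRegression(solver='newton-cg', multi_class='ovr', max_iter=100)
parameters = {'C': [2e-2, 4e-2, 8e-2, 12e-2, 2e-1]}  # parameter grid as a dict
gs_clf = GridSearchCV(lr_clf, parameters, n_jobs=1, verbose=True)
gs_clf.fit(X_train_small.astype('float') / 256, y_train_small)
print(gs_clf.best_params_)  # best parameter combination found
print(gs_clf.best_score_)   # its mean cross-validated score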
Full code:
LR_choosePara:
# -*- coding: UTF-8 -*-
from KNN.KNN_choosePara import *
import pandas as pd
import numpy as np
import time
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
start=time.clock()
lr_clf=LogisticRegression(solver='newton-cg',multi_class='ovr',max_iter=100,C=1)
# use GridSearchCV to search for the best region of parameter space
parameters={'penalty':['l2'],'C':[2e-2, 4e-2, 8e-2, 12e-2, 2e-1]} #parameter grid as a dict
gs_clf=GridSearchCV(lr_clf,parameters,n_jobs=1,verbose=True) #estimator, fit_params, n_jobs,
gs_clf.fit(X_train_small.astype('float')/256,y_train_small)
#print the grid-search results
print()
for params, mean_score, scores in gs_clf.grid_scores_:
print "%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params)
print()
elapsed=(time.clock()-start)
print "time used:",elapsed
SVM
The difference between SVC and NuSVC:
NuSVC is nu-support vector classification. It is similar to SVC, but uses the parameter nu to control the number of support vectors.
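nu is a lower bound on the fraction of training samples that become support vectors (and an upper bound on the fraction of margin errors), so a larger nu generally means more support vectors. A minimal sketch on sklearn's small digits dataset, used here only because it is quick (gamma=0.02 is an arbitrary choice):

from sklearn.datasets import load_digits
from sklearn.svm import NuSVC

digits = load_digits()
X, y = digits.data / 16.0, digits.target  # digits pixels range 0..16, scale to [0, 1]
for nu in [0.05, 0.2, 0.5]:
    clf = NuSVC(nu=nu, kernel='rbf', gamma=0.02)
    clf.fit(X, y)
    print(nu, len(clf.support_))  # the number of support vectors tends to grow with nu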
SVM_choosePara.py
# -*- coding: UTF-8 -*-
from KNN.KNN_choosePara import *
import pandas as pd
import numpy as np
import time
from sklearn.svm import SVC, NuSVC
from sklearn.grid_search import GridSearchCV
start=time.clock()
#classificator
svm_clf=NuSVC(nu=0.1,kernel='rbf',gamma=0.1,verbose=True) #nu must be a float, not the string '0.1'
#choose the parameters
parameters=[{'nu':[0.05,0.02],'gamma':[3e-2, 2e-2, 1e-2]}]
gs_clf=GridSearchCV(svm_clf,parameters,n_jobs=1,verbose=True)
gs_clf.fit(X_train_small.astype('float')/256,y_train_small)
print()
for params,mean_score, scores in gs_clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params))
print()
elapsed=time.clock()-start
print("Time used:",elapsed)
SVM_predict.py
# -*- coding: UTF-8 -*-
import time
import numpy as np
import pandas as pd
from sklearn.svm import SVC, NuSVC
svm_clf=NuSVC(nu=0.2,kernel='rbf',gamma=0.2,verbose=True)
start=time.clock()
#read data
print "reading data"
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
svm_clf.fit(X_train,y_train) #train the classifier on the full training set
elapsed = (time.clock() - start)
print("Training Time used:",int(elapsed/60) , "min")
print "predicting"
X_test = pd.read_csv("../test.csv").values
result=svm_clf.predict(X_test)
result=np.c_[range(1,len(result)+1),result.astype(int)] #cast to int and prepend a 1-based ImageID column
df_result=pd.DataFrame(result,columns=['ImageID','Label'])
df_result.to_csv('./results.svm.csv',index=False)
#end time
elapsed = (time.clock() - start)
print("Test Time used:",int(elapsed/60) , "min")
# optimization finished, #iter = 16982
# C = 49.988524
# obj = 4126.039937, rho = 0.015150
# nSV = 8251, nBSV = 0
#
# Total nSV = 42000
# ('Training Time used:', 175, 'min')
# predicting
# ('Test Time used:', 201, 'min')
#0.11614
I am not sure why: the CV score during parameter selection was fairly high, yet the submission only scored about 0.11, so this needs another look. One likely cause is that the grid search was run on pixels scaled by /256, while SVM_predict.py fits on the raw 0-255 values (and with a larger gamma); an RBF kernel is very sensitive to feature scale.
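A sketch of the prediction step with the same /256 scaling applied to both fit and predict (the nu and gamma values are simply taken from the grid searched above and are not verified on the full training set):

from sklearn.svm import NuSVC

# X_train, y_train and X_test as read in SVM_predict.py above
svm_clf = NuSVC(nu=0.05, kernel='rbf', gamma=0.02, verbose=True)
svm_clf.fit(X_train.astype('float') / 256, y_train)     # same /256 scaling as SVM_choosePara.py
result = svm_clf.predict(X_test.astype('float') / 256)  # scale the test set identically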
Random forest
The previous algorithms each took more than an hour to run, while random forest needed less than ten minutes for parameter selection plus prediction. I uploaded the result to Kaggle half in disbelief, and the score turned out to be surprisingly good. Code first, analysis later.
RF_choosePara.py
# -*- coding: UTF-8 -*-
from sklearn.ensemble import RandomForestClassifier
import time
import numpy as np
from sklearn.grid_search import GridSearchCV
import pandas as pd
#begin time
start = time.clock()
#reading data
print "reading data"
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
#for fast evaluation
X_train_small = X_train[:10000, :]
y_train_small = y_train[:10000]
elapsed = (time.clock() - start)
#progressing
parameters = {'criterion':['gini','entropy'] , 'max_features':['auto', 12, 100]}
rf_clf=RandomForestClassifier(n_estimators=400, n_jobs=4, verbose=1)
gs_clf = GridSearchCV(rf_clf, parameters, n_jobs=1, verbose=True )
gs_clf.fit( X_train_small.astype('int'), y_train_small )
print()
for params, mean_score, scores in gs_clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() * 2, params))
print()
#end time
elapsed = (time.clock() - start)
print("Time used:",elapsed) #seconds
# 0.946 (+/-0.003) for {'max_features': 'auto', 'criterion': 'gini'}
# 0.947 (+/-0.001) for {'max_features': 12, 'criterion': 'gini'}
# 0.944 (+/-0.004) for {'max_features': 100, 'criterion': 'gini'}
# 0.945 (+/-0.004) for {'max_features': 'auto', 'criterion': 'entropy'}
# 0.946 (+/-0.003) for {'max_features': 12, 'criterion': 'entropy'}
# 0.941 (+/-0.004) for {'max_features': 100, 'criterion': 'entropy'}
# ()
# ('Time used:', 814.006147)
RF_predict.py
# -*- coding: UTF-8 -*-
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
clf=RandomForestClassifier(n_estimators=12)
start=time.clock()
#read data
print "reading data"
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
print "fitting the model"
clf.fit(X_train,y_train) #train the classifier on the full training set
elapsed = (time.clock() - start)
print("Training Time used:",int(elapsed/60) , "min")
#predicting data
print "predicting"
X_test = pd.read_csv("../test.csv").values
result=clf.predict(X_test)
result=np.c_[range(1,len(result)+1),result.astype(int)] #cast to int and prepend a 1-based ImageID column
df_result=pd.DataFrame(result,columns=['ImageID','Label'])
df_result.to_csv('./results.rf.csv',index=False)
#end time
elapsed = (time.clock() - start)
print("Test Time used:",int(elapsed/60) , "min")
# reading data
# fitting the model
# ('Training Time used:', 0, 'min')
# predicting
# ('Test Time used:', 0, 'min')
#0.94629
Random forest is an ensemble learning method.
It was proposed by Leo Breiman (2001). Using bootstrap resampling, k new training sets are drawn with replacement from the original training set of N samples, a classification tree is grown on each bootstrap sample, and the k trees together form the forest; a new sample is classified by majority vote over the trees. In essence it is an improvement on the decision tree algorithm: many decision trees are combined, each tree is built from an independently drawn sample, and every tree in the forest has the same distribution. The classification error depends on the strength of each individual tree and on the correlation between them. Features are chosen at random to split each node, and the errors produced under different choices are compared; the internally estimated error, the strength, and the correlation determine how many features to select. A single tree may be a weak classifier, but after a large number of trees have been grown, a test sample is assigned the class that receives the most votes across the trees.
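A minimal hand-rolled sketch of the bootstrap-and-vote idea (RandomForestClassifier does all of this internally, plus the random feature selection at each split; X_train_small / y_train_small are the arrays from RF_choosePara.py, and the 25 trees are an arbitrary choice):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
trees = []
for i in range(25):
    idx = rng.randint(0, len(y_train_small), len(y_train_small))  # bootstrap sample, drawn with replacement
    t = DecisionTreeClassifier(max_features='sqrt')               # random feature subset at each split
    t.fit(X_train_small[idx], y_train_small[idx])
    trees.append(t)

votes = np.array([t.predict(X_train_small[:5]) for t in trees])    # each tree votes on a few samples
print([np.bincount(col.astype(int)).argmax() for col in votes.T])  # majority vote per sample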
Advantages of random forest:
a. It performs well on many datasets; the two sources of randomness make it hard to overfit
b. On many current datasets it has a clear advantage over other algorithms; the same two sources of randomness also give it good robustness to noise
c. It handles very high-dimensional data (many features) without feature selection and adapts well to different datasets: it copes with both discrete and continuous features, and the data does not need to be normalized
d. It can produce a proximity matrix (p_ij) that measures similarity between samples: p_ij = a_ij / N, where a_ij is the number of times samples i and j land in the same leaf node and N is the number of trees in the forest
e. While the forest is being built, an unbiased estimate of the generalization error is obtained
f. Training is fast, and a variable importance ranking comes for free (two kinds: the increase in OOB misclassification rate, and the decrease in Gini impurity at splits; see the sketch after this list)
g. During training it can detect interactions between features
h. It is easy to parallelize
i. It is relatively simple to implement
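As an illustration of points e and f, sklearn exposes the OOB estimate and the (Gini-based) importances directly; a minimal sketch, reusing X_train_small / y_train_small from RF_choosePara.py (n_estimators=100 is an arbitrary choice):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, oob_score=True, n_jobs=4)
rf.fit(X_train_small, y_train_small)
print(rf.oob_score_)                             # out-of-bag accuracy estimate
print(rf.feature_importances_.argsort()[-10:])   # indices of the ten most important pixels (Gini importance)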
The linear LR model is clearly the weakest. Neural networks are indeed currently the strongest at this kind of image problem. The SVM's support vectors play a very visible role here, picking out the most discriminative "template images". Random forest is something of a cure-all for nonlinear problems, and the default parameters are already quite good here; it is only slightly worse than KNN, since it only uses local pixel information. Of course, this comparison applies only to the digit recognition problem; other problems may give different results, so each problem should be analysed on its own terms and a model chosen to match its characteristics.