Most readers will already know the Titanic dataset, so I won't introduce it. This post draws on several published examples plus some of my own ideas; the final accuracy is around 80%, and I'll keep improving it when I have time. First, read in the data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
import re
import csv
train_df = pd.read_csv('titanic_train.csv')
train_df.head()
The columns are:
# PassengerId  passenger ID
# Survived     survived (0 = no, 1 = yes)
# Pclass       ticket class (1st/2nd/3rd)
# Name         passenger name
# Sex          sex
# Age          age
# SibSp        number of siblings/spouses aboard
# Parch        number of parents/children aboard
# Ticket       ticket number
# Fare         fare
# Cabin        cabin number
# Embarked     port of embarkation
There are a lot of columns, so let's work through them one by one.
Next, look at how each feature relates to survival, starting with sex, age, and ticket class:
fig = plt.figure()
plt.scatter(train_df.Survived, train_df.Age)
plt.ylabel("Age")
plt.title("Age distribution by survival")
print(train_df.Age.count())
plt.show()
Passengers over 65 clearly have a low survival rate, but beyond that, age shows less correlation than expected. Next, split by sex:
Sex has a large effect: the death rate for men is far higher than for women, and women's survival rate is very high. The "women and children first" line from the movie is no exaggeration.
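The survival-by-sex comparison above can be drawn with a stacked bar chart, in the same spirit as the age scatter. A minimal sketch (the helper name and English labels are my own):

```python
import pandas as pd
import matplotlib.pyplot as plt

def plot_survival_by_sex(df):
    # Count survivors and non-survivors for each sex
    survived = df.Sex[df.Survived == 1].value_counts()
    died = df.Sex[df.Survived == 0].value_counts()
    summary = pd.DataFrame({'Survived': survived, 'Died': died})
    # Stacked bars make the male/female gap obvious at a glance
    summary.plot(kind='bar', stacked=True)
    plt.title('Survival by sex')
    plt.ylabel('Count')
    return summary
```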
As expected, first class has the highest survival rate; money is still a fine thing. Having looked at the basics, it's time to clean the data:
- Age has missing values that need filling; since raw age showed little separation, it may work better bucketed into child / adult / elderly.
- Cabin takes all sorts of values; it's probably best reduced to has-cabin vs. no-cabin.
- Name needs processing: does a long name matter, and does the title matter?
- SibSp and Parch can likely be merged into a single feature, named FamilySize.
First, fill in the missing ages and bucket them into child / adult / elderly:
fig = plt.figure()
#print(train_df.describe())
def set_missing_ages(df):
    # Feed the existing numeric features into a RandomForestRegressor
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
    known_age = age_df[age_df.Age.notnull()].values
    unknown_age = age_df[age_df.Age.isnull()].values
    # y is the target: age
    y = known_age[:, 0]
    # X is the feature matrix
    X = known_age[:, 1:]
    # Fit the RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rfr.fit(X, y)
    predictedAges = rfr.predict(unknown_age[:, 1:])
    #print(predictedAges)
    df.loc[(df.Age.isnull()), 'Age'] = predictedAges
    return df, rfr
def set_new_Age(df):
    # 0 = child, 1 = adult, 2 = elderly
    df['new_Age'] = 1
    df.loc[df["Age"] <= 12, 'new_Age'] = 0
    df.loc[df["Age"] >= 60, 'new_Age'] = 2
    return df
train_df, rfr = set_missing_ages(train_df)
train_df = set_new_Age(train_df)
Survived_Age0 = train_df.new_Age[train_df.Survived == 0].value_counts()
Survived_Age1 = train_df.new_Age[train_df.Survived == 1].value_counts()
df = pd.DataFrame({'Survived': Survived_Age1, 'Died': Survived_Age0})
df.plot(kind='bar', stacked=True)
plt.show()
Children have the highest survival rate, passengers over 60 almost never survive, and adults fall somewhere in between.
Next, look at survival with and without a recorded cabin.
Passengers with a recorded cabin survive at a much higher rate, so this feature matters quite a bit.
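The has-cabin / no-cabin reduction described above can be written as a small helper; this sketch matches the `set_Cabin_type` call applied to the test set later, though its body is my reconstruction:

```python
import pandas as pd

def set_Cabin_type(df):
    # Replace the raw cabin string with a binary flag:
    # 1 = a cabin number was recorded, 0 = missing
    df.loc[df.Cabin.notnull(), 'Cabin'] = 1
    df.loc[df.Cabin.isnull(), 'Cabin'] = 0
    df['Cabin'] = df['Cabin'].astype(int)
    return df
```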
Next, the Name feature: both its length and the title can be extracted. Here we look at Title.
Title does indeed have a large effect; names are not given at random (though I doubt this trick would transfer to Chinese names). Next, Sex is converted to a number, and two new features, FamilySize and NameLength, are added; I won't analyze them individually.
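The feature engineering just described (title pulled out of Name, Sex mapped to an integer, plus FamilySize and NameLength) can be sketched as below. `get_title` mirrors the helper used on the test set later; the numeric title codes reuse the mapping from that section, and `add_name_features` is a name of my own:

```python
import re
import pandas as pd

def get_title(name):
    # Pull the title out of a name like "Braund, Mr. Owen Harris"
    match = re.search(r' ([A-Za-z]+)\.', name)
    return match.group(1) if match else ""

def add_name_features(df):
    # Encode sex as 0/1 so the models can consume it
    df['Sex'] = df['Sex'].map({'female': 0, 'male': 1}).astype(int)
    # Merge siblings/spouses and parents/children into one family size
    df['FamilySize'] = df['SibSp'] + df['Parch']
    df['NameLength'] = df['Name'].apply(len)
    # Map titles to small integer codes, grouping rare ones together
    title_dict = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6,
                  "Major": 7, "Col": 7, "Capt": 7, "Mlle": 8, "Mme": 8,
                  "Don": 9, "Sir": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10,
                  "Ms": 2, "Dona": 2}
    df['Title'] = df['Name'].apply(get_title).map(title_dict).fillna(0).astype(int)
    return df
```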
Checking features one at a time like this is tedious; sklearn's feature-selection utilities can rank them quickly:
from sklearn.feature_selection import SelectKBest, f_classif
#print(train_df)
predictors = ["Pclass", "Cabin", "Sex", "new_Age", "SibSp", "Parch", "Fare", "FamilySize", "Title", "NameLength"]
selector = SelectKBest(f_classif, k=5)
selector.fit(train_df[predictors], train_df["Survived"])
# Higher score = lower p-value = stronger association with survival
scores = -np.log10(selector.pvalues_)
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
Pclass, Cabin, Sex, Fare, Title, and NameLength have the strongest effects.
Now apply the same preprocessing to the test set:
test_df = pd.read_csv('test.csv', header=0)
test_df['Sex'] = test_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
# Fill missing fares with the median fare of the passenger's class
if len(test_df.Fare[test_df.Fare.isnull()]) > 0:
    median_fare = np.zeros(3)
    for f in range(0, 3):  # loop 0 to 2
        median_fare[f] = test_df[test_df.Pclass == f + 1]['Fare'].dropna().median()
    for f in range(0, 3):  # loop 0 to 2
        test_df.loc[(test_df.Fare.isnull()) & (test_df.Pclass == f + 1), 'Fare'] = median_fare[f]
test_df["FamilySize"] = test_df["SibSp"] + test_df["Parch"]
test_df["NameLength"] = test_df["Name"].apply(lambda x: len(x))
titles = test_df["Name"].apply(get_title)
#print(pd.value_counts(titles))
title_dict = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8,
              "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2, "Dona": 2}
for key, value in title_dict.items():
    titles[titles == key] = value
test_df["Title"] = titles
# Fill missing ages with the regressor trained on the training set
age_predictor = test_df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
null_age = age_predictor[test_df.Age.isnull()].values
X = null_age[:, 1:]
predictedAges = rfr.predict(X)
test_df.loc[(test_df.Age.isnull()), 'Age'] = predictedAges
test_df = set_new_Age(test_df)
test_df = set_Cabin_type(test_df)
Feed the features into the models for prediction:
predictors = ["Pclass", "Cabin", "Sex", "new_Age", "Fare", "FamilySize", "NameLength", "Title"]
# Blend two models: a random forest and logistic regression
algorithms = [
    RandomForestClassifier(n_estimators=100, min_samples_split=4, min_samples_leaf=2),
    LogisticRegression(random_state=1)
]
full_predictions = []
for alg in algorithms:
    alg.fit(train_df[predictors].values, train_df["Survived"])
    predictions = alg.predict_proba(test_df[predictors].values).astype(float)[:, 1]
    full_predictions.append(predictions)
# Weight the forest 3:1 over the logistic regression
output = (full_predictions[0] * 3 + full_predictions[1]) / 4
output[output <= 0.5] = 0
output[output > 0.5] = 1
output = output.astype(int)
print(output)
print('Predicting...')
ids = test_df["PassengerId"].values
predictions_file = open("third.csv", "w")
open_file_object = csv.writer(predictions_file)
open_file_object.writerow(["PassengerId", "Survived"])
open_file_object.writerows(zip(ids, output))
predictions_file.close()
print('Done.')
OK, that's basically it. The final result is mediocre: about 80%, ranked around 2000. Two directions for improvement: 1. what other features could be extracted; 2. the algorithms were used off the shelf, so which parameters are worth tuning, and is there a better algorithm?
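For the parameter-tuning follow-up, cross-validated grid search is a quick way to compare settings before submitting. A sketch on synthetic data (the grid and helper name are illustrative, not tuned values from this post):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def tune_forest(X, y):
    # Search a small grid of forest settings with 3-fold cross-validation
    grid = {
        'n_estimators': [50, 100],
        'min_samples_split': [2, 4],
        'min_samples_leaf': [1, 2],
    }
    search = GridSearchCV(RandomForestClassifier(random_state=1), grid, cv=3)
    search.fit(X, y)
    # best_params_ holds the winning combination, best_score_ its mean CV accuracy
    return search.best_params_, search.best_score_
```

In practice you would pass `train_df[predictors].values` and `train_df["Survived"]` instead of synthetic data, and compare the CV score against the blended model's leaderboard result.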