Data Cleaning

[Recap & Introduction] As you could tell from the previous chapter, we mainly reviewed the basics so that you become familiar with common data-analysis operations, mostly by looking at the data from different angles. In this chapter we move on to the data-analysis workflow itself, covering data cleaning and feature processing, data restructuring, and data visualization. All of this lays the groundwork for the final modeling and model-evaluation steps.

2 Chapter 2: Data Cleaning and Feature Processing

Before starting, import the numpy and pandas packages and load the data.

# Load the required libraries
import numpy as np
import pandas as pd

# Load the data from train.csv
df = pd.read_csv('train.csv')
df.head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |

Overview of Data Cleaning

The data we get is usually not clean. By "not clean" we mean it contains missing values, outliers, and so on, and it needs some processing before we can continue with analysis or modeling. So the first step after getting the data is data cleaning. In this chapter we will learn how to handle missing values, duplicates, strings, and data conversions, turning the data into something that can be analyzed or modeled.

2.1 Observing and Handling Missing Values

The data we get often has many missing values. For example, we can see NaN in the Cabin column. Do the other columns have missing values too, and how should these missing values be handled?

2.1.1 Task 1: Observing missing values

(1) Check the number of missing values in each feature

(2) Check the data in the Age, Cabin, and Embarked columns. Each of the above can be done in several ways, so the more approaches you try while learning, the better.

# Method 1
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  891 non-null    int64  
 1   Survived     891 non-null    int64  
 2   Pclass       891 non-null    int64  
 3   Name         891 non-null    object 
 4   Sex          891 non-null    object 
 5   Age          714 non-null    float64
 6   SibSp        891 non-null    int64  
 7   Parch        891 non-null    int64  
 8   Ticket       891 non-null    object 
 9   Fare         891 non-null    float64
 10  Cabin        204 non-null    object 
 11  Embarked     889 non-null    object 
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB

# Method 2
df.isnull().sum()

PassengerId      0
Survived         0
Pclass           0
Name             0
Sex              0
Age            177
SibSp            0
Parch            0
Ticket           0
Fare             0
Cabin          687
Embarked         2
dtype: int64

df[['Age','Cabin','Embarked']].head(3)

| | Age | Cabin | Embarked |
| --- | --- | --- | --- |
| 0 | 22.0 | NaN | S |
| 1 | 38.0 | C85 | C |
| 2 | 26.0 | NaN | S |

2.1.2 Task 2: Handling missing values

(1) What are the general approaches to handling missing values?

(2) Try handling the missing values in the Age column

(3) Try different methods of handling the missing values in the whole table directly

Here are some examples:

df[df['Age'] == None] = 0
df.head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |

df[df['Age'].isnull()] = 0  # this one works
df.head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |

df[df['Age'] == np.nan] = 0
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |

[Think] For finding missing values, which works better: comparing with np.nan, comparing with None, or using .isnull()? Why? If one of these approaches cannot find the missing values, what is the reason?

[Answer] After a numeric column is read in, its missing values are stored as float64 NaN, so comparing with None generally does not match them. Comparing with np.nan does not work either, because NaN is not equal to itself (np.nan == np.nan is False); the reliable way to find missing values is .isnull() (or .isna()).
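A quick check makes the difference concrete. This is a minimal sketch on a fresh copy of the data (the name df_raw is just for illustration), since the cells above have already overwritten the rows with missing Age:

# Fresh copy so the counts are not affected by the assignments above
df_raw = pd.read_csv('train.csv')
(df_raw['Age'] == None).sum()     # 0: None does not match float NaN
(df_raw['Age'] == np.nan).sum()   # 0: NaN is not equal to itself
df_raw['Age'].isnull().sum()      # 177: isnull() finds the missing ages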

df.dropna().head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 5 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |

df.fillna(0).head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | 0 | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | 0 | S |

[Think] What parameters do dropna and fillna have, and how are they used? (A short sketch follows the references below.)

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
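As a starting point, here is a minimal sketch of some commonly used parameters, run on a fresh copy so the examples above are unaffected (df2 is just an illustrative name):

df2 = pd.read_csv('train.csv')
# dropna: subset limits which columns are checked, how='any'/'all' sets the rule,
# axis picks rows or columns, and thresh keeps rows with at least that many non-null values
df2.dropna(subset=['Age', 'Embarked'], how='any')
# fillna: accepts a scalar, a per-column dict, or a statistic such as the median
df2['Age'] = df2['Age'].fillna(df2['Age'].median())
df2 = df2.fillna({'Cabin': 'Unknown', 'Embarked': 'S'})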

2.2 Observing and Handling Duplicates

For one reason or another, the data may contain duplicate values. Does it, and if so, how should they be handled?

2.2.1 Task 1: Check the duplicate values in the data

df[df.duplicated()]

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 19 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 26 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 28 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 29 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 859 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 863 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 868 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 878 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
| 888 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |

176 rows × 12 columns

2.2.2 Task 2: Handling duplicate values

(1) What are the ways to handle duplicate values?

(2) Handle the duplicate values in our data

The more methods, the better.

Here is an example of removing rows that are duplicated in their entirety:

df = df.drop_duplicates()
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
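drop_duplicates also takes subset and keep parameters for the case where only some columns should define a duplicate; a minimal sketch (the column choice here is purely illustrative):

# Count fully duplicated rows
df.duplicated().sum()
# Treat rows as duplicates based on selected columns only, keeping the first occurrence
df.drop_duplicates(subset=['Name', 'Ticket'], keep='first')
# keep=False would instead drop every member of each duplicate group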

2.2.3 Task 3: Save the cleaned data above in csv format

df.to_csv('test_clear.csv')

2.3 Observing and Processing Features

Looking at the features, we can roughly divide them into two categories:

Numeric features: Survived, Pclass, Age, SibSp, Parch, Fare. Among these, Survived and Pclass are discrete numeric features, while Age, SibSp, Parch, and Fare are continuous numeric features.

Text features: Name, Sex, Cabin, Embarked, Ticket. Among these, Sex, Cabin, Embarked, and Ticket are categorical text features.

Numeric features can usually be fed to a model directly, but continuous variables are sometimes discretized to improve the stability and robustness of the model. Text features generally need to be converted into numeric features before they can be used for modeling.

2.3.1 Task 1: Binning (discretizing) the Age feature

(1) What is binning?

(2) Split the continuous variable Age into 5 equal-width age bins and label them with the categorical values 1, 2, 3, 4, 5

(3) Split the continuous variable Age into the five age ranges (0,5] (5,15] (15,30] (30,50] (50,80] and label them with the categorical values 1, 2, 3, 4, 5

(4) Split the continuous variable Age into five bins at the 10%, 30%, 50%, 70%, 90% quantiles and label them with the categorical values 1, 2, 3, 4, 5

(5) Save each of the resulting datasets in csv format

# Split Age into 5 equal-width bins, labeled 1-5
df['AgeBand'] = pd.cut(df['Age'], 5, labels=[1, 2, 3, 4, 5])
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 3 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 2 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 3 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 3 |

df.to_csv('test_ave.csv')

# Split Age into the bins (0,5] (5,15] (15,30] (30,50] (50,80], labeled 1-5
df['AgeBand'] = pd.cut(df['Age'], [0, 5, 15, 30, 50, 80], labels=[1, 2, 3, 4, 5])
df.head(3)

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 3 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 4 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 |

df.to_csv('test_cut.csv')

# Split Age at the 10%, 30%, 50%, 70%, 90% quantiles, labeled 1-5
df['AgeBand'] = pd.qcut(df['Age'], [0, 0.1, 0.3, 0.5, 0.7, 0.9], labels=[1, 2, 3, 4, 5])
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 |

df.to_csv('test_pr.csv')

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html

[Reference] https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html
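To see the difference between the two functions, the sketch below prints the bin edges that pd.cut (equal-width) and pd.qcut (quantile-based, roughly equal-frequency) compute for Age; retbins=True returns those edges:

# Equal-width bins: edges are evenly spaced across the Age range
_, width_edges = pd.cut(df['Age'], 5, retbins=True)
# Quantile bins: edges are the requested percentiles, so each bin holds roughly the same number of rows
_, quantile_edges = pd.qcut(df['Age'], [0, 0.1, 0.3, 0.5, 0.7, 0.9], retbins=True)
print(width_edges)
print(quantile_edges)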

2.3.2 Task 2: Converting text variables

(1) Check the names and categories of the text variables

(2) Represent the text variables Sex, Cabin, Embarked with the numeric values 1, 2, 3, 4, 5

(3) Represent the text variables Sex, Cabin, Embarked with one-hot encoding

The more methods, the better.

# Check the names and categories of the categorical text variables
# Method 1: value_counts
df['Sex'].value_counts()

male      453
female    261
0           1
Name: Sex, dtype: int64

df['Cabin'].value_counts()

G6             4
C23 C25 C27    4
B96 B98        4
F33            3
C22 C26        3
              ..
D37            1
C92            1
E58            1
E77            1
B4             1
Name: Cabin, Length: 135, dtype: int64

df['Embarked'].value_counts()

S    554
C    130
Q     28
0      1
Name: Embarked, dtype: int64

# Method 2: unique
df['Sex'].unique()

array(['male', 'female', 0], dtype=object)

df['Sex'].nunique()

3

# Convert the categorical text into 1, 2, 3, 4, 5
# Method 1: replace
df['Sex_num'] = df['Sex'].replace(['male', 'female'], [1, 2])
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 |

# Method 2: map
df['Sex_num'] = df['Sex'].map({'male': 1, 'female': 2})
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1.0 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2.0 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2.0 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2.0 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1.0 |
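Note that map produces 1.0/2.0 rather than 1/2: any value missing from the mapping dict (here the single leftover row whose Sex is 0 from the earlier zero-fill) becomes NaN, which forces the column to float. A minimal sketch of giving such unmapped values a default instead:

# Unmapped values become NaN; fill them with a sentinel so the column stays integer
df['Sex_num'] = df['Sex'].map({'male': 1, 'female': 2}).fillna(0).astype(int)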

# Method 3: use LabelEncoder from sklearn.preprocessing
from sklearn.preprocessing import LabelEncoder
for feat in ['Cabin', 'Ticket']:
    lbl = LabelEncoder()
    # Dict-based mapping, then the same column rebuilt with LabelEncoder (both shown for reference)
    label_dict = dict(zip(df[feat].unique(), range(df[feat].nunique())))
    df[feat + "_labelEncode"] = df[feat].map(label_dict)
    df[feat + "_labelEncode"] = lbl.fit_transform(df[feat].astype(str))
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num | Cabin_labelEncode | Ticket_labelEncode |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1.0 | 135 | 409 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2.0 | 74 | 472 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2.0 | 135 | 533 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2.0 | 50 | 41 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1.0 | 135 | 374 |

# Convert the categorical text to one-hot encoding
# Method 1: pd.get_dummies
for feat in ["Age", "Embarked"]:
#     x = pd.get_dummies(df["Age"] // 6)
#     x = pd.get_dummies(pd.cut(df['Age'], 5))
    x = pd.get_dummies(df[feat], prefix=feat)
    df = pd.concat([df, x], axis=1)
#     df[feat] = pd.get_dummies(df[feat], prefix=feat)
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Age_66.0 | Age_70.0 | Age_70.5 | Age_71.0 | Age_74.0 | Age_80.0 | Embarked_0 | Embarked_C | Embarked_Q | Embarked_S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

5 rows × 109 columns
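pd.get_dummies is the simplest route; scikit-learn's OneHotEncoder offers an equivalent transformer interface, and a rough sketch could look like this (the sparse_output argument assumes scikit-learn 1.2+; older versions use sparse=False):

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(sparse_output=False, handle_unknown='ignore')
# astype(str) turns NaN and the stray 0 into plain strings so the encoder accepts them
embarked_onehot = enc.fit_transform(df[['Embarked']].astype(str))
onehot_df = pd.DataFrame(embarked_onehot,
                         columns=enc.get_feature_names_out(['Embarked']),
                         index=df.index)
onehot_df.head()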

2.3.3 Task 3 (bonus): Extract a Title feature from the plain-text Name feature (Titles being Mr, Miss, Mrs, and so on)

df['Title'] = df.Name.str.extract(r'([A-Za-z]+)\.', expand=False)
df.head()

| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Age_66.0 | Age_70.0 | Age_70.5 | Age_71.0 | Age_74.0 | Age_80.0 | Embarked_C | Embarked_Q | Embarked_S | Title |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | Mrs |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Miss |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mrs |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr |

5 rows × 108 columns
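Before using Title as a feature, it is usually worth checking which titles were extracted and how frequent they are; a minimal sketch (the threshold of 10 is arbitrary, for illustration only):

# Frequency of each extracted title
df['Title'].value_counts()
# Optionally collapse infrequent titles into a single 'Rare' category
title_counts = df['Title'].value_counts()
rare_titles = list(title_counts[title_counts < 10].index)
df['Title'] = df['Title'].replace(rare_titles, 'Rare')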

