Last time we covered some of the common selection and slicing methods in pandas. To summarize, the main ones are:
data['A']
data[['A','B']]
data.iloc[:,1]       # select the second column; iloc only accepts integers (select by integer location)
data.loc[2]          # select the row whose index label is 2; loc only accepts labels (select by label)
data.loc[:,'A':'B']  # select the columns from header 'A' through 'B'
data.ix[:,'A':'B']   # select the columns from header 'A' through 'B'
data.ix[:,0:2]       # select the first two columns
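The selectors above can be exercised on a small DataFrame; the names and values below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical 3x4 frame with a default integer index (0, 1, 2)
data = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6],
                     'C': [7, 8, 9], 'D': [10, 11, 12]})

col_a = data['A']                   # single column as a Series
cols_ab = data[['A', 'B']]          # multiple columns as a DataFrame
second_col = data.iloc[:, 1]        # second column, by integer position
row_2 = data.loc[2]                 # row whose index *label* is 2
cols_a_to_b = data.loc[:, 'A':'B']  # columns 'A' through 'B', by label
```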
One thing to note:
In pandas version 0.20.0 and above, ix is deprecated and the use of loc and iloc is encouraged instead.
The latest pandas documentation recommends loc and iloc instead. There is a Stack Overflow question dedicated to the difference between these three selectors; I'll quote it here so we can understand the distinction more deeply:
loc works on labels in the index.
iloc works on the positions in the index (so it only takes integers).
ix usually tries to behave like loc but falls back to behaving like iloc if the label is not in the index.
For example, if a DataFrame's index labels are of mixed type, containing both integers and strings, then ix can select either by integer position or by label. Just remember that ix gives labels priority: it falls back to integer position only when the label is not in the index. (The fallback matters for mixed or non-integer indexes; once the index consists purely of integers, ix behaves exactly like loc.)
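A minimal sketch of the label-versus-position distinction, using a toy Series with the mixed-type index [0, 'a', 2] (loc and iloc shown, since ix itself is deprecated):

```python
import pandas as pd

# Toy Series with a mixed-type index: integer 0, string 'a', integer 2
s = pd.Series([10, 20, 30], index=[0, 'a', 2])

by_label = s.loc['a']   # loc always looks up the label 'a'
by_pos = s.iloc[1]      # iloc always uses integer position 1
# loc with 2 finds the *label* 2 (the third element), not position 2
label_two = s.loc[2]
```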
Next I want to talk about filtering data, a problem I run into constantly during preprocessing. Let's first create a table containing information about some cities:
import pandas as pd
data = pd.read_excel('rhythm.xlsx')
print(data)
output:
A B C D city house_price \
0 1 2 3 4 Beijing 70000
1 2 2 3 4 Shanghai 120000
2 3 2 3 4 NaN 5000
3 4 2 3 4 New York 140000
4 5 2 3 4 Brasilia 50000
5 6 2 3 4 Atlanta 20000
6 7 2 3 4 Tokyo 130000
7 8 2 3 4 #NAME 30000
information
0 page not found
1 Shanghai is a Chinese city located on the east...
2 Kunming is also called the Spring city due to ...
3 New York is a state in the northeastern United...
4 Brasília (Portuguese pronunciation: [bɾaˈziljɐ...
5 404 not found
6 Tokyo (Japanese: [toːkjoː] ( listen), English ...
7 Mumbai (/mʊmˈbaɪ/; also known as Bombay, the o...
As we can see, this dataset has plenty of problems: the city column has a missing value (NaN), and the information column contains error text like 'page not found'. How do we weed these out?
Question 1: how do I drop the rows where the city column is NaN?
drop_nan = data.dropna(subset=['city'])
print(drop_nan)
output:
A B C D city house price \
0 1 2 3 4 Beijing 70000
1 2 2 3 4 Shanghai 120000
3 4 2 3 4 New York 140000
4 5 2 3 4 Brasilia 50000
5 6 2 3 4 Atlanta 20000
6 7 2 3 4 Tokyo 130000
information
0 page not found
1 Shanghai is a Chinese city located on the east...
3 New York is a state in the northeastern United...
4 Brasília (Portuguese pronunciation: [bɾaˈziljɐ...
5 404 not found
6 Tokyo (Japanese: [toːkjoː] ( listen), English ...
As you can see, the row with the missing city value is gone entirely. dropna is the function dedicated to filtering out missing values, and it generally comes in two forms:
data.dropna(how='any')  # drop a row if any of its values is NaN
data.dropna(how='all')  # drop a row only if all of its values are NaN
Note the distinction between rows and columns here: by default dropna works row-wise, but the same logic can be applied to columns with axis=1.
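A quick sketch of the difference on a hypothetical two-column frame (the axis=1 variant applies the same rule to columns):

```python
import pandas as pd
import numpy as np

# Made-up frame: row 0 is complete, row 1 is partly NaN, row 2 is all NaN
df = pd.DataFrame({'x': [1, np.nan, np.nan],
                   'y': [4, 5, np.nan]})

any_dropped = df.dropna(how='any')          # keeps only rows with no NaN at all
all_dropped = df.dropna(how='all')          # drops only rows that are entirely NaN
col_dropped = df.dropna(axis=1, how='any')  # same 'any' rule, applied to columns
```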
Question 2: I want to drop the Tokyo and Atlanta rows.
drop_value = data[data.city.str.contains('Tokyo|Atlanta') == False]
# or (preferred: na=False keeps the NaN row instead of silently dropping it)
drop_value = data[~data.city.str.contains('Tokyo|Atlanta', na=False)]
output:
A B C D city house price \
0 1 2 3 4 Beijing 70000
1 2 2 3 4 Shanghai 120000
2 3 2 3 4 NaN 5000
3 4 2 3 4 New York 140000
4 5 2 3 4 Brasilia 50000
information
0 page not found
1 Shanghai is a Chinese city located on the east...
2 Kunming is also called the Spring city due to ...
3 New York is a state in the northeastern United...
4 Brasília (Portuguese pronunciation: [bɾaˈziljɐ...
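A side note: str.contains does substring/regex matching, so it would also match a city like 'Tokyo Bay'. When you want exact city names, isin is an alternative; a sketch with a made-up frame:

```python
import pandas as pd

# Hypothetical frame standing in for the city column above
df = pd.DataFrame({'city': ['Beijing', 'Tokyo', 'Atlanta', None]})

# isin tests exact equality; missing values are simply not in the list,
# so the None row is kept, matching the na=False behavior above
dropped = df[~df['city'].isin(['Tokyo', 'Atlanta'])]
```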
Question 3: I want to extract the rows where house price is at least 70000 and at most 140000.
Before doing this, there is an important problem to solve first. Look closely at the dataset and you'll notice the 'house price' header contains a space, which will cause a lot of trouble in later processing, so we first replace the space with '_':
data.columns = [c.replace(' ', '_') for c in data.columns]
print(data.columns.values)
output:
['A' 'B' 'C' 'D' 'city' 'house_price' 'information']
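For what it's worth, Index objects support the same .str accessor as Series, so the list comprehension above can also be written this way (column names here are hypothetical):

```python
import pandas as pd

# Made-up frame whose headers contain spaces
df = pd.DataFrame({'house price': [70000], 'city name': ['Beijing']})

# Index.str.replace works just like Series.str.replace
df.columns = df.columns.str.replace(' ', '_')
```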
Now we can select by numeric range:
select_range = data[(data.house_price >= 70000) & (data.house_price <= 140000)]
output:
A B C D city house_price \
0 1 2 3 4 Beijing 70000
1 2 2 3 4 Shanghai 120000
3 4 2 3 4 New York 140000
6 7 2 3 4 Tokyo 130000
information
0 page not found
1 Shanghai is a Chinese city located on the east...
3 New York is a state in the northeastern United...
6 Tokyo (Japanese: [toːkjoː] ( listen), English ...
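An equivalent, arguably more readable form of that range condition is Series.between, which is inclusive on both ends by default. A sketch with made-up prices:

```python
import pandas as pd

df = pd.DataFrame({'house_price': [70000, 120000, 5000, 140000, 150000]})

# between(left, right) is inclusive on both ends, so 70000 and 140000 both pass
in_range = df[df.house_price.between(70000, 140000)]
```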
Question 4: I want to drop the rows whose information column contains text like 'page not found' or '404 not found', and additionally drop any row whose city value starts with the symbol '#':
ignore_list = ['404 not found','page not found']
remove_start_with = data[~data['city'].str.startswith('#', na=False)
& ~data['information'].str.contains('|'.join(ignore_list), na=True)]
print(remove_start_with)
A B C D city house_price \
1 2 2 3 4 Shanghai 120000
2 3 2 3 4 NaN 5000
3 4 2 3 4 New York 140000
4 5 2 3 4 Brasilia 50000
6 7 2 3 4 Tokyo 130000
information
1 Shanghai is a Chinese city located on the east...
2 Kunming is also called the Spring city due to ...
3 New York is a state in the northeastern United...
4 Brasília (Portuguese pronunciation: [bɾaˈziljɐ...
6 Tokyo (Japanese: [toːkjoː] ( listen), English ...
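One caveat with the '|'.join trick: str.contains treats the joined string as a regex, so if an ignore-list entry ever contains regex metacharacters, it should be escaped with re.escape first. A sketch with a made-up list:

```python
import re
import pandas as pd

# 'C++ error' is hypothetical: '+' is a regex metacharacter
ignore_list = ['404 not found', 'C++ error']
pattern = '|'.join(re.escape(s) for s in ignore_list)

info = pd.Series(['all good', '404 not found', 'a C++ error occurred'])
# na=True treats missing values as matches, so NaN rows are dropped too
kept = info[~info.str.contains(pattern, na=True)]
```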
Question 5: I want to extract the rows whose information text is at least 30 characters long (note that str.len() counts characters, not words):
len_filter = data[data.information.str.len() >= 30]
print(len_filter)
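If you really do want a word count rather than a character count, splitting on whitespace first works; a sketch with made-up strings:

```python
import pandas as pd

info = pd.Series(['one two three', 'just two', 'a much longer description here'])

char_len = info.str.len()              # number of characters per string
word_len = info.str.split().str.len()  # number of whitespace-separated words

# keep rows with at least 3 words
long_rows = info[word_len >= 3]
```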
From all the examples above, we can see how powerful pandas is at selecting and filtering data, and how readable the resulting code is. I'll keep brushing up on pandas, and in the next post I'll talk about how to inner-join two datasets with pandas.