Continuing from the previous chapter, we come to level 3, at http://www.heibanke.com/lesson/crawler_ex02/. It prompts that login is required, so first register an account and log in. After logging in, the page looks like this:
It looks much the same as level 2, except for one extra line: "two more layers of protection than the previous level." So presumably it's level 2 with two extra restrictions added. Never mind that for now; take the level-2 crawler code, change the url to http://www.heibanke.com/lesson/crawler_ex02/, and run it. It fails with a 403 error:
urllib.error.HTTPError: HTTP Error 403: FORBIDDEN
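For reference, a minimal sketch that reproduces the error, posting the level-2-style payload with no login cookie (the field names match the code later in this post; the values are placeholders):

from urllib import request, parse
from urllib.error import HTTPError

url = 'http://www.heibanke.com/lesson/crawler_ex02/'
data = parse.urlencode({'username': 'test', 'password': 1}).encode('utf-8')
try:
    request.urlopen(request.Request(url, data))
except HTTPError as e:
    print(e)  # HTTP Error 403: FORBIDDEN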
This looks like a login cookie check. Press F12 to open the developer tools and look at the Network tab, which shows the following:
Since the guess is login validation, add the Cookie and try again:
header = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Connection': 'keep-alive',
    # csrftoken and sessionid copied from the browser after logging in
    'Cookie': r'Hm_lvt_74e694103cf02b31b28db0a346da0b6b=1514366315; csrftoken=VDdjKqyv39hMDXMaUW5SMkDAGRF1y85m; sessionid=0fd2tziqn8jhuzuxl5lramgd0swfb2wm; Hm_lpvt_74e694103cf02b31b28db0a346da0b6b=1514427240',
    'Referer': 'http://www.heibanke.com/lesson/crawler_ex02/'
}
req = request.Request(url, data, headers=header)  # pass the headers, or the Cookie never reaches the server
Still 403. Comparing the parameters carefully, I noticed that the value of csrfmiddlewaretoken had changed, so I copied the token from the page into the code and ran it again. This time it succeeded, with the result shown below:
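Incidentally, rather than copying the token by hand every time it changes, you could fetch it from the page itself. A minimal sketch, assuming the challenge page is a standard Django form that embeds a hidden csrfmiddlewaretoken input, and reusing the header dict above:

from urllib import request
from bs4 import BeautifulSoup

# GET the form page with the logged-in Cookie, then read the hidden token field
req = request.Request('http://www.heibanke.com/lesson/crawler_ex02/', headers=header)
page = request.urlopen(req).read().decode('utf-8')
soup = BeautifulSoup(page, 'html.parser')
token = soup.find('input', attrs={'name': 'csrfmiddlewaretoken'})['value']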
Back on the page, enter any nickname and the password obtained above: 13. Done.
Full code:
from urllib import request
from urllib import parse
from bs4 import BeautifulSoup


def get_page(url, params):
    """POST params to url with the logged-in Cookie and return the page text."""
    print('get url %s' % url)
    data = parse.urlencode(params).encode('utf-8')
    header = {
        'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                      r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
        'Connection': 'keep-alive',
        # csrftoken and sessionid copied from the browser after logging in
        'Cookie': r'Hm_lvt_74e694103cf02b31b28db0a346da0b6b=1514366315; csrftoken=1yFgXVZtw2rACmTYDGABYKs9VWLWqbeH; sessionid=m4paft1uuvhm3thrwvdgwut2rvu8uz8d; Hm_lpvt_74e694103cf02b31b28db0a346da0b6b=1514428404',
        'Referer': 'http://www.heibanke.com/lesson/crawler_ex02/'
    }
    req = request.Request(url, data, headers=header)
    page = request.urlopen(req).read()
    page = page.decode('utf-8')
    return page


count = 0
url = "http://www.heibanke.com/lesson/crawler_ex02/"
token = '1yFgXVZtw2rACmTYDGABYKs9VWLWqbeH'  # csrfmiddlewaretoken copied from the page
username = 'pkxutao'
password = -1
# Build the POST parameters
data = {
    'csrfmiddlewaretoken': token,
    'username': username,
    'password': password
}
# While the password is wrong, the page shows this message
# ("您输入的密码错误, 请重新输入" = "The password you entered is wrong, please retry")
result = '您输入的密码错误, 请重新输入'
while result == '您输入的密码错误, 请重新输入':
    count += 1
    password += 1
    data['password'] = password
    print('Attempt %d, password: %d' % (count, password))
    result = get_page(url, data)
    soup = BeautifulSoup(result, "html.parser")
    # The hint (or the success message) lives in the first h3 element
    result = soup.find_all("h3")[0].text
print('Success, username: %s, password: %d' % (username, password))
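One possible improvement (a sketch, not part of the solution above): let http.cookiejar manage csrftoken and sessionid automatically by logging in through the site's login form, instead of hard-coding the Cookie header. The login URL and form field names here are assumptions based on a standard Django login view:

from http import cookiejar
from urllib import request, parse
from bs4 import BeautifulSoup

# An opener whose CookieJar keeps csrftoken/sessionid across requests
cj = cookiejar.CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(cj))

login_url = 'http://www.heibanke.com/accounts/login/'  # assumed login endpoint
page = opener.open(login_url).read().decode('utf-8')
token = BeautifulSoup(page, 'html.parser').find(
    'input', attrs={'name': 'csrfmiddlewaretoken'})['value']
data = parse.urlencode({
    'csrfmiddlewaretoken': token,
    'username': 'your_username',  # your registered account (placeholder)
    'password': 'your_password',
}).encode('utf-8')
opener.open(request.Request(login_url, data, headers={'Referer': login_url}))
# From here on, opener carries the sessionid cookie for later requests.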
Summary
Compared with the previous level, this one adds two layers of protection, as the author states outright. Since this level requires logging in, it's easy to guess that one layer is Cookie validation. After adding the Cookie I still got 403 on several attempts, and kept hunting for the second layer. By capturing packets with Fiddler and comparing the browser request against the crawler request, I found that apart from a few extra fields in the browser's header, the only difference was the csrfmiddlewaretoken body parameter. Once the csrfmiddlewaretoken values matched, the level was passed. The lesson: be careful and keep testing.