Goal: given a keyword and a number of pages as input, crawl images from Baidu Image search.
Approach based on: http://www.cnblogs.com/voidsky/p/5490800.html
#-*- coding:utf-8 -*-
import re
import requests

def get_page(page, keyword):
    # "flip" is the paged (non-waterfall) version of Baidu Image search
    url = 'http://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + keyword + '&ct=201326592&v=flip'
    # use .content (bytes) so the UTF-8 byte pattern for '下一页' below matches reliably in Python 2
    html = requests.get(url).content
    print 'here we go'

    # collect the result pages to crawl, starting from the first one
    urls = [url]
    while len(urls) < page:
        # the href of the '下一页' (next page) link carries HTML-escaped ampersands (&amp;)
        next_page = re.search('<a href="(.*?)" class="n">下一页</a>', html).group(1).replace('&amp;', '&')
        next_page_full = 'http://image.baidu.com' + next_page
        urls.append(next_page_full)
        # fetch the page we just found so the next iteration searches fresh HTML
        html = requests.get(next_page_full).content

    i = 0
    for each in urls:
        print 'page url:'
        print each
        pic_html = requests.get(each, timeout=10).text
        # objURL holds the original image address embedded in the page source
        pic_urls = re.findall('"objURL":"(.*?)",', pic_html)
        print 'image urls:'
        for each_pic in pic_urls:
            print each_pic
            try:
                pic = requests.get(each_pic, timeout=10)
                string = 'E:\\pythonExercises\\20160607\\' + str(i) + '.jpg'
                fp = open(string, 'wb')
                fp.write(pic.content)
                fp.close()
                i += 1
            except requests.exceptions.ConnectionError:
                print 'ERROR: this image could not be downloaded'
                continue

keyword = raw_input("Input key word: ")
page = int(raw_input("Input page: "))
get_page(page, keyword)
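
One pitfall with the hard-coded save path: E:\pythonExercises\20160607\ has to exist already, otherwise open() raises an IOError. A minimal sketch of creating the folder up front (OUT_DIR is just an illustrative name, any writable folder works):

import os

OUT_DIR = 'E:\\pythonExercises\\20160607'       # illustrative; pick any writable folder
if not os.path.exists(OUT_DIR):
    os.makedirs(OUT_DIR)                        # create the folder (and parents) if missing
print os.path.join(OUT_DIR, str(0) + '.jpg')    # e.g. E:\pythonExercises\20160607\0.jpg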
Key points:
1. Find the regex that pulls the image addresses out of the page source: "objURL":"(.*?)", (a short demo of both patterns follows this list)
2. Find the link to the next result page: re.search('<a href="(.*?)" class="n">下一页</a>', html).group(1); the capture group around the href is what group(1) returns
3. Build the output path: string = 'E:\\pythonExercises\\20160607\\' + str(i) + '.jpg'; the backslashes must be doubled (or a raw string used), since a single trailing \' would escape the closing quote
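
A minimal, self-contained demo of the two patterns on a hand-made HTML fragment (the fragment is illustrative, not real Baidu output):

#-*- coding:utf-8 -*-
import re

sample = ('"objURL":"http://example.com/a.jpg","fromURL":"x",'
          '"objURL":"http://example.com/b.jpg",'
          '<a href="/search/flip?tn=baiduimage&amp;pn=20" class="n">下一页</a>')

print re.findall('"objURL":"(.*?)",', sample)
# -> ['http://example.com/a.jpg', 'http://example.com/b.jpg']
print re.search('<a href="(.*?)" class="n">下一页</a>', sample).group(1)
# -> /search/flip?tn=baiduimage&amp;pn=20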
Problems encountered:
1. Getting Sublime to run Python:
Running Python code from Sublime Text 2:
After writing code in Sublime Text 2 you usually want to run it to see whether there are errors and whether it behaves correctly;
with IDLE you can simply press F5, but Sublime Text 2 needs some extra setup:
(1) Set the environment variable: add Python to PATH (a small check script follows this list);
(2) The path of the script Sublime Text 2 runs must not contain Chinese characters, otherwise it will not run;
(3) Once these two conditions are met, press Ctrl+B after writing the code to build and run it.
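
After changing PATH it is easy to check which interpreter Ctrl+B actually invokes; a trivial test script run with Ctrl+B (nothing Sublime-specific, just plain Python):

import sys

print sys.version      # version of the interpreter the build system launched
print sys.executable   # full path to that interpreter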
2. Sublime cannot take keyboard input: the Ctrl+B build console has no stdin, so the raw_input calls fail.
Reference solutions (a command-line workaround sketch follows the links):
http://blog.csdn.net/bravelee2009/article/details/9364737
http://www.unicac.cn/share/Sublime-Package-Control.html
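
Besides the package-based fixes in the links above, a simple workaround is to drop interactive input and pass the keyword and page count on the command line; the last three raw_input/get_page lines of the script above could be replaced with something like this sketch (the fallback values are arbitrary examples):

import sys

if len(sys.argv) >= 3:
    keyword = sys.argv[1]
    page = int(sys.argv[2])
else:
    # fallback defaults so Ctrl+B (which provides no stdin) can still run the script
    keyword = 'cat'
    page = 1
get_page(page, keyword)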