You are an unintended draft through the hall, yet alone you set off a mountain flood.
Problems caused by web crawlers

The scale of web crawlers ranges from small scripts that fetch a handful of pages to the site-wide and web-wide crawlers behind search engines.

Problems crawlers can cause:
- "Harassing" the server: rapid automated requests consume server resources
- Legal risk: crawled data may carry usage restrictions, and misusing it can create liability
- Privacy leaks: crawlers may reach data that was never meant to be public

Restrictions on web crawlers: sites typically respond in two ways, source review (inspecting the User-Agent of incoming requests) and announcement (publishing a Robots protocol).
The Robots protocol

Robots Exclusion Protocol: a site places a robots.txt file at its root to tell crawlers which paths may and may not be crawled.

For reference, two real examples:
http://www.baidu.com/robots.txt
http://www.qq.com/robots.txt

Using the Robots protocol: a crawler should read a site's robots.txt, automatically or manually, before crawling and follow its rules. The protocol is advisory rather than enforced, but ignoring it carries legal risk.
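Python's standard library can parse and apply these rules. The sketch below feeds urllib.robotparser a sample policy directly; the sample rules and example.com URLs are made up for illustration, and a real crawler would instead load the site's own robots.txt with set_url() and read():

```python
from urllib import robotparser

# A minimal robots.txt policy (illustrative, not any real site's file)
sample = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(sample.splitlines())

# Ask whether a given user agent may fetch a given URL
print(rp.can_fetch("Mozilla/5.0", "http://example.com/index.html"))  # True
print(rp.can_fetch("Mozilla/5.0", "http://example.com/private/a"))   # False
```

can_fetch() is the check a well-behaved crawler runs before each request.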
Example 1: Fetching a JD product page

Target URL:
https://item.jd.com/4099139.html
# -*- coding: utf-8 -*-
import requests

url = "https://item.jd.com/4099139.html"
try:
    r = requests.get(url, timeout=30)
    r.raise_for_status()                 # raise HTTPError on 4xx/5xx status
    r.encoding = r.apparent_encoding     # guess the real encoding from the content
    print(r.text[:1000])
except requests.RequestException:
    print("Fetch failed")
Example 2: Fetching an Amazon product page

Target URL:
https://www.amazon.cn/gp/product/B01M8L5Z3Y

A first attempt with requests' default headers shows why this page needs a browser-style User-Agent:
# -*- coding: utf-8 -*-
import requests

url = "https://www.amazon.cn/gp/product/B01M8L5Z3Y"

# First attempt: let requests send its default headers
r = requests.get(url)
print(r.status_code)            # likely 503: the site rejects the request
print(r.encoding)
r.encoding = r.apparent_encoding
print(r.request.headers)        # User-Agent is 'python-requests/...', which gives the crawler away
print(r.text)

# Second attempt: masquerade as a browser via the User-Agent header
kv = {'user-agent': 'Mozilla/5.0'}
r = requests.get(url, headers=kv)
print(r.status_code)            # should now be 200
print(r.request.headers)
print(r.text[:1000])
# -*- coding: utf-8 -*-
import requests

url = "https://www.amazon.cn/gp/product/B01M8L5Z3Y"
try:
    kv = {'user-agent': 'Mozilla/5.0'}   # masquerade as a browser
    r = requests.get(url, headers=kv, timeout=30)
    print(r.request.headers)
    print(r.status_code)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:1000])
except requests.RequestException:
    print("Error")
Example 3: Submitting a search keyword to Baidu/360

Baidu search (the query parameter is wd):
# -*- coding: utf-8 -*-
import requests

keyword = "Python"
try:
    kv = {'wd': keyword}                 # Baidu's keyword parameter
    r = requests.get("http://www.baidu.com/s", params=kv, timeout=30)
    print(r.status_code)
    print(r.request.url)                 # the keyword is appended as ?wd=Python
    r.raise_for_status()
    print(len(r.text))
except requests.RequestException:
    print("Error")
360 search (the query parameter is q; rarely used):
# -*- coding: utf-8 -*-
import requests

keyword = "Python"
try:
    kv = {'q': keyword}                  # 360's keyword parameter
    r = requests.get("http://www.so.com/s", params=kv, timeout=30)
    print(r.status_code)
    print(r.request.url)
    r.raise_for_status()
    print(len(r.text))
except requests.RequestException:
    print("Error")
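How params is encoded into the final URL can be inspected offline by preparing a request without sending it. This sketch uses requests' public Request/PreparedRequest API with the same keyword as above:

```python
import requests

# Build (but do not send) a GET request; preparing it performs the
# URL encoding that requests.get() would apply to params=.
req = requests.Request('GET', "http://www.baidu.com/s",
                       params={'wd': 'Python'}).prepare()
print(req.url)  # http://www.baidu.com/s?wd=Python
```

This is a convenient way to debug query strings before putting any load on the target site.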
Example 4: Fetching and saving an image from the web

Target image URL:
http://image.nationalgeographic.com.cn/2017/0930/20170930020707632.jpg
# -*- coding: utf-8 -*-
import requests
import os

url = "http://image.nationalgeographic.com.cn/2017/0930/20170930020707632.jpg"
root = "D://pics//"
path = root + url.split('/')[-1]         # reuse the file name from the URL
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url, timeout=30)
        print(r.status_code)
        with open(path, 'wb') as f:      # the image is binary, so write r.content
            f.write(r.content)           # the with block closes the file automatically
        print("File saved")
    else:
        print("File already exists")
except Exception:
    print("Fetch failed")
On the first run, the directory and file are created and the success message is printed. On a second run, the file is already on disk, so the code takes the other branch and skips the download.
Example 5: Automatically looking up the location of an IP address

The ip138 query page takes the IP as a URL parameter, so the lookup can be scripted by building the URL directly:
# -*- coding: utf-8 -*-
import requests

url = "http://www.ip138.com/ips1388.asp?ip="
try:
    r = requests.get(url + "59.66.0.0&action=2", timeout=30)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.status_code)
    print(r.text)
except requests.RequestException:
    print("Fetch failed")
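The try/except pattern repeated across these examples can be collected into one reusable fetch function. The name getHTMLText and its defaults are my own choices for this sketch, not part of the original examples:

```python
import requests

def getHTMLText(url, timeout=30):
    """Fetch a page and return its decoded text, or "" on any request error."""
    try:
        kv = {'user-agent': 'Mozilla/5.0'}   # browser-like UA, as in Example 2
        r = requests.get(url, headers=kv, timeout=timeout)
        r.raise_for_status()                 # 4xx/5xx -> HTTPError
        r.encoding = r.apparent_encoding     # guess encoding from the content
        return r.text
    except requests.RequestException:
        return ""
```

With this helper, each example reduces to a single call such as getHTMLText("https://item.jd.com/4099139.html").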
All pursuit in this world springs from love.
A believer in IT who loves coding, loves life, and loves sharing.
— hongXkeX