1. Defining the Item
An Item is the container that holds the scraped data; it is used much like a Python dict. You define an Item by creating a class that subclasses scrapy.Item and declaring class attributes of type scrapy.Field.
In this example we will scrape the title, the link, and the description (desc) of each site listed on http://www.dmoz.org/, so we define a field for each of them in the item.
Edit the items.py file in the tutorial directory:
import scrapy

# the item declares three fields: title, link and desc
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
2. Writing the First Spider
Create a new Python file named dmoz_spider.py in the tutorial/spiders directory and edit it as follows:
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)
- name: identifies the Spider. The name must be unique; you may not give two different Spiders the same name.
- start_urls: the list of URLs the Spider crawls when it starts, so the first pages fetched will come from this list. Subsequent URLs are extracted from the data of these initial pages.
- parse(): a method of the spider. When it is called, the Response object generated for each downloaded start URL is passed to it as its only argument. The method is responsible for parsing the response data, extracting data (generating items), and generating Request objects for URLs that need further processing (see the sketch after this list).
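As a minimal sketch of that last point (a hypothetical spider, not part of this tutorial's project, assuming the same Python 2 environment used here), a parse() method can yield both extracted data and new Requests that Scrapy will schedule for further crawling:

import urlparse  # Python 2 standard library; use urllib.parse on Python 3

import scrapy

class FollowLinksSpider(scrapy.Spider):
    # "follow_links_demo" is an illustrative name, not used elsewhere in the tutorial
    name = "follow_links_demo"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

    def parse(self, response):
        for href in response.xpath('//ul/li/a/@href').extract():
            # build an absolute URL from the relative link and ask Scrapy
            # to download it, calling parse() again on the new Response
            url = urlparse.urljoin(response.url, href)
            yield scrapy.Request(url, callback=self.parse)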
3. Crawling
Go to the project's root directory and run the following command to start the spider:
scrapy crawl dmoz
The crawl dmoz command starts the spider that crawls dmoz.org; you should see output similar to this:
[Anaconda2] E:\tutorial>scrapy crawl dmoz
2016-07-30 21:39:42+0800 [scrapy] INFO: Scrapy 0.24.4 started (bot: tutorial)
2016-07-30 21:39:42+0800 [scrapy] INFO: Optional features available: ssl, http11, boto
2016-07-30 21:39:42+0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2016-07-30 21:39:44+0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2016-07-30 21:39:48+0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-07-30 21:39:49+0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-07-30 21:39:49+0800 [scrapy] INFO: Enabled item pipelines:
2016-07-30 21:39:49+0800 [dmoz] INFO: Spider opened
2016-07-30 21:39:49+0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-07-30 21:39:49+0800 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-07-30 21:39:49+0800 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2016-07-30 21:39:55+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2016-07-30 21:39:55+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2016-07-30 21:39:55+0800 [dmoz] INFO: Closing spider (finished)
2016-07-30 21:39:55+0800 [dmoz] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 516,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 16392,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 7, 30, 13, 39, 55, 745000),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 7, 30, 13, 39, 49, 234000)}
2016-07-30 21:39:55+0800 [dmoz] INFO: Spider closed (finished)
Scrapy created a scrapy.Request object for each URL in the Spider's start_urls attribute and assigned the parse method to each Request as its callback. The Request objects were scheduled and executed, and the resulting scrapy.http.Response objects were fed back to the spider's parse() method.
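This default behaviour is equivalent to generating the Requests yourself by overriding the spider's start_requests() method. A minimal sketch of that equivalence, reusing the DmozSpider from section 2 (the body of parse() is elided):

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]

    def start_requests(self):
        # Scrapy calls this once at startup and schedules every Request it yields;
        # listing the URLs in start_urls does essentially the same thing
        urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
        ]
        for url in urls:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        pass  # parse the response here, as in the earlier example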
4. Extracting Items
Scrapy extracts content from web pages with a mechanism based on XPath and CSS expressions.
Let's try the Selector in the Scrapy shell. Go to the project's root directory and run the following command to start the shell:
scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
Note: when you run Scrapy from the terminal, always wrap the URL in quotes; otherwise a URL containing arguments (for example a & character) will make Scrapy fail.
Once the shell has started, you will see output like this:
[ ... Scrapy log here ... ]
2016-07-30 22:40:00+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
[s] crawler <scrapy.crawler.Crawler object at 0x044CEB50>
[s] item {}
[s] request <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s] response <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s] settings <scrapy.settings.Settings object at 0x03634810>
[s] spider <DmozSpider 'dmoz' at 0x48be330>
[s] Useful shortcuts:
[s] shelp() Shell help (print this help)
[s] fetch(req_or_url) Fetch request (or URL) and update local objects
[s] view(response) View response in a browser
In [1]:
When the shell loads, you get a local response variable that holds the response data. Entering response.body prints the body of the response, and response.headers shows its headers:
In [1]: response.headers
Out[1]:
{'Content-Language': 'en',
'Content-Type': 'text/html;charset=UTF-8',
'Cteonnt-Length': '52225',
'Date': 'Sat, 30 Jul 2016 14:55:10 GMT',
'Server': 'Apache',
'Set-Cookie': 'JSESSIONID=A98E722CA05AE195806DB0E5E79F64E4; Path=/; HttpOnly'}
A Selector has four basic methods:
- xpath(): takes an XPath expression and returns a list of selectors for all nodes matching the expression.
- css(): takes a CSS expression and returns a list of selectors for all nodes matching the expression (a css() example follows the shell session below).
- extract(): serializes the matched nodes as a list of unicode strings.
- re(): extracts data using the given regular expression and returns a list of unicode strings.
Let's try them in the shell:
In [3]: response.xpath('//title')
Out[3]: [<Selector xpath='//title' data=u'<title>DMOZ - Computers: Programming: La'>]
In [4]: response.xpath('//title').extract()
Out[4]: [u'<title>DMOZ - Computers: Programming: Languages: Python: Books</title>']
In [5]: response.xpath('//title/text()')
Out[5]: [<Selector xpath='//title/text()' data=u'DMOZ - Computers: Programming: Languages'>]
In [6]: response.xpath('//title/text()').extract()
Out[6]: [u'DMOZ - Computers: Programming: Languages: Python: Books']
In [7]: response.xpath('//title/text()').re('(\w+):')
Out[7]: [u'Computers', u'Programming', u'Languages', u'Python']
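The session above exercises xpath(), extract() and re(); css() works the same way. A quick sketch, assuming this Scrapy version supports the response.css shortcut alongside response.xpath (::text selects the text nodes of the matched elements):

response.css('title')                  # list of selectors matching the <title> element
response.css('title::text').extract()  # same strings as response.xpath('//title/text()').extract()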
5. Extracting Data
We can select every <li> element in the page's site list with this expression:
response.xpath('//ul/li')
The output:
In [1]: response.xpath('//ul/li')
Out[1]:
[<Selector xpath='//ul/li' data=u'<li> <a href="/docs/en/about.html"> '>,
<Selector xpath='//ul/li' data=u'<li> <a href="/docs/en/help/become.html"'>,
<Selector xpath='//ul/li' data=u'<li> <a href="/docs/en/add.html"> '>,
<Selector xpath='//ul/li' data=u'<li> <a href="/docs/en/help/helpmain.htm'>,
<Selector xpath='//ul/li' data=u'<li> <a href="/editors/"> Login '>,
<Selector xpath='//ul/li' data=u'<li class="social-link" onclick="share(\''>,
<Selector xpath='//ul/li' data=u'<li class="social-link" onclick="share(\''>,
<Selector xpath='//ul/li' data=u'<li class="social-link" onclick="share(\''>,
<Selector xpath='//ul/li' data=u'<li class="social-link" onclick="share(\''>,
<Selector xpath='//ul/li' data=u'<li> <span><a class="social-link" target'>,
<Selector xpath='//ul/li' data=u'<li> <span><a class="social-link" target'>]
Grab the text of the entries in the page's navigation bar:
response.xpath('//ul/li/a/text()').extract()
The output:
In [2]: response.xpath('//ul/li/a/text()').extract()
Out[2]:
[u' About ',
u' Become an Editor ',
u' Suggest a Site ',
u' Help ',
u' Login ']
Grab the links of those navigation-bar entries:
response.xpath('//ul/li/a/@href').extract()
The output:
In [3]: response.xpath('//ul/li/a/@href').extract()
Out[3]:
[u'/docs/en/about.html',
u'/docs/en/help/become.html',
u'/docs/en/add.html',
u'/docs/en/help/helpmain.html',
u'/editors/']
As you can see, each .xpath() call returns a list of selectors, so we can chain further .xpath() calls and iterate over the results to drill down into a particular node. We will use this feature below.
Modify the spiders/dmoz_spider.py file:
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            title = sel.xpath('a/text()').extract()
            link = sel.xpath('a/@href').extract()
            desc = sel.xpath('text()').extract()
            print title, link, desc
Run the crawl again:
scrapy crawl dmoz
You will see the scraped site information printed successfully:
2016-08-01 10:06:48+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[u' About '] [u'/docs/en/about.html'] [u' ', u' ']
[u' Become an Editor '] [u'/docs/en/help/become.html'] [u' ', u' ']
[u' Suggest a Site '] [u'/docs/en/add.html'] [u' ', u' ']
[u' Help '] [u'/docs/en/help/helpmain.html'] [u' ', u' ']
[u' Login '] [u'/editors/'] [u' ', u' ']
[] [] [u' ', u' Share via Facebook ']
[] [] [u' ', u' Share via Twitter ']
[] [] [u' ', u' Share via LinkedIn ']
[] [] [u' ', u' Share via e-Mail ']
[] [] [u' ', u' ']
[] [] [u' ', u' ']
2016-08-01 10:06:48+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
[u' About '] [u'/docs/en/about.html'] [u' ', u' ']
[u' Become an Editor '] [u'/docs/en/help/become.html'] [u' ', u' ']
[u' Suggest a Site '] [u'/docs/en/add.html'] [u' ', u' ']
[u' Help '] [u'/docs/en/help/helpmain.html'] [u' ', u' ']
[u' Login '] [u'/editors/'] [u' ', u' ']
[] [] [u' ', u' Share via Facebook ']
[] [] [u' ', u' Share via Twitter ']
[] [] [u' ', u' Share via LinkedIn ']
[] [] [u' ', u' Share via e-Mail ']
[] [] [u' ', u' ']
[] [] [u' ', u' ']
2016-08-01 10:06:48+0800 [dmoz] INFO: Closing spider (finished)
6. Using the Item
Item objects are custom Python dicts; you can read the value of any of their fields with standard dict syntax (the fields are the attributes we declared with Field earlier):
>>> item = DmozItem()
>>> item['title'] = 'Example title'
>>> item['title']
'Example title'
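A few more dict-style operations behave as you would expect; a short sketch (the exact output shown is illustrative):

>>> item.keys()        # only fields that have been assigned a value
['title']
>>> dict(item)         # convert the item to a plain dict
{'title': 'Example title'}
>>> item['link']       # reading a field that was never set raises KeyError
Traceback (most recent call last):
  ...
KeyError: 'link'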
Modify the spiders/dmoz_spider.py file once more:
import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
Crawling dmoz.org now produces DmozItem objects.
7. Saving the Scraped Data
The simplest way to store the scraped data is to use Feed exports:
scrapy crawl dmoz -o items.json
This command serializes the scraped data as JSON and writes it to an items.json file.
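Other serialization formats (for example CSV or XML) can be selected in the same way by changing the output file's extension. To check what the JSON export contains, here is a small read-back sketch using only the standard library (assuming the items.json produced above and the tutorial's Python 2 environment):

import json

with open('items.json') as f:
    items = json.load(f)  # the JSON exporter writes a single array of item dicts

print len(items), 'items scraped'
print items[0]  # each entry carries the title, link and desc fields defined in DmozItem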