Background: in the Qiushibaike crawler example, we parsed the whole page ourselves, extracted the next-page URL, and then sent a new request for it. Sometimes we would rather say: crawl every URL that satisfies a certain rule. CrawlSpider does exactly that for us. CrawlSpider inherits from Spider and simply adds new functionality on top of it.
1. Creating a CrawlSpider spider
Previously we created a spider with:
scrapy genspider [spider_name] [domain]
To create a CrawlSpider spider, use this command instead:
scrapy genspider -t crawl [spider_name] [domain]
2. LinkExtractors (link extractors)
With LinkExtractors, the programmer no longer has to extract the desired URLs and send requests by hand; that work is delegated to LinkExtractors, which find every URL matching the given rules on each crawled page so that they can be crawled automatically.
class LxmlLinkExtractor(
    allow=(),
    deny=(),
    allow_domains=(),
    deny_domains=(),
    restrict_xpaths=(),
    tags=('a', 'area'),
    attrs=('href',),
    canonicalize=False,
    unique=True,
    process_value=None,
    deny_extensions=None,
    restrict_css=(),
    strip=True,
    restrict_text=None
)
Main parameters:
1) allow: URLs matching this regular expression are extracted;
2) deny: URLs matching this regular expression are never extracted;
3) allow_domains: only URLs belonging to the domains listed here are extracted;
4) deny_domains: URLs belonging to the domains listed here are never extracted;
5) restrict_xpaths: restricts extraction to the page regions selected by these XPath expressions; works together with allow to filter links.
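For illustration, here is a minimal sketch of building a link extractor and using it by hand; the parameter values are made up for the example and are not taken from the project below:

from scrapy.linkextractors import LinkExtractor

# Only extract links whose URL matches `allow`, skip URLs matching `deny`,
# stay on the given domain, and only look inside the region selected by
# `restrict_xpaths`.
link_extractor = LinkExtractor(
    allow=(r'article-.+\.html',),
    deny=(r'/login',),
    allow_domains=('wxapp-union.com',),
    restrict_xpaths=('//div[@class="content"]',),
)

# Inside a spider callback you could call it directly:
#     links = link_extractor.extract_links(response)
# which returns a list of Link objects, each with a .url and a .text attribute.

In practice you rarely call extract_links yourself; a CrawlSpider does it for you through the Rule objects described next.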
3. The Rule class
The class used to define the spider's crawling rules.
class Rule(
    link_extractor=None,
    callback=None,
    cb_kwargs=None,
    follow=None,
    process_links=None,
    process_request=None,
    errback=None
)
Main parameters:
1) link_extractor: a LinkExtractor object that defines which links this rule extracts;
2) callback: the callback to run for URLs that match this rule. CrawlSpider already uses parse as its own callback, so do not override parse; always supply your own callback;
3) follow: whether links extracted from responses matched by this rule should themselves be followed;
4) process_links: a callable (or the name of a spider method) that receives the links extracted by link_extractor and can filter out links you do not want to crawl.
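Putting the two classes together, here is a minimal CrawlSpider sketch; the site, patterns, and method names are placeholders (not the example project below), and it also shows process_links filtering out unwanted links:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ExampleSpider(CrawlSpider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    rules = (
        # List pages: no callback, just keep following the links found on them.
        Rule(LinkExtractor(allow=r'/list/'), follow=True),
        # Detail pages: parse with a custom callback (never 'parse'),
        # and run the extracted links through filter_links first.
        Rule(LinkExtractor(allow=r'/detail/'), callback='parse_detail',
             follow=False, process_links='filter_links'),
    )

    def filter_links(self, links):
        # process_links receives the list of extracted Link objects and
        # must return the (possibly reduced) list.
        return [link for link in links if 'draft' not in link.url]

    def parse_detail(self, response):
        yield {'url': response.url}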
4. Spider example (WeChat Mini Program community, wxapp-union.com)
4.1 Creating the project
D:\学习笔记\Python学习\Python_Crawler>scrapy startproject wxapp
New Scrapy project 'wxapp', using template directory 'c:\python38\lib\site-packages\scrapy\templates\project', created in:
D:\学习笔记\Python学习\Python_Crawler\wxapp
You can start your first spider with:
cd wxapp
scrapy genspider example example.com
4.2 Creating the spider
D:\学习笔记\Python学习\Python_Crawler>cd wxapp
D:\学习笔记\Python学习\Python_Crawler\wxapp>scrapy genspider -t crawl wxappSpider "wxapp-union.com"
Created spider 'wxappSpider' using template 'crawl' in module:
wxapp.spiders.wxappSpider
Initial content of spiders/__init__.py:
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
Initial content of spiders/wxappSpider.py:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class WxappspiderSpider(CrawlSpider):
    name = 'wxappSpider'
    allowed_domains = ['wxapp-union.com']
    start_urls = ['http://wxapp-union.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        #item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        #item['name'] = response.xpath('//div[@id="name"]').get()
        #item['description'] = response.xpath('//div[@id="description"]').get()
        return item
Initial content of items.py:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class WxappItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass
Initial content of middlewares.py:
# -*- coding: utf-8 -*-
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
class WxappSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class WxappDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
Initial content of pipelines.py:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
class WxappPipeline:
    def process_item(self, item, spider):
        return item
Initial content of settings.py:
# -*- coding: utf-8 -*-
# Scrapy settings for wxapp project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'wxapp'
SPIDER_MODULES = ['wxapp.spiders']
NEWSPIDER_MODULE = 'wxapp.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'wxapp (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'wxapp.middlewares.WxappSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'wxapp.middlewares.WxappDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'wxapp.pipelines.WxappPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Initial content of scrapy.cfg:
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html
[settings]
default = wxapp.settings
[deploy]
#url = http://localhost:6800/
project = wxapp
4.3 Implementing the code
The spider is driven by LinkExtractor and Rule; together these two classes determine exactly where the crawl goes.
1) How to write the allow rule: the regular expression should be tight enough to match only the URLs we want, without also matching unrelated URLs;
2) When to use follow: if links that match the current rule should themselves be crawled for further links, set it to True; otherwise set it to False;
3) When to specify callback: if a page is visited only to discover more URLs and its own data is not needed, the callback can be omitted.
A) settings.py
Make the following changes in settings.py:
ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 1
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.9 Safari/537.36',
}
ITEM_PIPELINES = {
    'wxapp.pipelines.WxappPipeline': 300,
}
B) Creating start.py
Create a start.py file in the project root directory and add the code below. Running this file from the IDE is equivalent to running scrapy crawl wxappSpider from a terminal inside the project directory, which makes the spider easy to launch and debug.
from scrapy import cmdline
cmdline.execute("scrapy crawl wxappSpider".split())
C) wxappSpider.py
The code is as follows:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from wxapp.items import WxappItem


class WxappspiderSpider(CrawlSpider):
    name = 'wxappSpider'
    allowed_domains = ['wxapp-union.com']
    start_urls = ['http://www.wxapp-union.com/portal.php?mod=list&catid=2&page=1']

    rules = (
        # List (pagination) pages: no data to extract, just keep following links.
        Rule(LinkExtractor(allow=r'.+mod=list&catid=2&page=\d'), follow=True),
        # Article detail pages: parse with parse_item, do not follow further.
        Rule(LinkExtractor(allow=r'.+article-.+\.html'), callback='parse_item', follow=False)
    )

    def parse_item(self, response):
        title = response.xpath(r"//h1[@class='ph']/text()").get().strip()
        p = response.xpath(r"//p[@class='authors']")
        author = p.xpath(r".//a/text()").get().strip()
        pubTime = p.xpath(r".//span/text()").get().strip()
        articleContent = response.xpath(r"//td[@id='article_content']//text()").getall()
        content = "".join(articleContent).strip()
        item = WxappItem(title=title, author=author, pubTime=pubTime, content=content)
        yield item
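A quick way to sanity-check the two allow patterns above before running the spider is to try them against sample URLs with Python's re module. This is a throwaway snippet; the URLs are made up to mimic the shapes of the site's list and article URLs:

import re

list_pattern = r'.+mod=list&catid=2&page=\d'
detail_pattern = r'.+article-.+\.html'

# A pagination URL should match the list pattern but not the detail pattern.
print(bool(re.search(list_pattern, 'http://www.wxapp-union.com/portal.php?mod=list&catid=2&page=2')))    # True
print(bool(re.search(detail_pattern, 'http://www.wxapp-union.com/portal.php?mod=list&catid=2&page=2')))  # False
# An article URL should match only the detail pattern.
print(bool(re.search(detail_pattern, 'http://www.wxapp-union.com/article-1234-1.html')))                 # True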
D) items.py
The code is as follows:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class WxappItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    author = scrapy.Field()
    pubTime = scrapy.Field()
    content = scrapy.Field()
E) pipelines.py
The code is as follows:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.exporters import JsonLinesItemExporter
class WxappPipeline:
    def __init__(self):
        # Open the output file in binary mode; JsonLinesItemExporter writes bytes.
        self.fp = open('wxjc.json', 'wb')
        self.exporter = JsonLinesItemExporter(self.fp, ensure_ascii=False, encoding='utf-8')

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.fp.close()
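Because JsonLinesItemExporter writes one JSON object per line, the resulting wxjc.json can be read back line by line. A small sketch of consuming it, assuming the crawl has already produced the file:

import json

with open('wxjc.json', encoding='utf-8') as f:
    for line in f:
        item = json.loads(line)
        # Each line is one scraped article with the fields defined in WxappItem.
        print(item['title'], item['pubTime'])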