Crawler Notes (3) - An Analysis of Scrapy's Deduplication Logic

The default deduplication logic

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2017/5/7 22:43
# @Author  : Aries
# @File    : scrapy_filter.py
# @Software: PyCharm

import scrapy
from scrapy.http.request import Request


class FilterSpider(scrapy.Spider):
    name = 'filter'
    headers = {
        'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36",
    }

    def start_requests(self):
        yield Request(url='https://www.baidu.com/s?wd=22', headers=self.headers)

    def parse_print(self, response):
        self.logger.info(response.url)

    def parse(self, response):
        self.logger.info("--------------------------")
        yield Request(url='https://www.baidu.com/s?wd=1', callback=self.parse_print, headers=self.headers)
        # The next two requests share the same URL, so one of them will be
        # dropped by the default duplicate filter.
        yield Request(url='https://www.baidu.com/s?wd=3', callback=self.parse_print, headers=self.headers)
        yield Request(url='https://www.baidu.com/s?wd=3', callback=self.parse_print, headers=self.headers)

        
# The run output is as follows
2017-05-07 23:33:36 [scrapy.core.engine] INFO: Spider opened
2017-05-07 23:33:36 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-07 23:33:36 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-07 23:33:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=22> (referer: None)
2017-05-07 23:33:37 [filter] INFO: --------------------------
2017-05-07 23:33:37 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.baidu.com/s?wd=3> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2017-05-07 23:33:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=3> (referer: https://www.baidu.com/s?wd=22)
2017-05-07 23:33:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=1> (referer: https://www.baidu.com/s?wd=22)
2017-05-07 23:33:37 [filter] INFO: https://www.baidu.com/s?wd=3
2017-05-07 23:33:37 [filter] INFO: https://www.baidu.com/s?wd=1
2017-05-07 23:33:37 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-07 23:33:37 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
# From the output above: of the two requests to https://www.baidu.com/s?wd=3, one was dropped by the default deduplication logic
2017-05-07 23:33:37 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.baidu.com/s?wd=3> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)

Limitations of the default deduplication logic

The default filter is 'scrapy.dupefilters.RFPDupeFilter'.
It calls request_fingerprint to turn each request into a fingerprint, much like fingerprint authentication.
If the fingerprint already exists, request_seen returns True and the request is dropped.
If it does not, the fingerprint is added to self.fingerprints (which is initialised as a plain set()).

# Excerpt from scrapy/dupefilters.py
class RFPDupeFilter(BaseDupeFilter):
    def request_seen(self, request):
        fp = self.request_fingerprint(request)
        if fp in self.fingerprints:
            # Fingerprint already recorded: report the request as a duplicate.
            return True
        self.fingerprints.add(fp)
        if self.file:
            # Persist the fingerprint to disk when JOBDIR is configured.
            self.file.write(fp + os.linesep)
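
By default the fingerprint is computed by scrapy.utils.request.request_fingerprint from the request method, the canonicalized URL and the body; headers are not included. The short check below is a sketch added for illustration (it assumes Scrapy 1.x, the version contemporary with this post) and shows why two requests to the same URL collide even when their headers differ:

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

r1 = Request('https://www.baidu.com/s?wd=3')
r2 = Request('https://www.baidu.com/s?wd=3', headers={'user-agent': 'another UA'})
# Same method, URL and body -> same fingerprint, so one of them gets filtered.
print(request_fingerprint(r1) == request_fingerprint(r2))  # True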

With this deduplication logic, the default filter cannot handle cases like the following:

  • Example 1: the same URL needs to be crawled again at a later time, i.e. the crawl date is part of the request's logical identity.
  • Example 2: two URLs that differ only in extra parameters (for instance wd=3&s=1 and wd=3&s=2) are really the same wd query and should be treated as duplicates.

Solutions:

  • Taking Example 2: pass the wd value through meta into the request_fingerprint function and return a string built from it, so the filter generates its fingerprint from the query rather than from the full URL.
  • Example 1 can be solved in the same way: pass the date through meta to request_fingerprint and return the corresponding URL plus the date (roughly return request.url + "--" + date); see the sketch after this list.
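
A minimal sketch of the Example 1 idea, assuming the spider puts a "date" string into request.meta (the meta key and the class name here are illustrative, not from the original post):

from scrapy.dupefilters import RFPDupeFilter


class DateAwareFilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        # Include the crawl date in the fingerprint so the same URL can be
        # crawled again on a different date.
        if "date" in request.meta:
            return request.url + "--" + request.meta["date"]
        # Fall back to the default fingerprint for everything else.
        return super(DateAwareFilter, self).request_fingerprint(request)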

Step 1: override RFPDupeFilter

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2017/5/7 23:17
# @Author  : Aries
# @File    : custom_filter.py
# @Software: PyCharm

from scrapy.dupefilters import RFPDupeFilter


class CustomURLFilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        # Build the fingerprint from the wd value carried in meta, so
        # requests for the same wd query are deduplicated even when the
        # rest of the URL differs.
        if "wd" in request.meta:
            return "https://www.baidu.com/s" + "--" + request.meta["wd"]
        else:
            return request.url
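
As a quick sanity check (a sketch added here, assuming Scrapy 1.x, where RFPDupeFilter can be constructed without arguments, and that the class above lives in lagou/custom_filter.py), the two wd=3 URLs now map to the same fingerprint, so the second request is reported as already seen:

from scrapy.http import Request
from lagou.custom_filter import CustomURLFilter

f = CustomURLFilter()
r1 = Request('https://www.baidu.com/s?wd=3&s=1', meta={"wd": "3"})
r2 = Request('https://www.baidu.com/s?wd=3&s=2', meta={"wd": "3"})
print(f.request_seen(r1))  # None (falsy): this fingerprint has not been seen yet
print(f.request_seen(r2))  # True: same fingerprint as r1, so it counts as a duplicate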

Step 2: enable custom_settings and set the corresponding meta on each request

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2017/5/7 22:43
# @Author  : Aries
# @File    : scrapy_filter.py
# @Software: PyCharm

import scrapy
from scrapy.http.request import Request


class FilterSpider(scrapy.Spider):
    name = 'filter'
    headers = {
        'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36",
    }
    custom_settings = {
        # Log every filtered duplicate request.
        'DUPEFILTER_DEBUG': True,
        # Point Scrapy at the custom filter defined in step 1.
        'DUPEFILTER_CLASS': "lagou.custom_filter.CustomURLFilter"
    }

    def start_requests(self):
        yield Request(url='https://www.baidu.com/s?wd=22', headers=self.headers, meta={"wd": "22"})

    def parse_print(self, response):
        self.logger.info(response.url)

    def parse(self, response):
        self.logger.info("--------------------------")
        yield Request(url='https://www.baidu.com/s?wd=1', callback=self.parse_print, headers=self.headers, meta={"wd": "1"})
        # The next two URLs differ (s=1 vs s=2), but both carry wd=3 in meta,
        # so the custom filter treats them as duplicates of each other.
        yield Request(url='https://www.baidu.com/s?wd=3&s=1', callback=self.parse_print, headers=self.headers, meta={"wd": "3"})
        yield Request(url='https://www.baidu.com/s?wd=3&s=2', callback=self.parse_print, headers=self.headers, meta={"wd": "3"})

# The run output is as follows
2017-05-07 23:31:14 [scrapy.core.engine] INFO: Spider opened
2017-05-07 23:31:14 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-07 23:31:14 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-07 23:31:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=22> (referer: None)
2017-05-07 23:31:14 [filter] INFO: --------------------------
2017-05-07 23:31:14 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.baidu.com/s?wd=3&s=2>
2017-05-07 23:31:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=3&s=1> (referer: https://www.baidu.com/s?wd=22)
2017-05-07 23:31:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=1> (referer: https://www.baidu.com/s?wd=22)
2017-05-07 23:31:14 [filter] INFO: https://www.baidu.com/s?wd=3&s=1
2017-05-07 23:31:15 [filter] INFO: https://www.baidu.com/s?wd=1
2017-05-07 23:31:15 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-07 23:31:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

# From the output above: of the two wd=3 requests, the one for https://www.baidu.com/s?wd=3&s=2 was dropped by the custom deduplication logic even though its URL differs from wd=3&s=1; DUPEFILTER_DEBUG makes the filtered request visible in the log
2017-05-07 23:31:14 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.baidu.com/s?wd=3&s=2>