Implementing an Incremental Crawler with Scrapy Middleware

Preface

When scraping a site with Scrapy, the first run is usually a full crawl; after that you only need incremental crawls, and a crawl interrupted midway should resume where it left off. In both cases only the URLs that have not yet been fetched should be crawled. To improve efficiency, each URL should be checked before it is scheduled: if it has already been crawled, drop it; otherwise hand it to the scheduler, which arranges the fetch.

(figure: Scrapy architecture diagram)

As the framework's architecture diagram shows, Scrapy has two important kinds of middleware: the Downloader Middlewares and the Spider Middlewares. Every request yielded with scrapy.Request() inside a spider passes through the spider middlewares. The official documentation describes the process_spider_output(response, result, spider) method of scrapy.contrib.spidermiddleware.SpiderMiddleware (moved to scrapy.spidermiddlewares in current Scrapy releases) as follows:

This method is called with the result returned from the Spider, after the Spider has processed the response.

In other words, both the item objects and the Request objects yielded in the spider's parse method pass through this method, which makes it the natural place for the incremental check: if a URL has already been crawled, drop the request; otherwise hand it to the scheduler. This article implements an incremental crawler through middleware alone, without changing the original spider.
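
For context, here is a minimal sketch of the kind of spider whose output flows through that method; the spider name, start URL, and selectors are illustrative, not taken from the original project:

import scrapy

class NewsSpider(scrapy.Spider):
    # illustrative spider: name, domain and selectors are assumptions
    name = 'news'
    start_urls = ['http://example.com/news/']

    def parse(self, response):
        # every Request and item yielded here passes through
        # process_spider_output() of each enabled spider middleware
        for href in response.css('a.news-link::attr(href)').getall():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_detail)

    def parse_detail(self, response):
        yield {'title': response.css('h1::text').get(),
               'origin_url': response.url}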

Implementation

Step 1: the database helper

Create a database helper file db.py that provides:

  • the MySQL connection configuration
  • a check on the origin_url field that tells whether a URL already exists in the database
  • the insert method used by the item pipeline (a pipeline sketch follows the db.py code below)
  • a module-level singleton instance, so the database connection is not re-established for every query
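
db.py assumes a news table with at least an origin_url column already exists. The original article never shows the schema, so the sketch below is an assumption: only origin_url is actually required by the code, the other columns are illustrative, and the index on origin_url keeps the existence check fast.

# one-off setup sketch: creates the table db.py expects; the schema is
# an assumption, only origin_url is required by the code that follows
import pymysql

conn = pymysql.connect(host='localhost', port=3306, user='root',
                       password='123456', db='hebei', charset='utf8')
with conn.cursor() as cur:
    cur.execute('''
        create table if not exists news (
            id int auto_increment primary key,
            title varchar(255),
            origin_url varchar(255) not null,
            key idx_origin_url (origin_url)
        ) default charset=utf8
    ''')
conn.commit()
conn.close()

With the table in place, here is db.py: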
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
import pymysql
import logging

class DB_MySQL:
    '''MySQL helper for the news table.'''
    HOST = 'localhost'
    DBNAME = 'hebei'
    USER = 'root'
    PASSWD = '123456'
    PORT = 3306
    CHARSET = 'utf8'

    def __init__(self):
        self.conn = pymysql.connect(host=self.HOST, port=self.PORT, user=self.USER,
                                    password=self.PASSWD, db=self.DBNAME,
                                    charset=self.CHARSET)
        self.cur = self.conn.cursor()

    # insert an item as one row; column names come from the item's keys
    def insert(self, item):
        try:
            fields = list(item.keys())
            sql = 'insert into news (%s) values (%s)' % (','.join(fields),
                                                         ','.join(['%s'] * len(fields)))
            self.cur.execute(sql, [item[x] for x in fields])
            self.conn.commit()
        except Exception as e:
            logging.error('MySQL insert failed: %s' % str(e))

    # check whether a URL is already stored
    def url_is_exist(self, url):
        try:
            # execute() returns the number of matched rows, so a truthy
            # value means the URL is already in the table
            return bool(self.cur.execute(
                'select 1 from news where origin_url = %s limit 1', (url,)))
        except Exception as e:
            logging.error('MySQL origin_url existence check failed: ' + str(e))
            return False

    def close(self):
        self.cur.close()
        self.conn.close()

# module-level instance: every importer shares one connection
db_mysql = DB_MySQL()
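
The article mentions that insert() is used in the pipeline but does not show it; here is a minimal sketch, assuming db.py lives in the hb_policy_news package and that the pipeline class is named HbPolicyNewsPipeline (both names are assumptions):

# pipelines.py -- a minimal sketch; class name and import path are assumptions
from hb_policy_news.db import db_mysql

class HbPolicyNewsPipeline(object):
    def process_item(self, item, spider):
        # persist the item; origin_url must be among its fields so that
        # url_is_exist() can recognise the URL on the next run
        db_mysql.insert(dict(item))
        return item

    def close_spider(self, spider):
        # release the shared connection when the crawl ends
        db_mysql.close()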

Step 2: the middleware implementation

process_spider_output() first checks whether the object is a Request; if so, it reads the object's url attribute and looks it up in the database. If the URL already exists, the request is dropped by yielding None (Scrapy silently ignores None values in spider middleware output); otherwise the request is yielded on to the scheduler.

# middlewares.py
from scrapy import signals
from scrapy.http import Request

# adjust this import to wherever db.py lives in your project
from hb_policy_news.db import db_mysql

class HbPolicyNewsSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            if isinstance(i, Request):
                referer = i.headers.get(b'Referer', b'')
                if db_mysql.url_is_exist(i.url):
                    spider.logger.debug('URL already crawled, dropping request: %s, referer: %s' % (i.url, referer))
                    # Scrapy ignores None in spider middleware output,
                    # so this effectively drops the request
                    yield None
                else:
                    spider.logger.debug('New URL, scheduling request: %s, referer: %s' % (i.url, referer))
                    yield i
            else:
                # items (and anything else) pass through unchanged
                yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

Enable the middleware in settings.py

SPIDER_MIDDLEWARES = {
   'hb_policy_news.middlewares.HbPolicyNewsSpiderMiddleware': 543,
}
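
For the incremental check to have any data to work with, the pipeline that writes crawled items (sketched above; the class name is an assumption) also has to be enabled:

ITEM_PIPELINES = {
   'hb_policy_news.pipelines.HbPolicyNewsPipeline': 300,
}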