Scrapy_redis (Part 32)

1. Scrapy_redis (first set up Scrapy_redis, then deploy the project to Scrapyd)

1.1 Features and Architecture
  • scrapy_redis is a Redis-based Scrapy component used for distributed deployment and development of Scrapy projects.
  • Features:
    (1) Distributed crawling (multiple hosts work together on the same crawl; Scrapy itself does not support distributed crawling)
    You can start multiple spider instances that share a single Redis request queue. This is best suited to broad crawls across many domains.
    (2) Distributed post-processing
    Scraped items are pushed into Redis, so you can start as many item-processing workers as you need.
    (3) Plug-and-play for Scrapy (Scrapy_redis is a Scrapy plugin that builds on Scrapy)
    It provides a scheduler with a duplicate filter, an item pipeline, and base spider classes, and is simple to use.


Note: in the standard Scrapy workflow the scheduler keeps requests in memory, whereas here requests are stored in a Redis database, so the request queue held in Redis can be shared by multiple machines.

1.2 Installation and Usage
  • Scrapy-redis is usually installed with pip:
    pip install scrapy-redis

  • scrapy-redis depends on:
    Python 2.7, 3.4 or 3.5
    Redis >= 2.8
    Scrapy >= 1.1
    redis-py >= 2.10

  • scrapy-redis is very easy to use: you can keep the original Scrapy project code almost unchanged and only add a few settings.

1.3 Common Settings
  • Enable the Redis-backed scheduler so that requests are stored in Redis by adding the following to settings (scrapy_redis scheduler configuration) --- required.
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"

  • Make sure all spiders share the same duplicate filter through Redis (duplicate-filter configuration) -- required.
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

  • Specify the host and port to use when connecting to Redis (Redis connection configuration) -- required.
    REDIS_HOST = 'localhost'
    REDIS_PORT = 6379

  • Do not clear the Redis queues, which allows a crawl to be paused and resumed ---- optional. (A consolidated settings.py sketch follows this list.)
    SCHEDULER_PERSIST = True
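
  • Putting the settings above together, a minimal settings.py fragment for a scrapy_redis project might look like the sketch below. The RedisPipeline entry is optional and only needed if you also want items pushed into Redis (see section 1.4); the host and port are placeholders for your own Redis instance.
    # settings.py -- minimal scrapy_redis configuration (sketch)
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"               # queue requests in Redis (required)
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"   # shared duplicate filter (required)
    REDIS_HOST = 'localhost'                                     # Redis connection (required)
    REDIS_PORT = 6379
    SCHEDULER_PERSIST = True                                     # keep queues so crawls can be paused/resumed (optional)
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 300,             # optionally push items to spidername:items
    }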

Official documentation: https://scrapy-redis.readthedocs.io/en/stable/

1.4 Data Stored in Redis
  • spidername:items
    A list holding the scraped items; each entry is the item serialized as a JSON string.

  • spidername:dupefilter
    A set used to deduplicate visited requests; each entry is a 40-character hash (the request fingerprint).

  • spidername:start_urls
    A list from which a RedisSpider reads its initial URLs when it starts.

  • spidername:requests
    A sorted set (zset) holding requests waiting to be scheduled; each entry is a serialized Request object. (A short snippet for inspecting these keys follows this list.)
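
  • As a quick way to look at these keys, here is a minimal sketch using redis-py; the spider name 'booksys' (from the example project below) and the localhost connection are assumptions, and the items list only exists if RedisPipeline is enabled.
    # inspect_redis.py -- peek at the keys created by scrapy_redis (sketch)
    import redis

    r = redis.Redis(host='localhost', port=6379)
    print(r.zcard('booksys:requests'))      # how many requests are waiting to be scheduled
    print(r.scard('booksys:dupefilter'))    # how many request fingerprints have been seen
    print(r.lrange('booksys:items', 0, 4))  # first few scraped items (JSON strings)
    # For a RedisSpider, seed the crawl by pushing a start URL:
    r.lpush('booksys:start_urls', 'http://www.bookschina.com/kinder/31000000/')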

2. Project Example (scraping some book information from 中国图书网, bookschina.com)

2.1 booksys.py
# -*- coding: utf-8 -*-
import scrapy
from lxml import etree
from ..items import LibrarysystemItem


class BooksysSpider(scrapy.Spider):
    name = 'booksys'
    # allowed_domains = ['http://www.bookschina.com/']
    # start_urls = ['http://www.bookschina.com/kinder/63000000/','http://www.bookschina.com/kinder/51000000/','http://www.bookschina.com/kinder/27000000/','http://www.bookschina.com/kinder/31000000/','http://www.bookschina.com/kinder/35000000/','http://www.bookschina.com/kinder/64000000/']
    start_urls = ['http://www.bookschina.com/kinder/31000000/']

    def parse(self, response):
        book_nodes = response.xpath('//div[@class="w1200"]/div[@id="container"]/div[@class="listMain clearfix"]/div[@class="listLeft"]/div[@class="bookList"]/ul/li')
        if book_nodes:
            for content in book_nodes:
                # Re-parse each <li> fragment with lxml so the absolute XPaths
                # below only match inside this single book entry.
                content = etree.HTML(content.extract())
                item = LibrarysystemItem()
                item['imagename'] = content.xpath('//div[@class="cover"]/a/img/@src')[0].strip()
                item['bookname'] = content.xpath('//div[@class="infor"]/h2/a/text()')[0].strip()
                item['author'] = content.xpath('//div[@class="otherInfor"]/a/text()')[0].strip()
                item['discount'] = content.xpath('//div[@class="priceWrap"]/span[@class="sellPrice"]/text()')[0].strip()
                item['price'] = content.xpath('//div[@class="priceWrap"]/del/text()')[0].strip()
                item['linkaddress'] = 'http://www.bookschina.com' + content.xpath('//div[@class="infor"]/h2/a/@href')[0].strip()
                introduction = content.xpath('//p[@class="recoLagu"]/text()')
                if not introduction:
                    item['introduction'] = "此书无简介"
                else:
                    item['introduction'] = introduction[0].strip().replace('\r', '')
                yield item
            # Follow the "next page" link if there is one.
            next_url = response.xpath('//div[@class="pagination"]/div[@class="paging"]/ul/li[@class="next"]/a/@href').extract_first()
            if next_url:
                yield scrapy.Request(response.urljoin(next_url))
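
The spider above is a plain scrapy.Spider with a hard-coded start_urls list. To drive it from the spidername:start_urls key described in section 1.4 instead, it could be rewritten as a RedisSpider, roughly as in the sketch below; the redis_key value is an assumption (scrapy_redis defaults to '<spidername>:start_urls' when it is not set), and the parsing logic itself would stay the same.

# booksys_redis.py -- scrapy_redis variant of the spider above (sketch)
from scrapy_redis.spiders import RedisSpider


class BooksysRedisSpider(RedisSpider):
    name = 'booksys_redis'
    # Initial URLs come from this Redis list instead of a start_urls attribute;
    # seed it with: lpush booksys_redis:start_urls http://www.bookschina.com/kinder/31000000/
    redis_key = 'booksys_redis:start_urls'

    def parse(self, response):
        # Reuse the same XPath extraction as BooksysSpider.parse() above.
        ...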

2.2 items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class LibrarysystemItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # image = scrapy.Field()
    # bookname1 = scrapy.Field()
    # author = scrapy.Field()
    # discountprice = scrapy.Field()
    # originalprice = scrapy.Field()
    # linkaddress = scrapy.Field()
    # introduce = scrapy.Field()
    imagename = scrapy.Field()
    bookname = scrapy.Field()
    author = scrapy.Field()
    discount = scrapy.Field()
    price = scrapy.Field()
    linkaddress = scrapy.Field()
    introduction = scrapy.Field()

2.3 middlewares.py
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
import random


class LibrarysystemSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class LibrarysystemDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class RandomUserAgentMiddleWare(object):
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # Build the middleware from the MY_USER_AGENT list defined in settings.py.
        s = cls(user_agents=crawler.settings.get('MY_USER_AGENT'))
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Attach a randomly chosen User-Agent header to each outgoing request.
        agent = random.choice(self.user_agents)
        request.headers['User-Agent'] = agent
        return None

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
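
For RandomUserAgentMiddleWare to take effect, a MY_USER_AGENT list and a DOWNLOADER_MIDDLEWARES entry have to be added to settings.py. A sketch follows; the user-agent strings and the priority value 543 are illustrative assumptions.

# settings.py -- enabling the random user-agent middleware (sketch)
MY_USER_AGENT = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15',
]

DOWNLOADER_MIDDLEWARES = {
    'librarysystem.middlewares.RandomUserAgentMiddleWare': 543,
}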

2.4 pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
from .items import LibrarysystemItem
import pymysql
import logging

logger = logging.getLogger(__name__)


class LibrarysystemPipeline(object):
    # Persists scraped book items into MySQL.
    def process_item(self, item, spider):
        # self.f.write(json.dumps(item, ensure_ascii=False))
        # self.f.write('\n')
        if item:
            # Skip books that are already stored (deduplicated on bookname + author).
            sql = 'select ID from booktable WHERE bookname=%s AND author=%s'
            self.cursor.execute(sql, (item['bookname'], item['author']))
            if self.cursor.fetchone():
                pass
            else:
                try:
                    print("Writing book info scraped from " + item['linkaddress'])
                    sql = 'insert into booktable(imagename,bookname,author,discount,price,linkaddress,introduction) VALUES(%s,%s,%s,%s,%s,%s,%s)'
                    self.cursor.execute(sql, (
                        item['imagename'],
                        item['bookname'],
                        item['author'],
                        item['discount'],
                        item['price'],
                        item['linkaddress'],
                        item['introduction']
                    ))
                    self.conn.commit()
                except Exception as e:
                    self.conn.rollback()
                    print("The following error occurred:")
                    logger.warning('failed to write book info, url=%s %s' % (item['linkaddress'], e))
        else:
            logger.info('empty item, nothing to store')
        return item

    def open_spider(self, spider):
        data_config = spider.settings['DATABASE_CONFIG']
        if data_config['type'] == 'mysql':
            self.conn = pymysql.connect(**data_config['config'])
            self.cursor = self.conn.cursor()
            spider.conn = self.conn
            spider.cursor = self.cursor

    def close_spider(self, spider):
        data_config = spider.settings['DATABASE_CONFIG']
        if data_config['type'] == 'mysql':
            self.cursor.close()
            self.conn.close()
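
The pipeline reads its connection parameters from a DATABASE_CONFIG setting and has to be registered in ITEM_PIPELINES. A sketch of the corresponding settings.py entries follows; the database name, credentials, and priority number are placeholder assumptions, and the config dict is passed straight to pymysql.connect().

# settings.py -- wiring up the MySQL pipeline (sketch)
ITEM_PIPELINES = {
    'librarysystem.pipelines.LibrarysystemPipeline': 300,
}

DATABASE_CONFIG = {
    'type': 'mysql',
    'config': {
        'host': 'localhost',
        'port': 3306,
        'user': 'root',
        'password': 'your_password',
        'db': 'librarydb',
        'charset': 'utf8mb4',
    },
}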
2.5 scrapy.cfg

[settings]
default = librarysystem.settings

[deploy:librarysystem]
url = http://192.168.212.131:6800/
project = librarysystem
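
With this deploy target defined, the project can be pushed to the Scrapyd server and a crawl scheduled roughly as follows; scrapyd-client must be installed first, and the spider name booksys comes from the example above.

pip install scrapyd-client
scrapyd-deploy librarysystem -p librarysystem
curl http://192.168.212.131:6800/schedule.json -d project=librarysystem -d spider=booksys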
