Scrapy: scraping images with the Images Pipeline

Workflow for scraping unstructured data

1. Capture the network traffic

2. Open DevTools (F12) to find the JSON endpoint and its query-string parameters (sn is the pagination offset, stepping by 30 per page):
url = 'https://image.so.com/zjl?ch=beauty&t1=595&src=banner_beauty&sn={}&listtype=new&temp=1'.format(sn)
ch: beauty
t1: 595
src: banner_beauty
sn: 90
listtype: new
temp: 1
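
Before writing the spider, it is worth hitting the endpoint once by hand. A minimal sketch using requests, assuming the API still responds as captured above (the "list" and "qhimg_url" keys are what the DevTools capture showed):

import requests

url = ('https://image.so.com/zjl?ch=beauty&t1=595&src=banner_beauty'
       '&sn={}&listtype=new&temp=1')
resp = requests.get(url.format(0), headers={'User-Agent': 'Mozilla/5.0'})
data = resp.json()
print(data.keys())                    # inspect the top-level structure
print(data['list'][0]['qhimg_url'])   # first image link of the first page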

Project implementation

1. Create the crawler project and spider file

scrapy startproject So
cd So
scrapy genspider so image.so.com
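
For reference, scrapy startproject generates the standard layout; the files edited in the rest of this post live here:

So/
├── scrapy.cfg
└── So/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── so.py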

2. Response object attributes and methods

 # Attributes
1. response.text: the response body as a string
2. response.body: the response body as bytes
3. response.xpath(''): run an XPath query against the response

  # Calling methods on the result of response.xpath('')
1. The result is a list whose elements are Selector objects
  # e.g. <Selector xpath='//article' data='...'>
2. .extract(): serialize every element of the list to a Unicode string
3. .extract_first(): extract the content of the first element of the list
4. .get(): same as .extract_first(); the preferred spelling in newer Scrapy (see the short example below)
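
A short, self-contained demonstration of these calls (the HTML string is made up for illustration):

from scrapy import Selector

sel = Selector(text='<html><body><article><h2>Hello</h2></article></body></html>')

print(sel.xpath('//article'))                    # [<Selector xpath='//article' data='<article>...'>]
print(sel.xpath('//h2/text()').extract())        # ['Hello'] - every match, serialized
print(sel.xpath('//h2/text()').extract_first())  # 'Hello'
print(sel.xpath('//h2/text()').get())            # 'Hello' (same result, newer spelling)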

Implementation

1. Define the data structure to scrape (items.py)
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class SoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    img_link = scrapy.Field()
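SoItem behaves like a dict with a fixed set of keys; a quick interactive check, run from the project root (the URL is a made-up placeholder):

from So.items import SoItem

item = SoItem()
item['img_link'] = 'https://p0.qhimg.com/example.jpg'  # placeholder URL
print(item['img_link'])
print(dict(item))        # {'img_link': 'https://p0.qhimg.com/example.jpg'}
# item['title'] = '...'  # would raise KeyError: only declared fields are allowed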
2. Spider file (so.py): scrape the image links and yield them to the item pipeline
# -*- coding: utf-8 -*-
import scrapy
import json
from ..items import SoItem

class SoSpider(scrapy.Spider):
    name = 'so'
    allowed_domains = ['image.so.com']

    # Override start_requests() to hand every URL to the scheduler
    def start_requests(self):
        url = "https://image.so.com/zjl?ch=beauty&t1=595&src=banner_beauty&sn={}&listtype=new&temp=1"
        # Generate 5 pages of URLs (sn steps by 30 per page)
        for i in range(5):
            sn = i * 30
            full_url = url.format(sn)
            # Hand the request off to the scheduler
            yield scrapy.Request(
                url=full_url,
                callback=self.parse_image
            )

    def parse_image(self, response):
        data = json.loads(response.text)
        # Pull the image links out of the JSON response
        for img in data["list"]:
            item = SoItem()
            item["img_link"] = img["qhimg_url"]
            yield item
3. Pipeline (pipelines.py)
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# Import Scrapy's built-in images pipeline class
from scrapy.pipelines.images import ImagesPipeline
import scrapy

# 1. Subclass ImagesPipeline
# 2. Override its methods
class SoPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Yield a Request for each image link; the pipeline downloads and stores it
        yield scrapy.Request(url = item['img_link'])
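
By default the images pipeline needs Pillow installed (pip install Pillow) and stores each file under full/ with the SHA1 hash of its URL as the filename. If you would rather keep the original filename, file_path() can be overridden as well; a minimal sketch, not part of the original project (the item keyword argument exists from Scrapy 2.4 on; older versions simply never pass it):

import os
import scrapy
from scrapy.pipelines.images import ImagesPipeline

class NamedSoPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['img_link'])

    def file_path(self, request, response=None, info=None, *, item=None):
        # Keep the last URL segment as the filename, e.g.
        # 'https://p0.qhimg.com/t01ab.jpg' -> 't01ab.jpg'.
        # Returned paths are relative to IMAGES_STORE (no full/ subfolder here).
        return os.path.basename(request.url)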
4. Settings (settings.py)

Define the image storage path: IMAGES_STORE = 'D:\\node\\nd\\spider\\day09\\photo\\' (the backslashes must be escaped; an unescaped trailing backslash would break the string literal).
Set CONCURRENT_REQUESTS = 10 (the maximum concurrency) and register the pipeline; pay attention to the uncommented lines in the file below.
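Any of these spellings avoids the backslash pitfall (note that a raw string must not end with a backslash):

IMAGES_STORE = 'D:\\node\\nd\\spider\\day09\\photo\\'  # escaped backslashes
IMAGES_STORE = r'D:\node\nd\spider\day09\photo'        # raw string
IMAGES_STORE = 'D:/node/nd/spider/day09/photo'         # forward slashes work on Windows too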

# -*- coding: utf-8 -*-

# Scrapy settings for So project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'So'

SPIDER_MODULES = ['So.spiders']
NEWSPIDER_MODULE = 'So.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'So (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# Maximum concurrency (default: 16)
CONCURRENT_REQUESTS = 10

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent':'Mozilla/5.0'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'So.middlewares.SoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'So.middlewares.SoDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'So.pipelines.SoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

IMAGES_STORE = 'D:\\node\\nd\\spider\\day09\\photo\\'
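
With everything in place, run the spider from the project root (the directory containing scrapy.cfg):

scrapy crawl so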

5. After the crawl finishes, a full directory appears under IMAGES_STORE containing the 150 images we were after (5 pages × 30 images per page).
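
A quick sanity check on the result (the path is the IMAGES_STORE used above):

import os
print(len(os.listdir(r'D:\node\nd\spider\day09\photo\full')))   # expect 150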
