1. Introduction
Using the Scrapy framework, we crawl the top 50 entries on Zhihu's hot-search billboard.
Fields scraped: the title of each hot-search entry, its heat count, and its intro image.
Data storage: exported to a .json file.
2. Crawling Workflow
- Create a new Scrapy project:
Run the following command in a terminal to create a Scrapy-based crawler project named zhihureshou.
scrapy startproject zhihureshou
- Create a spider file under the zhihureshou project
Run the following commands in a terminal to generate a spider file named reshou (the generated allowed_domains and start_urls are edited later to target www.zhihu.com/billboard, as shown in the spider code below).
cd zhihureshou
scrapy genspider reshou s.zhihu.com
The newly created project contains the standard Scrapy files:
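zhihureshou/
├── scrapy.cfg              # deploy configuration
└── zhihureshou/
    ├── __init__.py
    ├── items.py            # item field definitions
    ├── middlewares.py      # spider and downloader middlewares
    ├── pipelines.py        # item pipelines
    ├── settings.py         # project settings
    └── spiders/
        ├── __init__.py
        └── reshou.py       # created by the genspider command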
- Write items.py to define the fields to be scraped.
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class ZhihureshouItem(scrapy.Item):
    title = scrapy.Field()    # hot-search title
    number = scrapy.Field()   # heat count
    imgurls = scrapy.Field()  # intro image URLs (a list, consumed by the image pipeline)
    images = scrapy.Field()   # filled in by the image pipeline with download results
- Edit settings.py to configure the project (a sketch of the custom image pipeline it references follows the settings).
# Scrapy settings for zhihureshou project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'zhihureshou'
SPIDER_MODULES = ['zhihureshou.spiders']
NEWSPIDER_MODULE = 'zhihureshou.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zhihureshou (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'zhihureshou.middlewares.ZhihureshouSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'zhihureshou.middlewares.ZhihureshouDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    #'zhihureshou.pipelines.ZhihureshouPipeline': 300,
    'zhihureshou.pipelines.SaveImagesPipeline': 300,
}
IMAGES_STORE = 'D:\\scrapy\\image7'
IMAGES_URLS_FIELD = 'imgurls'    # must match the URL-list field defined in items.py
IMAGES_RESULT_FIELD = 'images'   # must match a field defined in items.py
IMAGES_THUMBS = {'small': (80, 80), 'big': (300, 300)}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
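The ITEM_PIPELINES setting above enables a custom SaveImagesPipeline from pipelines.py, which this post never shows. A minimal sketch of what it might look like, assuming it simply subclasses Scrapy's built-in ImagesPipeline (the method bodies here are illustrative assumptions, not the author's actual code):

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class SaveImagesPipeline(ImagesPipeline):
    """Sketch: downloads each URL in item['imgurls'] into IMAGES_STORE."""

    def get_media_requests(self, item, info):
        # Schedule one download request per image URL on the item
        for url in item.get('imgurls', []):
            yield scrapy.Request(url)

    def item_completed(self, results, item, info):
        # Keep the storage paths of the images that downloaded successfully
        item['images'] = [res['path'] for ok, res in results if ok]
        return item

Note that with IMAGES_URLS_FIELD and IMAGES_RESULT_FIELD set as above, the stock ImagesPipeline already behaves this way; a subclass is only needed to customize things like file naming or filtering.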
- Write the spider, reshou.py, to crawl the site
Use CSS selectors to locate the target information in the HTML source and parse it out.
import scrapy
from ..items import ZhihureshouItem


class ReshouSpider(scrapy.Spider):
    name = 'reshou'
    allowed_domains = ['www.zhihu.com']
    start_urls = ['https://www.zhihu.com/billboard']

    def parse(self, response):
        # Each entry on the billboard is a .HotList-item node
        for one_selector in response.css('.HotList-item'):
            item = ZhihureshouItem()
            item['title'] = one_selector.css('.HotList-itemTitle::text').get()      # hot-search title
            item['number'] = one_selector.css('.HotList-itemMetrics::text').get()   # heat count
            item['imgurls'] = one_selector.css('img::attr(src)').getall()           # intro image URLs (list)
            yield item
- Run the following command in a terminal to start the crawl and export the scraped data
scrapy crawl reshou -o reshou.json
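If the crawl succeeds, reshou.json should contain a JSON array with one object per hot-search entry, shaped roughly like this (the values below are hypothetical placeholders, not real scraped data):

[
    {"title": "...", "number": "...", "imgurls": ["https://pic1.zhimg.com/..."], "images": ["full/....jpg"]},
    ...
]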
3. Inspecting the Output
When I open the .json file, an error pops up. I don't yet know the cause or the fix; if you do, please help out in the comments.
The file can only be opened in a text editor, and even then the data shows up garbled and fails to parse.
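One likely culprit, for what it's worth: Scrapy's feed exporter escapes non-ASCII characters by default, so Chinese text is written as \uXXXX sequences that look garbled in many viewers. Adding this line to settings.py normally makes the exported JSON plain UTF-8:

FEED_EXPORT_ENCODING = 'utf-8'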
Closing Notes
For more on the Scrapy framework, see this article:
功能强大的python包(八):Scrapy (网络爬虫框架) (Powerful Python Packages, Part 8: Scrapy, a web-crawler framework)
For this project's environment setup, see the following article: