Running single or multiple Scrapy spiders through the core API

1. Instead of the typical way of running Scrapy via the scrapy crawl command, you can run it from a script using the API. Scrapy is built on top of the Twisted asynchronous networking library, so it must run inside the Twisted reactor. Two APIs are available for running single or multiple spiders: scrapy.crawler.CrawlerProcess and scrapy.crawler.CrawlerRunner.

2. The first utility for launching spiders is scrapy.crawler.CrawlerProcess. This class starts the Twisted reactor for you, configures logging, and sets up shutdown handlers; it is the class used by all Scrapy commands.

Example of running a single spider:


```python
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished
```

You can also pass a Settings instance to CrawlerProcess; use get_project_settings to obtain one populated with your project's settings.

```python
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())

# 'followall' is the name of one of the spiders of the project.
process.crawl('followall', domain='scrapinghub.com')
process.start()  # the script will block here until the crawling is finished
```

There is another Scrapy utility that provides more control over the crawling process: scrapy.crawler.CrawlerRunner. This class encapsulates some simple helpers for running multiple crawlers, but it will not start or interfere with existing reactors in any way.

With this class, you must run the reactor explicitly. If your application already has Twisted running and you want to run Scrapy in the same process, it is recommended to use CrawlerRunner instead of CrawlerProcess.

Note that you also have to shut down the Twisted reactor yourself after the spider finishes, by adding a callback to the deferred returned by the CrawlerRunner.crawl method.


Here is an example of its usage, along with a callback that stops the reactor manually after MySpider has finished running.

```python
from twisted.internet import reactor
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})
runner = CrawlerRunner()

d = runner.crawl(MySpider)
d.addBoth(lambda _: reactor.stop())
reactor.run()  # the script will block here until the crawling is finished
```

Running multiple spiders in the same process

By default, Scrapy runs a single spider per process when you run scrapy crawl. However, Scrapy supports running multiple spiders per process using the internal API.

Here is an example that runs multiple spiders simultaneously:

```python
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider1)
process.crawl(MySpider2)
process.start()  # the script will block here until all crawling jobs are finished
```

The same example using CrawlerRunner:

```python
import scrapy
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

configure_logging()
runner = CrawlerRunner()
runner.crawl(MySpider1)
runner.crawl(MySpider2)
d = runner.join()
d.addBoth(lambda _: reactor.stop())
reactor.run()  # the script will block here until all crawling jobs are finished
```

The same example, but running the spiders sequentially by chaining the deferreds:

```python
import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run()  # the script will block here until the last crawl call is finished
```

