Prerequisites
- Python
- Scrapy
- Scrapyd
Background
- A Scrapy crawler project
- Deployed to the server via Scrapyd
Problem
- scrapyd.cancel(project=project, job=running_id)
- After calling cancel, the spider keeps running instead of stopping (see the sketch below)
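For context, here is a minimal sketch of how that cancel call might be issued with the python-scrapyd-api client; the endpoint URL, project name, and job id are placeholders, not values from the original post:

```python
from scrapyd_api import ScrapydAPI  # pip install python-scrapyd-api

# Endpoint URL, project name, and job id are placeholders -- adjust for your deployment.
scrapyd = ScrapydAPI('http://localhost:6800')

project = 'my_project'        # hypothetical project name
running_id = 'some-job-id'    # e.g. an id taken from scrapyd.list_jobs(project)

# Sends SIGTERM to the crawl process; Scrapy begins a *graceful* shutdown,
# so the job can keep running for a while after this call returns.
scrapyd.cancel(project=project, job=running_id)
```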
Solution
- Call cancel a second time to force the spider to stop (a helper sketch follows the log below)
- The log confirms it:
2022-04-15 10:20:43 [scrapy.crawler] INFO: Received SIGTERM, shutting down gracefully. Send again to force
2022-04-15 10:20:43 [scrapy.core.engine] INFO: Closing spider (shutdown)
2022-04-15 10:20:43 [scrapy.crawler] INFO: Received SIGTERM twice, forcing unclean shutdown
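Putting it together, a sketch of a force-stop helper that issues cancel twice; the short pause between the two calls is my own assumption (to make sure the process receives two distinct SIGTERMs), not something stated in the post:

```python
import time

from scrapyd_api import ScrapydAPI  # pip install python-scrapyd-api


def force_cancel(scrapyd: ScrapydAPI, project: str, job_id: str, pause: float = 1.0) -> None:
    """Cancel a Scrapyd job twice; the second SIGTERM forces an unclean shutdown."""
    scrapyd.cancel(project=project, job=job_id)  # 1st SIGTERM: graceful shutdown starts
    time.sleep(pause)  # assumed pause so the second signal arrives as a separate SIGTERM
    scrapyd.cancel(project=project, job=job_id)  # 2nd SIGTERM: "forcing unclean shutdown"


# Hypothetical usage -- URL, project name, and job id are placeholders.
force_cancel(ScrapydAPI('http://localhost:6800'), 'my_project', 'some-job-id')
```

For reference, Scrapyd's cancel.json endpoint also accepts an optional signal parameter (python-scrapyd-api's cancel() passes it through), so sending a stronger signal in a single call may be an alternative; check your Scrapyd version's docs. The double-cancel shown above is the approach the log confirms.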