Generic spider

use case - generic spiders provide useful methods for common crawling tasks, such as following all links on a site according to certain rules, crawling from Sitemaps, or parsing an XML/CSV feed

CrawlSpider

rules - a list of Rule objects that define the crawling behavior
parse_start_url - a method that can be overridden to parse the initial responses; it must return (see the sketch after this list)

  • an Item object

  • a Request object

  • or an iterable containing any of them
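
A minimal sketch of overriding parse_start_url, assuming a hypothetical site example.com and a single title field:

```python
from scrapy.spiders import CrawlSpider


class StartUrlSpider(CrawlSpider):
    name = 'start_url_demo'
    start_urls = ['http://example.com']

    def parse_start_url(self, response):
        # Return a dict (treated as an item) for the initial response;
        # a Request or an iterable of items/requests would also be valid.
        return {'title': response.css('title::text').get()}
```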

Rules

scrapy.spiders.Rule
You can declare multiple rules for the links to follow; if you declare only one rule, keep the trailing comma so that rules remains a tuple (see the sketch after the parameter list below).

Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)
  • link_extractor - a LinkExtractor object that defines how links will be extracted from each crawled page
  • allow/deny - LinkExtractor arguments: regular expressions that extracted URLs must match (allow) or must not match (deny); allow_domains/deny_domains do the same at the domain level
  • callback - the method called on each response matched by the rule; if no callback is specified, follow defaults to True

Avoid using parse as a callback, since CrawlSpider itself uses the parse method to implement the rule logic.

  • follow - a boolean; if set to True, links are extracted and followed from every response matched by this rule
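
A sketch pulling these parameters together, assuming a hypothetical example.com layout with /category/ listing pages and /item/ detail pages:

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class RulesSpider(CrawlSpider):
    name = 'rules_demo'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    rules = (
        # No callback, so follow defaults to True: category pages are
        # crawled for links but not parsed.
        Rule(LinkExtractor(allow=r'/category/')),
        # Item pages are parsed by parse_item (deliberately not named
        # 'parse'). Note the trailing comma keeping 'rules' a tuple.
        Rule(LinkExtractor(allow=r'/item/', deny=r'/login'),
             callback='parse_item'),
    )

    def parse_item(self, response):
        yield {'url': response.url,
               'title': response.css('title::text').get()}
```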

Scrapy filters out duplicate links by default.
Beware that URLs in start_urls should not contain a trailing slash.
works (screenshot): a start_urls entry without a trailing slash

does not work (screenshot): a start_urls entry with a trailing slash
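
As a fragment (example.com is a placeholder), the note above amounts to:

```python
start_urls = ['http://example.com']    # works
# start_urls = ['http://example.com/' ]  # reportedly does not work
```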

  • process_links - a callable (or the name of a spider method) used to filter or modify the links extracted by link_extractor before they are followed; see the sketch below
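
A sketch of process_links used for filtering, where the method name drop_logout_links and the '/logout' path are hypothetical:

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class FilteredSpider(CrawlSpider):
    name = 'filtered_demo'
    start_urls = ['http://example.com']

    rules = (
        Rule(LinkExtractor(), callback='parse_item',
             process_links='drop_logout_links'),
    )

    def drop_logout_links(self, links):
        # Receives the Link objects extracted by the rule and returns
        # the subset that should actually be followed.
        return [link for link in links if '/logout' not in link.url]

    def parse_item(self, response):
        yield {'url': response.url}
```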