Environment
- Ubuntu 14.04 LTS
- Scrapy 1.4.0
Target URL
http://search.51job.com/list/030200,000000,0000,00,9,99,python,2,1.html?lang=c&stype=1&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare=
Crawling steps
- Open the target URL in a browser, switch to the element inspector (F12), and locate the job title you want to scrape.
- In a terminal, run
scrapy shell "http://search.51job.com/list/030200,000000,0000,00,9,99,python,2,1.html?lang=c&stype=1&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare="
- Use Scrapy Selectors to extract the job title
# Input -- from step 1 we know the CSS path is p > a
response.css('p a::attr(title)').extract_first()
# Output (the Python 2 repr of the Chinese job title)
u'Python\u5f00\u53d1\u5de5\u7a0b\u5e08'
- Extract the company name
# Input
response.css('span.t2 a::attr(title)').extract_first()
# Output
u'\u4e0a\u6d77\u65b0\u5de5\u5f0f\u7f51\u7edc\u79d1\u6280\u6709\u9650\u516c\u53f8'
- The other fields are extracted the same way, so they are not repeated here; for details on Selectors see
https://doc.scrapy.org/en/1.4/topics/selectors.html
Writing the full spider
- Since each job posting row follows the CSS structure div.dw_table div.el, the code has to iterate over those rows
# coding: utf-8
"""
author: ilyq69
date: 20170828
"""
import scrapy


class FiveOneJobSpider(scrapy.Spider):
    name = 'fiveOnejob'
    start_urls = [
        'http://search.51job.com/list/030200,000000,0000,00,9,99,python,2,1.html?lang=c&stype=1&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare='
    ]

    def parse(self, response):
        """Yield one dict per job row, then follow the next-page link.

        :param response: the search-result page response
        :return: item dicts and a follow-up request for the next page
        """
        for item in response.css('div.dw_table div.el'):
            yield {
                "title": item.css('p a::attr(title)').extract_first(),
                "link": item.css('p a::attr(href)').extract_first(),
                "company": item.css('span.t2 a::attr(title)').extract_first(),
                "city": item.css('span.t3 ::text').extract_first(),
                "salary": item.css('span.t4 ::text').extract_first(),
                "createTime": item.css('span.t5 ::text').extract_first()
            }

        # li.bk holds the prev/next buttons: the first page has only "next",
        # while later pages have both, with "next" as the second link.
        next_page = response.css('div.p_in ul li.bk a::attr(href)').extract()
        if next_page:  # extract() returns a list, never None
            if len(next_page) > 1:
                yield response.follow(next_page[1], callback=self.parse)
            else:
                yield response.follow(next_page[0], callback=self.parse)
Running it
scrapy runspider fiveOneJob_spider.py -o job.json
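With a .json extension, the -o option makes Scrapy export all scraped items as a single JSON array, which can then be inspected with plain Python. A minimal sketch (the sample string below stands in for the real job.json contents, with made-up values):

```python
# coding: utf-8
import json

# A sample of what `-o job.json` produces: one JSON array of item dicts
# (the values here are invented for illustration).
sample = '''[
  {"title": "Python Developer", "company": "Example Co.", "city": "Guangzhou",
   "salary": "10-15k/month", "link": "http://example.com/job/1", "createTime": "08-28"}
]'''

jobs = json.loads(sample)
print(len(jobs))          # number of postings scraped
print(jobs[0]['title'])   # title of the first posting
```

For the real file, replace `json.loads(sample)` with `json.load(open('job.json'))` after the crawl finishes.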