Day 1 of learning to write a web scraper in Python: how to parse a local web page.
Step 1: parse the page with Beautiful Soup

Soup = BeautifulSoup(wb_data, 'lxml')
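A minimal, self-contained sketch of this step. Here the page is a hypothetical HTML string rather than the local file used later, so the snippet runs on its own:

```python
from bs4 import BeautifulSoup

# Parse an HTML string instead of a file handle; BeautifulSoup accepts either.
html = "<html><body><h1>Hello</h1></body></html>"
soup = BeautifulSoup(html, 'lxml')

# Once parsed, tags can be reached as attributes of the soup object.
print(soup.h1.get_text())  # -> Hello
```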
Step 2: describe where the target information lives in the page

In the browser's developer tools, select the element, then right-click -> Copy selector.
This yields "body > div.main-content > ul > li > img".
The code is as follows:
images = Soup.select('body > div.main-content > ul > li > img')
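To see how such a copied selector matches elements, here is a runnable sketch against a made-up miniature of the page structure (the markup below is illustrative, not the real new_index.html):

```python
from bs4 import BeautifulSoup

# Hypothetical miniature of the page, mirroring the structure the
# copied selector describes.
html = """
<body>
  <div class="main-content">
    <ul>
      <li><img src="a.jpg"></li>
      <li><img src="b.jpg"></li>
    </ul>
  </div>
</body>
"""
soup = BeautifulSoup(html, 'lxml')

# select() returns a list of every tag matching the CSS selector.
images = soup.select('body > div.main-content > ul > li > img')
print([img.get('src') for img in images])  # -> ['a.jpg', 'b.jpg']
```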
Step 3: extract the needed information from the tags

from bs4 import BeautifulSoup

with open('/Users/new_index.html', 'r') as wb_data:
    Soup = BeautifulSoup(wb_data, 'lxml')
    # Each select() call uses a selector copied from the developer tools.
    images = Soup.select('body > div.main-content > ul > li > img')
    titles = Soup.select('body > div.main-content > ul > li > div.article-info > h3 > a')
    descs = Soup.select('body > div.main-content > ul > li > div.article-info > p.description')
    rates = Soup.select('body > div.main-content > ul > li > div.rate > span')
    cates = Soup.select('body > div.main-content > ul > li > div.article-info > p.meta-info')
    # print(images, titles, descs, rates, cates, sep='\n-------------\n')

# Zip the parallel lists together and build one dict per article.
info = []
for image, title, desc, rate, cate in zip(images, titles, descs, rates, cates):
    data = {
        'title': title.get_text(),
        'rate': rate.get_text(),
        'desc': desc.get_text(),
        'cate': list(cate.stripped_strings),
        'image': image.get('src')
    }
    info.append(data)

# Print the title and categories of every article rated above 3.
for i in info:
    if float(i['rate']) > 3:
        print(i['title'], i['cate'])
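The script above relies on three different extraction methods: get_text() for a tag's text, stripped_strings for a generator of whitespace-trimmed text fragments, and get() for an attribute value. A small sketch on made-up markup shows the difference:

```python
from bs4 import BeautifulSoup

# Hypothetical markup, shaped like the p.meta-info tags above.
html = '<p class="meta-info"><span>Category:</span> <span>Python</span></p>'
soup = BeautifulSoup(html, 'lxml')
p = soup.select_one('p.meta-info')

print(p.get_text())              # all text inside the tag, joined
print(list(p.stripped_strings))  # -> ['Category:', 'Python']
print(p.get('class'))            # attribute lookup -> ['meta-info']
```

stripped_strings is handy when one tag holds several pieces of text you want as separate list items, which is exactly how the categories are collected above.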
Summary:
- On macOS, install third-party libraries with pip (mind the Python version; Python 3 uses pip3) or easy_install.
- selector and XPath are two ways of describing the path to an element.
- lxml is a popular Python parsing library for handling XML and HTML. It must be installed before first use, otherwise you will get an error:
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?
Install lxml by running:
pip3 install lxml
- Copy selector may produce a selector that pins down a specific child node with nth-child, e.g.
ul > li:nth-child(2)
In Python this has to be rewritten as nth-of-type, which Beautiful Soup understands.
The interpreter's error message is:
Only the following pseudo-classes are implemented: nth-of-type.
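The nth-child-to-nth-of-type rewrite can be sketched like this (the markup is invented; note that newer Beautiful Soup releases bundle soupsieve, which does support nth-child, so the error above mainly affects older versions):

```python
from bs4 import BeautifulSoup

html = "<ul><li>first</li><li>second</li><li>third</li></ul>"
soup = BeautifulSoup(html, 'lxml')

# 'ul > li:nth-child(2)' may raise NotImplementedError on older bs4;
# nth-of-type selects the same element here, since all siblings are <li>.
second = soup.select('ul > li:nth-of-type(2)')
print(second[0].get_text())  # -> second
```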