This article is a set of exercises the author did while learning Python data analysis. Using data collected with a web crawler, it gives a brief analysis of how much value for money housing offers in each administrative district of Shanghai.
The project is divided into three main parts: data collection, data cleaning, and data analysis.
Part 1: Data Collection
All of the data were collected with Python crawling tools (the requests and BeautifulSoup packages) from public web pages and an open API, and fall into two groups:
(1) Data from the Baidu Map API: the number of hospitals, elementary schools, middle schools, parks, stadiums, shopping places, metro stations, and residential communities in each administrative district of Shanghai.
(2) Second-hand housing data for Shanghai from the Anjuke website, mainly each listing's price, floor area, and layout.
The crawler works in three steps: fetch the page → parse the page data → store the data.
(1) Fetching: the requests package is used to download the full content of each page.
(2) Parsing: the BeautifulSoup and json packages are used. The Baidu API returns JSON, which is parsed with json.loads(); the Anjuke pages are HTML and are parsed with BeautifulSoup (a short sketch of both parsing paths follows this list).
(3) Storage: because the data volume is small, the results are written directly to local CSV files.
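Before the full code, the minimal sketch below illustrates the two parsing paths and the CSV storage step on toy inputs. The JSON and HTML strings, the field names, and the output file name are illustrative examples modeled on the data described above, not output from the real API or website.

# Minimal sketch of the two parsing paths used in this project (toy inputs)
import json
from bs4 import BeautifulSoup

# 1) JSON text (what the Baidu API returns) -> json.loads() gives dicts/lists
json_text = '{"results": [{"name": "某医院", "area": "浦东新区"}]}'
decoded = json.loads(json_text)
print(decoded['results'][0]['name'], decoded['results'][0]['area'])

# 2) HTML text (what an Anjuke listing page returns) -> BeautifulSoup selectors
html_text = '<div class="house-details"><div class="details-item">2室1厅 | 75平米</div></div>'
soup = BeautifulSoup(html_text, 'html.parser')
print(soup.find('div', class_="details-item").text.strip())

# 3) Storage: append the extracted fields to a local csv file
with open('sketch_output.csv', 'a+', encoding='utf-8') as f:
    f.write(decoded['results'][0]['name'] + ',' + decoded['results'][0]['area'] + '\n')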
The main Python code is shown below:
# Crawler that collects the Baidu Map API data
import requests
from bs4 import BeautifulSoup
import json
headers={'User-Agent':'Mozilla/3.0 (Linux; Android 8.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36'}
for i in range(1,12):
    # 'q' takes the POI keyword to search for (e.g. 医院, 小学, 公园 ...);
    # scope=2 requests detailed results so that each POI carries an 'area' field
    pa={'q':'','region':'上海市','scope':'2','page_size':20,'page_num':i,'output':'json','ak':'8KnqTdCS7APBHAmnOCKfvWiOkCzoAPAx'}
    r=requests.get("http://api.map.baidu.com/place/v2/search",params=pa,headers=headers)
    decodejson = json.loads(r.text)
    print("Fetching page %d" % i)
    for each in decodejson['results']:
        hospital_name=each['name']
        hospital_area=each['area']
        # append one "name,district" line per POI to a local csv file
        with open('desktop/huoguodian.csv','a+',encoding='gbk') as file:
            file.write(hospital_name+','+hospital_area+'\n')
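The goal stated above is a count of each facility type per district. One way to extend the code is to loop the same request over a list of keywords and read the total reported by the API, as in the hedged sketch below: the per-district 'region' value and the reliance on the response's 'total' field are assumptions, and the keyword list simply mirrors the facility types named earlier.

# Hedged sketch: count each facility type in one district via the same API
import requests
import json

headers={'User-Agent':'Mozilla/5.0'}
keywords=['医院','小学','初中','公园','体育场','购物','地铁站','小区']
counts={}
for kw in keywords:
    pa={'q':kw,'region':'上海市浦东新区','scope':'2','page_size':20,
        'page_num':0,'output':'json','ak':'8KnqTdCS7APBHAmnOCKfvWiOkCzoAPAx'}
    r=requests.get("http://api.map.baidu.com/place/v2/search",params=pa,headers=headers)
    resp=json.loads(r.text)
    # fall back to counting the returned results if 'total' is absent
    counts[kw]=resp.get('total',len(resp.get('results',[])))
print(counts)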
# Crawler that collects Anjuke second-hand housing data for Shanghai
import requests
from bs4 import BeautifulSoup
import time
import re
headers={'User-Agent':'Mozilla/8.0 (Linux; Android 7.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36','cookie': 'aQQ_ajkguid=5C00162A-1B7A-7120-AD52-FE08A2D96F61; _ga=GA1.2.50999661.1582604990; 58tj_uuid=48ec3204-a7eb-442a-ba99-cc74cceb8edb; als=0; sessid=F04C1E27-F27D-A7CB-EF42-F073F7C96ED1; lps=http%3A%2F%2Fhf.anjuke.com%2Fsale%2F%7C; twe=2; wmda_uuid=0558e8d2183d2c798b98f2c3b681552c; wmda_new_uuid=1; wmda_visited_projects=%3B6289197098934; _gid=GA1.2.1611413650.1582947737; ajk_member_captcha=e5970910635e72147730f07b96444b57; isp=true; Hm_lvt_c5899c8768ebee272710c9c5f365a6d8=1582962418; ctid=11; Hm_lpvt_c5899c8768ebee272710c9c5f365a6d8=1582962466; wmda_session_id_6289197098934=1582966320898-c2e4bd03-e6f9-4289; __xsptplusUT_8=1; init_refer=https%253A%252F%252Fshanghai.anjuke.com%252Fsale%252Fchongming%252Fp1%252F; new_uv=9; xzfzqtoken=hbHC%2FoyACNQofcmI%2BPpzK1CZs1ZCCnxeo7VbmksyJEQXKoHpip42xp0BO%2FIz8hHDin35brBb%2F%2FeSODvMgkQULA%3D%3D; browse_comm_ids=287563%7C373304; propertys=wq3sya-q6ggon_wpk4aw-q6gbla_w4p7w9-q6ga5l_vp567s-q68q25_; _gat=1; new_session=0; __xsptplus8=8.9.1582966321.1582966391.3%232%7Csp0.baidu.com%7C%7C%7C%25E5%25AE%2589%25E5%25B1%2585%25E5%25AE%25A2%7C%23%23PH2eSa7XGWd98maMBKXEiaIMbbq7XhkC%23'}
# Full district list (the crawl can be run on one sub-list at a time):
#area_list=['pudong','minhang','baoshan','xuhui','songjiang','jiading','jingan','putuo','yangpu','hongkou','changning','huangpu','qingpu','fengxian','jinshan','chongming']
area_list=['yangpu','hongkou','changning','huangpu','qingpu','fengxian','jinshan','chongming']
a=[]  # listing description (layout / area / floor ...)
b=[]  # community name and address
c=[]  # total price
d=[]  # unit price
for k in range(0,len(area_list)):
    for i in range(1,50):                      # listing pages p1 ... p49
        link='https://shanghai.anjuke.com/sale/'+area_list[k]+'/p'+str(i)
        # optional proxy setting: uncomment the next line and pass
        # proxies=proxies1 to requests.get() if a proxy IP is needed
        #proxies1={'http':'58.17.125.215:53281','https':'58.17.125.215:53281'}
        r=requests.get(link,headers=headers)
        print("Fetching %s page %d" % (area_list[k],i))
        # pause for a few seconds between requests to avoid the anti-crawler block
        time.sleep(10)
        soup=BeautifulSoup(r.text,'html.parser')
        #print(r.text)
        # left-hand block of each listing: description and address
        list_house1 = soup.find_all('div',class_="house-details")
        for house in list_house1:
            House_describe = house.find('div',class_="details-item").text.strip()
            House_address = house.find('span',class_="comm-address").text.strip()
            a.append(House_describe)
            b.append(House_address)
        # right-hand block of each listing: total price and unit price
        list_house2 = soup.find_all('div',class_='pro-price')
        for house in list_house2:
            House_price_det = house.find('span',class_="price-det").text.strip()
            House_price_unit = house.find('span',class_="unit-price").text.strip()
            c.append(House_price_det)
            d.append(House_price_unit)
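The loop above only accumulates the scraped fields in the lists a, b, c and d; as described in step (3), they still have to be written to a local CSV file. A minimal sketch of that last step using Python's standard csv module, with illustrative column names and output path:

# Hedged sketch: write the four collected lists into one local csv file
import csv

with open('desktop/anjuke_shanghai.csv','w',newline='',encoding='gbk') as f:
    writer=csv.writer(f)
    writer.writerow(['describe','address','total_price','unit_price'])
    for row in zip(a,b,c,d):
        writer.writerow(row)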