Over the National Day holiday I went back to Songjiang University Town. There's more good food than ever, but sadly I only have one stomach. After coming home I had a sudden idea: why not analyze what kinds of food are most common in the university town?
Why Ele.me as the platform? Back before food-delivery apps took off, we ordered takeout on the web version of Ele.me, so there's a bit of nostalgia in it.
Data source: Ele.me
Location: Songjiang University Town, Phase 4
Scrape URL: https://www.ele.me/place/wtw0tgvd7yr (pagination link: https://www.ele.me/restapi/shopping/restaurants?geohash=wtw0tgvd7yr&latitude=31.04641&limit=0&longitude=121.19791&offset=24&terminal=web)
Fields scraped: only the shop name (name) and its cuisine tags (flavors)
Analysis: 1. Segment the shop names, count word frequencies, and draw a word cloud; since jieba does not segment snack names very accurately, a custom dictionary (fooddic) is loaded before segmenting;
2. Count the cuisine tags and draw a bar chart (no sorting, out of laziness)
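For reference, a jieba user dictionary like fooddic is a plain UTF-8 text file with one entry per line: the word, then an optional frequency and an optional part-of-speech tag. The entries below are hypothetical examples, not the actual contents of fooddic:

```
麻辣烫 100 n
盖浇饭 100 n
黄焖鸡米饭 100 n
```

Loading such a file with jieba.load_userdict makes these snack names come out as whole tokens instead of being split apart.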
Scraper and plotting code:
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 11 09:05:59 2018
@author: Shirley
"""
import requests
import json
import re
import csv
from collections import defaultdict
import jieba
from wordcloud import WordCloud as wd  # word cloud
from PIL import Image  # opens the image used as the word-cloud mask
import numpy as np  # converts the mask image to an array
import matplotlib.pyplot as plt  # plotting
from matplotlib.font_manager import FontProperties  # Chinese font display
font = FontProperties(fname=r"D:\anaconda\shirleylearn\cipintongji\simsun.ttf", size=14)  # set the Chinese font

data = []
restaurants = []
foodtype = []

def Getdata(page):  # scraper
    url = "https://www.ele.me/restapi/shopping/restaurants?geohash=wtw0tgvd7yr&latitude=31.04641&limit=24&longitude=121.19791&offset=%d&terminal=web" % page
    headers = {"accept": "application/json, text/plain, */*",
               "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36",
               "cookie": "your cookie after logging in"}
    html = requests.get(url, headers=headers)
    content = re.findall(r'"flavors":.*?,"next_business_time"', html.text)  # regex out the slice that holds the data
    for con in content:
        jsonstring = "{" + con.replace(',"next_business_time"', "}")  # patch the fragment into valid JSON
        jsonobj = json.loads(jsonstring)
        restaurant_id = jsonobj["id"]
        restaurant_name = jsonobj["name"].encode("gbk", "ignore").decode("gbk")
        flavors = jsonobj["flavors"]
        restaurant_type = []
        for f in flavors:  # some flavors hold one value, some hold two, so loop over them
            restaurant_type.append(f["name"])
        restaurants.append(restaurant_name)  # used later for the word cloud
        foodtype.append(restaurant_type)  # used later for the bar chart
        data.append([restaurant_id, restaurant_name, restaurant_type])
    with open("elemedata.csv", "w", newline="") as f:  # save the data locally
        writer = csv.writer(f)
        writer.writerow(["restaurant_id", "restaurant_name", "restaurant_type"])
        for d in data:
            writer.writerow(d)
    return restaurants, foodtype  # the return values feed the two functions below

def Eleme_wordcloud(restaurants):  # word cloud
    jieba.load_userdict("D:/anaconda/shirleylearn/eleme/fooddic.txt")
    text = ""
    for i in restaurants:
        name = re.sub(r'(.*', "", i)  # drop everything from a full-width "(" onward
        name = re.sub(r'\(.*', "", name)  # drop everything from an ASCII "(" onward
        text = text + " " + name
    fenci = jieba.lcut(text)
    wordfrequency = defaultdict(int)
    for word in fenci:
        if word != " ":
            wordfrequency[word] += 1  # word-frequency count
    img = Image.open("D:/anaconda/shirleylearn/eleme/bowl.jpg")  # open the mask image
    myimg = np.array(img)  # convert it to an array
    path = "D:/anaconda/shirleylearn/eleme/simsun.ttf"
    wordcloud = wd(width=1000, height=860, margin=2, font_path=path, background_color="white", max_font_size=100, mask=myimg).fit_words(wordfrequency)  # build the cloud from the frequency dict
    plt.imshow(wordcloud)
    plt.axis('off')  # hide the axes
    plt.savefig('eleme_wordcloud.png', dpi=300)
    plt.clf()  # clear the figure's axes without closing the window so it can be reused; otherwise it bleeds into the next plot

def Eleme_bar(foodtype):  # bar chart
    # foodtype looks like: [['盖浇饭', '简餐'], ['川湘菜', '简餐'], ['日韩料理']]
    wordfrequency2 = defaultdict(int)
    foodtypes = []  # every tag, duplicates included
    types = []  # tags after counting, no duplicates
    numbers = []  # the matching counts
    for f in foodtype:
        for t in f:
            foodtypes.append(t)  # flatten every tag into one list
    for type in foodtypes:
        wordfrequency2[type] += 1  # count frequencies with a dict
    for key in wordfrequency2:
        types.append(key)
        numbers.append(wordfrequency2[key])
    plt.bar(range(len(types)), numbers)
    plt.xticks(range(len(types)), types, fontproperties=font, fontsize=5, rotation=90)
    plt.savefig('eleme_bar.png', dpi=300)
    plt.show()
    plt.clf()

if __name__ == '__main__':
    for p in range(0, 24):
        page = p * 24
        restaurants, foodtype = Getdata(page)
    Eleme_wordcloud(restaurants)
    Eleme_bar(foodtype)
A problem I ran into: the interactive window only showed the bar chart, not the word cloud, although both saved image files came out correctly.
The word cloud
It looks like rice dishes are more popular than noodles in the university town; congee, spicy dry pot (香锅), and malatang (麻辣烫) were also what I ate all the time in college.
As for the word 鲜花 (fresh flowers), I checked the raw data: there really are quite a few flower shops that deliver, but they are all fairly far away, so most of them can actually be excluded as lying outside the university town.
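Those distant flower shops could be filtered out before plotting. A hypothetical sketch, assuming each restaurant record in the scraped JSON carries a "distance" field in meters (the field name and the 2 km cutoff are my assumptions, not taken from the actual response):

```python
# Hypothetical records shaped like the scraped data, with an assumed
# "distance" field in meters; keep only shops within 2 km.
records = [
    {"name": "某鲜花店", "distance": 3500},
    {"name": "麻辣烫", "distance": 600},
]
nearby = [r for r in records if r.get("distance", 0) <= 2000]
print([r["name"] for r in nearby])  # → ['麻辣烫']
```

Applying such a filter inside Getdata would keep the word cloud focused on shops actually around the university town.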
The bar chart
Aside from generic set meals (简餐), rice-bowl dishes (盖浇饭), local snacks, and rice-noodle/noodle shops are the big three; desserts, milk tea, and fried chicken are also crowd favorites.