Weibo Crawler (Version 1)

"""
获取微博数据 @一卒同学
"""
from requests import get
from csv import DictWriter

UID = '1076031742566624'
PAGE = 1

url_index = "https://m.weibo.cn/api/container/getIndex"
params = {
    "containerid": UID,
    "page": PAGE
}
return_data = get(url_index, params=params).json()
result = []
for d in return_data['data']['cards']:
    result.append(
        {'mid': (d['mblog']['mid']), '转发': (d['mblog']['reposts_count']), '评论': (d['mblog']['comments_count']),
         '点赞': (d['mblog']['attitudes_count']),
         '正文': (d['mblog']['text'])})

headers = ['mid', '转发', '评论', '点赞', '正文']
with open(UID + '.csv', 'a', encoding='utf-8') as f:
    f_csv = DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(result)
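
This first version only grabs a single page. A minimal sketch of paging through the whole feed is below: it continues from the script above (reusing get, url_index and UID) and keeps incrementing the page parameter until no more cards come back. It assumes the endpoint reports ok == 1 while more pages exist; newer revisions of this API paginate with a since_id cursor instead, which this sketch ignores.

from time import sleep

def fetch_page(page):
    """Fetch one page of cards; an empty list means the feed is exhausted."""
    resp = get(url_index, params={"containerid": UID, "page": page}, timeout=10).json()
    if resp.get('ok') != 1:  # assumed: the endpoint returns ok=0 once there is no more content
        return []
    return resp['data'].get('cards', [])

all_rows = []
page = 1
while True:
    cards = fetch_page(page)
    if not cards:
        break
    for d in cards:
        if 'mblog' not in d:
            continue
        blog = d['mblog']
        all_rows.append({'mid': blog['mid'], '转发': blog['reposts_count'],
                         '评论': blog['comments_count'], '点赞': blog['attitudes_count'],
                         '正文': blog['text']})
    page += 1
    sleep(1)  # pause between requests to avoid hammering the server

The one-second sleep between requests is a simple courtesy measure; without it, rapid-fire requests are more likely to get the client rate-limited.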