Tested and working as of 2022-12-13! We'll use Zhu Yilong's Weibo account as the scraping example.
First log in and open the blogger's photo-album page, then right-click → Inspect to open DevTools and reload the page.
(screenshot: DevTools Network panel after reloading the album page)
You'll see a request whose name starts with getImageWall — that's the one we'll work with!
Open that request's Headers and Preview tabs.
(screenshot: Headers and Preview of the getImageWall request)
There are a few important parameters.
1、Request Headers to send with each request:
cookie: xxxx (omitted, very long)
referer: https://weibo.com/zhuyilong?tabtype=album
sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36
Look up what each header does online if you're curious.
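For instance, with urllib from the standard library the headers are attached like this (a minimal sketch: the cookie placeholder must be replaced with one copied from your own logged-in session, and no network traffic happens until urlopen() is called):

```python
from urllib import request

# A minimal header set (sketch); 'xxxx' stands in for a real session cookie.
headers = {
    'cookie': 'xxxx',
    'referer': 'https://weibo.com/zhuyilong?tabtype=album',
    'user-agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/91.0.4472.124 Safari/537.36'),
}

# Build the Request object; urlopen(req) would then perform the actual HTTP call.
req = request.Request(
    'https://weibo.com/ajax/profile/getImageWall?uid=1594052081&sinceid=0',
    headers=headers,
)
print(req.get_header('User-agent'))
```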
2、Query String Parameters:
uid: the blogger's uid. Each blogger has a unique uid, so passing a different uid scrapes a different person's album.
sinceid: the sinceid for this request; it changes with every page you scroll through.
bottom_tips_text: usually empty. It carries a value when you scroll to the bottom and the blogger has restricted visibility (e.g. to the last six months).
album_since_id: when you first open the album page, an initial batch of images is loaded; this value drives that initial request.
pid: append the .jpg suffix to this to build the image's real URL.
since_id: the since_id for the next page request, e.g. 4783943357432101_4783100611004581|1034:4783100039987251_20220624_-1
Scroll down to load the next page of images.
Look closely at the boxed part: the sinceid value is exactly the same as the 4783943357432101_4783100611004581|1034:4783100039987251_20220624_-1 string above.
When since_id comes back as 0, you've reached the end and all images have been loaded.
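That pagination contract can be sketched as a loop. The responses here are canned stand-ins shaped like the real getImageWall payload, so the sketch runs offline:

```python
# Simulated getImageWall responses keyed by sinceid; in the real crawler
# each one would come from an HTTP request to the API.
fake_responses = {
    '0': {'data': {'list': [{'pid': 'a1'}, {'pid': 'a2'}], 'since_id': 'page2_token'}},
    'page2_token': {'data': {'list': [{'pid': 'b1'}], 'since_id': 0}},
}

def crawl_all(start='0'):
    pids, since_id = [], start
    while True:
        data = fake_responses[since_id]['data']   # real code: request + JSON parse
        pids.extend(item['pid'] for item in data['list'])
        since_id = data['since_id']
        if since_id == 0:          # 0 signals the last page
            break
    return pids

print(crawl_all())  # -> ['a1', 'a2', 'b1']
```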
The full code:
import os
import sys
import random
import time
from urllib import request

# Add the folder containing myCommon.py to sys.path
sys.path.append('../common')
import myCommon
headers = {
'cookie' : 'SINAGLOBAL=6478536967554.643.1666140599036; XSRF-TOKEN=Q99Q_5CgR1pZQdZQ36E_b47R; _s_tentry=weibo.com; Apache=8558559404898.318.1670832406139; ULV=1670832406184:3:1:1:8558559404898.318.1670832406139:1668732678999; login_sid_t=3ae99e0af5bc58d007607fa93d1db71e; cross_origin_proto=SSL; wb_view_log=1920*10801.25; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5GRWC0yey_mP4FD0IQ9iqD5JpX5o275NHD95QfS050Soq41hMcWs4DqcjMi--NiK.Xi-2Ri--ciKnRi-zNSKM7e0qc1KnNSntt; SSOLoginState=1670896102; SCF=And55E5fnrPGnVibFfBNoYBfwXsROEmg6CwEE0D42QBRbLMuIu5KwT0HpIDLC2URaezn_JERbxJe_ZeBp37bIUU.; SUB=_2A25Ok6m3DeRhGeNJ61EX9ifEzDqIHXVt6Jx_rDV8PUNbmtANLW_XkW9NSBPglFMKbX_6GU3_FjJtvyohwp1wp2mr; ALF=1702432100; WBPSESS=Dt2hbAUaXfkVprjyrAZT_K05XEZoWj5kQg_gEmqZBSaZdjuJlCQfLy--8sKLiZYcS1FfHthzJLHEITOT2g7a94cfp3akRj-fYGnfG5KlzZ9Gtpb3mVbZGMSVUpYYX7oeQdG14WgqeLp0FCCq6ARj_bc-BYDZl3Tob1m7f1lFDaF5b0yP1HazBboPgosvJkBTpF0JJaTF4WmAXsFXbEG3lQ==',
'referer' : 'https://weibo.com/u/3175636947?tabtype=album',
'sec-ch-ua' : '" Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"',
'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}
def main(screen_name, u_id, since_id):
    if not os.path.exists(f'./{screen_name}/'):
        os.makedirs(f'./{screen_name}/')
    while True:
        # Build the request URL
        URL = mergeUrl(u_id, since_id)
        # Wrap the URL and headers in a Request object, then pass it to urlopen
        req = request.Request(URL, headers=headers)
        resp = request.urlopen(req)
        if resp.getcode() == 200:
            content = resp.read()
            # Parse the JSON string into a dict
            dicContent = myCommon.jsonToDic(content)
            # Non-empty when the blogger restricts visibility,
            # e.g. '博主设置仅展示半年内的内容' (only the last six months are shown)
            bottom_tips_text = dicContent.get('bottom_tips_text', '')
            if bottom_tips_text:
                print(bottom_tips_text)
                break
            # Process the image list
            picList = dicContent['data']['list']
            if len(picList) == 0:
                break
            for item in picList:
                pid = item['pid']
                picUrl = mergePicUrl(pid)
                print(f'Requesting {picUrl}')
                req = request.Request(picUrl, headers=headers)
                resp = request.urlopen(req)
                if resp.getcode() == 200:
                    print('Request OK, saving image.')
                    with open(f'./{screen_name}/{pid}.jpg', mode='wb') as w:
                        w.write(resp.read())
                    if os.path.isfile(f'./{screen_name}/{pid}.jpg'):
                        print(f'Saved {pid}.jpg.')
                    else:
                        print(f'Failed to save {pid}.jpg.')
                else:
                    print(f'Request to {picUrl} failed')
                    print(f'Status code: {resp.getcode()}, stopping.')
                    break
            # since_id for the next page
            since_id = dicContent['data']['since_id']
            if since_id == 0:
                print('Reached the end, no more content.')
                break
            time.sleep(1)
        else:
            print(f'Request to {URL} failed')
            print(f'Status code: {resp.getcode()}, stopping.')
            break
# Build the API URL
def mergeUrl(uid, sinceid):
    baseUrl = 'https://weibo.com/ajax/profile/getImageWall'
    if sinceid == '0':
        URL = f'{baseUrl}?uid={uid}&sinceid={sinceid}&has_album=true'
    else:
        URL = f'{baseUrl}?uid={uid}&sinceid={sinceid}'
    return URL

# Build the image URL
def mergePicUrl(picName):
    # Random host number 1-4 (wx1..wx4 are interchangeable CDN mirrors)
    randomNum = random.randint(1, 4)
    baseUrl = f'https://wx{randomNum}.sinaimg.cn/orj360/'
    return f'{baseUrl}{picName}.jpg'
if __name__ == '__main__':
    print('Start')
    main('朱一龙', u_id='1594052081', since_id='0')
    print('Done')
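As a quick sanity check, the pid-to-URL join used by mergePicUrl can be exercised on its own; example_pid below is a made-up stand-in for a real pid:

```python
import random

def merge_pic_url(pic_name):
    # Pick one of the wx1..wx4 image CDN hosts at random.
    num = random.randint(1, 4)
    return f'https://wx{num}.sinaimg.cn/orj360/{pic_name}.jpg'

url = merge_pic_url('example_pid')
print(url)
```

The orj360 path segment selects a medium-size rendition; other size segments such as large are reportedly served by the same hosts if you want the full-resolution image.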
The result:
Scraping videos works much the same way. The core ideas:
1、Take the sinceid (albums) or cursor (videos) returned by the current request and use it for the next request — code below.
2、Identify the required fields among the request parameters.
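Before the full script, here is the string-slicing step in isolation: it pulls the per-video path and the Expires token out of the mp4_720p_mp4 field. The URL below is made up but has the same shape as the real field:

```python
# A made-up mp4_720p_mp4 value with the same structure as the real field.
temp = 'https://f.video.weibocdn.com/o0/abcDEF123.mp4?label=mp4_720p&Expires=1670000000&ssig=xyz'

# Everything between '.com/o0/' and '.mp4?' is the per-video path segment.
req_url = temp[temp.index('.com/o0/'):temp.index('.mp4?')].replace('.com/o0/', '').strip()
# Everything from 'Expires=' onward is the signed-expiry query fragment.
expires = temp[temp.index('Expires='):]

print(req_url)   # -> abcDEF123
print(expires)   # -> Expires=1670000000&ssig=xyz
```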
import os
import sys
import random
import time
from urllib import request

# Add the folder containing myCommon.py to sys.path
sys.path.append('../common')
import myCommon
headers = {
'cookie' : 'xxxxx',
'referer' : 'https://weibo.com/u/6078326748?tabtype=newVideo',
'sec-ch-ua' : '" Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"',
'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}
def main(screen_name, u_id, cursor):
    if not os.path.exists(f'./Video/{screen_name}/'):
        os.makedirs(f'./Video/{screen_name}/')
    while True:
        # Build the request URL
        URL = mergeUrl(u_id, cursor)
        # Wrap the URL and headers in a Request object, then pass it to urlopen
        req = request.Request(URL, headers=headers)
        resp = request.urlopen(req)
        if resp.getcode() == 200:
            content = resp.read()
            # Parse the JSON string into a dict
            dicContent = myCommon.jsonToDic(content)
            videoList = dicContent['data']['list']
            if len(videoList) == 0:
                break
            for item in videoList:
                try:
                    kol_title = item['page_info']['media_info']['kol_title']
                    media_id = item['page_info']['media_info']['media_id']
                    temp = item['page_info']['media_info']['mp4_720p_mp4']
                    # The per-video path sits between '.com/o0/' and '.mp4?'
                    reqUrl = temp[temp.index('.com/o0/'):temp.index('.mp4?')].replace('.com/o0/', '').strip()
                    expries = temp[temp.index('Expires='):]
                except (KeyError, ValueError):
                    # Skip items that are not videos or lack the 720p field
                    continue
                else:
                    filePath = f'./Video/{screen_name}/{media_id}.mp4'
                    if os.path.isfile(filePath):
                        continue
                    videoUrl = mergeVideoUrl(req_url=reqUrl, media_id=media_id, expries=expries)
                    print(f'Requesting {videoUrl}')
                    req = request.Request(videoUrl, headers=headers)
                    resp = request.urlopen(req)
                    if resp.getcode() == 200:
                        print('Request OK, saving video.')
                        with open(filePath, mode='wb') as w:
                            w.write(resp.read())
                        if os.path.isfile(filePath):
                            print(f'Saved {media_id}.mp4.')
                        else:
                            print(f'Failed to save {media_id}.mp4.')
                    else:
                        print(f'Request to {videoUrl} failed')
                        print(f'Status code: {resp.getcode()}, stopping.')
                        break
            # cursor for the next page
            cursor = dicContent['data']['next_cursor']
            if cursor == -1:
                print('Reached the end, no more content.')
                break
            time.sleep(1)
        else:
            print(f'Request to {URL} failed')
            print(f'Status code: {resp.getcode()}, stopping.')
            break
# Build the API URL
def mergeUrl(uid, cursor):
    baseUrl = 'https://weibo.com/ajax/profile/getWaterFallContent'
    return f'{baseUrl}?uid={uid}&cursor={cursor}'
# Build the video URL
def mergeVideoUrl(req_url, media_id, expries):
    baseUrl = 'https://f.video.weibocdn.com/o0/'
    return f'{baseUrl}{req_url}.mp4?label=mp4_720p&template=720x1280.24.0&media_id={media_id}&tp=8x8A3El:YTkl0eM8&us=0&ori=1&bf=4&ot=v&lp=000023Bqyg&ps=mZ6WB&uid=6e015q&ab=9298-g4,8224-g0,7397-g1,3601-g32,6377-g0,1192-g0,1046-g2,3601-g28,1258-g0,7598-g0&{expries}'
if __name__ == '__main__':
    print('Start')
    screenname = input("Enter the blogger's screen name: ")
    uid = input("Enter the blogger's uid: ")
    main(screenname, u_id=uid, cursor='0')
    print('Done')
Some technical notes
sys.path.append("…") in Python scripts
When you import a module with import xxx, by default the Python interpreter searches the current directory, the installed built-in modules, and third-party packages.
The search paths are stored in the sys module's path attribute, which you can inspect with print(sys.path).
sys.path is a list of the directories currently on the interpreter's module search path.
If a module you wrote lives in a different folder than the current .py file, and that folder is not in sys.path, a plain import modulename will fail; sys.path.append() adds the module's directory to the interpreter's search path so the import can succeed.
For example:
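If myCommon.py lives in a sibling common/ folder, appending that folder to sys.path makes import myCommon work. The sketch below builds a throwaway layout in a temporary directory so it is self-contained:

```python
import os
import sys
import tempfile

# Build a throwaway layout: <tmp>/common/myCommon.py
tmp = tempfile.mkdtemp()
common_dir = os.path.join(tmp, 'common')
os.makedirs(common_dir)
with open(os.path.join(common_dir, 'myCommon.py'), 'w') as f:
    f.write('def greet():\n    return "hello from myCommon"\n')

# Without this append, `import myCommon` would raise ModuleNotFoundError.
sys.path.append(common_dir)
import myCommon

print(myCommon.greet())  # -> hello from myCommon
```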