Bilibili short-video page: http://vc.bilibili.com/p/eden/rank#/?tab=%E5%85%A8%E9%83%A8
Extracting the API
Open the browser's developer tools with F12, then under Network -> Name find this request:
http://api.vc.bilibili.com/board/v1/ranking/top?page_size=10&next_offset=&tag=%E4%BB%8A%E6%97%A5%E7%83%AD%E9%97%A8&platform=pc
Look at the Headers panel.
In the Request URL field, as we scroll down and more videos are loaded, only this part of the URL never changes:
http://api.vc.bilibili.com/board/v1/ranking/top?
next_offset keeps changing, so a reasonable guess is that it is the offset used to fetch the next batch of videos. All we need to do is pull these query parameters out, turn next_offset into a variable, and request the data, which the API returns as JSON.
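To confirm the guess, we can hit the endpoint directly with a chosen next_offset and look at the JSON that comes back. A minimal sketch (the field names data/items/item/description are the ones used by the full script later in this post):

import requests

api = 'http://api.vc.bilibili.com/board/v1/ranking/top'
params = {
    'page_size': 10,
    'next_offset': '0',   # change this value and a different batch of videos comes back
    'tag': '今日热门',
    'platform': 'pc'
}
headers = {'User-Agent': 'Mozilla/5.0'}
resp = requests.get(api, params=params, headers=headers)
for entry in resp.json()['data']['items']:
    print(entry['item']['description'])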
Code Implementation
Writing a first version based on the above, it turns out that Bilibili applies some anti-crawling measures, so we have to send proper request headers first, otherwise the downloaded videos are empty. We then define a params dict holding the query parameters, fetch the data with requests.get, and return the parsed JSON. The code is as follows:
def get_json(url, num):
    headers = {
        'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
    }
    params = {
        'page_size': 10,
        'next_offset': str(num),
        'tag': '今日热门',
        'platform': 'pc'
    }
    try:
        html = requests.get(url, params=params, headers=headers)
        return html.json()
    except BaseException:
        print('request error')
        pass
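For a quick test of get_json, using the base URL found in the Network panel above, something like this works:

url = 'http://api.vc.bilibili.com/board/v1/ranking/top?'
data = get_json(url, 0)   # first page: next_offset = 0
for entry in data['data']['items']:
    print(entry['item']['description'], entry['item']['video_playurl'])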
To see clearly how the download is going, we add a downloader function, implemented as follows:
def download(url, path):
    start = time.time()  # start time
    size = 0
    headers = {
        'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
    }
    response = requests.get(url, headers=headers, stream=True)  # stream=True is required
    chunk_size = 1024  # size of each downloaded chunk
    content_size = int(response.headers['content-length'])  # total size
    if response.status_code == 200:
        print('[File size]: %0.2f MB' % (content_size / chunk_size / 1024))  # convert to MB
        with open(path, 'wb') as file:
            for data in response.iter_content(chunk_size=chunk_size):
                file.write(data)
                size += len(data)  # bytes downloaded so far
                # progress display (assumed from the screenshot; not shown in the original snippet)
                print('\r[Progress]: %.2f%% (%.2f s elapsed)' % (size / content_size * 100, time.time() - start), end='')
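To try the downloader on its own, you can feed it any video_playurl returned by get_json; the file name below is just a placeholder:

items = get_json('http://api.vc.bilibili.com/board/v1/ranking/top?', 0)['data']['items']
download(items[0]['item']['video_playurl'], path='test.mp4')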
The result looks like this:
Putting the code above together, the complete implementation is as follows:
# -*- coding: utf-8 -*-
import requests
import random
import time


def get_json(url, num):
    headers = {
        'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
    }
    params = {
        'page_size': 10,
        'next_offset': str(num),
        'tag': '今日热门',
        'platform': 'pc'
    }
    try:
        html = requests.get(url, params=params, headers=headers)
        return html.json()
    except BaseException:
        print('request error')
        pass


def download(url, path):
    start = time.time()  # start time
    size = 0
    headers = {
        'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
    }
    response = requests.get(url, headers=headers, stream=True)  # stream=True is required
    chunk_size = 1024  # size of each downloaded chunk
    content_size = int(response.headers['content-length'])  # total size
    if response.status_code == 200:
        print('[File size]: %0.2f MB' % (content_size / chunk_size / 1024))  # convert to MB
        with open(path, 'wb') as file:
            for data in response.iter_content(chunk_size=chunk_size):
                file.write(data)
                size += len(data)  # bytes downloaded so far
                # progress display (assumed from the screenshot; not shown in the original snippet)
                print('\r[Progress]: %.2f%% (%.2f s elapsed)' % (size / content_size * 100, time.time() - start), end='')


if __name__ == '__main__':
    for i in range(10):
        url = 'http://api.vc.bilibili.com/board/v1/ranking/top?'
        num = i * 10 + 1
        html = get_json(url, num)
        infos = html['data']['items']
        for info in infos:
            title = info['item']['description']      # title of the clip
            video_url = info['item']['video_playurl']  # download link of the clip
            print(title)
            # in case a video does not provide a download link
            try:
                download(video_url, path='%s.mp4' % title)
                print('Downloaded one successfully!')
            except BaseException:
                print('Download failed')
                pass
            time.sleep(random.randint(2, 8))  # random wait to avoid being blocked
The scraping result looks like this:
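One caveat the script above does not handle: the video description is used directly as the file name, and descriptions often contain characters such as / or ? that are illegal in file names, which is one reason the try/except around download() can fire. A small, optional helper (a hypothetical sanitize(), not from the original script) works around this:

import re

def sanitize(title, max_len=50):
    # replace characters that are not allowed in file names and cap the length (limits chosen arbitrarily)
    return re.sub(r'[\\/:*?"<>|\r\n]', '_', title)[:max_len]

# usage in the main loop: download(video_url, path='%s.mp4' % sanitize(title))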