Approach: parse the answer IDs out of the returned JSON, splice them into each answer's URL, then parse the image addresses under every answer and download them.
The details are as follows:
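First, a minimal setup sketch: the snippets below share these imports and module-level globals. The names user_agent_list, cookie, questionId, and imageDir are taken from the code that follows, but the values here are placeholders you need to fill in yourself.

import json
import os
import random
import time

import requests
from bs4 import BeautifulSoup

# Placeholder values -- substitute your own.
user_agent_list = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36',
]  # pool of User-Agent strings to rotate through
cookie = 'your-zhihu-cookie-here'  # copied from a logged-in browser session
questionId = 12345678              # the question to scrape
imageDir = 'zhihu_images'          # root directory for downloaded images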
Opening the question page fires a request to a very long URL (Zhihu's answers API; the exact URL is omitted here, capture it from the browser's developer tools).
Most of it can be left untouched: add limit to offset on each pass and request it in a loop (see the driver-loop sketch after getAnswerId below).
Each request returns a JSON payload.
Parse each answer's id out of the JSON,
then splice the id into the answer's URL:
https://www.zhihu.com/question/questionId/answer/answerId
def getAnswerId(questionUrl):
    agent = random.sample(user_agent_list, 1)[0]  # rotate User-Agents
    headers = {'User-Agent': agent, 'Cookie': cookie}
    r = requests.get(questionUrl, headers=headers)
    j = json.loads(r.text)
    for item in j['data']:  # one entry per answer, at most `limit` per page
        answerId = item['id']
        global answerPath
        answerPath = os.path.join(imageDir, 'answer' + str(answerId))
        if not os.path.exists(answerPath):
            os.makedirs(answerPath)  # folder named after the answerId holds that answer's images
        answerURL = 'https://www.zhihu.com/question/%s/answer/%s' % (str(questionId), str(answerId))
        time.sleep(1)  # throttle between answers
        spider(answerURL)
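A minimal driver-loop sketch for the paging described above. API_URL is an assumption: copy the real answers-API URL (with its long include parameter) from the browser's network tab; the '...' below is a placeholder, not a working value, and the cap of 100 answers is arbitrary.

# Hypothetical URL template -- replace '...' with the real include list
# captured from the browser before running this.
API_URL = ('https://www.zhihu.com/api/v4/questions/%s/answers'
           '?include=...&limit=%d&offset=%d')

limit = 5
for offset in range(0, 100, limit):  # walk the pages: offset = 0, 5, 10, ...
    getAnswerId(API_URL % (questionId, limit, offset))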
Request each answer's URL in turn,
and parse the image addresses out of the response; the real address hides in the data-original attribute.
def spider(url):
    agent = random.sample(user_agent_list, 1)[0]
    headers = {'User-Agent': agent, 'Cookie': cookie}
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    # full-size images carry these two classes on the answer page
    value = soup.find_all('img', class_='origin_image zh-lightbox-thumb')
    for item in value:
        src = item.get('data-original')  # the image's real address
        if src:  # skip tags that lack a data-original attribute
            request_download(src)  # download the image
            time.sleep(random.randint(2, 6))  # random pause between downloads
With an image's URL in hand, it can be downloaded to disk:
def request_download(imageUrl):
    imageName = imageUrl.split('/')[-1]  # last path segment becomes the file name
    print('downloading', imageName)
    imagePath = os.path.join(answerPath, imageName)
    r = requests.get(imageUrl)
    with open(imagePath, 'wb') as f:
        f.write(r.content)
That is basically all of it.
You could also bolt on multithreading; since I don't have a proxy pool at hand for now, I haven't written that part, but a rough sketch follows.
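For reference, a minimal multithreading sketch using concurrent.futures, under two assumptions: the answer URLs are collected up front (answerURLs is a hypothetical variable), and the global answerPath above is first refactored into a parameter passed down to spider and request_download, since a shared global would race across threads. The author's caveat stands: without a proxy pool, keep the worker count small.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical: answer URLs gathered beforehand, e.g. by having
# getAnswerId collect them instead of calling spider directly.
answerURLs = []

# Small pool on purpose: many parallel requests from a single IP
# will quickly trip Zhihu's anti-scraping limits.
with ThreadPoolExecutor(max_workers=3) as pool:
    pool.map(spider, answerURLs)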