Preface
This series on statistics scripts records some requirements from work, together with the optimizations applied to them. Think of it as a technical summary.
- Requirement description
From a resource pool that supports the Amazon S3 protocol, download the metadata of one specific business line, then find every entry that is more than 30 days old according to the business logic, so those entries can be deleted.
The download code itself is skipped. Roughly: keys in the pool use the prefix abc/hash-code, where abc is the business line and maps to a lifecycle in the resource pool (why not simply use bucket lifecycle management? Because the pool's lifecycle processing is slower than the rate at which files are added, so it has to be managed by hand (。•́︿•̀。)).
- The hard parts of downloading the file list:
- The database cannot provide a historical file list; database records are deleted strictly on schedule.
- The file list can therefore only come from the pool's access logs or from the pool's metadata. The AWS SDK, however, only offers a single-threaded, loop-until-done way of listing the metadata (1,000 entries per call; 1.2 billion entries had to be fetched, and a test run that fetched 300 million already took 40-odd hours; roughly the loop sketched below). Getting all file IDs from the historical access logs would mean downloading and storing about 4 TB of logs, wasting a lot of space, and many of those IDs have already been deleted, so the log approach was abandoned.
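For reference, a minimal sketch of that single-threaded listing loop, assuming boto3 against an S3-compatible endpoint (the endpoint and bucket names are placeholders, and the SDK actually used at the time may differ): each call returns at most 1,000 keys, so 1.2 billion keys mean more than a million sequential round trips.

import boto3

# assumption: boto3 against an S3-compatible pool; the endpoint is a placeholder
s3 = boto3.client("s3", endpoint_url="http://pool.example.com")

def list_all_keys(bucket):
    # the naive loop: one page of at most 1,000 keys per round trip
    keys = []
    token = None
    while True:
        kwargs = {"Bucket": bucket, "MaxKeys": 1000}
        if token:
            kwargs["ContinuationToken"] = token
        resp = s3.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in resp.get("Contents", []))
        if not resp.get("IsTruncated"):
            break
        token = resp["NextContinuationToken"]
    return keys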
- Optimizations for the file-list download (improving on how the AWS S3 object-listing call is used):
- First level of splitting: one thread per bucket.
- Second level of splitting: use setPrefix from the S3 SDK to split by business line, lifecycle, and hash-code range, further raising the concurrency inside a single bucket. A splitting example:
abc10/00000019-493b-473a-b90c-d4423c3fe3bd
abc is the business line, the number after it is the retention period, and the hash domain is split into 00..., 01..., and so on. After this split the concurrency becomes buckets × business lines × retention periods × 256; a sketch of this prefix fan-out follows the throughput numbers below.
- 1.2 billion entries, about 200 GB of data, and a 100-thread thread pool: all of the data can be pulled in roughly 3 hours.
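A minimal sketch of that prefix fan-out, again assuming boto3 rather than the setPrefix-style SDK actually used; the bucket name is a placeholder, and the business line abc and the retention values are taken from the example above.

from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3", endpoint_url="http://pool.example.com")  # placeholder endpoint

def list_prefix(bucket, prefix):
    # list every key under one prefix, e.g. "abc10/00"
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

def build_prefixes(business_lines, retention_days):
    # business line + retention period + first two hex chars of the hash
    # => 256 independent listing tasks per (line, period) pair
    return ["%s%d/%02x" % (line, days, i)
            for line in business_lines
            for days in retention_days
            for i in range(256)]

prefixes = build_prefixes(["abc"], [10, 20, 30, 40, 50, 60])  # assumed values
with ThreadPoolExecutor(max_workers=100) as executor:
    results = list(executor.map(lambda p: list_prefix("my-bucket", p), prefixes))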
- Finding the data older than 30 days (replacing time parsing with plain string comparison cut the scan from about 1,900 s to about 400 s and moved the bottleneck from CPU to IO; Python is used as the example).
The baseline environment is as follows:
Input: 1.2 billion rows, 200 GB. Output: 380 million rows, 50 GB. Test machine: a single 2 TB SAS disk, 54 cores, 256 GB of RAM; the process pool holds 48 workers and memory usage stays around 10 GB.
Before optimization: the source data is a set of files of roughly equal size, so they can be consumed lock-free by multiple processes (Python needs multiprocessing here); the timestamp string in each record is parsed into a datetime object and then compared against the cutoff. Each worker process sat at 100% CPU with negligible iowait, so the CPU cost was attacked first. Code below (note: the output directories must be created up front; if several processes call makedirs on the same directory at the same time, one of them fails):
# -*- coding: utf-8 -*-
import datetime as dt
import os
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool
import time

# read the input files in 256 MB batches
input_size = 256 * 1024 * 1024

def get_path():
    # "path_list" holds "size<TAB>path" lines; skip empty files
    path_list = []
    with open("path_list", "r") as f:
        temp = f.readlines()
        for each in temp:
            size, path = each.split("\t")
            if size != "0":
                path_list.append(path.strip("\n"))
    return path_list

def makedir(path_list):
    # create every output directory up front in a single process,
    # so the worker processes never race on makedirs
    for each in path_list:
        try:
            output_path = each.replace("family", "family/temp")
            dir_path = "/".join(output_path.split("/")[:-1])
            if not os.path.exists(dir_path):
                os.makedirs(dir_path)
        except Exception as e:
            print(str(e))

def process(file):
    try:
        # the cutoff is anchored at a fixed date rather than "now"
        start_time = dt.datetime(year=2019, month=8, day=11, hour=0, minute=0, second=0)
        output_path = file.replace("family", "family/temp")
        with open(file, "r") as f, open(output_path + "output", "w") as output:
            judge = f.readlines(input_size)
            output_buff = []
            # the retention window is encoded in the file name (H10 .. H60)
            day_ago = 10
            if "H10" in file:
                day_ago = 10
            elif "H20" in file:
                day_ago = 20
            elif "H30" in file:
                day_ago = 30
            elif "H40" in file:
                day_ago = 40
            elif "H50" in file:
                day_ago = 50
            elif "H60" in file:
                day_ago = 60
            while judge != None and judge != []:
                for each in judge:
                    key, etag, bucket, modify_date, size = each.split("\t")
                    # parse the timestamp string and compare it with the cutoff
                    if dt.datetime.strptime(modify_date, "%a %b %d %H:%M:%S CST %Y") < start_time - dt.timedelta(days=day_ago):
                        output_buff.append(each)
                output.writelines(output_buff)
                output_buff = []
                judge = f.readlines(input_size)
    except Exception as e:
        print(str(e))

def bulk_execute(method, method_par):
    start = time.time()
    pool = Pool(48)
    pool.map(method, method_par)
    pool.close()
    pool.join()
    print(time.time() - start)

def run():
    path_list = get_path()
    makedir(path_list)
    bulk_execute(process, path_list)

if __name__ == "__main__":
    run()
Optimization: build a set of the date strings that must be kept and compare raw strings directly. Per-core CPU usage dropped by about half, some processes started to show iowait, and disk IO got close to its limit ((ÒωÓױ) suddenly tempted to send the disk back and split the data across several disks for more IO •̥́ ˍ •̀). Code below:
# -*- coding: utf-8 -*-
import datetime as dt
import os
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool
import time

# read the input files in 256 MB batches
input_size = 256 * 1024 * 1024

def get_path():
    # "path_list" holds "size<TAB>path" lines; skip empty files
    path_list = []
    with open("path_list", "r") as f:
        temp = f.readlines()
        for each in temp:
            size, path = each.split("\t")
            if size != "0":
                path_list.append(path.strip("\n"))
    return path_list

def makedir(path_list):
    # create every output directory up front in a single process,
    # so the worker processes never race on makedirs
    for each in path_list:
        try:
            output_path = each.replace("family", "family/temp")
            dir_path = "/".join(output_path.split("/")[:-1])
            if not os.path.exists(dir_path):
                os.makedirs(dir_path)
        except Exception as e:
            print(str(e))

def process(file):
    try:
        output_path = file.replace("family", "family/temp")
        with open(file, "r") as f, open(output_path + "output", "w") as output:
            judge = f.readlines(input_size)
            output_buff = []
            # the retention window is encoded in the file name (H10 .. H60)
            day_ago = 10
            if "H10" in file:
                day_ago = 10
            elif "H20" in file:
                day_ago = 20
            elif "H30" in file:
                day_ago = 30
            elif "H40" in file:
                day_ago = 40
            elif "H50" in file:
                day_ago = 50
            elif "H60" in file:
                day_ago = 60
            # turn the time comparison into a string comparison: precompute every
            # 10-character date prefix that has to be kept (the retention window plus
            # the 34 days between the 2019-08-11 cutoff and 2019-09-14)
            ban_dict = set([(dt.datetime(2019, 9, 14) - dt.timedelta(each)).strftime("%a %b %d %H:%M:%S CST %Y")[:10] for each in range(day_ago + 1 + 34)])
            while judge != None and judge != []:
                for each in judge:
                    key, etag, bucket, modify_date, size = each.split("\t")
                    # any date prefix outside the keep-set is old enough to be output
                    if modify_date[:10] not in ban_dict:
                        output_buff.append(each)
                output.writelines(output_buff)
                output_buff = []
                judge = f.readlines(input_size)
    except Exception as e:
        print(str(e))

def bulk_execute(method, method_par):
    start = time.time()
    pool = Pool(48)
    pool.map(method, method_par)
    pool.close()
    pool.join()
    print(time.time() - start)

def run():
    path_list = get_path()
    makedir(path_list)
    bulk_execute(process, path_list)

if __name__ == "__main__":
    run()
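To make the string trick concrete, here is a small standalone example (the record line is made up): the keep-set is built once per retention window, and each record's raw 10-character date prefix, i.e. weekday, month and day with no year, is tested against it, so strptime never runs on the 1.2 billion rows.

import datetime as dt

day_ago = 30
now = dt.datetime(2019, 9, 14)  # the reference date hard-coded in the script above
# every date prefix inside the keep window: the retention days plus the 34 days
# between the 2019-08-11 cutoff and 2019-09-14
keep = set((now - dt.timedelta(days=n)).strftime("%a %b %d %H:%M:%S CST %Y")[:10]
           for n in range(day_ago + 1 + 34))

record = "key\tetag\tbucket\tWed Jul 10 08:15:02 CST 2019\t1024\n"  # made-up record
modify_date = record.split("\t")[3]
print(modify_date[:10] not in keep)  # True: older than the window, so it would be written out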