I have recently been building a front-end log monitoring system, which puts fairly high demands on how much load the API can handle.
We need to simulate a high-concurrency environment and find the maximum load the API can sustain.
The backend is a Node.js service deployed on a 1-core, 2 GB Tencent Cloud server.
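For context, here is a minimal sketch (in TypeScript) of the kind of endpoint under test. It is an illustrative assumption, not the actual log-collection service: it answers every request with an empty 200, which matches the "HTML transferred: 0 bytes" in the results below.

import * as http from "http";

// Stand-in for the interface being benchmarked. A real log-monitoring endpoint
// would parse and queue the reported payload; an empty 200 response is enough
// to measure Node's per-request overhead.
const server = http.createServer((_req, res) => {
  res.writeHead(200);
  res.end(); // no response body
});

// Port 80 (needs root) so ab can target http://127.0.0.1/ directly.
server.listen(80, () => {
  console.log("listening on http://127.0.0.1:80");
});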
On Ubuntu, ApacheBench (ab) can be installed with:
sudo apt install apache2-utils
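To confirm the install, print the version and exit:
ab -V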
Running ab with no arguments prints its usage help (excerpt below):
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make at a time
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-n sets the number of requests, -c the concurrency, and -t the duration of the run.
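For example, a purely time-limited run (the numbers here are only illustrative) looks like:
ab -c 100 -t 30 http://127.0.0.1/
This stops after 30 seconds or after the implied 50000 requests, whichever comes first.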
Simulate 500 concurrent clients making 10,000 requests in total:
ab -c 500 -n 10000 http://127.0.0.1/
Below are the results of the load test:
Concurrency Level: 500 // concurrency level used
Time taken for tests: 3.301 seconds
Complete requests: 10000 // total number of requests completed
Failed requests: 0
Total transferred: 980000 bytes
HTML transferred: 0 bytes
Requests per second: 3029.78 [#/sec] (mean) // requests handled per second
Time per request: 165.028 [ms] (mean) // average time per request
Time per request: 0.330 [ms] (mean, across all concurrent requests)
Transfer rate: 289.96 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 28 162.8 0 1002
Processing: 41 76 30.3 74 880
Waiting: 41 76 30.3 74 880
Total: 41 104 176.4 75 1279
Percentage of the requests served within a certain time (ms)
50% 75
66% 77
75% 79
80% 81
90% 84
95% 87
98% 1071
99% 1241
100% 1279 (longest request)
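As a sanity check, the headline numbers fit together: Requests per second = Complete requests / Time taken = 10000 / 3.301 ≈ 3030, and the mean Time per request = Concurrency / Requests per second = 500 / 3029.78 ≈ 0.165 s, i.e. the 165 ms reported above.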
The results show that 500 concurrent clients put little strain on the service. What happens if we raise the concurrency to 3000?
If the test aborts with
socket: Too many open files (24)
the per-process open-file limit has been hit; raise it with:
ulimit -n 65535
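Note that ulimit -n only applies to the current shell session. To make the limit permanent (assuming the usual pam_limits setup), add it to /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535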
Below are the results at 3000 concurrency:
Concurrency Level: 3000
Time taken for tests: 27.179 seconds
Complete requests: 30000
Failed requests: 0
Total transferred: 2940000 bytes
HTML transferred: 0 bytes
Requests per second: 1103.80 [#/sec] (mean)
Time per request: 2717.874 [ms] (mean) // processing time has grown noticeably
Time per request: 0.906 [ms] (mean, across all concurrent requests)
Transfer rate: 105.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 400 1872.4 0 15037
Processing: 23 209 519.4 132 26165
Waiting: 23 209 519.4 132 26165
Total: 45 609 2043.7 141 27168
Percentage of the requests served within a certain time (ms)
50% 141
66% 194
75% 216
80% 246
90% 495
95% 3095
98% 7318
99% 15102
100% 27168 (longest request)
Throughput has clearly dropped.
To find the bottleneck, look at the cloud server's resource usage:
is the CPU saturated, has memory run out, is disk I/O struggling, or is the application itself buggy or inefficient? Check these one by one and squeeze out all the performance you can.
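A few standard Linux tools cover those checks (iostat comes from the sysstat package):
top           # per-process CPU and memory usage
free -m       # memory and swap usage, in MB
iostat -x 1   # extended disk I/O statistics, refreshed every second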
The server's monitoring chart showed the CPU pegged at 100%, which explains the drop in throughput.
Since this server has only a single core, I run just one Node.js process; on a multi-core machine you can use pm2 to start multiple worker processes and make fuller use of the CPU.
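A minimal example of that with pm2 (the entry file name app.js is just a placeholder):
pm2 start app.js -i max   # cluster mode, one worker per CPU core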