Preface: I had previously used sysbench to test the server's disk IOPS, but I never specified a block size, so recently I turned to the fio tool to test the server's disk IOPS instead.
1. Installing fio
Download the latest fio from the official site, then compile and install it.
Extract:
tar -zxvf fio-2.1.10.tar.gz
Build and install (run these inside the extracted fio-2.1.10 directory):
make
make install
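To confirm the install, check that the binary is usable (make install places it under /usr/local/bin by default; the exact path depends on your environment):
/usr/local/bin/fio --version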
2. Running the test
filename=/dev/emcpowerb  the test target can be a file on a filesystem or a raw device, e.g. -filename=/dev/sda2 or -filename=/dev/sdb
direct=1  bypass the machine's own buffering (page cache) during the test, so the results better reflect the real device
rw=randread  test random read I/O
rw=randwrite  test random write I/O
rw=randrw  test mixed random read and write I/O
rw=read  test sequential read I/O
rw=write  test sequential write I/O
rw=rw  test mixed sequential read and write I/O
bs=4k  block size of a single I/O is 4k
bsrange=512-2048  same as above, but specifies a range of block sizes
size=5g  the test file for this run is 5g, exercised with 4k I/Os
numjobs=30  run 30 jobs for this test (processes by default; threads when -thread is specified)
runtime=1000  run the test for 1000 seconds; if omitted, fio keeps going until the whole 5g file has been written in 4k I/Os
ioengine=psync  use the psync I/O engine; to use the libaio engine instead, the libaio-devel package must be installed (yum install libaio-devel)
rwmixwrite=30  in mixed read/write mode, writes account for 30%
group_reporting  controls how results are displayed: aggregate the statistics of all jobs into one summary
In addition:
lockmem=1g  use only 1g of memory for the test
zero_buffers  initialize the I/O buffers with zeros
nrfiles=8  number of files generated by each job
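The same parameters can also be collected into an fio job file instead of being passed on the command line. A minimal sketch matching the test below (the file name mytest.fio and the section name are arbitrary; adjust filename and size to your own environment):
# mytest.fio - hypothetical job file
[global]
filename=/storage/test_randread
direct=1
ioengine=psync
bs=16k
size=2G
numjobs=30
thread
runtime=120
group_reporting

[mytest]
rw=randrw
rwmixread=70
Run it with: fio mytest.fio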
An actual test run:
[root@Mariadb-04 fio-2.1.10]# /usr/local/bin/fio -filename=/storage/test_randread -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=120 -group_reporting -name=mytest
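For comparison, if the libaio engine mentioned above is available, the same mixed workload can be driven asynchronously with a deeper queue. This is only a sketch; the iodepth and numjobs values here are arbitrary choices, not tuned recommendations:
/usr/local/bin/fio -filename=/storage/test_randread -direct=1 -iodepth 16 -rw=randrw -rwmixread=70 -ioengine=libaio -bs=16k -size=2G -numjobs=8 -runtime=120 -group_reporting -name=mytest_libaio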
3. Interpreting the results
...
fio-2.1.10
Starting 30 threads
mytest: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 30 (f=30): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [96239KB/42997KB/0KB /s] [6014/2687/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=32902: Wed Apr 25 11:01:28 2018
read : io=3498.2MB, bw=29841KB/s, iops=1865, runt=120039msec
clat (usec): min=95, max=7180.4K, avg=13934.48, stdev=106872.83
lat (usec): min=95, max=7180.4K, avg=13934.66, stdev=106872.84
clat percentiles (usec):
| 1.00th=[ 115], 5.00th=[ 133], 10.00th=[ 161], 20.00th=[ 235],
| 30.00th=[ 318], 40.00th=[ 652], 50.00th=[ 5024], 60.00th=[ 8640],
| 70.00th=[12864], 80.00th=[19840], 90.00th=[32384], 95.00th=[46336],
| 99.00th=[84480], 99.50th=[107008], 99.90th=[209920], 99.95th=[1253376],
| 99.99th=[5210112]
bw (KB /s): min= 2, max= 5447, per=4.09%, avg=1221.31, stdev=688.71
write: io=1513.9MB, bw=12914KB/s, iops=807, runt=120039msec
clat (usec): min=179, max=7160.4K, avg=4952.37, stdev=109858.64
lat (usec): min=180, max=7160.4K, avg=4954.63, stdev=109858.67
clat percentiles (usec):
| 1.00th=[ 286], 5.00th=[ 326], 10.00th=[ 358], 20.00th=[ 406],
| 30.00th=[ 446], 40.00th=[ 494], 50.00th=[ 564], 60.00th=[ 700],
| 70.00th=[ 1192], 80.00th=[ 4896], 90.00th=[ 8512], 95.00th=[10048],
| 99.00th=[16064], 99.50th=[18560], 99.90th=[44288], 99.95th=[1253376],
| 99.99th=[7176192]
bw (KB /s): min= 2, max= 2821, per=4.14%, avg=534.19, stdev=334.76
lat (usec) : 100=0.01%, 250=16.05%, 500=21.96%, 750=9.53%, 1000=2.97%
lat (msec) : 2=3.48%, 4=3.07%, 10=16.50%, 20=12.44%, 50=11.02%
lat (msec) : 100=2.51%, 250=0.37%, 500=0.02%, 1000=0.01%, 2000=0.01%
lat (msec) : >=2000=0.05%
cpu : usr=0.04%, sys=0.25%, ctx=325441, majf=0, minf=6
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=223880/w=96886/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=3498.2MB, aggrb=29840KB/s, minb=29840KB/s, maxb=29840KB/s, mint=120039msec, maxt=120039msec
WRITE: io=1513.9MB, aggrb=12913KB/s, minb=12913KB/s, maxb=12913KB/s, mint=120039msec, maxt=120039msec
Disk stats (read/write):
dm-1: ios=231005/101135, merge=0/0, ticks=3413245/925615, in_queue=4340282, util=100.00%, aggrios=231734/101432, aggrmerge=32/11, aggrticks=3416960/876818, aggrin_queue=4293134, aggrutil=100.00%
dm-0: ios=231734/101432, merge=32/11, ticks=3416960/876818, in_queue=4293134, util=100.00%, aggrios=231734/101432, aggrmerge=0/0, aggrticks=2391410/76306, aggrin_queue=2467258, aggrutil=100.00%
sdb: ios=231734/101432, merge=0/0, ticks=2391410/76306, in_queue=2467258, util=100.00%
For this test we only need to look at read iops=1865 and write iops=807. They are consistent with the reported bandwidth at the 16k block size: 1865 × 16KB ≈ 29840 KB/s for reads and 807 × 16KB ≈ 12912 KB/s for writes.
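If you want to extract these figures programmatically instead of reading the human-readable report, fio can also emit JSON (assuming your build supports --output-format=json) and the values can be pulled out with jq; this is only a sketch and assumes jq is installed:
/usr/local/bin/fio --output-format=json -filename=/storage/test_randread -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=120 -group_reporting -name=mytest > result.json
jq '.jobs[0].read.iops, .jobs[0].write.iops' result.json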