IPSAN Multipath Deployment
- Introduction to multipath IPSAN deployment
- Multipath IPSAN sharing
- Mounting multipath IPSAN on the client
- Failover testing
1. Introduction to Multipath IPSAN Deployment
A single point of failure is unacceptable in a production environment. If we design an architecture that cannot eliminate its single points of failure, it cannot meet high-availability requirements, and disaster tolerance is out of the question. The same holds when designing an IPSAN architecture: we must account for single points of failure, because once the only link goes down, the service is interrupted. That is something we cannot tolerate, so in this lesson I will show you how to implement multipath IPSAN sharing.
2. Multipath IPSAN Implementation
1) Lab topology diagram
2) Lab steps
a. Configure dual-link networking across the 192.168.10.0/24 and 192.168.11.0/24 subnets.
| Server | NIC 1 IP address | NIC 2 IP address |
|---|---|---|
| IPSAN server | 192.168.11.241/24 | 192.168.10.241/24 |
| IPSAN client | 192.168.11.251/24 | 192.168.10.251/24 |
Once configuration is complete, be sure to test connectivity.
IPSAN server
Configure IP addresses
[root@241 ~]# nmcli con add con-name eth0 ifname ens33 type 802-3-ethernet ipv4.method manual ipv4.add 192.168.11.241/24
[root@241 ~]# nmcli con add con-name eth1 ifname ens37 type 802-3-ethernet ipv4.method manual ipv4.add 192.168.10.241/24
Restart networking to apply the configuration
[root@241 ~]# systemctl restart network
Test connectivity: verify that both the 10 and 11 subnets respond to ping
[root@241 ~]# ping 192.168.10.251
PING 192.168.10.251 (192.168.10.251) 56(84) bytes of data.
64 bytes from 192.168.10.251: icmp_seq=1 ttl=64 time=0.472 ms
[root@241 ~]# ping -c1 192.168.11.251
PING 192.168.11.251 (192.168.11.251) 56(84) bytes of data.
64 bytes from 192.168.11.251: icmp_seq=1 ttl=64 time=0.483 ms
IPSAN client
Configure IP addresses
[root@251 ~]# nmcli con add con-name eth0 ifname ens33 type 802-3-ethernet ipv4.method manual ipv4.add 192.168.11.251/24
[root@251 ~]# nmcli con add con-name eth1 ifname ens37 type 802-3-ethernet ipv4.method manual ipv4.add 192.168.10.251/24
Restart the network service to apply the configuration
[root@251 ~]# systemctl restart network
Test network connectivity
[root@251 ~]# ping 192.168.10.241
PING 192.168.10.241 (192.168.10.241) 56(84) bytes of data.
64 bytes from 192.168.10.241: icmp_seq=1 ttl=64 time=0.239 ms
[root@251 ~]# ping -c1 192.168.11.241
PING 192.168.11.241 (192.168.11.241) 56(84) bytes of data.
64 bytes from 192.168.11.241: icmp_seq=1 ttl=64 time=0.298 ms
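Before moving on, it is worth confirming every path in one pass. A minimal sketch (run on the client; the two addresses are this lab's server-side links) that pings each server address and reports the result:
[root@251 ~]# for ip in 192.168.10.241 192.168.11.241; do ping -c1 -W1 $ip >/dev/null && echo "$ip OK" || echo "$ip FAILED"; done
Run the mirror-image loop on the server against 192.168.10.251 and 192.168.11.251 to verify both directions.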
b. Configure device sharing on the IPSAN server
Configure disk sharing on the IPSAN server, exporting the local /dev/sdb1.
Clients can reach the share through either of two addresses: 192.168.11.241:3260 and 192.168.10.241:3260.
Shared device IQN: iqn.2019-04.com.ayitula:storage
Client IQN: iqn.2019-04.com.ayitula:client1
c. Implementation steps
- Prepare the shared disk
- Install and start the target service
- Export the device with targetcli
1) Partition the sdb disk
[root@241 ~]# fdisk /dev/sdb <<EOF
> n
> p
> 1
>
>
> w
> EOF
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8e50306f.
Command (m for help): Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): Partition number (1-4, default 1): First sector (2048-83886079, default 2048): Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079): Using default value 83886079
Partition 1 of type Linux and of size 40 GiB is set
Command (m for help): The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Check the partition layout
[root@241 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8e50306f
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    83886079    41942016   83  Linux
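If you would rather not drive fdisk through a here-document, the same single full-size partition can be created non-interactively. A sketch using parted (assuming a blank disk, as here):
[root@241 ~]# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%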
2) Install the target service and enable it at boot
[root@241 ~]# yum -y install targetcli
[root@241 ~]# systemctl enable target;systemctl start target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to
/usr/lib/systemd/system/target.service.
3) Configure the disk share interactively with targetcli
[root@241 ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
Add /dev/sdb1 as an iSCSI backstore
/> /backstores/block create block1 /dev/sdb1
Created block storage object block1 using /dev/sdb1.
Create the target IQN
/> /iscsi create iqn.2019-04.com.ayitula:storage
Created target iqn.2019-04.com.ayitula:storage.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Create an ACL for the client IQN that is allowed to access the share, then map the backstore as a LUN
/> /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/acls create iqn.2019-04.com.ayitula:client1
Created Node ACL for iqn.2019-04.com.ayitula:client1
/> iscsi/iqn.2019-04.com.ayitula:storage/tpg1/luns create /backstores/block/block1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2019-04.com.ayitula:client1
Delete the default portal 0.0.0.0:3260
/> iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals/ delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260
Create the portal addresses and ports that IPSAN clients will use
/> iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals create 192.168.11.241 3260
Using default IP port 3260
Created network portal 192.168.11.241:3260.
/> iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals create 192.168.10.241 3260
Using default IP port 3260
Created network portal 192.168.10.241:3260.
Review the configuration
/> ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- block1 ......................................................................... [/dev/sdb1 (0 bytes) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2019-04.com.ayitula:storage ................................................................................... [TPGs: 1]
| o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
| o- acls .......................................................................................................... [ACLs: 1]
| | o- iqn.2019-04.com.ayitula:client1 ...................................................................... [Mapped LUNs: 1]
| | o- mapped_lun0 ................................................................................ [lun0 block/block1 (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 .................................................................... [block/block1 (/dev/sdb1) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 2]
| o- 192.168.10.241:3260 .............................................................................................. [OK]
| o- 192.168.11.241:3260 .............................................................................................. [OK]
o- loopback ......................................................................................................... [Targets: 0]
/> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
Restart the target service
[root@241 ~]# systemctl restart target
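The interactive session above can also be scripted: targetcli accepts a path and command as arguments, one action per invocation. A sketch reproducing the same configuration (same names and portals as above):
[root@241 ~]# targetcli /backstores/block create block1 /dev/sdb1
[root@241 ~]# targetcli /iscsi create iqn.2019-04.com.ayitula:storage
[root@241 ~]# targetcli /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/acls create iqn.2019-04.com.ayitula:client1
[root@241 ~]# targetcli /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/luns create /backstores/block/block1
[root@241 ~]# targetcli /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals delete 0.0.0.0 3260
[root@241 ~]# targetcli /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals create 192.168.11.241 3260
[root@241 ~]# targetcli /iscsi/iqn.2019-04.com.ayitula:storage/tpg1/portals create 192.168.10.241 3260
[root@241 ~]# targetcli saveconfig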
3. Mounting Multipath IPSAN on the Client
a. There are two ways for the client to connect to the server share with fault tolerance:
1) Multipath software
2) udev
About multipath software and udev
Overview of Device Mapper Multipath
Device Mapper Multipath (DM-Multipath) configures the multiple I/O links between a server node and a storage array as a single device. These I/O links are physical SAN paths made up of separate cables, switches, and controllers. Multipath aggregates the links and presents one new device on top of them.
1. DM-Multipath at a glance:
(1) Redundancy
DM-Multipath provides failover in an active/passive configuration. In active/passive mode, only half of the links carry I/O at any given time; if any element of a link (cable, switch, or controller) fails, DM-Multipath switches over to the other half.
(2) Improved performance
DM-Multipath can also be configured in active/active mode, in which I/O is spread across all links in round-robin fashion. With additional configuration it can detect the load on each link and rebalance dynamically.
udev
udev is the device manager for the Linux 2.6 kernel series. Its main job is managing the device nodes under /dev: as hardware is added or removed, udev automatically creates and removes the corresponding files in the /dev directory.
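For contrast, a udev-based approach keys a stable symlink to the LUN's SCSI WWID rather than aggregating paths. A minimal, hypothetical sketch (the rule file name and symlink are made up; the WWID is the one multipath -ll reports for this lab's device later on):
# /etc/udev/rules.d/99-ipsan.rules (hypothetical)
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --device=/dev/%k", RESULT=="360014052fa2bd7bb9b94af9bb8ea7644", SYMLINK+="ipsan_block1_%k"
Each path to the LUN (e.g. sdb and sdc) would get its own WWID-keyed symlink; unlike DM-Multipath, udev alone does not merge the paths or handle failover.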
This article focuses on achieving high availability with IPSAN plus the multipath software.
b. Implementation steps
- Install and start the client (initiator) software
- Connect to the shared device over both links
- Partition and format
- Install the multipath software
- Configure multipath load balancing
- Test
c. Detailed steps
1) Install and start the client software
[root@251 ~]# yum -y install iscsi-initiator-utils
[root@251 ~]# systemctl enable iscsi;systemctl start iscsi
2) Set the client IQN
[root@251 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2019-04.com.ayitula:client1
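If you had to edit /etc/iscsi/initiatorname.iscsi to set this name, restarting iscsid so the daemon picks up the new initiator name is a sensible extra step:
[root@251 ~]# systemctl restart iscsid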
3) Discover the server's shared devices
[root@251 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.11.241 --discover
192.168.10.241:3260,1 iqn.2019-04.com.ayitula:storage
192.168.11.241:3260,1 iqn.2019-04.com.ayitula:storage
4) Log in to the share over both paths
[root@251 ~]# iscsiadm --mode node --targetname iqn.2019-04.com.ayitula:storage --portal 192.168.11.241:3260 --login
Logging in to [iface: default, target: iqn.2019-04.com.ayitula:storage, portal: 192.168.11.241,3260] (multiple)
Login to [iface: default, target: iqn.2019-04.com.ayitula:storage, portal: 192.168.11.241,3260] successful.
[root@251 ~]# iscsiadm --mode node --targetname iqn.2019-04.com.ayitula:storage --portal 192.168.10.241:3260 --login
Logging in to [iface: default, target: iqn.2019-04.com.ayitula:storage, portal: 192.168.10.241,3260] (multiple)
Login to [iface: default, target: iqn.2019-04.com.ayitula:storage, portal: 192.168.10.241,3260] successful.
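To have the client log back in automatically at boot, the discovered node records can be marked automatic (a sketch; on CentOS 7, node.startup often already defaults to automatic):
[root@251 ~]# iscsiadm --mode node --targetname iqn.2019-04.com.ayitula:storage --op update --name node.startup --value automatic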
Check the resulting devices
[root@251 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 47.8G 0 lvm /
├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
└─centos-home 253:2 0 23.3G 0 lvm /home
sdb 8:16 0 40G 0 disk
sdc 8:32 0 40G 0 disk
sr0 11:0 1 4.2G 0 rom
Two new disks, sdb and sdc, have appeared.
They are actually the same disk: the one LUN shows up twice because we logged in over two separate links.
When partitioning and formatting, operate on just one of them, e.g. /dev/sdb;
the other only needs its partition table re-read.
Partition and format the disk. Note: do not format with xfs here, as it does not support this multipath usage; we use ext4.
[root@251 ~]# fdisk /dev/sdb <<EOF
> n
> p
> 1
>
>
> w
> EOF
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf929d084.
Command (m for help): Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): Partition number (1-4, default 1): First sector (8192-83884031, default 8192): Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-83884031, default 83884031): Using default value 83884031
Partition 1 of type Linux and of size 40 GiB is set
Command (m for help): The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@251 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
2621440 inodes, 10484480 blocks
524224 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
lsblk shows that sdc has no partitions yet, so we need to re-read its partition table
[root@251 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 47.8G 0 lvm /
├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
└─centos-home 253:2 0 23.3G 0 lvm /home
sdb 8:16 0 40G 0 disk
└─sdb1 8:17 0 40G 0 part
sdc 8:32 0 40G 0 disk
sr0 11:0 1 4.2G 0 rom
Re-read the partition table
[root@251 ~]# partprobe /dev/sdc
[root@251 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 47.8G 0 lvm /
├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
└─centos-home 253:2 0 23.3G 0 lvm /home
sdb 8:16 0 40G 0 disk
└─sdb1 8:17 0 40G 0 part
sdc 8:32 0 40G 0 disk
└─sdc1 8:33 0 40G 0 part
sr0 11:0 1 4.2G 0 rom
OK! The partition table refresh succeeded.
5) Install and start the multipath software
[root@251 ~]# yum install device-mapper-multipath -y
Copy the example configuration file
[root@251 ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/
Start the service
[root@251 ~]# systemctl enable multipathd;systemctl start multipathd
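On CentOS 7 the package also ships an mpathconf helper that can generate a baseline /etc/multipath.conf and enable the service in one step; a sketch of that alternative:
[root@251 ~]# mpathconf --enable --with_multipathd y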
6) Inspect the multipath device
[root@251 ~]# multipath -ll
mpatha (360014052fa2bd7bb9b94af9bb8ea7644) dm-3 LIO-ORG ,block1
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active      <- active path
| `- 15:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled     <- standby path
  `- 16:0:0:0 sdc 8:32 active ready running
Here 360014052fa2bd7bb9b94af9bb8ea7644 is the ID (WWID) of the remote storage device, dm-3 is the device-mapper node, LIO-ORG is the vendor, and block1 is the backstore name.
The default arrangement is an active/standby (failover) pair of paths, not load balancing.
We can switch to load balancing by editing the configuration file.
[root@251 ~]# cat /etc/multipath.conf
multipaths {
        multipath {
                wwid                    360014052fa2bd7bb9b94af9bb8ea7644   # use your block1 WWID here
                alias                   multipath_study                     # any meaningful alias
                path_grouping_policy    multibus                            # path grouping policy
                path_selector           "round-robin 0"                     # load-balancing selector
                failback                manual
                rr_weight               priorities                          # weight round-robin by path priority
                no_path_retry           5                                   # retry a failed path 5 times before disabling queueing
        }
        multipath {
                wwid                    1DEC_____321816758474
                alias                   red
        }
}
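To find the WWID to put in the wwid line, you can query the device directly (a sketch; this is the stock scsi_id location on CentOS 7, and the output shown is this lab's WWID):
[root@251 ~]# /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb
360014052fa2bd7bb9b94af9bb8ea7644
Alternatively, read it from the first line of the multipath -ll output above.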
Restart the service and check again
[root@251 ~]# systemctl restart multipathd
[root@251 ~]# multipath -ll
multipath_study (360014052fa2bd7bb9b94af9bb8ea7644) dm-3 LIO-ORG ,block1
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 15:0:0:0 sdb 8:16 active ready running
`- 16:0:0:0 sdc 8:32 active ready running
The device has switched to load-balancing (round-robin) mode.
[root@251 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 47.8G 0 lvm /
├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
└─centos-home 253:2 0 23.3G 0 lvm /home
sdb 8:16 0 40G 0 disk
└─multipath_study 253:3 0 40G 0 mpath
└─multipath_study1 253:4 0 40G 0 part
sdc 8:32 0 40G 0 disk
└─multipath_study 253:3 0 40G 0 mpath
└─multipath_study1 253:4 0 40G 0 part
sr0 11:0 1 4.2G 0 rom
The two devices we saw earlier now appear as a single multipath device. Nice!
Next, mount it.
[root@251 ~]# systemctl restart iscsi
[root@251 ~]# mkdir /opt/block1
[root@251 ~]# mount /dev/mapper/multipath_study1 /opt/block1/
[root@251 ~]# mount |grep block1
/dev/mapper/multipath_study1 on /opt/block1 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
Mounted successfully
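To make the mount persist across reboots, the usual approach is an /etc/fstab entry with the _netdev option, so mounting waits for networking and iSCSI; a sketch:
[root@251 ~]# echo '/dev/mapper/multipath_study1 /opt/block1 ext4 _netdev 0 0' >> /etc/fstab
[root@251 ~]# mount -a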
4. Failover Testing
Take down one link and verify that I/O continues to work.
With the multipath device in service, write some data: test1 and test2
[root@251 ~]# touch /opt/block1/test1
[root@251 ~]# touch /opt/block1/test2
[root@251 ~]# ls /opt/block1/
lost+found test1 test2
Check the current status
[root@251 ~]# multipath -ll
multipath_study (360014052fa2bd7bb9b94af9bb8ea7644) dm-3 LIO-ORG ,block1
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 15:0:0:0 sdb 8:16 active ready running
`- 16:0:0:0 sdc 8:32 active ready running
Take down one link
[root@251 ~]# ifdown ens37
Device 'ens37' successfully disconnected.
Check the status: one path now shows as faulty
[root@251 ~]# multipath -ll
multipath_study (360014052fa2bd7bb9b94af9bb8ea7644) dm-3 LIO-ORG ,block1
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 15:0:0:0 sdb 8:16 active ready running      <- healthy
`- 16:0:0:0 sdc 8:32 active faulty running     <- failed
Keep writing data: test3 and test4
[root@251 ~]# touch /opt/block1/test3
[root@251 ~]# touch /opt/block1/test4
Verify that the data from both rounds of writes is present
[root@251 ~]# ls /opt/block1/
lost+found test1 test2 test3 test4
Restore the link
[root@251 ~]# ifup ens37
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
The load-balanced paths recover (the restored path may briefly report 'failed ready running' until the path checker re-validates it)
[root@251 ~]# multipath -ll
multipath_study (360014052fa2bd7bb9b94af9bb8ea7644) dm-3 LIO-ORG ,block1
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 15:0:0:0 sdb 8:16 active ready running
`- 16:0:0:0 sdc 8:32 failed ready running
The data is still intact
[root@251 ~]# ls /opt/block1/
lost+found test1 test2 test3 test4
Testing complete
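For a more continuous check than touching files by hand, you can keep a timestamp writer running while you pull and restore the link, and watch the path states change; a small sketch (the log file name is made up):
[root@251 ~]# while true; do date >> /opt/block1/io_probe.log; sync; sleep 1; done &
[root@251 ~]# watch -n1 'multipath -ll | grep -E "ready|faulty"'
If failover works, io_probe.log keeps growing throughout the test.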
5. Reference
/etc/multipath.conf configuration reference
| Attribute | Description |
|---|---|
| wwid | Specifies the WWID of the multipath device to which the multipath attributes apply. This parameter is mandatory for this section of the multipath.conf file. |
| alias | Specifies the symbolic name for the multipath device to which the multipath attributes apply. If you are using user_friendly_names, do not set this value to mpathn; this may conflict with an automatically assigned user friendly name and give you incorrect device node names. |
| path_grouping_policy | Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover = 1 path per priority group; multibus = all valid paths in 1 priority group; group_by_serial = 1 priority group per detected serial number; group_by_prio = 1 priority group per path priority value; group_by_node_name = 1 priority group per target node name. |
| path_selector | Specifies the default algorithm to use in determining what path to use for the next I/O operation. Possible values include: round-robin 0 (loop through every path in the path group, sending the same amount of I/O to each); queue-length 0 (send the next bunch of I/O down the path with the least number of outstanding I/O requests); service-time 0 (send the next bunch of I/O down the path with the shortest estimated service time, determined by dividing the total size of the outstanding I/O to each path by its relative throughput). |
| failback | Manages path group failback. A value of immediate specifies immediate failback to the highest priority path group that contains active paths. A value of manual specifies that there should not be immediate failback but that failback can happen only with operator intervention. A value of followover specifies that automatic failback should be performed when the first path of a path group becomes active; this keeps a node from automatically failing back when another node requested the failover. A numeric value greater than zero specifies deferred failback, expressed in seconds. |
| prio | Specifies the default function to call to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value. Possible values include: const (set a priority of 1 to all paths); emc (generate the path priority for EMC arrays); alua (generate the path priority based on the SCSI-3 ALUA settings); tpg_pref (generate the path priority based on the SCSI-3 ALUA settings, using the preferred port bit); ontap (generate the path priority for NetApp arrays); rdac (generate the path priority for LSI/Engenio RDAC controller); hp_sw (generate the path priority for Compaq/HP controller in active/standby mode); hds (generate the path priority for Hitachi HDS Modular storage arrays). |
| no_path_retry | A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queueing. A value of fail indicates immediate failure, without queueing. A value of queue indicates that queueing should not stop until the path is fixed. |
| rr_min_io | Specifies the number of I/O requests to route to a path before switching to the next path in the current path group. This setting is only for systems running kernels older than 2.6.31; newer systems should use rr_min_io_rq. The default value is 1000. |
| rr_min_io_rq | Specifies the number of I/O requests to route to a path before switching to the next path in the current path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io. The default value is 1. |
| rr_weight | If set to priorities, then instead of sending rr_min_io requests to a path before calling path_selector to choose the next path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio function. If set to uniform, all path weights are equal. |
| flush_on_last_del | If set to yes, then multipath will disable queueing when the last path to a device has been deleted. |