In the previous chapter we installed KVM and its management tools. Next we configure the KVM host to provide network and storage resources for the guests.
After installation, KVM creates the default image directory /var/lib/libvirt/images for guests. If you plan to migrate guests between hosts, you need to configure shared storage such as NFS, a NAS, or Ceph. You must also configure a network bridge so that the VMs can communicate with the outside world. Below we describe how to configure a bridge and how to create a storage pool for VMs and their images.
Configuring a Network Bridge
If you see a "virbr0" interface on your machine, you can either remove it with virsh net-destroy default or simply ignore it; this is the NAT interface that KVM creates automatically at install time.
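As a sketch of the cleanup just mentioned (assuming the auto-created NAT network carries libvirt's usual name `default`), you could inspect it and remove it permanently like this:

```shell
# List all libvirt networks, including inactive ones
virsh net-list --all

# Stop the default NAT network (this removes the virbr0 interface)
virsh net-destroy default

# Optionally delete its definition so it does not come back after a reboot
virsh net-undefine default
```

Destroying the network only stops it for the current boot; net-undefine is what keeps it from being recreated.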
Below we manually configure a bridge attached to the "eth1" interface, as follows:
- Log in to the host and check the network configuration;
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.144/24 brd 192.168.88.255 scope global dynamic eth0
       valid_lft 1248sec preferred_lft 1248sec
    inet6 fe80::ec2f:eb00:c9f6:4250/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.254/24 brd 192.168.57.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::e096:d49b:5afe:d1ca/64 scope link
       valid_lft forever preferred_lft forever
- Attach the "eth1" interface to the bridge;
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
NAME=eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br0
- Create the configuration file for the "br0" bridge;
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-br0
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
TYPE="Bridge"
DEVICE=br0
BOOTPROTO=static
IPADDR=192.168.57.254
NETMASK=255.255.255.0
ONBOOT="yes"
DELAY=0
STP=0
- Enable IPv4 forwarding;
[root@localhost ~]# grep "net.ipv4.ip_forward" /etc/sysctl.conf || echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
- Restart the network service so the "br0" configuration takes effect;
[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (exited) since Thu 2017-06-08 14:37:50 CST; 6s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 6568 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 6793 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
Jun 08 14:37:49 localhost.localdomain systemd[1]: Starting LSB: Bring up/down networking...
Jun 08 14:37:49 localhost.localdomain network[6793]: Bringing up loopback interface:  [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface eth0:  Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
Jun 08 14:37:50 localhost.localdomain network[6793]: [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface eth1:  Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
Jun 08 14:37:50 localhost.localdomain network[6793]: [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface br0:  [  OK  ]
Jun 08 14:37:50 localhost.localdomain systemd[1]: Started LSB: Bring up/down networking.
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.144/24 brd 192.168.88.255 scope global dynamic eth0
       valid_lft 1786sec preferred_lft 1786sec
    inet6 fe80::ec2f:eb00:c9f6:4250/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.254/24 brd 192.168.57.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:9fbc/64 scope link
       valid_lft forever preferred_lft forever
- Check the bridge information; "br0" is now bridged to "eth1";
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29f59fbc	no		eth1
Configuring a Storage Pool
The default KVM storage pool path is /var/lib/libvirt/images. You are not required to keep your images on shared storage, but configuring shared storage makes it easy to migrate VMs between hosts; KVM supports live migration, similar to VMware's vMotion.
KVM supports many kinds of shared storage as backing for storage pools. In this chapter we use NFS, so first let's set up an NFS environment.
Setting Up the NFS Environment
# Install the NFS packages
[root@nfs-server ~]# yum install -y nfs-utils
# Configure the export and allow access from the local subnet
[root@nfs-server ~]# mkdir /images
[root@nfs-server ~]# echo -e "/images\t192.168.57.0/24(rw,no_root_squash)" > /etc/exports
# Start the NFS services
[root@nfs-server ~]# systemctl enable rpcbind nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs-server ~]# systemctl start rpcbind nfs-server
[root@nfs-server ~]# systemctl status rpcbind nfs-server
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Active: active (running) since Thu 2017-06-08 15:38:46 CST; 15s ago
Process: 3461 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 3467 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─3467 /sbin/rpcbind -w
Jun 08 15:38:46 nfs-server systemd[1]: Starting RPC bind service...
Jun 08 15:38:46 nfs-server systemd[1]: Started RPC bind service.
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2017-06-08 15:38:57 CST; 5s ago
Process: 3473 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 3472 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3473 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Jun 08 15:38:57 nfs-server systemd[1]: Starting NFS server and services...
Jun 08 15:38:57 nfs-server systemd[1]: Started NFS server and services.
# Open the firewall
[root@nfs-server ~]# firewall-cmd --add-service=nfs --permanent
success
[root@nfs-server ~]# firewall-cmd --reload
success
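Before mounting from the KVM hosts, it may be worth confirming the export is actually visible; a quick check (the server address 192.168.57.200 is the one used for the mounts later in this chapter):

```shell
# On the server: re-export and list the active exports verbosely
exportfs -v

# From any host with nfs-utils installed: query the server's export list
showmount -e 192.168.57.200
```

If /images does not appear in the showmount output, recheck /etc/exports and the firewall before continuing.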
Configuring the KVM Storage Pool
- Configure the KVM host to connect to the NFS server;
[root@kvm-node1 ~]# yum install nfs-utils -y
[root@kvm-node1 ~]# systemctl enable rpcbind && systemctl start rpcbind
[root@kvm-node1 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since Thu 2017-06-08 15:56:56 CST; 16s ago
  Process: 8744 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 8745 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─8745 /sbin/rpcbind -w
Jun 08 15:56:56 kvm-node1 systemd[1]: Starting RPC bind service...
Jun 08 15:56:56 kvm-node1 systemd[1]: Started RPC bind service.
[root@kvm-node1 ~]# mount -t nfs 192.168.57.200:/images /var/lib/libvirt/images
[root@kvm-node1 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root     xfs        17G  4.2G   13G  25% /
devtmpfs                devtmpfs  478M     0  478M   0% /dev
tmpfs                   tmpfs     489M   88K  489M   1% /dev/shm
tmpfs                   tmpfs     489M  7.1M  482M   2% /run
tmpfs                   tmpfs     489M     0  489M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  140M  875M  14% /boot
tmpfs                   tmpfs      98M  8.0K   98M   1% /run/user/0
192.168.57.200:/images  nfs4       17G  4.2G   13G  25% /var/lib/libvirt/images
[root@kvm-node1 ~]# echo -e "192.168.57.200:/images\t/var/lib/libvirt/images\tnfs\tdefaults\t0\t0" >> /etc/fstab
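One caveat about the fstab entry above: an NFS mount depends on the network being up, so on some systems it is safer to mark it as network-dependent. A hedged variant of the same line (the `_netdev` option asks the init system to order the mount after the network is online):

```shell
# /etc/fstab entry with _netdev so the mount waits for the network at boot
192.168.57.200:/images	/var/lib/libvirt/images	nfs	defaults,_netdev	0	0
```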
- Configure the KVM storage pool;
[root@kvm-node1 ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------

[root@kvm-node1 ~]# virsh pool-build default
Pool default built

[root@kvm-node1 ~]# virsh pool-start default
Pool default started

[root@kvm-node1 ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes

[root@kvm-node1 ~]# virsh pool-info default
Name:           default
UUID:           d84dc74b-b0f4-4197-92b0-d9025620f0a4
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       16.99 GiB
Allocation:     4.10 GiB
Available:      12.88 GiB

[root@kvm-node1 ~]# df -Th /var/lib/libvirt/images/
Filesystem              Type  Size  Used Avail Use% Mounted on
192.168.57.200:/images  nfs4   17G  4.2G   13G  25% /var/lib/libvirt/images
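The steps above reuse the pre-defined `default` directory pool, which happens to sit on the NFS mount. Alternatively, libvirt can manage the NFS mount itself through a `netfs`-type pool; a sketch, assuming the same server and export as above (the pool name `nfspool` is arbitrary):

```shell
# Define a netfs pool; libvirt mounts 192.168.57.200:/images at the target itself
virsh pool-define-as nfspool netfs \
    --source-host 192.168.57.200 \
    --source-path /images \
    --target /var/lib/libvirt/images

# Build the target directory, start the pool, and autostart it on boot
virsh pool-build nfspool
virsh pool-start nfspool
virsh pool-autostart nfspool
```

With a netfs pool there is no need for the manual mount and fstab entry; libvirt handles mounting whenever the pool is started.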