1. Destroy the MBR partition table and repair it
- Copy the MBR partition table from host 10.0.0.18 to host 10.0.0.28 as a backup
[root@node1 ~]# dd if=/dev/sda of=/root/dpt.img bs=1 count=64 skip=446
64+0 records in
64+0 records out
64 bytes copied, 0.000107281 s, 597 kB/s
[root@node1 ~]# scp dpt.img 10.0.0.28:/root/
dpt.img 100% 64 48.3KB/s 00:00
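Before relying on the backup, it can be inspected in hex (an optional check; the 64 bytes are the four 16-byte partition entries at offsets 446-509 of the MBR):
hexdump -C /root/dpt.img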
- Destroy the MBR partition table
[root@node1 ~]# dd if=/dev/zero of=/dev/sda bs=1 count=64 seek=446
64+0 records in
64+0 records out
64 bytes copied, 0.000175485 s, 365 kB/s
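At this point the on-disk table is zeroed, but the running kernel still uses its cached copy, so the system keeps working until the reboot below; the damage can be confirmed by reading the disk directly (optional check):
hexdump -C -s 446 -n 64 /dev/sda    # should now print all zeros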
After a reboot, the system fails to start.
- Boot from the installation disc into rescue mode and choose option 3, Skip to shell.
Configure the network and fetch the backup from 10.0.0.28:
ip a a 10.0.0.18/24 dev ens33
scp 10.0.0.28:/root/dpt.img ./
Restore the MBR partition table:
dd if=dpt.img of=/dev/sda bs=1 seek=446
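Before exiting, the restore can be verified by re-reading the on-disk table (optional):
fdisk -l /dev/sda    # the original partition layout should be listed again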
exit    # leave the rescue shell and reboot; the system should start normally again
2. Summarize the RAID levels, their nested combinations, and their performance differences.
- Common levels: RAID 0, RAID 1, RAID 5, RAID 6, RAID 01, RAID 10, RAID 50, RAID 60 (see the mdadm sketch after this list)
RAID 0: improved read and write performance; usable space: N × min(S1, S2, ...); no fault tolerance; minimum disks: 2, 2+
RAID 1: improved read performance, slightly reduced write performance; usable space: 1 × min(S1, S2, ...); redundant; minimum disks: 2, 2+
RAID 5: improved read and write performance; usable space: (N-1) × min(S1, S2, ...); fault tolerant: survives at most 1 failed disk; minimum disks: 3, 3+
RAID 6: improved read and write performance; usable space: (N-2) × min(S1, S2, ...); fault tolerant: survives at most 2 failed disks; minimum disks: 4, 4+
RAID 01: improved read and write performance; disks are first striped as RAID 0, and the stripe sets are then mirrored as RAID 1; minimum disks: 4
RAID 10: improved read and write performance; disks are first mirrored as RAID 1, and the mirror pairs are then striped as RAID 0; minimum disks: 4
RAID 50: improved read and write performance; disks are first grouped into RAID 5 arrays, which are then striped as RAID 0; minimum disks: 6
RAID 60: improved read and write performance; disks are first grouped into RAID 6 arrays, which are then striped as RAID 0; minimum disks: 8
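The nested levels can be built with mdadm; here is a minimal sketch of RAID 10 on four spare disks (the device names /dev/sdc through /dev/sdf are hypothetical, and mdadm builds RAID 10 natively rather than by stacking two arrays by hand):
mdadm -C /dev/md0 -l 10 -n 4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
cat /proc/mdstat     # watch the initial resync
mdadm -D /dev/md0    # with four equal disks, usable space is 2 × min(S1, ..., S4)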
3. Create a 2 GB filesystem: block size 2048 bytes, 1% reserved space, filesystem type ext4, volume label TEST; the partition must be mounted automatically at boot on /test, with the acl mount option enabled by default.
- Create a 2 GB partition on /dev/sdb
[root@node1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xb84667c5.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039): +2G
Created a new partition 1 of type 'Linux' and of size 2 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 20G 0 disk
└─sdb1 8:17 0 2G 0 part
- Format the new partition
[root@node1 ~]# mkfs.ext4 -b 2048 -m 1 -L TEST /dev/sdb1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 1048576 2k blocks and 131072 inodes
Filesystem UUID: 717450f5-dea9-4dea-8868-b09c739e6170
Superblock backups stored on blocks:
16384, 49152, 81920, 114688, 147456, 409600, 442368, 802816
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
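The requested parameters can be double-checked against the superblock (optional; with 1048576 blocks, the reserved count should be about 10485, i.e. 1%):
tune2fs -l /dev/sdb1 | grep -Ei 'volume name|block size|reserved block count'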
- Mount the partition on the directory and configure automatic mounting at boot
[root@node1 ~]# mkdir /test
[root@node1 ~]# echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /test ext4 defaults,acl 0 0" >> /etc/fstab
[root@node1 ~]# mount -a
[root@node1 ~]# lsblk /dev/sdb1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb1 8:17 0 2G 0 part /test
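To confirm the acl mount option actually works, set and read back an ACL entry (a quick test; the file f1 and the nobody user are just examples):
touch /test/f1
setfacl -m u:nobody:rw /test/f1
getfacl /test/f1    # should show a user:nobody:rw- entry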
4. Create a 20 GB VG named testvg composed of at least two PVs, with a PE size of 16 MB; then create a 5 GB logical volume testlv in that volume group and mount it on /users.
[root@node1 ~]# pvcreate /dev/sdb /dev/sdc
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
[root@node1 ~]# vgcreate testvg -s 16M /dev/sdb /dev/sdc
Volume group "testvg" successfully created
[root@node1 ~]# vgdisplay testvg
--- Volume group ---
VG Name testvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size <39.97 GiB
PE Size 16.00 MiB
Total PE 2558
Alloc PE / Size 320 / 5.00 GiB
Free PE / Size 2238 / <34.97 GiB
VG UUID uC0Chs-xn91-UmLh-pOhe-u676-YSs8-uGP4oH
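As a sanity check on the numbers: 2558 total PEs × 16 MiB ≈ 39.97 GiB, matching the VG Size line (each 20 GiB PV loses a little capacity to LVM metadata), so the two-disk VG comfortably exceeds the required 20 GB.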
[root@node1 ~]# lvcreate -n testlv -L 5G testvg
Logical volume "testlv" created.
[root@node1 ~]# lvdisplay /dev/testvg/testlv
--- Logical volume ---
LV Path /dev/testvg/testlv
LV Name testlv
VG Name testvg
LV UUID mQWhVL-qRIJ-Tk2P-VcD3-QKVJ-m5TN-xqgoSS
LV Write Access read/write
LV Creation host, time node1, 2021-03-28 21:44:01 +0800
LV Status available
# open 1
LV Size 5.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
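The extent math lines up as well: 5 GiB ÷ 16 MiB per PE = 320 extents, matching the Current LE value above.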
[root@node1 ~]# mkfs.ext4 /dev/testvg/testlv
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 522f0a3b-110b-49ef-a7bc-3638932c0fa3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
[root@node1 ~]# mkdir /users
[root@node1 ~]# mount /dev/testvg/testlv /users
[root@node1 ~]# mount |grep users
/dev/mapper/testvg-testlv on /users type ext4 (rw,relatime)
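The manual mount above will not survive a reboot; to make it persistent, an fstab entry analogous to the /test one could be added (a sketch, not part of the original transcript):
echo "/dev/testvg/testlv /users ext4 defaults 0 0" >> /etc/fstab
mount -a    # verify the entry parses cleanly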