2021-01-26 RAID and LVM

RAID

Redundant Array of Independent Disks

RAID ("Redundant Array of Inexpensive Disks" or "Redundant Array of Independent Disks") is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as "single large expensive disk" (SLED).

Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.


Frequently used RAID

refs: https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0

RAID Level   Min Disks   Available Space   Performance   Security   Feature
0            2           N                 N             low        max speed/performance, no redundancy
1            2           N/2               N             high       max security, poor space efficiency
5            3           N-1               N-1           medium     medium security; data survives one drive failure
10           4           N/2               N/2           high       combination of 1+0: max speed plus mirrored redundancy
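
As a quick worked example of the "Available Space" column (purely illustrative, assuming four 4 TB disks, so N = 4):

# Illustrative capacity arithmetic for N = 4 disks of 4 TB each
DISKS=4; SIZE_TB=4
echo "RAID 0 : $(( DISKS * SIZE_TB )) TB usable"        # N    -> 16 TB
echo "RAID 1 : $(( DISKS * SIZE_TB / 2 )) TB usable"    # N/2  ->  8 TB
echo "RAID 5 : $(( (DISKS - 1) * SIZE_TB )) TB usable"  # N-1  -> 12 TB
echo "RAID 10: $(( DISKS * SIZE_TB / 2 )) TB usable"    # N/2  ->  8 TB
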
  • RAID 0



RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail; as a result of having data striped across all disks, the failure will result in total data loss. This configuration is typically implemented having speed as the intended goal.[2][3] RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks.[4]
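
A minimal sketch of building a RAID 0 array with mdadm (device names /dev/sdb and /dev/sdc and the mount point are placeholders; the full mdadm workflow is demonstrated in the RHEL 8 section below):

# Sketch: two-disk RAID 0 (stripe); device names are examples only
mdadm -Cv /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0              # format the striped array
mount /dev/md0 /mnt/raid0       # assumes the mount point already exists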

  • RAID 1



RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.[13][14]
The array will continue to operate so long as at least one member drive is operational.[15]
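
A corresponding sketch for a two-disk RAID 1 mirror (again, device names are placeholders):

# Sketch: two-disk RAID 1 (mirror); the array stays usable as long as one member is healthy
mdadm -Cv /dev/md0 -l 1 -n 2 /dev/sdb /dev/sdc
mdadm -D /dev/md0               # both members should show "active sync"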

  • RAID 5



RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.[5] RAID 5 requires at least three disks.[22]

  • RAID 1+0



RAID 10, also called RAID 1+0 and sometimes RAID 1&0, is similar to RAID 01 with an exception that two used standard RAID levels are layered in the opposite order; thus, RAID 10 is a stripe of mirrors.[3]

RAID 10, as recognized by the storage industry association and as generally implemented by RAID controllers, is a RAID 0 array of mirrors, which may be two- or three-way mirrors,[6] and requires a minimum of four drives. However, a nonstandard definition of "RAID 10" was created for the Linux MD driver; Linux "RAID 10" can be implemented with as few as two disks. Implementations supporting two disks such as Linux RAID 10 offer a choice of layouts.[7] Arrays of more than four disks are also possible.

According to manufacturer specifications and official independent benchmarks, in most cases RAID 10[8] provides better throughput and latency than all other RAID levels[9] except RAID 0 (which wins in throughput).[10] Thus, it is the preferable RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.[11]

About RAID 5

The benefits of RAID 5 primarily come from its combined use of disk striping and parity. Striping is the process of storing consecutive segments of data across different storage devices, and allows for better throughput and performance. Disk striping alone does not make an array fault tolerant, however. Disk striping combined with parity provides RAID 5 with redundancy and reliability.

RAID 5 uses parity instead of mirroring for data redundancy. When data is written to a RAID 5 array, the system calculates parity and writes it alongside the data. While mirroring maintains full copies of the data in each volume to use in case of failure, RAID 5 can rebuild a failed drive from the parity data, which is not kept on any single fixed drive.

RAID 5 groups have a minimum of three hard disk drives (HDDs) and no maximum. Because the parity data is spread across all drives, RAID 5 is considered one of the most secure RAID configurations.
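
A tiny illustration of how parity recovers data: parity is the XOR of the data blocks, so any single missing block can be recomputed from the survivors (byte values are used here for simplicity; real RAID 5 XORs whole stripes):

# Parity illustration with two data bytes; real arrays XOR entire blocks
D1=$(( 0x5A )); D2=$(( 0xC3 ))
P=$(( D1 ^ D2 ))                              # parity stored on the third drive
printf 'parity      = 0x%02X\n' "$P"          # 0x99
printf 'rebuilt D1  = 0x%02X\n' $(( P ^ D2 )) # 0x5A, recovered without the failed drive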

About JBOD

JBOD (abbreviated from "Just a Bunch Of Disks"/"Just a Bunch Of Drives") is an architecture using multiple hard drives exposed as individual devices. Hard drives may be treated independently or may be combined into one or more logical volumes using a volume manager like LVM or mdadm, or a device-spanning filesystem like btrfs; such volumes are usually called "spanned" or "linear | SPAN | BIG".[2][3][4] A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume.[5][6] Redundancy for resilience and/or bandwidth improvement may be provided, in software, at a higher level.
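
As a hedged sketch, a spanned (JBOD-style) volume can be built with LVM, for example (device names and the volume/group names here are placeholders; the LVM commands themselves are covered in detail later in this note):

# Sketch: span two disks into one linear, non-redundant volume with LVM
pvcreate /dev/sdb /dev/sdc
vgcreate vg_jbod /dev/sdb /dev/sdc
lvcreate -n lv_span -l 100%FREE vg_jbod   # one linear LV across both disks
mkfs.ext4 /dev/vg_jbod/lv_span            # losing either disk loses the whole volume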

RAID 10 vs RAID 01

Performance on both RAID 10 and RAID 01 will be the same.
The storage capacity on these will be the same.
The main difference is the fault tolerance level.

For RAID 10, the array survives the failure of any single disk, and often multiple failures as long as no mirror pair loses both of its members. For RAID 01, a single disk failure degrades an entire stripe set, so one further failure in the remaining set loses the whole array. The difference comes from the layering order: RAID 10 mirrors first and then stripes, whereas RAID 01 stripes first and then mirrors.


Deploy RAID in RHEL 8

  • mdadm

manage MD devices aka Linux Software RAID
Multiple Disk and Device Management

Parameter   Purpose
-a          add a device to the array (hot-add)
-n          number of active devices
-l          RAID level
-C          create a new array
-v          verbose output
-f          mark a device as faulty (simulate a failure)
-r          remove a device from the array
-Q          query brief info about the array
-D          display detailed array info
-S          stop an active array
  • RAID 10 demo
  1. Create the RAID 10 array
    -C creates the array (the device name is given as the first argument), -v shows verbose output
    -n sets the number of disks, -l sets the RAID level
    RAID 10 first builds RAID 1 mirror pairs, then stripes across them (RAID 0) for speed
[root@linuxprobe ~]# mdadm -Cv /dev/md0 -n 4 -l 10 /dev/sdc /dev/sdd /dev/sde /dev/sdf
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
  2. Check brief RAID info
[root@linuxprobe ~]# mdadm -Q /dev/md0
/dev/md0: 9.99GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.
  3. Display detailed info
[root@linuxprobe md0]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jan 26 22:14:11 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Jan 26 23:17:15 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : linuxprobe.com:0  (local to host linuxprobe.com)
              UUID : b85eb10d:234e618d:1daf12f8:b034288e
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync set-A   /dev/sdc
       1       8       48        1      active sync set-B   /dev/sdd
       2       8       64        2      active sync set-A   /dev/sde
       3       8       80        3      active sync set-B   /dev/sdf

  4. Format the RAID device
[root@linuxprobe ~]# mkfs.ext4 /dev/md0
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: b99f77f3-a6ea-4e26-879b-e1eb0a7d8d62
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
  5. Mount the RAID device
[root@linuxprobe ~]# mount /dev/md0 /media/raid/md0/
[root@linuxprobe md0]# ls -la /media/raid/md0/
total 20
drwxr-xr-x. 3 root root  4096 Jan 26 23:14 .
drwxr-xr-x. 3 root root    17 Jan 26 23:16 ..
drwx------. 2 root root 16384 Jan 26 23:14 lost+found
  6. Add the mount to fstab and check the usage (a quick sanity check follows below)
echo "/dev/md0 /media/raid/md0 ext4 defaults 0 0" >> /etc/fstab
df -h /dev/md0
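
Before relying on the fstab entry across reboots, it is worth confirming it actually mounts; a quick sketch:

# Optional sanity check of the new fstab entry
umount /media/raid/md0
mount -a                        # remounts everything listed in /etc/fstab, including /dev/md0
df -h /media/raid/md0           # confirm the array is back with the expected size
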
  7. Simulate a failed device
[root@linuxprobe md0]# mdadm -f /dev/md0 /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0

[root@linuxprobe md0]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jan 26 22:14:11 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Jan 26 23:32:29 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : linuxprobe.com:0  (local to host linuxprobe.com)
              UUID : b85eb10d:234e618d:1daf12f8:b034288e
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       48        1      active sync set-B   /dev/sdd
       2       8       64        2      active sync set-A   /dev/sde
       3       8       80        3      active sync set-B   /dev/sdf

       0       8       32        -      faulty   /dev/sdc


[root@linuxprobe md0]# mdadm /dev/md0 -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0


# add a new hard disk to the RAID array
[root@linuxprobe md0]# mdadm /dev/md0 -a /dev/sdg;mdadm -D /dev/md0
mdadm: Cannot open /dev/sdg: Device or resource busy
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jan 26 22:14:11 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Jan 26 23:55:16 2021
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 19% complete

              Name : linuxprobe.com:0  (local to host linuxprobe.com)
              UUID : b85eb10d:234e618d:1daf12f8:b034288e
            Events : 25

    Number   Major   Minor   RaidDevice State
       4       8       96        0      spare rebuilding   /dev/sdg
       1       8       48        1      active sync set-B   /dev/sdd
       2       8       64        2      active sync set-A   /dev/sde
       3       8       80        3      active sync set-B   /dev/sdf

# the new hard disk is now part of the RAID array and fully synced
...
    Number   Major   Minor   RaidDevice State
       4       8       96        0      active sync set-A   /dev/sdg
       1       8       48        1      active sync set-B   /dev/sdd
       2       8       64        2      active sync set-A   /dev/sde
       3       8       80        3      active sync set-B   /dev/sdf

  • RAID 5 demo (3 active disks plus 1 hot spare)
  1. Stop and reset the previous RAID array
    refs: https://www.digitalocean.com/community/tutorials/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-16-04#:~:text=remove%20%2Fdev%2Fmd0-,Copy,as%20part%20of%20an%20array.
[root@linuxprobe ~]# umount /dev/md0
[root@linuxprobe ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0


# Once the array itself is removed, you should use mdadm --zero-superblock 
# on each of the component devices. This will erase the md superblock, a 
# header used by mdadm to assemble and manage the component devices 
# as part of an array. If this is still present, it may cause problems when 
# trying to reuse the disk for other purposes.

[root@linuxprobe ~]# mdadm --zero-superblock /dev/sdc
[root@linuxprobe ~]# mdadm --zero-superblock /dev/sdd
[root@linuxprobe ~]# mdadm --zero-superblock /dev/sde
[root@linuxprobe ~]# mdadm --zero-superblock /dev/sdf
[root@linuxprobe ~]# mdadm --zero-superblock /dev/sdg
  2. Create the new RAID 5 array
    -Cv creates the new /dev/md1 array with verbose output
    -n 3 uses three active disks, -l 5 sets RAID level 5, -x 1 adds one spare disk
[root@linuxprobe ~]# mdadm -Cv /dev/md1 -n 3 -l 5 -x 1 /dev/sdc /dev/sdd /dev/sde /dev/sdf
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
  3. Check the new RAID 5 array
[root@linuxprobe ~]# mdadm -D /dev/md1

...
  Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       3       8       80        -      spare   /dev/sdf
  4. Format the new RAID device, add it to fstab, and mount it
[root@linuxprobe ~]# mkfs.ext4 /dev/md1
mke2fs 1.44.3 (10-July-2018)
/dev/md1 contains a ext4 file system
    last mounted on Tue Jan 26 23:16:31 2021
Proceed anyway? (y,N) yes
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: 935b6859-0eca-49b6-bffc-5934de777642
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@linuxprobe ~]# echo "/dev/md1 /media/raid/md1 ext4  defaults 0 0" >> /etc/fstab

[root@linuxprobe ~]# mount /dev/md1 /media/raid/md1/
[root@linuxprobe ~]# tree /media/raid/md1/
/media/raid/md1/
└── lost+found

1 directory, 0 files
  5. Simulate a failed hard disk in the RAID 5 array
[root@linuxprobe ~]# mdadm /dev/md1 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md1

  Number   Major   Minor   RaidDevice State
       3       8       80        0      active sync   /dev/sdf
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       0       8       32        -      faulty   /dev/sdc

# add a new spare hard disk to the array
[root@linuxprobe ~]# mdadm /dev/md1 -a /dev//sdg
mdadm: added /dev//sdg


# check the raid status again
  Number   Major   Minor   RaidDevice State
       3       8       80        0      active sync   /dev/sdf
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       0       8       32        -      faulty   /dev/sdc
       5       8       96        -      spare   /dev/sdg
  6. Stop and reassemble the array, then add a spare disk (see the mdadm.conf note below)
umount /dev/md1
mdadm --stop /dev/md1
mdadm --assemble /dev/md1 /dev/sdd /dev/sde /dev/sdf
mdadm /dev/md1 --add /dev/sdg
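
To have the array assembled automatically at boot, the scan output is usually appended to /etc/mdadm.conf on RHEL 8 (a sketch; the config path may differ on other distributions):

# Persist the array definition so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf             # should now contain an ARRAY line for /dev/md1
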
  7. Remove the RAID array entirely
[root@linuxprobe ~]# umount /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -f /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -f /dev/sdf
mdadm: set /dev/sdf faulty in /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -f /dev/sdg
mdadm: set /dev/sdg faulty in /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -r /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md1
[root@linuxprobe ~]# mdadm /dev/md1 -r /dev/sde /dev/sdf /dev/sdg
mdadm: hot removed /dev/sde from /dev/md1
mdadm: hot removed /dev/sdf from /dev/md1
mdadm: hot removed /dev/sdg from /dev/md1
[root@linuxprobe ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 27 00:17:40 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 0
       Persistence : Superblock is persistent

       Update Time : Wed Jan 27 00:51:20 2021
             State : clean, FAILED 
    Active Devices : 0
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
[root@linuxprobe ~]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
[root@linuxprobe ~]# mdadm --remove /dev/md1
mdadm: error opening /dev/md1: No such file or directory
[root@linuxprobe ~]# mdadm --zero-superblock /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

LVM

refs: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)

In Linux, Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume.[3][4][5]

LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement, repartitioning and backup.


The core advantage of LVM is that logical volumes are scalable: their capacity can be changed on demand.

Frequently used commands
Action    Physical volume   Volume group   Logical volume
scan      pvscan            vgscan         lvscan
create    pvcreate          vgcreate       lvcreate
display   pvdisplay         vgdisplay      lvdisplay
remove    pvremove          vgremove       lvremove
extend    -                 vgextend       lvextend
reduce    -                 vgreduce       lvreduce
  • Create physical volume
[root@linuxprobe ~]# pvcreate /dev/sdh /dev/sdi
  Physical volume "/dev/sdh" successfully created.
  Physical volume "/dev/sdi" successfully created.
  • Create volume group
[root@linuxprobe ~]# vgcreate vg00 /dev/sdh /dev/sdi
  Volume group "vg00" successfully created
  • Display and Check the volume group info
[root@linuxprobe ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.99 GiB
  PE Size               4.00 MiB
  Total PE              2558
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2558 / 9.99 GiB
  VG UUID               t1PDF9-HG0J-1si0-7mK2-YJfz-gvZ3-tjEfFH
  • Create logical volume
[root@linuxprobe ~]# lvcreate -n lv00 -l 37 vg00
[root@linuxprobe ~]# lvdisplay
  • Format and Mount
[root@linuxprobe ~]# mkfs.ext4 /dev/vg00/lv00
[root@linuxprobe ~]# mount /dev/mapper/vg00-lv00 /media/lvm/lv00/

# write it to fstab
[root@linuxprobe ~]# echo "UUID=d586256b-80f3-4d4d-b7b1-6decbd472896 /media/lvm/lv00 ext4 defaults 0 0" >> /etc/fstab
  • Extend the logical volume and check file system integrity
[root@linuxprobe ~]# lvextend -L 290M /dev/mapper/vg00-lv00

[root@linuxprobe ~]# e2fsck -f /dev/mapper/vg00-lv00 
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg00-lv00: 12/38000 files (0.0% non-contiguous), 10455/151552 blocks
  • Resize the file system to match the extended logical volume
[root@linuxprobe ~]# resize2fs /dev/mapper/vg00-lv00 
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/mapper/vg00-lv00 to 299008 (1k) blocks.
The filesystem on /dev/mapper/vg00-lv00 is now 299008 (1k) blocks long.
  • Mount again and Check the file system
mount -a
df -h /dev/mapper/vg00-lv00
/dev/mapper/vg00-lv00  279M  2.1M  259M   1% /media/lvm/lv00
  • Reduce logical volume
umount /dev/mapper/vg00-lv00
e2fsck -f /dev/mapper/vg00-lv00
resize2fs /dev/mapper/vg00-lv00 120M
resize2fs 1.44.3 (10-July-2018)
lvreduce -L 120M /dev/mapper/vg00-lv00 
  • Mount and Check again
mount -a
df -h /dev/mapper/vg00-lv00
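
As an aside, recent lvm2 versions can shrink (or grow) the file system and the logical volume in one step with --resizefs, which runs the fsck and resize2fs steps for you (a sketch, same volume as above):

# Alternative: resize the file system together with the LV in one command
umount /dev/mapper/vg00-lv00
lvresize --resizefs -L 120M /dev/mapper/vg00-lv00
mount -a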

lvm snapshot

In this demo the snapshot volume is created with the same size as the source logical volume, so it can absorb any amount of change.
After a merge (restore), the snapshot volume is deleted automatically.

  • Check the VG space
[root@linuxprobe ~]# vgdisplay vg00
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.99 GiB
  PE Size               4.00 MiB
  Total PE              2558
  Alloc PE / Size       30 / 120.00 MiB
  Free  PE / Size       2528 / <9.88 GiB
  VG UUID               t1PDF9-HG0J-1si0-7mK2-YJfz-gvZ3-tjEfFH
  • Create a snapshot with the -s parameter
[root@linuxprobe ~]# lvcreate -L 120M -s -n vg00-snap /dev/mapper/vg00-lv00
  Logical volume "vg00-snap" created.
  • Check the snapshot
[root@linuxprobe ~]# lvdisplay vg00/lv00
  --- Logical volume ---
  LV Path                /dev/vg00/lv00
  LV Name                lv00
  VG Name                vg00
  LV UUID                2tTh3B-dCgr-nhB9-jfDp-Shvn-ics5-sjtF0T
  LV Write Access        read/write
  LV Creation host, time linuxprobe.com, 2021-01-27 07:53:41 -0500
  ######################################################
  LV snapshot status     source of
                         vg00-snap [active]
  ######################################################
  LV Status              available
  # open                 1
  ######################################################
  LV Size                120.00 MiB
  ######################################################
  Current LE             30
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
  • Write 100 MB of zeroes directly to lv00 to exercise the snapshot
[root@linuxprobe ~]# dd if=/dev/zero of=/dev/vg00/lv00 count=1 bs=100M
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.405763 s, 258 MB/s

lvdisplay

  LV Size                120.00 MiB
  Current LE             30
  COW-table size         120.00 MiB
  COW-table LE           30
  Allocated to snapshot  83.67%
  Snapshot chunk size    4.00 KiB
  Segments               1

[root@linuxprobe ~]# umount /dev/mapper/vg00-lv00 
[root@linuxprobe ~]# lvconvert --merge /dev/vg00/vg00-snap 
  Merging of volume vg00/vg00-snap started.
  vg00/lv00: Merged: 20.52%
...
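
Once the merge reaches 100%, the snapshot is gone and the volume can be mounted again to confirm the contents from before the dd test are back (a quick verification sketch):

# After the merge completes, remount and verify the restored file system
mount -a
df -h /media/lvm/lv00
ls /media/lvm/lv00              # the pre-dd contents (e.g. lost+found) should be back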
  • Remove the logical volume, volume group, and physical volumes
umount /dev/mapper/vg00-lv00
lvremove /dev/mapper/vg00-lv00
vgremove vg00
pvremove /dev/sdh /dev/sdi

lvdisplay
vgdisplay
pvdisplay