Raid1 with LVM from scratch
This manual describes how to create a RAID1 with LVM. In this tutorial the two disks appear in the system as /dev/sdX and /dev/sdY.
Prerequisites
- Two empty HDDs of the same capacity
- Kernel with LVM (device mapper) support
- lvm2
- parted
Software
Install lvm2 package
root #
emerge sys-fs/lvm2
Install parted
root #
emerge sys-block/parted
Create disk partitions
Data on /dev/sdX and /dev/sdY will be lost. Be careful with the disk names.
Create partitions on both disks with parted
Start parted on the /dev/sdX disk
root #
parted -a optimal /dev/sdX
Set units to MiB
(parted)
unit mib
Create GPT table on disk
(parted)
mklabel gpt
Create a primary partition using all available space
(parted)
mkpart primary 1 -1
Set partition name to raiddata0
(parted)
name 1 raiddata0
Add lvm flag to new partition
(parted)
set 1 lvm on
Result should be:
(parted)
print
Model: ATA ST6000VN0033-2EE (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name       Flags
 1      1049kB  6001GB  6001GB               raiddata0  lvm
Execute the same parted commands for /dev/sdY (or use the scripted variant sketched below).
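The partitioning of the second disk can also be done non-interactively. A minimal sketch, assuming the second partition should be named raiddata1 (the name that appears later in the blkid output of this article):
root #
parted -a optimal --script /dev/sdY unit mib mklabel gpt mkpart primary 1 -1 name 1 raiddata1 set 1 lvm on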
LVM
The next steps are to create physical volumes on both disks, add both physical volumes to the same volume group, and create a logical volume with RAID1 logic.
Physical volume
Create an LVM physical volume on the first partition of the first disk
root #
lvm pvcreate /dev/sdX1
Create an LVM physical volume on the first partition of the second disk
root #
lvm pvcreate /dev/sdY1
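Optionally verify that both physical volumes were created; pvs is part of lvm2, and the exact output will differ per system:
root #
pvs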
Volume group
Add both physical volumes to a single volume group named *raid0vg0*
root #
vgcreate raid0vg0 /dev/sdX1 /dev/sdY1
Now both disks are in the same volume group.
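To confirm, list the volume group and check that it contains two physical volumes (a quick sanity check; output will vary):
root #
vgs -o vg_name,pv_count,vg_size raid0vg0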
Logical Volume
Create a logical volume named *raid0lv0* on volume group *raid0vg0* with RAID1 logic, using all available space. --nosync skips the initial RAID1 synchronization (safe here because this is a new array without any data on it).
root #
lvcreate --mirrors 1 --type raid1 -l 100%FREE --nosync -n raid0lv0 raid0vg0
The RAID1 is now created across both disks /dev/sdX and /dev/sdY. The last steps are to create a filesystem and mount it on boot; see the sections below.
EXT4 Filesystem (unencrypted)
Create a filesystem on volume group *raid0vg0*, logical volume *raid0lv0*
root #
mkfs.ext4 /dev/raid0vg0/raid0lv0
Done
Please do not forget to add the lvm2 service to the boot runlevel: rc-update add lvm2 boot
Your kernel must have LVM (device mapper) support either compiled in or provided as modules loaded from an initrd. See the LVM article.
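One way to check whether the running kernel already has the required device mapper support is to inspect /proc/config.gz (only available when the kernel was built with IKCONFIG support); the option names below are the usual ones for device mapper and its RAID target:
root #
zgrep -E 'CONFIG_BLK_DEV_DM|CONFIG_DM_RAID' /proc/config.gz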
Mount filesystem on boot
Run blkid to find the UUID of the ext4 filesystem on our LVM RAID1
root #
blkid
...
/dev/mapper/raid0vg0-raid0lv0_rimage_0: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0_rimage_1: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
...
- UUID="10092fa9-43f5-421e-a0a1-ca96323c6388"* is id of our ext4 filesystem on raid1. Last thing is to add fs uuid to fstab
Create mountpoint /mnt/data
root #
mkdir /mnt/data
Add the mount entry to fstab
/etc/fstab
<syntaxhighlight lang="bash">
...
UUID=10092fa9-43f5-421e-a0a1-ca96323c6388   /mnt/data   ext4   defaults   0 2
...
</syntaxhighlight>
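To test the new fstab entry without rebooting, mount it by its mount point and check the result (df is only used here as a quick sanity check):
root #
mount /mnt/data
root #
df -h /mnt/data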
EXT4 Filesystem (encrypted with LUKS)
Create a LUKS AES-encrypted container on top of volume group *raid0vg0*, in logical volume *raid0lv0* (RAID1).
Please see Full Disk Encryption From Scratch Simplified.
root #
cryptsetup luksFormat -c aes-xts-plain64:sha256 -s 256 /dev/raid0vg0/raid0lv0
Map the encrypted LUKS device as *raid0lv0encripted*
root #
cryptsetup luksOpen /dev/raid0vg0/raid0lv0 raid0lv0encripted
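Optionally confirm that the mapping exists (raid0lv0encripted is the device mapper name chosen above):
root #
cryptsetup status raid0lv0encripted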
Create an EXT4 filesystem on the LUKS device
root #
mkfs.ext4 /dev/mapper/raid0lv0encripted
Mount the LUKS encrypted device on boot from LVM RAID1
First, create a directory that will contain the keyfiles used for encrypting/decrypting devices
root #
mkdir /etc/keyfiles
root #
chmod 0400 /etc/keyfiles
Create a 4 KiB keyfile named main
root #
dd if=/dev/urandom of=/etc/keyfiles/main bs=1024 count=4
root #
chmod 0400 /etc/keyfiles/main
Add the main keyfile to the list of keys that can decrypt the disk (technically: add the keyfile to a LUKS key slot)
root #
cryptsetup luksAddKey /dev/raid0vg0/raid0lv0 /etc/keyfiles/main
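To verify that the keyfile was added, dump the LUKS header and check that a second key slot is now in use:
root #
cryptsetup luksDump /dev/raid0vg0/raid0lv0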
Find the UUID of the LUKS device (it is the same device as LV raid0lv0)
root #
blkid
/dev/sdc1: UUID="OxJaqA-yMAP-sOjE-T5BR-H9Lp-rtPN-pl7rFC" TYPE="LVM2_member" PARTLABEL="raiddata1" PARTUUID="9c794e91-22a8-4b58-bedd-c3f656d82bd9"
/dev/sdb1: UUID="gNcHvg-Rocv-pFFc-VzvF-49tX-D1d3-odSe2h" TYPE="LVM2_member" PARTLABEL="raiddata0" PARTUUID="70121885-4a45-4a2b-8d3e-49edd8fffd34"
/dev/mapper/raid0vg0-raid0lv0_rimage_0: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0_rimage_1: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0: UUID="cd5740a1-b642-4359-a0b9-af84a8f01092" TYPE="crypto_LUKS"
/dev/mapper/raid0lv0encripted: UUID="fc7ec587-35e4-4726-815d-e1693cd89b70" TYPE="ext4"
In our case it is UUID="cd5740a1-b642-4359-a0b9-af84a8f01092"
Add the following to /etc/conf.d/dmcrypt
/etc/conf.d/dmcrypt
target='raid0lv0encripted'
source=UUID='cd5740a1-b642-4359-a0b9-af84a8f01092'
key='/etc/keyfiles/main'
Add dmcrypt to the boot runlevel
root #
rc-update add dmcrypt boot
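To double-check that the service is registered with OpenRC, list the boot runlevel:
root #
rc-update show boot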
Create mountpoint /mnt/data
root #
mkdir /mnt/data
Find EXT4 filesystem UUID
root #
blkid
/dev/sdc1: UUID="OxJaqA-yMAP-sOjE-T5BR-H9Lp-rtPN-pl7rFC" TYPE="LVM2_member" PARTLABEL="raiddata1" PARTUUID="9c794e91-22a8-4b58-bedd-c3f656d82bd9"
/dev/sdb1: UUID="gNcHvg-Rocv-pFFc-VzvF-49tX-D1d3-odSe2h" TYPE="LVM2_member" PARTLABEL="raiddata0" PARTUUID="70121885-4a45-4a2b-8d3e-49edd8fffd34"
/dev/mapper/raid0vg0-raid0lv0_rimage_0: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0_rimage_1: UUID="10092fa9-43f5-421e-a0a1-ca96323c6388" TYPE="ext4"
/dev/mapper/raid0vg0-raid0lv0: UUID="cd5740a1-b642-4359-a0b9-af84a8f01092" TYPE="crypto_LUKS"
/dev/mapper/raid0lv0encripted: UUID="fc7ec587-35e4-4726-815d-e1693cd89b70" TYPE="ext4"
In our case it is UUID="fc7ec587-35e4-4726-815d-e1693cd89b70"
Add the mount entry to fstab
/etc/fstab
<syntaxhighlight lang="bash">
...
UUID=fc7ec587-35e4-4726-815d-e1693cd89b70   /mnt/data   ext4   defaults   0 2
...
</syntaxhighlight>
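Since the LUKS device is already open at this point, the entry can be tested right away without rebooting (findmnt is part of util-linux):
root #
mount /mnt/data
root #
findmnt /mnt/data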
Check LVM RAID1 status
To check the LVM RAID status for volume group raid0vg0:
root #
lvs -a -o name,copy_percent,devices raid0vg0
LV                   Cpy%Sync  Devices
raid0lv0             100.00    raid0lv0_rimage_0(0),raid0lv0_rimage_1(0)
[raid0lv0_rimage_0]            /dev/sdc1(1)
[raid0lv0_rimage_1]            /dev/sdb1(1)
[raid0lv0_rmeta_0]             /dev/sdc1(0)
[raid0lv0_rmeta_1]             /dev/sdb1(0)
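More detail about the RAID state can be requested through additional lvs reporting fields; the field names below are standard lvm2 fields, though availability may vary slightly between versions:
root #
lvs -a -o name,sync_percent,raid_sync_action,raid_mismatch_count raid0vg0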
Performance tuning
It is possible to tune RAID1 performance. By default, both disks in a RAID1 are used equally for reads and writes. If one of the disks is much slower than the other, performance can be improved by excluding the slowest drive from reads (marking it write-mostly).
In this scenario the slowest drive only receives writes, while the faster drive serves reads as well as writes.
root #
lvchange --raidwritemostly /dev/sdb1 raid0vg0
Logical volume raid0vg0/raid0lv0 changed.
Here /dev/sdb1 is the physical volume backing the slowest drive, and raid0vg0 is the volume group.
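A sketch for checking and reverting the setting, assuming the LV path raid0vg0/raid0lv0: on recent lvm2 versions the write-mostly image is flagged with a w in the ninth lv_attr character, and the same option accepts a :n suffix on the physical volume to clear the flag.
root #
lvs -a -o name,lv_attr,devices raid0vg0
root #
lvchange --raidwritemostly /dev/sdb1:n raid0vg0/raid0lv0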
See also
- LVM — allows administrators to create meta devices that provide an abstraction layer between a file system and the physical storage that is used underneath.
- Full Disk Encryption From Scratch Simplified
External resources
- https://blog.programster.org/create-raid-with-lvm — Another example of creating RAID1 on LVM