This entry explains the steps I followed to set up storage for the BackupPC software.
Obviously, the right storage setup depends on your hardware and your needs. In my case I decided to use my HP ProLiant MicroServer Gen8 G1610T as my backup server, and I wanted to be safe against data loss caused by hardware failures, so I decided to set up two internal Western Digital Red hard disks as a RAID 1 device.
To avoid the mistake I made, I recommend that you carefully read the specifications of your hardware and even make some checks with dd…
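For example, a quick and crude sequential read check with dd could look like the following; the device names are illustrative, and iflag=direct bypasses the page cache so the figures reflect the disk rather than RAM:

# dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
# dd if=/dev/sdc of=/dev/null bs=1M count=1024 iflag=direct

Comparing the throughput reported for each disk would have exposed the bay speed difference described below.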
After installing CentOS 7 on my HP server I read the following in a blog entry:

“The Gen8 model’s 4 bays are split: Bays 1 and 2 are SATA3 6 Gbps, while Bays 3 and 4 are SATA2 3 Gbps.”
Unfortunately, I discovered these specifications too late, when the OS was already installed on /dev/sda and the two disks intended for the RAID device were sitting in the second and third bays. That means different transfer rates, so the RAID device will work at the speed of the slower link (I assume; I have not measured the I/O speed of the RAID device). This command shows the negotiated link speed of each port:
# dmesg | grep -i sata | grep 'link up'
[    1.838228] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.318265] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.794247] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[    3.274246] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[    3.754246] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

(The last link, ata5, is the ODD port, i.e. the optical disk drive bay.)
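If you are not sure which sdX device sits behind which port, the persistent by-path names map each block device to its SATA link (an optional extra check; the exact output depends on the machine):

# ls -l /dev/disk/by-path/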
I don’t have much free time, so I decided to live with this issue rather than reinstall the server; after all, it is a home server. But I learned an important lesson: you need to know your hardware perfectly well before installing anything on it. With all of that said, these are the steps I followed:
Selecting physical devices:
/dev/sdb
/dev/sdc
Creating partitions
Let’s use fdisk to create our partitions:
# fdisk /dev/sdb
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).

First step, create the partition:

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-3907029167, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-3907029167, default 3907029167):
Using default value 3907029167
Partition 1 of type Linux and of size 1.8 TiB is set

Then change the partition type to Linux raid autodetect (hex code fd) and print the result:

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x9f64c2f4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907029167  1953513560   fd  Linux raid autodetect

Finally, write the changes to disk with the w command before quitting fdisk.
Repeat the same steps for the /dev/sdc device.
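If you prefer not to repeat the interactive session, the same layout can be created non-interactively with parted. This is just a sketch of the equivalent commands; double-check the device name before running it against a real disk:

# parted -s /dev/sdc mklabel msdos
# parted -s /dev/sdc mkpart primary 2048s 100%
# parted -s /dev/sdc set 1 raid on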
Next step, check that the new partitions carry no previous md metadata:
# mdadm -E /dev/sd[b-c]
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   3907027120 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   3907027120 sectors at         2048 (type fd)

# mdadm -E /dev/sd[b-c]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.
Create a RAID device
# mdadm -v -C /dev/md0 -n 2 /dev/sdb1 /dev/sdc1 -l 1
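The short flags are terse, so here is the equivalent command with long options (-v is --verbose, -C is --create, -n is --raid-devices and -l is --level):

# mdadm --verbose --create /dev/md0 --raid-devices=2 --level=1 /dev/sdb1 /dev/sdc1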
Check RAID device
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 29 23:57:11 2016
     Raid Level : raid1
     Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Feb 29 23:58:17 2016
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 0% complete

           Name : g8.acme:0  (local to host g8.acme)
           UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
         Events : 13

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

# mdadm -E /dev/sd[b-c]1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
           Name : g8.acme:0  (local to host g8.acme)
  Creation Time : Mon Feb 29 23:57:11 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
     Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : 4d9290ae:994f8d57:602be8b6:73edb241

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Feb 29 23:59:02 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7fcc116a - correct
         Events : 22

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
           Name : g8.acme:0  (local to host g8.acme)
  Creation Time : Mon Feb 29 23:57:11 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
     Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : b0eb0220:52f87062:dcbf7a6a:75028466

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Feb 29 23:59:02 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d485bd04 - correct
         Events : 22

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
Reviewing the RAID configuration:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      1953382464 blocks super 1.2 [2/2] [UU]
      [====>................]  resync = 23.3% (456830464/1953382464) finish=193.4min speed=128915K/sec
      bitmap: 12/15 pages [48KB], 65536KB chunk

unused devices: <none>
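The initial resync takes a few hours on 2 TB disks; you can follow its progress with watch, for example refreshing every minute:

# watch -n 60 cat /proc/mdstat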
If you want to determine whether a given device is a component device or a RAID device, you can execute the following commands:
# mdadm --query /dev/md0
/dev/md0: 1862.89GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.

# mdadm --query /dev/sdb1
/dev/sdb1: is not an md array
/dev/sdb1: device 0 in 2 device active raid1 /dev/md0.  Use mdadm --examine for more detail.
List array lines, i.e. the ARRAY definitions that identify each array by UUID:

# mdadm --detail --scan
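To have the array assembled consistently at boot, a common approach on CentOS 7 is to append that ARRAY line to /etc/mdadm.conf and rebuild the initramfs. Treat this as a sketch and review the resulting file before rebooting:

# mdadm --detail --scan >> /etc/mdadm.conf
# dracut -f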
Setting up LVM2
At this point I decided to set up LVM2 on top of the RAID device; the choice was made more out of curiosity than for any technical reason.
Create a Physical Volume on the RAID 1 array
# pvcreate /dev/md0
WARNING: ext4 signature detected on /dev/md0 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/md0.
  Physical volume "/dev/md0" successfully created
Check Physical volume attributes using pvs:
# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md0          lvm2 ---    1.82t   1.82t
  /dev/sda2  centos lvm2 a--  424.00g   4.00m
Check Physical Volume information in detail using the pvdisplay command:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               424.01 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              108545
  Free PE               1
  Allocated PE          108544
  PV UUID               YnaiKQ-Yz9Z-UUUN-H9aa-XLRq-AT1m-7y8wqh

  "/dev/md0" is a new physical volume of "1.82 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name
  PV Size               1.82 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               mf9XlE-QDIs-7Xz3-qCHH-fXok-GclK-yLDfzR
Create a volume group
Let’s create a Volume Group (VG) named raid1 using the vgcreate command:
# vgcreate raid1 /dev/md0
  Volume group "raid1" successfully created
Checking VG attributes using the vgs command:
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   3   0 wz--n- 424.00g   4.00m
  raid1    1   0   0 wz--n-   1.82t   1.82t
See VG information in detail using vgdisplay:
# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               424.00 GiB
  PE Size               4.00 MiB
  Total PE              108545
  Alloc PE / Size       108544 / 424.00 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               ZRYdVb-NmBJ-Z6Mp-NTVo-QklF-Qy7r-OCJRr2

  --- Volume group ---
  VG Name               raid1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476899
  Alloc PE / Size       0 / 0
  Free  PE / Size       476899 / 1.82 TiB
  VG UUID               vnJGO0-g8iT-MJTo-wMVh-Zxck-ITRh-jD26H8
Logical Volume Creation
Using the lvcreate command (-L sets the LV size, -n its name):
# lvcreate -L 100G raid1 -n lvm0
  Logical volume "lvm0" created.
View the attributes of Logical Volume (LV):
# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home centos -wi-ao---- 200.00g
  root centos -wi-ao---- 200.00g
  swap centos -wi-ao----  24.00g
  lvm0 raid1  -wi-a----- 100.00g
View LV information in detail:
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                RtvhTy-m8Ra-xOJ2-cxPB-ruEK-4jmC-BGA8lB
  LV Write Access        read/write
  LV Creation host, time localhost, 2015-04-17 18:24:27 +0200
  LV Status              available
  # open                 1
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/centos/home
  LV Name                home
  VG Name                centos
  LV UUID                p4ahvC-3Y0I-yblG-xzC0-6dJI-hDk3-PJuOt8
  LV Write Access        read/write
  LV Creation host, time localhost, 2015-04-17 18:24:31 +0200
  LV Status              available
  # open                 1
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/centos/swap
  LV Name                swap
  VG Name                centos
  LV UUID                EphLec-154b-jvIY-4MAf-uAnV-4XYe-GffUKQ
  LV Write Access        read/write
  LV Creation host, time localhost, 2015-04-17 18:24:35 +0200
  LV Status              available
  # open                 2
  LV Size                24.00 GiB
  Current LE             6144
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/raid1/lvm0
  LV Name                lvm0
  VG Name                raid1
  LV UUID                xaUN5g-f4Yc-q0jV-NGJ4-lwrR-P0Bs-ZDz8fh
  LV Write Access        read/write
  LV Creation host, time g8.acme, 2016-03-01 01:02:14 +0100
  LV Status              available
  # open                 0
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
Formatting partition
Format the LVM volume with ext4:
# mkfs.ext4 /dev/raid1/lvm0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2174746624
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
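A nice side effect of putting LVM on top of the RAID: the 100 GiB LV occupies only a fraction of the 1.82 TiB VG, so it can be grown later while mounted (ext4 supports online growing). The sizes below are just an example:

# lvextend -L +100G /dev/raid1/lvm0
# resize2fs /dev/raid1/lvm0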
Create mount point
# mkdir /mnt/raid1
Mount the LV:
# mount /dev/raid1/lvm0 /mnt/raid1

# lvmdiskscan
  /dev/loop0                             [     100.00 GiB]
  /dev/md0                               [       1.82 TiB] LVM physical volume
  /dev/centos/root                       [     200.00 GiB]
  /dev/loop1                             [       2.00 GiB]
  /dev/sda1                              [     250.00 MiB]
  /dev/centos/swap                       [      24.00 GiB]
  /dev/sda2                              [     424.01 GiB] LVM physical volume
  /dev/centos/home                       [     200.00 GiB]
  /dev/mapper/docker-253:0-11667055-pool [     100.00 GiB]
  /dev/raid1/lvm0                        [     100.00 GiB]
  /dev/sdd1                              [     438.50 GiB]
  /dev/sdd2                              [       4.02 GiB]
  /dev/sdd3                              [      23.07 GiB]
  /dev/sdd5                              [     133.32 MiB]
  /dev/sdd6                              [      23.50 MiB]
  4 disks
  9 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes
If you want to know the Physical Volume (PV) layout in detail, along with the drives participating in each volume group, you can check the LVM metadata backups under /etc/lvm/backup (one file per VG); /etc/lvm/lvm.conf itself only holds LVM’s configuration.
Edit /etc/fstab to make the mount permanent:
/dev/raid1/lvm0 /mnt/raid1 ext4 defaults 0 0
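Before relying on a reboot, it is worth checking that the fstab entry actually works; mount -a mounts everything listed in /etc/fstab that is not already mounted:

# umount /mnt/raid1
# mount -a
# df -h /mnt/raid1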
Check it out
If you want to run availability tests, you can manually fail one of the physical devices:
# mdadm /dev/md0 --fail /dev/sdb1
# mdadm --detail /dev/md0
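After the test, the failed member can be removed and added back, and the array will resynchronize. A sketch of the recovery:

# mdadm /dev/md0 --remove /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb1
# cat /proc/mdstat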
Interesting entries that I found while investigating this matter:
—
“Those who can imagine anything can create the impossible.”
— Alan Turing