Wednesday, 17 October 2018

Expandable Disk Server

In my SOHO I have tried many types of storage solutions. The current system is a QNAP four-bay unit. Its turnkey, user-friendly approach does make it easy to set up and use, but it is badly limited by not having a fifth drive bay. Four bays allow for two mirrored RAID-1 pairs or one big RAID-10 volume; without a fifth bay there is no way to upgrade to larger disks without fronting the cash to buy an entire new system and a full set of disks all at once.

It's time for a storage solution that can grow over time, not only to replace smaller disks but to allow the system to start with just one disk and add to the cluster over the next few months.

I am looking at using FreeNAS to manage a NAS server built from a Linux box. This will not be a dedicated box, so I will be building a Proxmox host and running FreeNAS as a VM inside it. FreeNAS needs to grow as new disks are added, so it will use the disks through an NFS share exported from Proxmox.
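For the Proxmox side of that share, a minimal sketch of the /etc/exports entry follows; the /xfs0 path matches the mount point used later in this post, and the subnet is only an example.

# /etc/exports on the Proxmox host; the subnet is an assumption
/xfs0 192.168.1.0/24(rw,sync,no_subtree_check)
# apply the export without restarting the NFS server
exportfs -ra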

The plan is to use Western Digital 12TB disks. As these disks are almost $700 each, only one disk will be added to the system every two weeks over the next two months.

The disk array will be built with Linux software RAID-1, LVM, and XFS.

RAID-1
This level of RAID is for redundancy: all disks in the array are kept identical. When one disk fails, the file system continues to work.
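The health of the mirror can be checked at any time; a healthy two-disk array shows [UU] in the status line, while a degraded one shows [U_].

# summary of all software RAID arrays on the system
cat /proc/mdstat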

LVM
LVM virtualises the block layer of block devices. This is a powerful tool for manipulating block devices on a running system. LVM does have built-in support for mirroring and RAID, including RAID-1 and RAID-10, but that happens at the logical volume layer, and using it would prevent the migration to larger disks.
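Each LVM layer can be inspected on the running system with the standard reporting commands:

# physical volumes (the md devices in this plan)
pvs
# volume groups
vgs
# logical volumes
lvs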

XFS
This file system allows for growing live, without the need to reformat or reboot the system. Note that XFS can grow but cannot shrink.
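The current geometry of a mounted XFS file system can be checked before and after each grow; the /xfs0 mount point used here is created in Step 5 below.

# block counts and geometry for the mounted file system
xfs_info /xfs0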

To test this plan, and demonstrate how it will work, a VMware VM running Fedora 27 was used. The VM used an 8GB SCSI disk for the root file system. The disks in the cluster are SATA and are lettered a through f.

In this example the commands create Physical Volumes with names that match the RAID-1 device, while the screenshots show the Physical Volumes with PV names.

Step 1 Create a new RAID-1
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda missing
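Because the second member is listed as missing, the array intentionally comes up degraded with a single disk; this can be confirmed before moving on.

# detailed view; the state will show as degraded with one member missing
mdadm --detail /dev/md0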

Step 2 Create a new physical volume
pvcreate /dev/md0

Step 3 Create a new volume group
vgcreate vg0 /dev/md0

Step 4 Create a new logical volume
lvcreate -l 100%VG -n lv0 vg0

Step 5 Create the XFS file system
mkfs.xfs /dev/vg0/lv0
mkdir /xfs0
mount /dev/vg0/lv0 /xfs0/
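To have the volume mount again after a reboot, a line like the following, a minimal sketch assuming default mount options, can be added to /etc/fstab:

# /etc/fstab entry for the new volume
/dev/vg0/lv0 /xfs0 xfs defaults 0 0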

Step 6 Add another disk to the RAID
mdadm --add /dev/md0 /dev/sdb
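The new disk now resyncs until it is an identical copy of the first; the rebuild can be watched until the array shows two healthy members.

# refresh the array status every two seconds during the resync
watch cat /proc/mdstat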

Step 7 Create a new RAID-1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdc missing

Step 8 Create a new physical volume
pvcreate /dev/md1

Step 9 Grow the existing volume group
vgextend vg0 /dev/md1

Step 10 Grow the existing logical volume
lvextend -l 100%VG vg0/lv0

Step 11 Grow the existing XFS live
xfs_growfs /xfs0/
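At this point the mounted file system should report roughly double its original capacity.

# confirm the grown file system
df -h /xfs0
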
Add the last disk the same way as Step 6.
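Assuming the fourth disk is /dev/sdd, following the a-through-f lettering of this test VM, that command would be:

mdadm --add /dev/md1 /dev/sdd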

Expand the disk cluster by adding new PVs and migrating the data

Step 12 Create the new RAID and PV
mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/sde missing
pvcreate /dev/md2

Step 13 Extend the volume group to include the new PV
vgextend vg0 /dev/md2

Step 14 Move all blocks from the old PV to the new PV.
pvmove /dev/md0 /dev/md2
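pvmove works while the file system stays mounted and in use; on 12TB disks it can take many hours, so it helps to have it report progress at an interval.

# the same move, reporting progress every 60 seconds
pvmove -i 60 /dev/md0 /dev/md2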

Step 15 Remove the first PV.
vgreduce vg0 /dev/md0

Step 16 Grow the logical volume again
lvextend -l 100%VG vg0/lv0

Step 17 Re-grow the existing XFS live
xfs_growfs /xfs0/
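A final check should show the file system now spanning the new, larger mirror.

# verify the final size and geometry
df -h /xfs0
xfs_info /xfs0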

Saturday, 6 October 2018

Now on GitLab.com

You can now download all the code from GitLab.com

git clone git@gitlab.com:SiliconTao-open-source/linux-code-snippets.git

Host your projects on GitLab.com