Concept

A virtualization server with expandable pooled storage. Who wouldn’t want that?

Background

My goal was to build a server to host my services that could continue to grow with my needs over time.

Hypervisor

A hypervisor as the base OS allows me to separate my services for stability, add/remove servers and more easily use automation tools like Ansible to manage my lab’s infrastructure. I chose Proxmox because it was open-source, stable and allowed for the creation of both VMs and containers.

Pooled Storage & Backup

I wanted to take a bunch of large drives, make them work together as one, accessible from each VM and have a RAID-like backup setup.

I initially planned to use ZFS, but it didn't quite meet my needs. ZFS has trouble dealing with different sized drives and very large drives, makes it difficult to add more drives to an existing pool and, while open source, is released under the CDDL rather than a GPL-compatible license. In the end, a mix of MergerFS, SnapRAID and NFS was exactly what I was looking for.

MergerFS is a “Union Filesystem” that pools together multiple drives into one virtual drive.

NFS allows me to make the MergerFS pool accessible to each virtual machine.

SnapRAID is a backup program for disk arrays. It dedicates a parity drive that stores parity information computed from the other drives, so that if one of them dies, its contents can be recovered from the parity. Unlike hardware RAID, SnapRAID works entirely in software and only updates the parity when told to.

Result:

  • MergerFS has no issues with large and different sized drives.
  • Adding new drives to the MergerFS pool is as simple as adding a line to the config file.
  • Each drive still contains its data in a readable format since the drives are pooled at the software level. This means you can remove a drive from the array without affecting the rest of the pool and still be able to access the data on that drive directly.
  • SnapRAID provides a flexible backup solution to recover failed drives from the last sync.

Requirements

I will only be covering the software setup once you have your server built. So essentially the opposite of “Draw the rest of the fucking owl”.

Regarding hardware choices, here are some notes:

  • Motherboard: Get a server grade motherboard. Make sure it's compatible with your CPU, RAM, GPU, etc. If you don't want a GPU, make sure it has onboard graphics capability.
  • CPU: Get a server grade CPU as well (e.g. a Xeon for Intel). Make sure VT-x and VT-d are supported for virtualization and that you have enough cores for your needs. If you don't want a GPU, make sure it also has onboard graphics.
  • RAM: Get as much as possible. ECC for mission critical data.
  • HDD: Your parity drive must be equal to or larger than the largest drive in the pool! I get WD 10TB and 12TB easystore deals (usually whites) and shuck them. You may need to put tape on the pins if it doesn’t work at first.
  • SSD: I recommend getting one for your proxmox boot disk and VM operating systems.

Install

Once you build your server, install Proxmox on the machine following these instructions.

You will use a USB installer to install Proxmox onto a drive of your choosing.

I would not recommend installing onto a USB drive: in my experience the read/writes will wear it out within a year, and it's more likely to corrupt during a power surge. Instead, install to an SSD using less than 64 GB of its storage (hdsize in the advanced settings during install) and partition the rest for VM images, or use a dedicated SSD or hard drive.

Network Management Configuration
  • Management Interface: Your ethernet port, most likely eno1
  • Hostname: proxmox.<any_fake_domain>.local
  • IP Address: Desired server IP address
  • Netmask: Most likely 255.255.255.0
  • Gateway: Your network gateway
  • DNS Server: 8.8.8.8 (I use google)

Setup

Initial Proxmox setup

Connect to your Proxmox host either with SSH or through the console in the Proxmox web interface.

Disable subscription popup on web interface:

sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

Add the line below to /etc/apt/sources.list to add the no-subscription repository:

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

Comment out the line in /etc/apt/sources.list.d/pve-enterprise.list to disable the subscription-only enterprise repository.
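For reference, after commenting it out, the file should contain a single line that looks roughly like this (the Debian codename matches your Proxmox release, stretch in this guide):

# deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise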

Update the OS
apt-get update
apt-get upgrade
shutdown -r 0
Install basic applications
apt install vim gcc curl make bash-completion atop htop tmux network-manager git neofetch fail2ban parted -y
  • tmux is a terminal multiplexer
  • fail2ban prevents bruteforcing attempts on your server by blacklisting IPs with multiple failed password attempts
Custom MOTD

Edit the login message in /etc/update-motd.d/10-uname and replace the uncommented line with neofetch. This will display your system info with neofetch on login.
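After the edit, the whole file is just the shebang followed by neofetch, something like:

#!/bin/sh
neofetch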

Then remove the other motd file:

rm /etc/motd
Setup ssh
apt install openssh-server
ssh-keygen

Edit /etc/ssh/sshd_config and set:

  • AuthorizedKeysFile .ssh/authorized_keys
  • PasswordAuthentication yes

You can then add your ssh public key(s) to ~/.ssh/authorized_keys
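After editing sshd_config, restart the SSH service so the changes take effect:

systemctl restart ssh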

Plan your pool

List your drives with lsblk and blkid.
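To see the size, filesystem type and UUID of each drive in one view, you can ask lsblk for just those columns:

lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT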

I would recommend creating a table like the one below to plan and document your setup:

Size | UUID | Type | Partition | Location    | File Type | Notes
     |      |      | /dev/sdb1 | /mnt/parity | ext4      |
     |      |      | /dev/sdd1 | /mnt/disk1  | ext4      |
     |      |      | /dev/sda1 | /mnt/disk2  | ext4      |

Important: SnapRAID requires your parity drive to be equal to or larger than the largest drive in your pool

Check disk health

Enable S.M.A.R.T. on each drive replacing X with drive letter:

smartctl --smart=on /dev/sdX

Check disk health:

smartctl -x /dev/sdX
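If the smartctl command is missing on your install, it ships in the smartmontools package:

apt install smartmontools -y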
Wipe and partition drives

Install fdisk:

apt-get install fdisk -y

Run fdisk on the relevant drive replacing X with the drive letter:

fdisk /dev/sdX
  • Delete partitions: d

  • Create new partition table: g

  • Create new partition with default settings: n

  • Check the partition table before saving: p

  • Write to disk (This is the point of no return): w

Create the ext4 file system replacing X with the drive letter:

mkfs.ext4 /dev/sdX1

Do this for each of your drives.
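If you would rather script this than step through fdisk, parted (installed earlier) can do the same wipe and partition non-interactively. It is just as destructive, so double-check the drive letter; a sketch:

parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdX1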

Create mount directories

Create the directories we will mount our disks to:

mkdir /mnt/pool
mkdir /mnt/parity
mkdir /mnt/disk1
mkdir /mnt/disk2
mkdir /mnt/disk3

etc…
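With bash brace expansion you can create them all in one command (adjust the range to your number of disks):

mkdir -p /mnt/pool /mnt/parity /mnt/disk{1..3}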

Setup MergerFS

Install MergerFS:

apt install fuse mergerfs xattr python-xattr -y
git clone https://github.com/trapexit/mergerfs-tools.git
cd mergerfs-tools
make install

The fstab config file is used to automatically mount your drives and pool during startup.

Add the lines below to /etc/fstab, edited for your setup:

UUID="insert-UUID-here" /mnt/parity ext4 nofail 0 0
UUID="insert-UUID-here" /mnt/disk1 ext4 nofail 0 0
UUID="insert-UUID-here" /mnt/disk2 ext4 nofail 0 0
UUID="insert-UUID-here" /mnt/disk3 ext4 nofail 0 0

/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5 /mnt/pool fuse.mergerfs defaults,allow_other,use_ino,noforget,hard_remove,moveonenospc=true,minfreespace=20G 0 0
  • Each drive is mounted to the relevant directory we created in /mnt/. UUID is used instead of /dev/sdX because it's static.
  • The last line creates the pool. The drives listed at the beginning are the ones added to the pool (/mnt/disk1:/mnt/disk2:/mnt/disk3, etc.). The parity drive is not part of the pool.
  • The next part sets the location of the pool. I have it set as /mnt/pool.
  • The rest of the last line sets the configuration of the pool.
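If you need the UUIDs for the fstab lines above, blkid can print just that value for each partition:

blkid -s UUID -o value /dev/sdb1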

Restart and verify the pool configuration:

sudo shutdown -r now
cd /mnt/pool
xattr -l .mergerfs
Setup SnapRAID

Go to https://www.snapraid.it/download and copy the download url of the latest version.

cd /opt
wget <url_of_latest_snapraid_version>
tar xzvf snapraid-<version>.tar.gz
cd snapraid-<version>
./configure
make
make check
make install
cd ..
cp snapraid-<version>/snapraid.conf.example /etc/snapraid.conf

Add the lines below to /etc/snapraid.conf edited for your setup:

parity /mnt/parity/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
content /mnt/disk3/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
exclude *.qcow2

Remember, your parity drive must be equal to or larger than the largest drive in your pool.
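With the config in place, run the first sync:

snapraid sync

Since SnapRAID only updates parity when told to, you may also want to schedule regular syncs; a minimal cron sketch (added via crontab -e), assuming the default make install location of /usr/local/bin:

0 3 * * * /usr/local/bin/snapraid sync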

Setup NFS

Install NFS Server:

apt install nfs-kernel-server

Instead of sharing the entire pool, I created a directory for the files I plan to share between my VMs called archive. This allows me to use my pool for other things that I don’t need shared.

mkdir /mnt/pool/archive

Add line below to /etc/exports:

/mnt/pool/archive *(rw,sync,fsid=0,no_root_squash,no_subtree_check)

Export the NFS file system:

exportfs -arvf

Mount the pool:

systemctl stop nfs-server
umount /mnt/pool
mount -a
systemctl start nfs-server
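You can confirm the export is active with showmount:

showmount -e localhost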
Install Netdata

This is an optional step to setup Netdata, a lightweight realtime performance monitor.

apt-get update
apt install curl
bash <(curl -Ss https://my-netdata.io/kickstart.sh)

Navigate to http://proxmox_ip:19999/ in your browser to view the Netdata dashboard.

Tinkering

After experiencing issues with swap usage, I lowered the system's swappiness. A lower swappiness value makes the kernel prefer keeping pages in RAM rather than sending them to swap.

Edit /etc/sysctl.conf and add vm.swappiness=10.
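To apply the change without a reboot and confirm the new value:

sysctl -p
cat /proc/sys/vm/swappiness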

Storage Setup

In the Proxmox web interface, at http://proxmox_ip:8006/, go to Datacenter > Storage.

Click Add > Directory and create each of these storage directories:

  • id=vm dir=/mnt/disk1 content=Disk image
  • id=images dir=/mnt/pool/images content=ISO image, Container, Container template
  • id=local dir=/var/lib/vz content=VZDump backup file

vm is for virtual machines on the SSD (disk1 for me). images is for installation ISOs. local is for backups on the Proxmox OS drive; I disable everything there other than VZDump so I don't accidentally fill up the Proxmox drive.
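If you prefer the command line, roughly the same storages can be created with pvesm; treat this as a sketch and verify the result under Datacenter > Storage afterwards:

pvesm add dir vm --path /mnt/disk1 --content images
pvesm add dir images --path /mnt/pool/images --content iso,vztmpl,rootdir
pvesm set local --content backup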

Conclusion

That’s it! You now have:

  • A virtualization server using Proxmox as the hypervisor
  • Your drives mounted in a pool at /mnt/pool using MergerFS
  • The ability to mount your pool on your VMs with NFS
  • The ability to restore drives from your last sync with SnapRAID
  • System monitoring with Netdata

 


Mounting Pool to VMs with NFS

After creating your VMs in Proxmox, you will need to mount your pool via NFS:

Note: These commands are run inside the VM. Make sure to change <proxmox_ip_address> to the relevant IP, and feel free to change /media/storage to a location of your choosing.

sudo apt-get install nfs-common portmap -y
sudo mkdir /media/storage
sudo chmod 755 /media/storage
sudo mount <proxmox_ip_address>:/mnt/pool/archive /media/storage

Add below to /etc/fstab to automatically mount drives on startup:

<proxmox_ip_address>:/mnt/pool/archive /media/storage nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

Restart and set permissions:

sudo shutdown -r now
cd /media/storage
sudo chmod -R 755 .

Adding New Drives

Following the instructions above:

  1. Check disk health
  2. Wipe and partition drives
  3. Create mount directories
  4. Mount the drives and add them to the MergerFS pool in /etc/fstab (see the sketch below)
  5. Add the drives to SnapRAID in /etc/snapraid.conf
  6. Restart
  7. Create the shared directory on each new drive so MergerFS knows it can write to it: mkdir /mnt/diskX/archive
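As a concrete sketch of steps 4 and 5, a hypothetical new /mnt/disk4 means appending lines like these.

In /etc/fstab (also appending :/mnt/disk4 to the branch list on the fuse.mergerfs pool line):

UUID="insert-UUID-here" /mnt/disk4 ext4 nofail 0 0

In /etc/snapraid.conf:

content /mnt/disk4/snapraid.content
data d4 /mnt/disk4/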

Using MergerFS

  • Verify pool configuration: xattr -l .mergerfs in the pool directory
  • Spread the contents of a directory across drives, moving files from the most filled drive to the least filled: mergerfs.balance /mnt/pool/dir
  • Consolidate directory into a single drive: mergerfs.consolidate /mnt/pool/dir

Using SnapRAID

  • Sync: snapraid sync
  • Check the array for issues: snapraid check
  • Check drive status with SMART: snapraid smart

Feel free to stop snapraid sync (Ctrl+C) at any time, as it will save its progress.

Mounting and unmounting a USB

mount /dev/sdX1 /location/to/mount/to
umount /location/of/mount

Mounting a USB to VM

These commands are run from the Proxmox host

qm monitor <vm_id>
qm> info usbhost
# copy the vendor:product id of the USB device (e.g. 0000:ffff), then quit the monitor
qm set <vm_id> -usb<number> host=0000:ffff

Recovery

If your Proxmox boot disk dies or corrupts, DON'T PANIC.

DON'T PANIC.

None of your files are on that partition and everything can be easily reinstalled and reconfigured. Simply use the USB installer to install Proxmox on a new drive and follow the steps above to reinstall the OS and programs.

The VM configuration files are located at /etc/pve/qemu-server/<VMID>.conf; I would recommend backing these up. They just need to be added to that directory on the new OS and, once the original VM storage is added in the web interface, you will be able to start the VMs as if nothing had happened.
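A minimal way to keep copies of them on the pool (the destination path is just an example):

mkdir -p /mnt/pool/archive/vm-configs
cp /etc/pve/qemu-server/*.conf /mnt/pool/archive/vm-configs/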

If you haven’t backed up those config files however, you can recreate them in the web interface and link them to the relevant directory like so:

  1. First create the VM in the web interface like you normally would, with as close to the same settings as possible, but with a different ID than the original VM.
  2. Then navigate to /etc/pve/qemu-server/, copy the config file with the new ID and rename it to the old ID: cp <new_id>.conf <old_id>.conf
  3. Edit the new file and replace the directory after scsi0: with the location of the .qcow2 images in the VM storage directory (see the example below).
  4. Make sure the VM storage directory is added in Datacenter > Storage in the Proxmox web interface.
  5. And voilà! You can now start the VMs from the web interface.
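For reference, the disk line edited in step 3 looks something like this (the storage ID, VM ID and file name here are placeholders, not values from your setup):

scsi0: vm:<old_id>/vm-<old_id>-disk-0.qcow2,size=32G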