Another post motivated largely by my recognising that I never seem to document my projects well enough to fix them if they break. Perhaps this might show up in a search result somewhere and help someone in the future…but be warned that I don’t fully understand the concepts at work here, so consult some other sources too! I linked the sites I used at the bottom.
I recently upgraded my Proxmox host and internal drive. The install was all fairly straightforward, but I struggled to fully understand how Proxmox deals with storage. I found that the ‘local’ section of the disk, where the ISOs and backups are stored, was only about 100 GB. From reading around, I realised that Proxmox doesn’t really like being installed on a single drive and instead expects some kind of RAID array. This was a problem for me as my Proxmox host is an Intel NUC…so I had to break it all down and create the storage volumes manually.
Creating an LVM-thin volume for the VMs to run on
First I created the volume the actual VMs would run on. I allocated 700 GB of my 1 TB drive for this - I don’t really run that many containers, but I have a couple of Windows VMs I use very occasionally that take up a whole lot more space. Start by checking the volumes you already have:
$ lvdisplay
I deleted basically everything except /dev/pve/root and /dev/pve/swap. You can use the lvremove command for this - obviously think carefully before deleting things!
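In my case the leftover volume was the old default thin pool, pve/data (check your lvdisplay output - yours may well be named differently), so removing it looked something like:
$ lvremove pve/data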
I then created my volume and pool for the VMs to run on with:
$ lvcreate -L 700G -n data pve
$ lvconvert --type thin-pool pve/data
You can see I allocated 700 GB to the pool. Should be enough for a couple of Windows VMs and a whole host of LXC containers. The pool is called “data,” but you can call it whatever. Check you created your volume with:
$ pvesm lvmthinscan pve
At this point it’s about 50% done!
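One thing worth noting: because I reused the default names (volume group pve, thin pool data), the existing local-lvm storage entry picked the new pool up without any extra configuration. If you name yours differently, I believe you’d need to register it yourself with something along these lines (vm-data is just a made-up storage ID - swap in your own volume group and pool names too):
$ pvesm add lvmthin vm-data --vgname pve --thinpool data --content rootdir,images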
Resizing the LVM-thin volume
Should you realise in the future that you should have allocated more of your disk to the volume, it is possible to expand it with lvextend (or lvresize):
$ lvextend --size +100G pve/data
$ lvresize --poolmetadatasize +1G pve/data
I think the difference between lvextend and lvresize is that lvresize can shrink as well as grow the volume. I can see that shrinking might cause serious damage if you’re trying to shrink the allocation to a smaller size than the actual data that exists, so it’s probably good to be cautious here! I think lvextend simply grows the allocation, so maybe just use that if that’s all you wish to do.
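Either way, you can check the pool’s new size (and how full it actually is) with:
$ lvs pve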
Creating a directory volume for backups, images & snapshots
It’s all well and good having lots of space for VMs and containers, but you need a place for ISOs, snapshots and backups! In this case I think Proxmox really does expect more disks - backing up VMs to the same disk they’re already running on is a bit redundant! In my case I back them up and run a hook script to rsync the snapshots to a folder on my NAS (which in turn gets backed up to my backup server). I suppose I could mount the folder in Proxmox via NFS or something, but this way has always worked better for me.
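I’ll write the backup side up properly another time, but the rough shape of the hook script is something like the sketch below - not my actual script, and the NAS hostname and paths are made up for illustration. vzdump passes the phase name to the script as its first argument:
#!/bin/bash
# Sketch of a vzdump hook script (hypothetical host and paths).
# vzdump calls this with the phase name as the first argument.
phase="$1"
if [ "$phase" = "job-end" ]; then
    # once the whole backup job has finished, push the dumps over to the NAS
    rsync -av /var/lib/vz/dump/ backup@nas:/volume1/proxmox-backups/
fi
It gets wired in via the script option in /etc/vzdump.conf (or --script on the vzdump command line), if I remember rightly.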
I created a smaller (200 GB) volume than before called “backups”. Again, you can call it whatever you want:
$ lvcreate -V 200G -T pve/data -n backups
I then created a filesystem on it (ext4 - I’m sure I should look into ZFS or some other more modern filesystem…but honestly I wasn’t bothered) and mounted it:
$ mkfs.ext4 /dev/pve/backups
$ mount /dev/pve/backups /var/lib/vz
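A quick df should confirm the mount went through, showing a roughly 200 GB filesystem on /var/lib/vz:
$ df -h /var/lib/vz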
At this point it should all be working more or less…but it won’t survive a reboot! The changes need to be written to the fstab file so they are applied on each boot:
$ nano /etc/fstab
The file should look something like:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/backups /var/lib/vz ext4 nofail 0 0
UUID=114D-C06F /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
It’s important to note here that the backups volume was mounted at /var/lib/vz, which, I think, is the default Proxmox location for it. This is where all the backup files go, as well as any ISOs etc.
A reboot later and I now have my local and local-lvm storage ready to rock.
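If you’d rather check from the command line than the web UI, pvesm will list the configured storage and whether it’s active:
$ pvesm status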
I’m sure I’ll write up my backup solution with the hook script I use at some point.