I recently set up a cluster of Raspberry Pis and Intel NUCs as a shared Kubernetes cluster for my homelab. Each device has a primary NVMe drive with 500GB of storage, and the NUCs have a secondary 2TB SATA SSD. Everything went smoothly, so I configured Longhorn to share persistent volumes across the cluster and provide redundancy if any machine or drive were to fail. When I logged into the Longhorn UI, I noticed that one of the machines only had about 100GB of usable storage on the entire machine, which was definitely incorrect.
Investigating from inside the host, I was able to determine this was an OS issue and not a problem with the Longhorn configuration.
lsblk
# nvme0n1 259:0 0 476.9G 0 disk
# ├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
# ├─nvme0n1p2 259:2 0 2G 0 part /boot
# └─nvme0n1p3 259:3 0 100.0G 0 part
# └─dm_crypt-0 252:0 0 100.0G 0 crypt
# └─ubuntu--vg-ubuntu--lv 252:1 0 100.0G 0 lvm /
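If you want to confirm the same shortfall from the LVM side before changing anything, the standard LVM and df tools show it too (using the device and volume group names from the lsblk output above):
sudo pvs
# the physical volume on top of dm_crypt-0 should only be ~100G at this point
sudo lvs
# ubuntu-vg/ubuntu-lv should show the same ~100G size
df -h /
# the root filesystem matches the undersized logical volume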
First we need to fix the partition, since it's only using 100GB of the 476.9GB available on the drive. This can happen for various reasons, but in my case it's because I started with a smaller disk in Proxmox and then increased it later.
sudo parted /dev/nvme0n1
(parted) resizepart 3 100%
(parted) quit
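If you'd rather see the layout before committing, parted can also print any unallocated space, and the resize can be run as a one-liner instead of through the interactive prompt (depending on your parted version it may still ask you to confirm):
sudo parted /dev/nvme0n1 unit GB print free
# shows the partition table plus any free space at the end of the disk
sudo parted /dev/nvme0n1 resizepart 3 100%
# same resize as above, run non-interactively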
In this case LUKS encryption is configured, so that layer needs expanding as well. You will be prompted for the passphrase you use to decrypt the volume on boot.
sudo cryptsetup resize dm_crypt-0
sudo pvresize /dev/mapper/dm_crypt-0
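To verify that both layers picked up the new size, cryptsetup and the LVM tools can report on them directly:
sudo cryptsetup status dm_crypt-0
# the reported size should now match the full partition
sudo pvs /dev/mapper/dm_crypt-0
# PSize should match the resized crypt device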
Running lsblk again should now show that most things line up correctly, but the logical volume is still only 100GB.
lsblk
# nvme0n1 259:0 0 476.9G 0 disk
# ├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
# ├─nvme0n1p2 259:2 0 2G 0 part /boot
# └─nvme0n1p3 259:3 0 473.9G 0 part
# └─dm_crypt-0 252:0 0 473.9G 0 crypt
# └─ubuntu--vg-ubuntu--lv 252:1 0 100.0G 0 lvm /
It turns out that I must have selected an incorrect value when installing Ubuntu on this device, even though I used the default values and each device has the same base drive and configuration. Luckily there's a quick and easy way to fix this on Linux, which I found on Ask Ubuntu.
The original answer (and article) contains a full explanation of why this happens and exactly what the fix does, but if you're just interested in fixing your own drive you can jump straight to the commands below.
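Before extending anything, it's worth confirming the volume group actually sees the space freed up by the earlier steps; vgs is the standard way to check (the volume group name matches the lsblk output above):
sudo vgs ubuntu-vg
# VFree should show roughly the ~374G reclaimed by the partition, LUKS and PV resizes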
sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
# Size of logical volume ubuntu-vg/ubuntu-lv changed from 100.00 GiB (25600 extents) to 473.87 GiB (121311 extents).
# Logical volume ubuntu-vg/ubuntu-lv successfully resized.
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
# resize2fs 1.47.0 (5-Feb-2023)
# Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
# old_desc_blocks = 13, new_desc_blocks = 60
# The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 124222464 (4k) blocks long
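As an aside, resize2fs only handles ext2/3/4 filesystems, which is what the default Ubuntu install uses here; if your root filesystem happened to be XFS instead, the equivalent online grow would be:
sudo xfs_growfs /
# grows a mounted XFS filesystem to fill its logical volume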
lsblk
# nvme0n1 259:0 0 476.9G 0 disk
# ├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
# ├─nvme0n1p2 259:2 0 2G 0 part /boot
# └─nvme0n1p3 259:3 0 473.9G 0 part
# └─dm_crypt-0 252:0 0 473.9G 0 crypt
# └─ubuntu--vg-ubuntu--lv 252:1 0 473.9G 0 lvm /
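As a final sanity check, df on the root filesystem should now report the full capacity:
df -h /
# the root filesystem should now report close to the full ~474G logical volume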
I'm posting this mainly so I can quickly reference it if I ever run into this issue again, but hopefully it helps someone else who ends up in the same weird position after setting up a new machine.