BLU Discuss list archive
[Discuss] Debian 11 -> 12
- Subject: [Discuss] Debian 11 -> 12
- From: richard.pieri at gmail.com (Rich Pieri)
- Date: Wed, 22 May 2024 16:01:33 -0400
- In-reply-to: <20240522150727.16e8dc9a@mydesk.domain.cxm>
- References: <4ce2088f-6eef-414d-9c6a-e4ae3aae005f@borg.org> <01ada0c6-0b70-414a-bf6e-8a0005529203@borg.org> <20240519182325.62e7b5d3.Richard.Pieri@gmail.com> <ec3822a4-baeb-42cd-8092-2344a1d5f514@borg.org> <20240520141310.48334308.Richard.Pieri@gmail.com> <20240522150727.16e8dc9a@mydesk.domain.cxm>
On Wed, 22 May 2024 15:07:27 -0400
Steve Litt <slitt at troubleshooters.com> wrote:

> Unless you're encrypting the root partition, I can't think of any use
> of LVM that can't be done other ways. I view LVM as yet another layer
> of abstraction and yet another way to lose your data.

My most common use case at work:

Add vDisk to VM in vCenter. Create partition on vDisk with gdisk. Then:

pvcreate /dev/${DEV}1
vgextend ${VG} /dev/${DEV}1
lvextend -r -l 100%VG /dev/mapper/${LV}

Where VG = volume group and LV = logical volume. (A fuller end-to-end
sketch with placeholder names is at the end of this message.)

We're running with large monolithic database files. Bind mounts cannot
expand the filesystem where these files are stored; the filesystem
itself has to be extended. LVM makes this possible on live systems. And
it took me longer to type this out than to actually do it on a live VM.

I could do this without LVM, and I have done so on VMs using basic
partitions: shut down the VM, increase the size of the vDisk, boot a
GNU Parted Live image, extend the partition to the new vDisk size and
extend the filesystem, shut down the VM again, and then boot normally.

A much less common use, but something we do on our product soak-testing
machines with pools of NVMe storage, is to use LVM to stripe across the
volume group:

lvcreate -i ${N} -I ${X} -l 100%FREE -n ${LV} ${VG}

where N = number of stripe devices and X = stripe size (typically
128KB). (A sketch of this also follows at the end of this message.)

This is simpler and has less overhead than using mdadm where we don't
need device redundancy but do want to distribute writes across the
entire pool to balance wear. Simple concatenation or bind mounts would
cluster writes on one device, causing it to age faster than the other
devices in the pool. Plus we get ludicrous I/O performance, which is
beneficial for soak testing.

I have seen filesystems (notably XFS and early ext4) lose or damage
data. I have seen the VFS layer damage data. But not LVM. Maybe it has
happened to others. But in 25+ years and somewhere on the order of
three thousand Linux machines that I have previously managed, currently
manage as part of my job, or have used and use personally, I have never
seen LVM lose or damage data.

--
\m/ (--) \m/
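Here is the fuller sketch of the live-extend flow mentioned above. The
device /dev/sdb, volume group "datavg", and logical volume "datalv" are
placeholder names, not anything from this thread, and it assumes the
filesystem on the LV is ext4 or XFS so that lvextend -r can grow it in
place:

#!/bin/bash
# Hypothetical example: grow a mounted LVM filesystem after adding a
# vDisk. Device and VG/LV names are placeholders; adjust to your layout.
set -euo pipefail

DEV=/dev/sdb    # new vDisk presented by vCenter (placeholder)
VG=datavg       # existing volume group (placeholder)
LV=datalv       # existing logical volume holding the data (placeholder)

# One partition spanning the new disk, type Linux LVM (8e00).
sgdisk -n 1:0:0 -t 1:8e00 "$DEV"

# Fold the new partition into LVM and grow the LV plus its filesystem,
# all while the filesystem stays mounted. "+100%FREE" takes every newly
# added extent; "100%VG" as in the post above also works.
pvcreate "${DEV}1"
vgextend "$VG" "${DEV}1"
lvextend -r -l +100%FREE "/dev/${VG}/${LV}"

# Confirm the new sizes.
vgs "$VG"
lvs "/dev/${VG}/${LV}"
df -h

The -r on lvextend hands the filesystem resize off to fsadm, so there
is no separate resize2fs or xfs_growfs step.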
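And a similar sketch for the striped soak-test pool. The NVMe device
names, volume group "nvmevg", and logical volume "soaklv" are
placeholders, not the actual production layout:

#!/bin/bash
# Hypothetical example: build a striped LV across a pool of NVMe
# devices and verify the stripe layout. Placeholder names throughout.
set -euo pipefail

VG=nvmevg
LV=soaklv
PVS=(/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1)

pvcreate "${PVS[@]}"
vgcreate "$VG" "${PVS[@]}"

# Stripe across every PV in the group, 128K stripe size, all free space.
lvcreate -i "${#PVS[@]}" -I 128k -l 100%FREE -n "$LV" "$VG"

# Check that the stripe count and stripe size came out as intended.
lvs -o lv_name,stripes,stripe_size "$VG"

One tool manages the whole pool, and the stripe geometry is visible
straight from lvs, which is the "less overhead than mdadm" point above.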