BLU Discuss list archive
[Discuss] Debian 11 -> 12
- Subject: [Discuss] Debian 11 -> 12
- From: markw at mohawksoft.com (markw at mohawksoft.com)
- Date: Thu, 30 May 2024 14:25:16 -0400
- In-reply-to: <5a1522b2-3ae4-4496-9b58-0ee9475377c6@borg.org>
- References: <4ce2088f-6eef-414d-9c6a-e4ae3aae005f@borg.org> <01ada0c6-0b70-414a-bf6e-8a0005529203@borg.org> <20240519182325.62e7b5d3.Richard.Pieri@gmail.com> <ec3822a4-baeb-42cd-8092-2344a1d5f514@borg.org> <20240520141310.48334308.Richard.Pieri@gmail.com> <20240522150727.16e8dc9a@mydesk.domain.cxm> <20240530154752.GC15543@bladeshadow.org> <18475cc5c596826d1b0461b3f2f9f7b4.squirrel@mail.mohawksoft.com> <5a1522b2-3ae4-4496-9b58-0ee9475377c6@borg.org>
> On 5/30/24 09:47, markw at mohawksoft.com wrote:
>> All that said, OMG ZFS is absolutely the way to go for any new
>> deployment unless bare-bones hardware performance is required.
>
> I would amend that: Any new deployment that is conventional (from ZFS's
> perspective) and can afford the necessary expertise.

I don't understand why you think ZFS has any more base complexity than
something like LVM.

zpool create -f snoopyz raidz /dev/disk/by-id/wwn-0x50014ee20911cd02 /dev/disk/by-id/wwn-0x50014ee003e1e0da /dev/disk/by-id/wwn-0x50014ee20be68e9f /dev/disk/by-id/wwn-0x50014ee20c8617e0

(I use disk/by-id because it doesn't break when drives are reordered, but
you could easily use /dev/sdb, /dev/sdc, etc.)

There, done. You have a pool. What's hard about that?

zfs create -V 200G snoopyz/qemu_vm_image

Is that harder than LVM's lvcreate? (For comparison, I sketch a rough LVM
equivalent at the end of this message.) Or:

zfs create -V 200G -s snoopyz/qemu_vm_image

The "-s" will give you a 200G /dev/zd[n] device that is "sparse" or
"thin provisioned," which means it won't use space until it is written.
You could allocate a block device that is bigger than the amount of disk
you have and back-fill it as it grows.

zfs set compression=lz4 snoopyz

That will enable compression on the pool.

zfs set compression=lz4 snoopyz/qemu_vm_image

That sets it on just the one volume.

> I have played with a lot of software over the years, and when I tried
> ZFS, I got it to work on my Intel laptop. Though personally, as a matter
> of taste, I found it ornery. And it flat out *crashed* when I tried to
> do the same stuff on a Raspberry PI 4. I was certainly doing unusual
> things, if nothing else running ZFS on a Raspberry PI 4 is apparently
> weird. But I still don't expect mainstream software to crash in my face,
> and certainly not software that I am supposed to trust my data to.

Dude, I had ZFS running on an RPI4 and just upgraded it to an RPI5. Zero
issues.

markw at raspberrypi:~ $ zpool status
  pool: backupz
 state: ONLINE
  scan: scrub repaired 0B in 04:25:16 with 0 errors on Tue May 28 12:57:24 2024
config:

        NAME                        STATE     READ WRITE CKSUM
        backupz                     ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            wwn-0x50014ee215801a06  ONLINE       0     0     0
            wwn-0x50014ee215e7129a  ONLINE       0     0     0
            wwn-0x50014ee215e7155f  ONLINE       0     0     0
            wwn-0x50014ee26ad538b2  ONLINE       0     0     0
        cache
          /zcache                   ONLINE       0     0     0

errors: No known data errors

markw at raspberrypi:~ $ uname -a
Linux raspberrypi 6.6.20+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.6.20-1+rpt1 (2024-03-07) aarch64 GNU/Linux

> As far as I can tell ZFS is a specialized tool, with impressive
> features, but rough edges. It is not a smoothly crafted, general purpose
> package suited to a general audience.

Not at all. I don't understand what issues you've had, because I have not
seen them; ZFS is very stable. The only time I have ever had a stability
issue was when the underlying storage or cabling was so bad it was
virtually unusable, and I don't think Linux would do well with that on LVM
either.

> -kb, the Kent whose Raspberry PI 4 is, at this moment, running a custom
> built kernel so it can happily boot and run from a pair of spinning
> drives, using Linux SW raid 1, which though limited and doesn't scale to
> gigantic disks very well, otherwise works great, across architectures,
> even when used in weird ways.

Why did you make a custom kernel? Keep the SD card as a boot loader and
use an SSD for your root partition. Everything stays stock.
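Roughly, the move looks like this (sketching here, not my exact commands;
I'm assuming the SSD shows up as /dev/sda with an ext4 filesystem already
made on /dev/sda1 and temporarily mounted at /mnt):

mount /dev/sda1 /mnt
rsync -axHAX / /mnt/
blkid /dev/sda1

Take the filesystem UUID that blkid prints, set root=UUID=<that UUID> in
/boot/firmware/cmdline.txt on the SD card, and use the same UUID for the
/ entry in /etc/fstab on the SSD. Here's mine: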
markw at raspberrypi:~ $ df
Filesystem      1K-blocks       Used  Available Use% Mounted on
udev              3946080          0    3946080   0% /dev
tmpfs              824176       6432     817744   1% /run
/dev/sda1       460367736  264830908  172077920  61% /
tmpfs             4120800       1312    4119488   1% /dev/shm
tmpfs                5120         48       5072   1% /run/lock
/dev/mmcblk0p1     522232      75368     446864  15% /boot/firmware
backupz        2912676224  709389568 2203286656  25% /backupz
tmpfs              824160        144     824016   1% /run/user/1001
tmpfs              824160        160     824000   1% /run/user/1000
backupz/media  4132240512 1928953856 2203286656  47% /backupz/media

My root is a 460G SSD.

markw at raspberrypi:~ $ cat /boot/firmware/cmdline.txt
console=serial0,115200 console=tty1 root=UUID=e6ec0371-200b-439b-a259-c60f391a56ed rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cfg80211.ieee80211_regdom=US

markw at raspberrypi:~ $ blkid /dev/sda1
/dev/sda1: UUID="e6ec0371-200b-439b-a259-c60f391a56ed" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2bf37e9b-01"

I use the UUID of the /dev/sda1 file system in cmdline.txt, and also put
it in /etc/fstab. No mods to the software or hardware, and no need for a
custom kernel.
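And the LVM comparison I promised above: here is roughly what the same
zvol setup looks like in LVM terms. (The volume group name, device names,
and sizes below are made up for illustration; LVM has no raidz as such,
so this uses its raid5 segment type, and you'd need something like VDO or
a compressing filesystem layered on top to get anything like
compression=lz4.)

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate snoopyvg /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate --type raid5 -i 3 -L 200G -n qemu_vm_image snoopyvg

The thin-provisioned version (as an alternative to the plain 200G LV
above) needs a thin pool first:

lvcreate --type thin-pool -L 1T -n thinpool snoopyvg
lvcreate -V 200G --thin -n qemu_vm_image snoopyvg/thinpool

Roughly the same amount of ceremony, step for step.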