BLU Discuss list archive



[Discuss] Debian 11 -> 12



> On 5/30/24 11:25, markw@mohawksoft.com wrote:
>> I don't understand why you think ZFS has any more base complexity than
>> something like LVM.
>
> I admit it is a matter of taste that I find zfs ornery. It is trivial,
> but I find it annoying that I can't use mount to mount a zfs volume.
> I've got to use zfs for everything, and tell *it* to mount or umount.
> Except for different things where I have to use zpool to do stuff.

Here, I am really confused because I don't know what you are talking about.

root@snoopy:/home/markw# zfs create -V200G snoopyz/mytest
root@snoopy:/home/markw# mkfs.ext4 /dev/snoopyz/mytest
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 52428800 4k blocks and 13107200 inodes
Filesystem UUID: fedcb03f-bb8a-4c54-8114-37d7ca1d3374
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

root@snoopy:/home/markw# mount /dev/snoopyz/mytest /mnt/vol01
root@snoopy:/home/markw# df
Filesystem                      1K-blocks       Used  Available Use% Mounted on
tmpfs                             3277508       2520    3274988   1% /run
/dev/sda2                       205314024  135597396   59214484  70% /
...
...
...
192.168.101.106:/backupz/media 4132242432 1916180480 2216061952  47% /mnt/media
/dev/zd304                      205314024         28  194811852   1% /mnt/vol01


It's almost exactly the same process as with lvcreate: create the logical
volume, format it, and mount it.
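For comparison, the LVM version of the same sequence might look roughly like this (the volume-group name "myvg" is made up for illustration; adjust to your system):

```shell
# Create a 200 GB logical volume, format it, and mount it -
# the same three steps as the zfs create / mkfs / mount transcript above.
lvcreate -L 200G -n mytest myvg
mkfs.ext4 /dev/myvg/mytest
mount /dev/myvg/mytest /mnt/vol01
```

Either way you end up with an ordinary block device carrying an ext4 filesystem.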

There is an advanced feature where you can create new file systems (not
logical volumes) and have ZFS mount those for you, but you don't need to
use it.
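If you do want that feature, a minimal sketch looks like this (the pool name "snoopyz" is from the transcript above; the filesystem name "stuff" and mountpoint are made up):

```shell
# Create a native ZFS filesystem (not a zvol); ZFS mounts it itself.
zfs create -o mountpoint=/mnt/stuff snoopyz/stuff

# This is the case where mount(8)/umount(8) don't apply -
# you drive mounting through the zfs command instead:
zfs umount snoopyz/stuff
zfs mount snoopyz/stuff
```

This is the mode the earlier poster found annoying; zvols plus mkfs, as shown above, sidestep it entirely.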

>
> I wanted to use zfs for encrypted external backups (no redundant disk,
> just a single disk, but I wanted to have the checksummed features of zfs
> to assure me my bits weren't rotting), but where my system already knows
> how to deal with other kinds of external backups, I need to manage zfs
> manually, and if I forgot the "export" the pool before unplugging I seem
> to remember it left my computer in an unhappy state. (I thought
> unplugging a disk at the wrong time risked damaging the disk, not
> damaging the running computer.)

Yes, a disk aggregation system leaves bread crumbs on the system to make
loading easier. How many times have you needed to run "vgchange -ay" when
moving an LVM disk set to another computer?

As for unplugging a disk that is actively in use on Linux, that's always
a crap shoot regardless. Don't blame ZFS for that; try it with an LVM set.
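For the record, moving a pool cleanly is two commands; this is roughly the ZFS analogue of vgexport/vgimport plus "vgchange -ay" (the pool name "backupz" is taken from the df output above and is illustrative):

```shell
# On the machine you are unplugging from, before pulling the disk:
zpool export backupz

# On the machine you plug it into:
zpool import backupz
# If the pool was never exported (e.g. the cable was just yanked),
# import can be forced:
#   zpool import -f backupz
```

Forgetting the export is recoverable with -f; it shouldn't leave the original machine in a bad state beyond the usual risks of yanking an in-use disk.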


>
> So I wrote a pair of scripts for mounting and unmounting, and I find
> that annoying. Again trivial, but still annoying.
>
>
>> Dude, I had ZFS running on a RPI4 and just upgraded it to an RPI5. Zero
>> issues.
>
> Checking back in my notes I was wrong, it wasn't a crash, I was getting
> IO errors, from both devices, when doing a torture test of copying a
> bunch of files. This was using a powered hub plugged into a Pi 4 fast
> USB port. When the hub was plugged into a slow port it worked, with no
> errors.

I have seen this behavior with USB disks on Linux in general. I have a
USB-C enclosure on my RPI5, but I had to use the OEM cable because my
enclosure was otherwise always getting checksum errors. The fact that you
were getting errors means ZFS was doing its job. The USB 2 ports are a
lot slower and don't provoke the errors. Cables matter.

>
> Using the same drives plugged into the same hub plugged into a 64-bit
> Intel notebook, zfs worked, with no IO errors.

Yup.

>
> Maybe the Pi has broken fast USB ports, except the same hub and the same
> drives plugged into the same fast port on the same Pi 4, it works for SW
> raid 1, with no errors.

Try looking at the hub and cables. I've seen these issues, but unrelated
to ZFS.

>
> Maybe the Pi has a subtle fast USB problem that only happens with that
> hub and zfs. Maybe I am not smart enough to use zfs. Clearly zfs works
> in some circumstances, but eventually I ran out of patience and went
> with something that did work for me.

I can tell you the RPI4/5 can handle USB drives well as long as you have
cables with good signal quality. Some cables are designed primarily for
power delivery.

>
>
> Why do I compile my own Raspberry Pi 4 kernel? Because I don't trust SD
> cards, I frequently manage this machine remotely and I want reboots to
> work, so I want to boot from more robust devices. In my current
> configuration if one disk is missing or completely dead the Pi will
> still try the other, boot, and run. (Yes, if the first disk it tries
> only *sorta* works it can certainly still fail to boot.)


SD cards die quick deaths because they do not have infinite write
capacity. Many RPi Linux distributions use the SD card for logging and
temp-file storage, and the cards don't last long that way. I use the SD
card only as the boot loader, so it sees very few writes, if any, over
time. Mount an SSD as root, and use ZFS RAID for data.
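One hedged way to cut SD writes further, if you do keep root on the card: put volatile directories on tmpfs. A sketch of /etc/fstab entries (sizes are made-up examples; note logs are then lost on every reboot):

```shell
# /etc/fstab fragment - keep high-churn directories in RAM
# instead of on the SD card:
#
# tmpfs  /var/log  tmpfs  defaults,noatime,size=64m   0  0
# tmpfs  /tmp      tmpfs  defaults,noatime,size=128m  0  0
```

Tools like log2ram do a similar thing while still syncing logs to disk periodically.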

>
> In the normal case, when things are working, and the computer is up and
> booted, the vfat boot volume and / are both sitting on SW raid 1
> devices. Not only am I booting from a device that is more reliable than
> an SD card, I am booting from a redundant pair of them. The stock kernel
> didn't seem capable of this, I think I needed stuff linked that they
> only built as modules, I'd have to check my notes to be sure.
>
> New Pi 4 kernel sources, at tag stable_20240529, happened to come out
> yesterday, so earlier today I rebuilt the kernel with them and installed
> it. I installed it just once, but on top of raid 1, so I now have
> redundant copies. I like that.
>
>
> -kb
>