If you’ve got a problem and you reinstall and do the same stuff again, you’ll almost certainly get the same problem again. So, no, it’s only productive if you are in a fucked-up environment where changes bring more breakage than they fix.
It’s useful if you don’t plan to do the same thing again, though. So if you are just trying random stuff, yeah, go ahead.
Sure, but nobody’s likely to do that. If I wiped my system now, I doubt I could get it back to exactly the same state if I tried. There are way too many moving parts. There are changes I’ve forgotten I ever applied, or only applied accidentally. And there are things I’d do differently if I had the chance to start over (like installing something via a different one of the half-dozen-or-so methods of installing packages on my distro).
For example, I have Docker installed because I once thought a problem I had might have been Podman-specific. Turned out it was not. But I never did the surgery necessary to fully excise Docker. I probably won’t bother unless and until there is a practical reason to.
Try Root on ZFS.
If you suddenly run into an issue, you can restore from a snapshot.
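For anyone who hasn’t tried it, the day-to-day workflow looks roughly like this (the dataset name here is made up; a Root-on-ZFS guide will dictate the real layout):

    # Snapshot the root dataset before a risky change
    # ("rpool/ROOT/default" is a placeholder; check `zfs list` for yours)
    zfs snapshot rpool/ROOT/default@pre-upgrade

    # See what you have
    zfs list -t snapshot

    # Roll back if things break (-r also discards any newer snapshots)
    zfs rollback -r rpool/ROOT/default@pre-upgrade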
Good advice!
This is also available with BTRFS. Personally, I’m leveraging this feature via Snapper, simply because it was the default on openSUSE and was good enough that I never bothered looking into alternatives. I’ve heard good things about Timeshift, too.
This has saved my butt a couple times. I’ll never go back to a filesystem that doesn’t support snapshots.
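For reference, the Snapper workflow is roughly this (the snapshot number is hypothetical; yours will come from snapper list):

    # Take a manual snapshot before doing something risky
    snapper create --description "pre-upgrade"

    # List snapshots; each one gets a number
    snapper list

    # Roll the system back to snapshot 42; takes effect on the next boot
    snapper rollback 42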
I really liked ZFS when I used it many years ago, but eventually I decided to move to BTRFS since it has built-in kernel support. I miss RAIDZ, though. :(
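(For anyone unfamiliar, RAIDZ is ZFS’s parity-based RAID, analogous to RAID 5/6. Spinning one up is a one-liner; device names below are placeholders.)

    # A RAIDZ2 vdev: like RAID 6, it survives two disk failures
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd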
BTRFS is a damn good option too, and I’m happy to hear how easy it is to use. I haven’t used it (yet); I went with ZFS because of its flexible architecture. At the desktop level BTRFS makes sense, but on a server? What is it like under a hypervisor?
I’m working on standing up a CloudStack host as a hypervisor. I want this host to run five Kubernetes VMs, so it needs quick access to the disks. I don’t have a RAID card, only an HBA, and in that scenario I would typically use RAID 10. But a ZFS RAID 10 outperforms an mdraid 10 anyway (in terms of writes, not necessarily reads), so that’s what I’ve decided on. It may not be a good idea, it may not even be feasible, but I’m heckin willing to give it a shot.
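In ZFS terms that’s a pool of striped mirror vdevs. An untested sketch, with placeholder device names:

    # Four disks as two mirror pairs, striped together ("RAID 10")
    # ashift=12 assumes 4K-sector drives
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd

    # Verify the layout
    zpool status tank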
I’m actually jealous that you get built-in kernel support automatically, though. I’m a little curious how (or if) BTRFS connects multiple disks; I’m simply uninformed.
ZFS Performance Sauce
Install Ubuntu 24.04 on ZFS RAID 10 - GitHub Repository
Edit: There are a few drawbacks to using ZFS, lousy Docker performance being one that I’ve heard about. I’m curious how this will be affected if I have Docker running inside a VM.
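One mitigation I’ve seen mentioned (unverified by me) is Docker’s native zfs storage driver, which requires /var/lib/docker to live on a ZFS dataset:

    # /etc/docker/daemon.json (restart the docker service afterwards)
    {
      "storage-driver": "zfs"
    }

My guess is that inside a VM it’s a non-issue, since the guest just sees a plain virtual disk and Docker never touches ZFS directly.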
BTRFS can work across multiple disks much like ZFS. It supports RAID 0/1/10, but I can’t tell you about performance relative to ZFS.
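Pooling the disks happens right at mkfs time; device names here are placeholders:

    # Four disks in RAID 10 for both data (-d) and metadata (-m)
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Disks can also be added to a mounted filesystem later,
    # followed by a rebalance onto the new layout
    btrfs device add /dev/sdf /mnt
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt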
Just be sure you do NOT use BTRFS’s RAID5/6. It’s notoriously buggy and even the official docs warn that it is only for testing/development purposes. See https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
Edit: Another interesting difference between the two filesystems is deduplication. ZFS supports automatic (inline) deduplication, although it requires a lot of memory. BTRFS supports deduplication but has no built-in automatic dedup; you can use external tools to perform either file-level or block-level deduplication on BTRFS volumes: https://btrfs.readthedocs.io/en/latest/Deduplication.html
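To make that concrete (dataset and path names are made up):

    # ZFS: inline dedup is a per-dataset property; budget plenty of RAM
    # for the dedup table before turning this on
    zfs set dedup=on tank/data

    # BTRFS: out-of-band dedup with an external tool such as duperemove
    # (-d actually dedupes, -r recurses)
    duperemove -dr /mnt/data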
Thanks for the insight!
I may have to resort to BTRFS for this host eventually if ZFS fails me. I don’t expect a lot of duplication on a host anyway, and even if there is some, who cares? I have 60 TB even with the RAID 10 layout. Having something with kernel support may be the better approach regardless.
It’s interesting to me that it struggles with RAID 5 and 6, though. I would have expected those to be easy to provide.