About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview (Proxmox still labels its Btrfs support as one).

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean I can’t freely downgrade the kernel, and the performance on my hardware is abysmal: I get only around 50–100 MB/s versus the several hundred I would get with Btrfs.
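If anyone wants to sanity-check numbers like that on their own hardware, even a crude sequential-write test is enough for a first pass. A rough sketch (the mount points are hypothetical, and this is no substitute for a real fio run):

```python
# Rough sequential-write throughput check; paths are hypothetical mount
# points on the ZFS and Btrfs pools. fsync keeps the page cache from
# flattering the result.
import os
import time

def write_throughput(path, size_mib=1024, chunk_mib=4):
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mib // chunk_mib):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mib / elapsed  # MiB/s

for path in ("/rpool/bench.tmp", "/mnt/btrfs/bench.tmp"):  # hypothetical paths
    try:
        print(f"{path}: {write_throughput(path):.0f} MiB/s")
    except OSError as exc:
        print(f"{path}: skipped ({exc})")
```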

Any reason I shouldn’t go back to Btrfs? There seems to be a community fear of Btrfs eating data or throwing unexplainable errors. That is sad to hear, as Btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering or using Btrfs these days?

  • Anonymouse@lemmy.world · 2 points · 19 days ago

    I’ve got RAID 6 at the base level, LVM for partitioning, and ext4 as the filesystem for a k8s setup. Given that, btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

    Additionally, for my system, btrfs seemed to use more space per file (metadata overhead, presumably), such that I was running out of disk space versus ext4. Yeah, I can go buy more disks, but I like to think that I’m running at peak efficiency, using all the bits, with no waste.

    • sugar_in_your_tea@sh.itjust.works · 3 points · 19 days ago

      “btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.”

      Well yeah, because it’s supposed to replace those lower levels.

      Also, BTRFS does provide advantages over ext4, such as snapshots, which I think are fantastic since I can recover if things go sideways. I don’t know what your use-case is, so I don’t know if the features BTRFS provides would be valuable to you.
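      As a rough illustration, this is about all it takes to snapshot a subvolume before a risky change (the paths are made up, and it just shells out to btrfs-progs):

      ```python
      # Take a read-only btrfs snapshot before a risky change so files can be
      # copied back if things go sideways. Paths are hypothetical; requires
      # btrfs-progs and root privileges.
      import datetime
      import subprocess

      SOURCE = "/home"                 # hypothetical subvolume to protect
      SNAPDIR = "/home/.snapshots"     # hypothetical snapshot directory

      def take_snapshot():
          stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
          dest = f"{SNAPDIR}/home-{stamp}"
          # -r makes the snapshot read-only, which is what you want for recovery copies
          subprocess.run(
              ["btrfs", "subvolume", "snapshot", "-r", SOURCE, dest],
              check=True,
          )
          return dest

      if __name__ == "__main__":
          print("snapshot created at", take_snapshot())
      ```

      Recovery is then just copying files back out of the snapshot directory.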

      • Anonymouse@lemmy.world · 1 point · 18 days ago

        Generally, if a lower level can do a thing, I prefer to have the lower level do it. It’s not really a reason, just a rule of thumb; I like to think the lower level does the thing more efficiently.

        I use LVM snapshots for my backups; that’s really the only reason I have for them.
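        In practice that flow is just snapshot, mount, archive, drop — something along these lines (the volume group, sizes, and paths are all hypothetical):

        ```python
        # Sketch of an LVM snapshot backup: freeze a point-in-time view of the
        # volume, archive it, then remove the snapshot. Names and sizes are
        # hypothetical; requires root and free extents in the VG.
        import subprocess

        VG = "vg0"
        LV = "root"
        SNAP = "root-backup-snap"
        MOUNT = "/mnt/snap"                # hypothetical mount point
        ARCHIVE = "/backups/root.tar.gz"   # hypothetical destination

        def run(*cmd, check=True):
            subprocess.run(cmd, check=check)

        run("lvcreate", "--snapshot", "--size", "5G", "--name", SNAP, f"{VG}/{LV}")
        try:
            run("mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MOUNT)
            try:
                run("tar", "-czf", ARCHIVE, "-C", MOUNT, ".")
            finally:
                run("umount", MOUNT, check=False)
        finally:
            run("lvremove", "-y", f"{VG}/{SNAP}", check=False)
        ```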

        That all being said, I’m using btrfs on one system and if I really like it, I may migrate to it. It does seem a whole lot simpler to have one thing to learn than all the layers.

        • sugar_in_your_tea@sh.itjust.works · 1 point · 18 days ago

          Yup, I used to use LVM, but the two big NAS filesystems have a ton of nice features, and they expect to control the disk management themselves. I looked into BTRFS and ZFS, and since BTRFS is native to Linux (some of my software doesn’t support BSD) and I don’t need anything beyond a RAID mirror, that’s what I picked.

          I used LVM at work for simple RAID 0 systems where long-term uptime was crucial, hardware swaps weren’t likely to happen (these were treated like IoT devices), and snapshots weren’t important. It works well. But if you want extra features (file-level snapshots, compression, volume quotas, etc.), BTRFS and ZFS make that way easier.
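          For example, turning on compression and a quota for a btrfs subvolume is only a couple of commands away — a sketch with hypothetical paths (ZFS has the equivalent via dataset properties):

          ```python
          # Sketch: enable per-path zstd compression and a 100 GiB qgroup limit
          # on an existing btrfs subvolume. Paths are hypothetical; requires root.
          import subprocess

          def run(*cmd):
              subprocess.run(cmd, check=True)

          MOUNT = "/srv/data"          # hypothetical btrfs mount point
          SUBVOL = "/srv/data/media"   # hypothetical subvolume

          # New writes under SUBVOL get compressed with zstd from here on.
          run("btrfs", "property", "set", SUBVOL, "compression", "zstd")

          # Cap the subvolume at 100 GiB.
          run("btrfs", "quota", "enable", MOUNT)
          run("btrfs", "qgroup", "limit", "100G", SUBVOL)
          ```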

          • Anonymouse@lemmy.world · 2 points · 18 days ago

            I am interested in compression. I may give it a try when I swap out my desktop system. I did try btrfs in its early, post-alpha days, but found the support wasn’t ready yet; I think I had a VM system that complained. It’s more mature now, so maybe it’s worth another look.

        • jj4211@lemmy.world · 1 point · 18 days ago

          Actually, the lower level may well be less efficient, because it’s oblivious to the nature of the data.

          For example, a traditional RAID1 mirror immediately starts a rebuild across the entire capacity of the storage the moment it’s created, without a single byte of actual data having been written. So you spend a whole drive’s worth of writes making “don’t care” bytes redundant.
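          A toy model of the difference, with made-up numbers, just to show where the wasted work comes from:

          ```python
          # Toy model: blocks copied during an initial mirror sync. A block-level
          # RAID1 can't tell junk from data, so it replicates every block; a
          # filesystem-aware mirror only replicates allocated extents.
          TOTAL_BLOCKS = 1_000_000   # hypothetical device size in blocks
          USED_BLOCKS = 50_000       # hypothetical: only 5% holds real data

          oblivious_resync = TOTAL_BLOCKS   # block layer copies everything
          aware_resync = USED_BLOCKS        # filesystem knows which extents exist

          print(f"block-level resync:      {oblivious_resync:,} blocks copied")
          print(f"filesystem-aware resync: {aware_resync:,} blocks copied")
          print(f"wasted work:             {oblivious_resync - aware_resync:,} blocks")
          ```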

          Similarly, for snapshotting, the block layer can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to preserve that uninitialized data, because it has no idea whether the blocks being replaced were uninitialized junk or real content.
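          Same idea as a toy model: a copy-on-write layer that only sees blocks has to preserve old contents whether or not they were ever real data.

          ```python
          # Toy model of a block-level copy-on-write snapshot: on the first write
          # to a block, the old contents are preserved, even if they were never
          # real data to begin with.
          disk = {i: "uninitialized" for i in range(10)}   # hypothetical 10-block device
          preserved = {}                                   # old contents saved by the snapshot

          def write_block(index, data):
              if index not in preserved:    # copy-on-write: stash whatever was there
                  preserved[index] = disk[index]
              disk[index] = data

          for i in range(4):                # write real data over never-used blocks
              write_block(i, f"file data {i}")

          junk = sum(1 for old in preserved.values() if old == "uninitialized")
          print(f"blocks preserved: {len(preserved)}, of which junk: {junk}")
          ```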

          There are some mechanisms, in theory and in practice, to convey a bit of context to the block layer, but broadly speaking, by virtue of being a mostly oblivious block layer, you have to resort to the most naive and often inefficient approaches.

          That said, block capacity is cheap, and doing things at the block level can be done in a ‘dumb’ way, which may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.

          • Anonymouse@lemmy.world · 1 point · 18 days ago

            Those are some good points. I guess I was thinking about the hardware. At least where I do RAID, it’s on the controller, so that offloads much of the parity checking and such to the controller and not the CPU. It’s all probably negligible for the apps that I run, but my hardware is quite old, so maybe trying to squeeze all the performance I can is a worthwhile activity.