this post was submitted on 23 Nov 2024
76 points (93.2% liked)

Selfhosted


About a year ago I switched to ZFS for Proxmox so that I wouldn't be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use raid 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can't downgrade the kernel, and the performance on my hardware is abysmal: I get only 50-100 MB/s versus the several hundred I got with btrfs.

Any reason I shouldn't go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering or using btrfs?

(page 2) 44 comments
[–] [email protected] 8 points 1 day ago (2 children)

I've been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
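A minimal sketch of that kind of subvolume layout (the device path and the `@`/`@home` names are common conventions, not anything from the comment):

```shell
# Create subvolumes for / and /home on a single btrfs partition
# (device path and subvolume names are illustrative).
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# /etc/fstab then mounts each subvolume like a separate filesystem,
# even though both live on the same partition:
#   /dev/sda2  /      btrfs  subvol=@      0 0
#   /dev/sda2  /home  btrfs  subvol=@home  0 0
```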

I had used it for my NAS array too, with btrfs raid1 (on top of luks), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be in purgatory of never being fixed, so I moved to raidz1 instead.

One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive adds up to 44TB and raid1 cuts that in half to 22TB effective space. ZFS doesn't do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB not counting the backup array) so that every drive is the same size and I felt confident it would be enough space to last me a long time since growing it after the fact is a burden.
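The mixed-size arithmetic above can be sketched as a tiny script. The rule of thumb for btrfs raid1 is: usable space is half the total, unless one drive is larger than all the others combined (its excess then has nowhere to be mirrored).

```shell
# Rough usable-capacity estimate for btrfs raid1 with mixed drive
# sizes (the 12/12/8/8/4 TB array from the comment above).
# raid1 keeps two copies of everything, so usable space is half the
# total -- unless one drive is bigger than all the others combined.
usable_raid1() {
    total=0
    largest=0
    for size in "$@"; do
        total=$((total + size))
        if [ "$size" -gt "$largest" ]; then largest=$size; fi
    done
    rest=$((total - largest))
    if [ "$largest" -le "$rest" ]; then
        echo $((total / 2))
    else
        echo "$rest"
    fi
}

usable_raid1 12 12 8 8 4   # prints 22 (TB usable)
```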

[–] [email protected] 2 points 1 day ago

With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS use case.
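For reference, the new expansion works by attaching an extra disk to an existing raidz vdev (the pool name and device path below are placeholders):

```shell
# OpenZFS >= 2.3: grow an existing raidz1 vdev by one disk.
# "tank", "raidz1-0", and the device path are illustrative names.
zpool attach tank raidz1-0 /dev/disk/by-id/ata-EXAMPLE-NEWDISK

# The data is reflowed across all disks in the background;
# progress shows up in the pool status:
zpool status tank
```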

[–] [email protected] 0 points 1 day ago

Btrfs raid 10 is reportedly stable

[–] [email protected] 6 points 1 day ago (3 children)

One day I had a power outage and I wasn't able to mount the btrfs system disk anymore. I could mount it on another Linux system, but I couldn't boot from it anymore. I was very pissed; I lost a whole day of work

[–] [email protected] 2 points 23 hours ago

ACID go brrr

load more comments (2 replies)
[–] [email protected] 8 points 1 day ago (1 children)

The question is how do you get a bad performance with ZFS?

I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

The fourth run (obviously cached) gave me over 3.8 GB/s.
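For anyone wanting to reproduce that kind of test, a rough cold-read check looks like this (the file path is an example; note that `drop_caches` only empties the page cache and does not fully clear the ZFS ARC, so exporting and re-importing the pool gives a truly cold run):

```shell
# Cold sequential-read throughput check (run as root).
sync
echo 3 > /proc/sys/vm/drop_caches

# Optional, ZFS only: export/import to flush the ARC as well.
# zpool export tank && zpool import tank

# Read a large existing file and let dd report the rate:
dd if=/tank/media/bigfile.bin of=/dev/null bs=1M status=progress
```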

[–] [email protected] -1 points 1 day ago* (last edited 1 day ago) (3 children)

I have never heard of anyone getting those speeds without dedicated high-end hardware.

Also, writes will always be your bottleneck.

[–] [email protected] 4 points 23 hours ago* (last edited 23 hours ago)

This is an old PC (Intel i7 3770K) with 2 HDDs (16 TB) attached to onboard SATA3 controller, 16 GB RAM and 1 SSD (120 GB). Nothing special. And it's quite busy because it's my home server with a VM and containers.

[–] [email protected] 5 points 1 day ago (1 children)

I have similar speeds on a truenas that I installed on a simple i3 8100

[–] [email protected] 1 points 23 hours ago (1 children)

How much RAM, and what is the drive size?

I suspect this could also be an issue with SSDs. I have seen a lot of posts describing similar performance on SSDs.

[–] [email protected] 1 points 22 hours ago (1 children)

64 GB of ECC RAM (48 GB used as ZFS cache) with three 2 TB drives

[–] [email protected] 0 points 14 hours ago (2 children)

Yeah it sounds like I don't have enough ram.

load more comments (2 replies)
[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

I'm seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU but the load from ZFS is minimal. Reading / writing a 50GB /dev/urandom file (larger than the cache) gives me:

  • 169 MB/s write
  • 254 MB/s read

What's your setup?

[–] [email protected] 1 points 1 day ago (1 children)

Maybe I am CPU bottlenecked. I have a mix of i5-8500 and i7-6700K

The drives are a mix but I get almost the same performance across machines

[–] [email protected] 2 points 23 hours ago (1 children)

It's possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.

[–] [email protected] 0 points 14 hours ago (1 children)

Is your machine part of a cluster by chance? If so, what performance do you see when you do a VM transfer?

load more comments (1 replies)
[–] [email protected] 6 points 1 day ago (1 children)

One time I had a power outage and one of the btrfs HDDs (not in a raid) couldn't be read anymore after reboot. Even with help from the (official) btrfs mailing list, it was impossible to repair the file system. After a lot of low-level tinkering I was able to retrieve the files, but the file system itself was absolutely broken; no repair process was possible. I have since switched to ZFS; the emergency options are much more capable.

[–] [email protected] 4 points 1 day ago (1 children)

Was that less than 2 years ago? Were you using kernel 5.15 or newer?

load more comments (1 replies)
[–] [email protected] 1 points 19 hours ago

I have been using btrfs on raid1 for a few years now with no major issues.

It's a bit annoying that a system with a degraded raid doesn't boot up without manual intervention though.

Also, not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.
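The manual intervention mentioned above for a degraded raid is usually a degraded mount from a rescue shell, something like (device paths and the devid are examples):

```shell
# Mount a btrfs raid1 with a missing member read-write:
mount -o degraded /dev/sda2 /mnt

# Then replace the dead disk. The devid "2" here is an assumption;
# check `btrfs filesystem show` for the missing device's actual id.
btrfs replace start 2 /dev/sdc /mnt
```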

[–] [email protected] 5 points 1 day ago

If it didn't give you problems, go for it. I've run it for years and never had issues either.

[–] [email protected] 3 points 1 day ago

Not proxmox-specific, but I've been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it's bugged out is due to bad hardware, and having the filesystem shouting at me to make me aware of that was fantastic.

The only place I don't use btrfs is my NAS data drives (since I want raidz2, and btrfs raid5 is hella shady), but the NAS rootfs is btrfs.

[–] [email protected] 2 points 1 day ago (2 children)

Meh. I run proxmox and other boot drives on ext4, data drives on xfs. I don't have any need for additional features in btrfs. Shrinking would be nice, so maybe someday I'll use ext4 for data too.

I started with zfs instead of RAID, but I found I spent way too much time trying to manage RAM and tune it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.
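The "configure once" setup being described is presumably something like mdadm RAID 10 with xfs on top (device names below are examples):

```shell
# One-time software RAID 10 over four disks, then xfs on top.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.xfs /dev/md0
```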

You can benchmark them if you care about performance. You can find plenty of discussion by googling "ext vs xfs vs btrfs" or whichever ones you're considering. They haven't changed that much in the past few years.

[–] WhyJiffie 2 points 1 day ago* (last edited 3 hours ago) (1 children)

but I found I spent way too much time trying to manage RAM and tuning it,

I spent none, and it works fine. what was your issue?

[–] [email protected] 2 points 1 day ago (2 children)

I have four 6 TB data drives and 32 GB of RAM. When I set them up with zfs, it claimed quite a few GB of RAM for its cache. I tried allocating part of an NVMe drive as cache and tried to reduce RAM usage to reasonable levels, but like I said, I found I was spending a lot of time fiddling instead of just configuring RAID and having it run fine in much less time.
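For what it's worth, the RAM appetite can be capped in one step rather than tuned piecemeal; ZFS on Linux exposes `zfs_arc_max` for this (the 8 GiB figure below is just an example):

```shell
# Persistently cap the ZFS ARC at 8 GiB (value is in bytes):
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Apply immediately without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```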

load more comments (2 replies)
[–] [email protected] 1 points 1 day ago* (last edited 15 hours ago) (4 children)

Proxmox only supports btrfs or ZFS for raid

Or at least that's what I thought

load more comments (4 replies)
[–] [email protected] 2 points 1 day ago (1 children)

Btrfs only has issues with raid 5. It works well for raid 1 and 0. No reason to change if it works for you

[–] [email protected] 1 points 1 day ago

It is stable with raid 0, 1, and 10.

Raid 5 and 6 are dangerous

[–] [email protected] 2 points 1 day ago

Using it here. Love the flexibility and features.

[–] [email protected] 2 points 1 day ago

I run it now because I wanted to try it. I haven't had any issues. A friend recommended it as a stable option.

[–] [email protected] 0 points 19 hours ago (1 children)

Raid 5/6? Only bcachefs will solve that

[–] [email protected] -1 points 15 hours ago (5 children)

Btrfs raid 5 and raid 6 are unstable and dangerous

Bcachefs is cool, but it is way too new and has only just been merged into the mainline kernel.

load more comments (5 replies)
[–] [email protected] 1 points 1 day ago (1 children)

Do you rely on snapshotting and journaling? If so, back up your snapshots.

[–] [email protected] 0 points 1 day ago (1 children)

Why?

I already take backups but I'm curious if you have had any serious issues

[–] [email protected] 2 points 21 hours ago

Are you backing up files from the FS or are you backing up the snapshots? I had a corrupted journal from a power outage that borked my install. I could not get to the snapshots on boot. I booted into a live disk and recovered the snapshot that way. It would've taken hours to restore from a standard backup; restoring the snapshot took minutes.

If you're not backing up btrfs snapshots and just backing up files, you're better off just using ext4.

https://github.com/digint/btrbk
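Manually, the snapshot-level backup that btrbk automates boils down to btrfs send/receive of read-only snapshots (paths and the date below are illustrative):

```shell
# Take a read-only snapshot, then replicate it to another
# btrfs filesystem. Restoring is then just a snapshot of the
# copy rather than an hours-long file-level restore.
btrfs subvolume snapshot -r / /.snapshots/root-2024-11-23
btrfs send /.snapshots/root-2024-11-23 | btrfs receive /mnt/backup/
```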

load more comments