phul_colons

Here's my setup:

  • Ubuntu Server with ZFS
      ◦ i3-4160, 16GB ECC, SuperMicro X10SL7-F with 6 + 8 (LSI 2308) SATA ports
      ◦ 8x8TB raidz2
      ◦ 4x4TB raidz1
  • Windows 10 Workstation
      ◦ 3700X, 64GB
      ◦ 5x10TB w/ StableBit DrivePool
      ◦ 1x14TB
      ◦ 3x3TB
  • 2021 MacBook Pro 16"
      ◦ M1 Max, 64GB
      ◦ 1x12TB + 2x4TB in a macOS RAID Assistant JBOD
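
For anyone checking my math below: the server totals I quote are raw capacity, and the usable space after raidz parity is lower. A quick sketch of how the ZFS side pencils out:

```python
# Rough raw-vs-usable breakdown for the ZFS box (ignores metadata,
# slop space, and TB/TiB conversion, so real numbers land a bit lower).
raidz2_raw = 8 * 8           # 8x8TB vdev -> 64TB raw
raidz2_usable = (8 - 2) * 8  # raidz2 spends two drives on parity -> 48TB
raidz1_raw = 4 * 4           # 4x4TB vdev -> 16TB raw
raidz1_usable = (4 - 1) * 4  # raidz1 spends one drive on parity -> 12TB

print(f"raw total:    {raidz2_raw + raidz1_raw}TB")        # 80TB
print(f"usable total: {raidz2_usable + raidz1_usable}TB")  # ~60TB
```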

So I've got 80TB on the ZFS server, ~70TB on the workstation, and 20TB on the Mac. I deliberately use 3 different operating systems and 3 different filesystems, and synchronize everything with Syncthing. Data is spread out based on the level of redundancy I want: some things have 3 copies, most have 2, and some have just 1 copy plus a periodic dump of the file tree so I'd at least know what I lost. Combined with snapshots on ZFS, this has made for a very robust system that's resilient against a lot of threats.
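
For the 1-copy tier, that "periodic dump of the file tree" is just a scheduled walk that records every path. A minimal sketch of the idea (not my actual script; the paths are hypothetical placeholders):

```python
#!/usr/bin/env python3
# Sketch of a file-tree manifest dump: walk a directory and record every
# file's size, mtime, and path, so a lost single-copy dataset can at
# least be inventoried afterwards. ROOT and OUT_DIR are placeholders.
import os
import time
from pathlib import Path

ROOT = Path("/tank/single-copy")   # hypothetical 1-copy dataset
OUT_DIR = Path("/tank/manifests")  # hypothetical manifest destination

def dump_tree(root: Path, out_dir: Path) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    manifest = out_dir / f"tree-{time.strftime('%Y%m%d-%H%M%S')}.tsv"
    with manifest.open("w", encoding="utf-8") as fh:
        fh.write("size_bytes\tmtime\tpath\n")
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = Path(dirpath) / name
                try:
                    st = path.stat()
                except OSError:
                    continue  # file vanished mid-walk; skip it
                fh.write(f"{st.st_size}\t{int(st.st_mtime)}\t{path}\n")
    return manifest

if __name__ == "__main__":
    print(f"wrote {dump_tree(ROOT, OUT_DIR)}")
```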

The problem is I'm nearing 90% full on all three systems. Do I build another system twice as large as the current largest? Is that the most efficient strategy for data growth? Would you choose a different multiple? What hardware?
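
To make the "what multiple?" question concrete, here's the back-of-the-envelope trade-off I'm weighing; the 20TB/year fill rate is a made-up number purely for illustration:

```python
# Bigger multiple = fewer, larger purchases. The fill rate is an assumed
# figure for illustration; 80TB is the current largest system (the ZFS
# server). Assumes the new box starts empty and absorbs all new data.
largest_tb = 80
fill_rate_tb_per_year = 20  # hypothetical growth rate

for multiple in (1.5, 2.0, 3.0):
    new_box_tb = largest_tb * multiple
    years_to_90_pct = new_box_tb * 0.9 / fill_rate_tb_per_year
    print(f"{multiple}x -> {new_box_tb:.0f}TB box, "
          f"~{years_to_90_pct:.1f} years until it's 90% full")
```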

Would you instead just expand the ZFS server somehow with HBAs and another chassis? I wouldn't mind that solution, but I've not found a good case that is just focused on HDDs and fans. Do you know of one?

Thanks for any opinions!

[email protected] 1 point 1 year ago

e2ee starts and ends with you.