this post was submitted on 25 Sep 2023
24 points (90.0% liked)

Selfhosted

I've been interested in building a DIY NAS out of an SBC for a while now. Not as my main NAS but as a backup I can store offsite at a friend or relative's house. I know any old x86 box will probably do better; this project is just for the fun of it.

The Orange Pi 5 looks pretty decent with its RK3588 chip and M.2 PCIe 3.0 x4 connector. I've seen some adapters that can turn that M.2 slot into a few SATA ports or even a full x16 slot which might let me use an HBA.

Anyway, my question is: assuming the CPU isn't a bottleneck, how do I figure out what kind of throughput this setup could theoretically give me?

After a few Google searches:

  • PCIe Gen 3 x4 should give me 4 GB/s throughput
  • that M.2 to SATA adapter claims 6 ~~GB/s~~ Gb/s throughput
  • a single 7200 rpm hard drive should give about 80-160 MB/s throughput

My guess is that ultimately I'm limited by that 4 GB/s throughput on the PCIe Gen 3 x4 slot, but since I'm using hard drives, I'd never get close to saturating that bandwidth. Even if I were using 4 hard drives in a RAID 0 config (which I wouldn't do), I still wouldn't come close. Am I understanding that correctly? Is it really that simple?
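As a rough sanity check on that reasoning, here's a back-of-the-envelope sketch (not from the thread; the ~0.985 GB/s-per-lane PCIe 3.0 figure, the SATA encoding overhead, and the four-drive RAID 0 example are assumptions based on the numbers above):

```python
# Rough throughput ceilings for the proposed setup (assumed figures, not measurements)

PCIE3_LANE_GBPS = 0.985                  # usable GB/s per PCIe 3.0 lane (128b/130b encoding)
pcie_x4_ceiling = 4 * PCIE3_LANE_GBPS    # M.2 slot: ~3.94 GB/s

sata_port_ceiling = 6 / 8 * (8 / 10)     # SATA III: 6 Gb/s line rate, 8b/10b -> ~0.6 GB/s usable

hdd_gbps = 0.16                          # optimistic single 7200 rpm HDD: ~160 MB/s
raid0_drives = 4                         # the hypothetical 4-drive RAID 0 from the post
aggregate_hdd = raid0_drives * hdd_gbps  # ~0.64 GB/s

print(f"PCIe 3.0 x4 ceiling : {pcie_x4_ceiling:.2f} GB/s")
print(f"Per SATA port       : {sata_port_ceiling:.2f} GB/s")
print(f"4x HDD in RAID 0    : {aggregate_hdd:.2f} GB/s")
# Even four striped drives (~0.64 GB/s) sit far below the ~3.94 GB/s slot ceiling.
```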

top 10 comments
[–] [email protected] 10 points 11 months ago (1 children)

No matter what adapter you use, you will be limited to the throughput of the PCIe 3 x4 port.

SATA is 6 gigabits per second, not gigabytes. The SATA adapter is only PCIe 3 x2, which would limit the throughput if you used it with SSDs, but it will still have plenty of bandwidth for hard drives.
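To make the gigabit/gigabyte distinction concrete, a minimal conversion sketch (the x2 figure comes from this comment; the encoding overheads are the standard SATA 8b/10b and PCIe 3.0 128b/130b rates):

```python
# Unit check: SATA is quoted in gigabits, drives and PCIe usually in megabytes/gigabytes
sata_usable_mbps = 6000 * (8 / 10) / 8               # 6 Gb/s line rate, 8b/10b encoding -> 600 MB/s
pcie3_x2_usable_mbps = 2 * 8000 * (128 / 130) / 8    # ~1969 MB/s across the adapter's x2 uplink
hdd_mbps = 160                                       # a reasonably fast 7200 rpm hard drive

print(sata_usable_mbps, pcie3_x2_usable_mbps)
# One HDD uses roughly a quarter of a single SATA port and well under
# a tenth of the x2 uplink, so spinning disks never stress this adapter.
```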

[–] [email protected] 4 points 11 months ago

SATA is 6 gigabits per second, not gigabytes.

Oh shit. I misread the Amazon description. Thanks for catching that, and thanks for your response.

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago)

Actually... for a NAS, your network link is your limit.

You could have four PCIe 5.0 M.2 drives in a full RAID, saturating your bus with 64 Gb/s of glory, but if you're on 1 Gb/s Wi-Fi, that's what you'll actually get.

Still, would be fun to ssh in and dupe 1TB in seconds, just for the giggles. Do it for the fun!

Remember, it's almost always cheaper and fast enough to use a Thunderbolt or high-speed USB4 (40 Gb/s) flash drive for a quick backup.
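To put some rough numbers on the network being the limit, here's a sketch of an idealized 1 TB copy at a few link speeds (protocol overhead ignored; the list of links is just illustrative):

```python
# Idealized time to move 1 TB over various links (no protocol overhead)
TB_BYTES = 1e12

links_gbps = {
    "1 GbE": 1,
    "2.5 GbE": 2.5,
    "10 GbE": 10,
    "USB4 / Thunderbolt (40 Gb/s)": 40,
}

for name, gbps in links_gbps.items():
    seconds = TB_BYTES / (gbps * 1e9 / 8)    # convert link speed from bits/s to bytes/s
    print(f"{name:<30} ~{seconds / 60:6.0f} min")
# 1 GbE: ~133 min; 10 GbE: ~13 min -- the link, not the disks, sets the pace.
```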

[–] [email protected] 4 points 11 months ago

Just for some real-world comparison, I set up a new NAS earlier this year using a rack server, SAS cards, and eight 18 TB HDDs configured RAID6-style (actually ZFS RAID-Z2). I played with a few different configurations, but ultimately my write speeds reached around 480 MB/s because of the parallel access to so many drives. Single-drive access was of course quite a bit slower. Because of this testing I knew I could use cheap SATA II backplanes without affecting performance.

So basically, do a lot of testing with your planned hardware to get the best throughput, but a single HDD is going to be the biggest bottleneck in anything you set up.
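For a rough cross-check of that 480 MB/s figure: sequential writes to a RAID-Z2 vdev scale very roughly with the number of non-parity disks. A sketch with assumed per-drive speeds (real numbers depend heavily on recordsize, compression, and fragmentation):

```python
# Very rough sequential-write estimate for an 8-disk RAID-Z2 vdev (assumed numbers)
disks = 8
parity = 2                 # RAID-Z2 reserves two disks' worth of capacity for parity
per_drive_write_mbps = 80  # conservative sustained write figure for a large HDD

estimate_mbps = (disks - parity) * per_drive_write_mbps
print(f"Estimated sequential write: ~{estimate_mbps} MB/s")   # ~480 MB/s with these assumptions
# The point stands either way: individual spinning disks, not the bus, set the ceiling.
```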

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters | More Letters
------------- | -------------
NAS | Network-Attached Storage
PCIe | Peripheral Component Interconnect Express
RAID | Redundant Array of Independent Disks for mass storage
SATA | Serial AT Attachment interface for mass storage
SSD | Solid State Drive mass storage
5 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.


[–] [email protected] 3 points 11 months ago (1 children)

Yeah, you've got it about right. Gen 3 x4 is roughly 8 Gb/s per lane × 4 lanes ≈ 4 GB/s, which is your bottleneck. Hard drives might be closer to ~200-250 MB/s each depending on your specific model. That M.2-to-SATA thing seems like it's more geared towards SATA SSDs with how few ports it has - I wouldn't be surprised if you could find something with more ports available if needed, or at least for a cheaper price.

Also, as you note, RAID 0 will be the fastest config, but depending on your RAID configuration or workload you'll probably be getting less than max bandwidth out of each drive anyway.

[–] [email protected] 3 points 11 months ago (1 children)

I wouldn't be surprised if you could find something with more ports available if needed, or at least for a cheaper price.

Based on another comment I read, each SATA port would be 6 gigabits/s, which equates to 0.75 gigabytes/s. If I fully saturated all 5 ports, that puts the throughput at 3.75 gigabytes/s. Anything over 5 ports would be bottlenecked by the M.2 PCIe Gen 3 x4 port, wouldn't it?

[–] [email protected] 2 points 11 months ago (1 children)

Yeah, but you're not going to saturate each SATA port with your hard drives, which will be closer to 2 Gb/s max. The PCIe connector only needs to worry about what actually goes across it. I imagine that card is built to spec with the situation you're describing in mind, but for most practical purposes I think it shouldn't be a problem to have even more ports?

[–] [email protected] 1 points 11 months ago

Oh that’s right, I didn’t take into account the speed of the hard drives. Sweet
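Pulling this sub-thread's numbers together in one small sketch (the five-port count, the ~250 MB/s per drive, and the PCIe 3.0 x2 uplink are all taken or assumed from the comments above):

```python
# Only real drive traffic crosses the adapter's uplink, not the sum of SATA link rates
ports = 5
sata_usable_mbps = 600        # per SATA III port after 8b/10b encoding
hdd_mbps = 250                # fast 7200 rpm hard drive, sequential
uplink_mbps = 1969            # PCIe 3.0 x2 after 128b/130b encoding

worst_case_ssd_sum = ports * sata_usable_mbps   # 3000 MB/s if every port ran flat out (SSDs)
realistic_hdd_sum = ports * hdd_mbps            # 1250 MB/s with spinning disks

print(worst_case_ssd_sum, realistic_hdd_sum, uplink_mbps)
# Five HDDs (~1250 MB/s) fit comfortably under the ~1969 MB/s x2 uplink;
# five saturated SATA SSDs (~3000 MB/s) would not.
```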

[–] [email protected] 3 points 11 months ago

Yup. That simple.