this post was submitted on 24 Nov 2023

Data Hoarder


We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm) ). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.


I'm trying to figure out what could be causing the bottleneck on my NAS. By my calculations I should easily be able to hit over 1000 MB/sec sustained reads, but I'm not, and I can't figure out why. Any useful troubleshooting advice would be appreciated.

System:

OS: TrueNAS-SCALE-22.12.2

CPU: Threadripper 1950X

RAM: 128 GB DDR4

Network: 10 Gbit (Intel X710-DA2 -> Mikrotik CRS317-1G-16S+RM -> AQUANTIA AQC107)

Controller: Adaptec PMC ASR-72405

Drives: 8 Seagate Exos X20

ZFS Pool Config: 2 VDevs (4 drives each) in RAIDZ1

SMB Share Benchmark (screenshot)
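
For reference, a rough back-of-the-envelope for the expected sequential read ceiling (per-drive figures are assumed from the Exos X20 datasheet rather than measured, so treat this as optimistic):

2 RAIDZ1 vdevs x 3 data drives each = 6 data drives
6 data drives x ~250-280 MB/s sustained read = roughly 1500-1700 MB/s at the pool
10 GbE line rate = ~1250 MB/s, call it ~1100 MB/s usable after TCP/SMB overhead

On paper the disks should outrun the network, so the 10 GbE path (or something in front of it) ought to be the practical ceiling; real-world RAIDZ streaming reads over SMB usually land below the raw math.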

top 3 comments
[–] [email protected] 1 points 10 months ago (1 children)

Run iozone to test pool.

Small tests will be served from ARC and give you falsely high numbers. 512 GB is the minimum test size, or change the tunable to shrink the ARC.
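
For example, something along these lines, run on the NAS itself so the network stays out of the picture (/mnt/tank is a placeholder for your actual dataset mountpoint, and 512g assumes the 128 GB of RAM above):

iozone -i 0 -i 1 -r 1m -s 512g -f /mnt/tank/iozone.tmp

-i 0 and -i 1 run the sequential write and read tests, -r 1m uses a 1 MiB record size, and -s 512g keeps the test file roughly 4x RAM so reads can't be satisfied from ARC.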

Run iperf to test the network.
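
Something like this with iperf3 (10.0.0.2 stands in for the NAS's address):

on the NAS:     iperf3 -s
on the client:  iperf3 -c 10.0.0.2 -t 30
                iperf3 -c 10.0.0.2 -t 30 -R   # -R reverses direction to test the other way

That isolates the network path from both the disks and SMB.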

FYI, SCALE has a bug limiting ARC to half your physical RAM. Apparently a beta of the fix has been posted on the TrueNAS forum, and it should reach a production release in roughly six months. This won't impact synthetic tests, but in real life you may want the extra cache.
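
Since SCALE is Linux, the cap in question is the standard OpenZFS module parameter, so checking and (temporarily) changing it looks roughly like this; the 96 GiB value is only an example, and persisting it across reboots via an init script or the UI is a separate step:

cat /sys/module/zfs/parameters/zfs_arc_max    # 0 means the built-in default cap
arc_summary | head -n 25                      # current ARC size vs. target

echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max   # ~96 GiB, run as root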

[–] [email protected] 1 points 10 months ago

That's why his numbers are right on the money. Another vdev is needed to break the 1000 MB/sec mark, especially for consistently over-1000 MB/sec numbers.

[–] [email protected] 1 points 9 months ago

I ran an iperf3 test and noticed that my network connection seemed to be maxing out at 6 Gb/sec (750 MB/sec). So I changed NICs on the client machine to use the onboard 10 Gbit NIC (Marvell AQtion) instead of the Cisco/Intel X710-DA. It seems like it was a NIC issue with the X710-DA because now I’m getting way faster speeds. So at this point I’m wondering if it’s a driver issue with the X710-DA, the SFP+ module, or the NIC itself. But at least I know now what was causing the bottleneck.
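
(The exact command isn't shown, but [SUM] lines like the ones below come from a multi-stream run, e.g. iperf3 -c <nas-ip> -P 4 -t 10, where -P sets the number of parallel streams; the stream count here is an assumption.)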

Before (Cisco/Intel X710-DA):

[SUM]   0.00-10.00  sec  7.07 GBytes  6.07 Gbits/sec                  sender
[SUM]   0.00-10.00  sec  7.07 GBytes  6.07 Gbits/sec                  receiver

After NIC Switch (Marvell AQtion):

[SUM]   0.00-10.00  sec  11.0 GBytes  9.48 Gbits/sec                  sender
[SUM]   0.00-10.00  sec  11.0 GBytes  9.48 Gbits/sec                  receiver