TPU does a test where they record the speed while filling up the whole drive, which should give you a good idea: https://www.techpowerup.com/review/corsair-mp600-mini-1-tb/6.html
The slowdown is in write speed; if you mostly read from the drive, don't worry about it much.
I feel this gets stated a lot, but I do wonder how true it is, as opposed to people basically just assuming it due to a lack of testing.
Testing reads in this manner, and especially accounting for SSD background activity, is much more involved and time-consuming than the typical write-to-full tests that get done. So are we just assuming only writes are affected because the latter is the common test?
See this counter-data showing reads being affected:
https://www.anandtech.com/show/13512/the-crucial-p1-1tb-ssd-review/6
https://www.anandtech.com/show/13078/the-intel-ssd-660p-ssd-review-qlc-nand-arrives/5
Tom's Hardware usually does longer write tests with Iometer to analyze cache size and performance; however, I think it may only be a 15-minute run. Here's their review of the 2TB Samsung 990
However, it's becoming less of an issue with modern SSDs that incorporate hybrid caches and faster NAND. As long as the drive has a large enough static cache to absorb most of what you throw at it, or NAND fast enough to at least be bearably quick, it's not that big of a deal. You just have to know the NAND's limitations. Even drives like the Crucial P3 are fine as long as you understand that writes will eventually slow down to 80MB/s or so.
For secondary storage it's less important, but I would still try to leave some free space on the OS drive for fast swap and temp-file performance.
Out of all the Gen4 SSDs, the SN850X is the least affected by performance degradation.
https://old.reddit.com/r/hardware/comments/1146b0s/ssd_sequential_write_slowdowns/
I think one of us, including some other comments with links to similar benchmarks, is misunderstanding what a sequential-write test actually shows.
My question is: if a drive is, say, 90% full, how much slower is it compared to 0% full?
The linked test starts with an empty drive and writes data for 60 seconds, which is not enough to fill the drive. If you use the WD numbers as an example, it gets ~6000MB/s for ~35 seconds before the speed plummets. That's 210GB filled for a 1000GB drive (which matches their methodology: they fill 20% of the drive). Here, the speed drop is the result of the cache filling up and forcing the drive to write directly to the flash memory.
In my question, I am assuming that when the drive is 90% full and idle, the cache is not in use, but I could be wrong. If so, when I start writing, the cache should be used as normal, holding the data temporarily before writing it to flash later. The question is how much slower this entire process is when the drive is full but the cache is not yet saturated. I don't think the test answers that.
That's 210GB filled for a 1000GB drive
No, that's 630GB of TLC space filled: writing in SLC mode requires 3x the space. The dynamic SLC caching stops at some point because the controller still needs space left to fold that 630GB back down to 210GB during idle time, plus a safety margin so the user won't run into a situation where the drive has to "freeze" to catch up with the work. There is always some minimal amount of SLC cache available through overprovisioning, but that's typically only a few GB.
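As a quick sanity check on those figures (a back-of-the-envelope sketch; the 3:1 ratio assumes TLC):

```python
# Back-of-the-envelope check of the figures above (assumes TLC, 3 bits/cell).
slc_speed_mb_s = 6000      # cached write speed from the WD example above
cache_duration_s = 35      # seconds before the speed plummets
bits_per_cell = 3          # TLC; SLC mode stores just 1 bit per cell

user_data_gb = slc_speed_mb_s * cache_duration_s / 1000  # ~210 GB of writes
tlc_space_gb = user_data_gb * bits_per_cell              # ~630 GB of cells tied up

print(f"user data absorbed by the SLC cache: {user_data_gb:.0f} GB")
print(f"TLC capacity occupied until folded:  {tlc_space_gb:.0f} GB")
```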
I don't keep up with SSD benchmarks, but the mechanism behind this phenomenon is nothing mysterious. Most consumer SSDs ship with TLC or QLC NAND, which stores 3 or 4 bits per cell. However, writing the full 3 or 4 bits is slower than writing just one bit per cell, so drives use available empty NAND as an SLC write cache while the drive is still not full. So when you write to a relatively empty drive, your data will go into the DRAM cache on the controller (if there is any being used as a write buffer) and then get written into available NAND in SLC mode. Later on, the drive will consolidate the data down into QLC/TLC properly, but you get the advantage of a fast write as long as there is enough empty NAND to use for SLC caching.
Obviously this falls apart once your drive gets close to full and there is no empty NAND left to write to as SLC cache. This is also why the write performance of budget drives tends to drop off worse than that of higher-end drives: the nicer drives have faster NAND and usually have DRAM on the SSD controller to help performance in the worst case. Enterprise drives often sidestep this issue entirely by using SLC or MLC NAND directly, or by having additional overprovisioning (extra NAND chips).
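To make the mechanism concrete, here's a toy model (all numbers illustrative, not any particular drive, and the free_space/3 cache-sizing rule is an assumption): writes run at SLC-cache speed until the dynamic cache, sized off the remaining free space, is exhausted, then fall back to native TLC speed.

```python
# Toy model of SLC write caching (illustrative numbers, not a real drive).
def simulate_burst(free_space_gb, burst_gb,
                   slc_mb_s=6000, tlc_mb_s=1500, bits_per_cell=3):
    """Split a write burst into a cached (fast) part and a direct-to-TLC
    (slow) part, assuming usable cache ~= free_space / bits_per_cell."""
    cache_gb = free_space_gb / bits_per_cell
    fast_gb = min(burst_gb, cache_gb)
    slow_gb = burst_gb - fast_gb
    seconds = fast_gb * 1000 / slc_mb_s + slow_gb * 1000 / tlc_mb_s
    return fast_gb, slow_gb, burst_gb * 1000 / seconds  # average MB/s

for free in (900, 500, 100):  # free space on a hypothetical 1TB drive
    fast, slow, avg = simulate_burst(free, burst_gb=200)
    print(f"{free:3d} GB free: {fast:6.1f} GB fast + {slow:5.1f} GB slow"
          f" -> avg ~{avg:.0f} MB/s")
```

Same 200GB burst, very different averages: with 900GB free it all lands in cache at full speed, while at 100GB free most of it goes straight to TLC.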
The 990 Pro 4TB has two 16Tb TLC chips (2TB each), and a 442GB SLC cache.
Does that mean the SLC cache is included in that 4TB, or is it separate? Because if it were separate, that would imply it's there to be used even when the TLC chips are completely filled, so the cache speed would not decrease when full, only the writing to TLC flash afterwards.
Unless that's how overprovisioning works? The 990 Pro has 370GB of overprovisioning within the TLC flash and a 442GB SLC cache; together they roughly cancel out to give a total of 4TB capacity, which I guess would explain why the cache runs out when the drive is full.
I can't speak to the details of the 990 Pro specifically; I don't keep up with individual drives that closely. I would guess that your understanding is correct, but someone else on the sub can probably chime in with the details for the current crop of high-end drives.
It's proportional: the cache decreases as the available space decreases.
Look, for example, at this SSD: https://www.techpowerup.com/ssd-specs/samsung-990-pro-2-tb.d862
It has up to 216GB of dynamic SLC cache - this repurposes free TLC NAND as SLC.
It also has a 10GB static SLC cache - this is dedicated, separate from the TLC NAND.
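Putting those two numbers together, here's a rough sketch of how the total cache shrinks as the drive fills (the free_space/3 sizing rule is my assumption, not Samsung's actual firmware policy):

```python
# Rough sketch of total SLC cache vs. fill level, using the 990 Pro 2TB
# figures above. The free_space/3 rule is an assumption, not Samsung's
# actual firmware policy.
STATIC_GB, DYNAMIC_CAP_GB, CAPACITY_GB = 10, 216, 2000

def total_slc_cache_gb(used_gb):
    free_gb = CAPACITY_GB - used_gb
    dynamic = min(DYNAMIC_CAP_GB, free_gb / 3)  # SLC mode uses 3x the cells
    return STATIC_GB + max(dynamic, 0.0)

for used in (0, 1000, 1500, 1900, 2000):
    print(f"{used:4d} GB used -> ~{total_slc_cache_gb(used):5.1f} GB SLC cache")
```

The dynamic portion stays pegged at its cap until free space drops below roughly 3x that cap, then shrinks toward zero, leaving only the 10GB static cache on a completely full drive.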
your data will go into the DRAM cache on the controller
The DRAM cache on NVMe SSDs isn't used for user data; it mainly holds the controller's mapping tables.
https://www.reddit.com/r/zfs/comments/wapr7a/intel_to_wind_down_optane_memory_business_3d/ii6vb3m/
I just overprovision my SSDs, SATA and NVMe.
I think StorageReview (on enterprise products) preconditions the drives for a while before testing, and they might be kinda full for testing?
ServeTheHome also tests against a moderately full SSD. But that's their only set of data.
I just go with /r/NewMaxx's advice.
Almost everything you hear is exaggerated. Fill the drive; just leave some room to work with, is all.
You can always buy enterprise grade disks if you are concerned about this.
The effect is still present, but it becomes less pronounced with more spare flash.
It's also bad for durability, as it impairs wear leveling.
This is precisely why many people still suggest HDDs for stuff that doesn't need to be as fast.
If it doesn't need to be that fast, why would you care about a 10-50% slowdown from peak SSD speeds?
HDDs are for cold storage and very large storage needs.
So that you can have an SSD that's at 100% speed...
For the same price you could have much more storage and faster speeds.
Also, there are several techs like DirectStorage that have the GPU stream assets directly from the SSD; why cripple that?
BTW, HDDs are rarely used for cold storage; cold storage means inactive/offline and normally means tape.
Ah, yes, the tape which nobody uses. Right.
Are you going to buy less SSD and an HDD, so that you can use the always severely slower HDD more, just so you don't have to, gasp, lose 1% read speed off the SSD every day? Or to avoid losing some write speed, particularly random writes (because that's where the penalty would be), while not really writing anything? No comment.
If by DirectStorage you mean the technology that isn't even in meaningful use yet, and that as of now only makes a clear difference on SATA SSDs? I have no comment on that one either.
Obviously, if you're at 75% and up, it'd be a good time to look around for another drive. But sitting there, not daring to fill it up any more because you're scared? That is pointless, pedantic, and stupid.
Also, there are several techs like DirectStorage that have the GPU stream assets directly from the SSD; why cripple that?
Why would the reads be crippled?
Does anyone test playing games for long periods of time on external SSDs?
If you don't do much writing or mixed I/O (e.g., a database workload), it doesn't really matter how full you keep your SSD. Writing slows (and write amplification goes up) as the disk fills because garbage collection has to work harder. As you may well know, NAND media consists of a set of blocks, and each block contains a set of pages. Writes happen at the page level, but erases happen at the block level. As data is overwritten or trimmed, some of the data in a given block is no longer valid. When a block is garbage collected, the penalty lies in the amount of still-valid data that has to be copied to another block. The fuller you keep the SSD, the more laden each block is with data you still care about, which has to be moved every time garbage collection is invoked. But if you're doing read-mostly work, it probably doesn't matter.
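To put a rough number on that penalty (a standard first-order model, not something from the linked reviews): if the block garbage collection picks is a fraction v valid, reclaiming it frees (1 - v) of a block but costs v of a block in copies, so each logical write costs roughly 1/(1 - v) physical writes.

```python
# First-order write-amplification model: if GC victim blocks are a fraction
# `v` valid, each erase frees (1 - v) of a block at the cost of copying v of
# a block, so one logical page write costs ~1/(1 - v) physical page writes.
def write_amplification(valid_fraction: float) -> float:
    return 1.0 / (1.0 - valid_fraction)

for v in (0.25, 0.50, 0.75, 0.90):
    print(f"victims {v:.0%} valid -> write amplification "
          f"~{write_amplification(v):.1f}x")
```

At 90%-valid victims, every user write turns into roughly ten flash writes, which is where both the slowdown and the extra wear come from.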
When you overwrite data on a flash disk, it doesn't just go to the same location and replace the old data like on a hard disk; it writes to a new location and records that location as "valid" for the given block address.
NAND cells are erased when you need to free up space for new writes - but the erase block is large, so large that whatever you're erasing likely contains data you need to keep. You have to move that valid data to a new place and then erase the block, and this slows the whole operation down.
If you have very little unallocated space on your disk, more and more operations will require this shuffle, and the shuffle itself becomes less efficient (you can't wait for an "optimal" block to erase, because they all contain 90% or more valid data that must be moved). A packed drive is therefore less efficient, but two different-size drives at the same % fill won't see the same performance loss, because the larger one still has more scratch space to work with (most consumer drives set this aside as an SLC "cache" to enable snappy performance and more efficient stripe packing, but that cache shrinks as the drive fills).
Tl;Dr - you don't need to keep 800GB of your 4TB drive free, but keeping 200GB or so free will save you from seeing any performance loss. If you use any consumer SSD in a 24/7 server workload, they'll all hit a wall eventually, because they're not designed for sustained performance.
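If you want to see that shuffle emerge from first principles, here's a toy greedy-GC simulation (purely illustrative; real FTLs are far more sophisticated, and the block/page counts are scaled way down):

```python
# Toy greedy-GC simulation: uniform random page overwrites, measuring
# steady-state write amplification at different fill levels.
import random

def simulate_wa(fill_fraction, blocks=128, pages_per_block=64,
                writes=100_000, seed=0):
    rng = random.Random(seed)
    user_pages = int(blocks * pages_per_block * fill_fraction)
    live = [set() for _ in range(blocks)]  # live logical pages per block
    used = [0] * blocks                    # programmed page slots per block
    where = {}                             # logical page -> physical block
    free = list(range(blocks))
    cur = free.pop()
    physical = 0

    def gc():
        # Greedy: reclaim the block with the fewest live pages, rewriting its
        # survivors. (A real controller copies out before erasing; the write
        # accounting is the same.)
        nonlocal physical
        victim = min(range(blocks), key=lambda b: len(live[b]))
        physical += len(live[victim])      # survivors get rewritten
        used[victim] = len(live[victim])   # erase, then program survivors back
        return victim

    def program(lp):
        nonlocal cur, physical
        if used[cur] == pages_per_block:   # current block is full
            cur = free.pop() if free else gc()
        live[cur].add(lp)
        where[lp] = cur
        used[cur] += 1
        physical += 1

    for lp in range(user_pages):           # initial sequential fill
        program(lp)
    physical = 0                           # measure steady state only
    for _ in range(writes):
        lp = rng.randrange(user_pages)
        live[where[lp]].discard(lp)        # invalidate the old copy
        program(lp)
    return physical / writes

for fill in (0.50, 0.70, 0.85, 0.95):
    print(f"{fill:.0%} full -> write amplification ~{simulate_wa(fill):.2f}")
```

The amplification climbs sharply as the drive fills, because the greedy victim picker can no longer find mostly-stale blocks and has to drag more and more valid pages along with every erase.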