I always hear people saying you need to leave ~20% of the space on your SSD free, otherwise you'll suffer major slowdowns. There's no way I'm buying a 4TB drive and then leaving 800GB of it empty; that's ridiculous.
Now, obviously, I know there's truth to it. I have a Samsung 850 Evo right now that's 87% full, and a quick CrystalDiskMark test shows some of its write speeds have dropped to about a third of what reviews measured.
I'm sure the amount of performance loss varies between drives, and to me that would be a big factor in deciding which one to buy. AnandTech used to test drives both empty and full as part of their review suite (here, for example), but they don't have reviews for the more interesting drives from the last couple of years, like the 990 Pro, SN850X, or KC3000.
Is anyone else doing this kind of benchmark on both an empty and a filled drive? I'd much rather know exactly how bad filling a drive gets than throw 20% of it away (some people even suggest keeping it no more than 50% full) as a rule of thumb.
Out of all the Gen4 SSDs, the SN850X is the least affected by performance degradation.
https://old.reddit.com/r/hardware/comments/1146b0s/ssd_sequential_write_slowdowns/
I think one of us is misunderstanding what these sequential write tests actually show (this goes for some of the other comments linking similar benchmarks too).
My question is: if a drive is, say, 90% full, how much slower is it compared to 0% full?
The linked test starts with an empty drive and writes data for 60 seconds, which is not enough to fill it. Taking the WD numbers as an example, it sustains ~6000MB/s for ~35 seconds before the speed plummets. That's 210GB filled on a 1000GB drive (which matches their methodology; they fill 20% of the drive). The speed drop here is the result of the cache filling up and forcing the drive to write directly to the flash memory.
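To spell the arithmetic out (rough numbers read off their chart, so treat them as approximate):

```python
# Back-of-envelope for the WD run in the linked test (approximate figures).
write_speed_mbps = 6000     # ~6000 MB/s during the fast, cached phase
fast_phase_seconds = 35     # roughly how long before the speed plummets
drive_capacity_gb = 1000    # 1TB drive

data_written_gb = write_speed_mbps * fast_phase_seconds / 1000
fill_fraction = data_written_gb / drive_capacity_gb
print(f"~{data_written_gb:.0f} GB written before the slowdown "
      f"(~{fill_fraction:.0%} of the drive)")
# -> ~210 GB written, ~21% of the drive
```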
In my question I'm assuming that when the drive is 90% full and idle, the cache isn't in use, though I could be wrong. If that's the case, then once I start writing, the cache should be used as normal, holding the data temporarily before it gets written to flash later. The question is how much slower that whole process is on a full drive, as long as the cache isn't saturated. I don't think the test answers that.
No, that's 630GB of TLC space filled; writing in SLC mode takes 3x the space. The dynamic SLC caching stops at some point because the controller still needs space left to rewrite that 630GB back down to 210GB during idle time, plus a safety margin, so the user never hits a situation where the drive has to "freeze" to catch up on the work. There's always some minimal amount of SLC cache available through overprovisioning, but that's typically only a few GB.
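One way to put rough numbers on that reasoning (this is just my own reading of it; the actual reserve and safety margin are up to each controller's firmware, so the figures are purely illustrative):

```python
# Rough sketch of dynamic SLC cache headroom on a TLC drive.
# The 100GB safety margin is a guess of mine, not vendor data.
def slc_cache_estimate_gb(free_gb, safety_margin_gb=100):
    # Caching C GB of user data occupies 3*C GB of flash in SLC mode,
    # and the controller wants roughly another C GB of TLC space free
    # to fold that data back down during idle time, plus a margin:
    #   3*C + C + margin <= free  ->  C <= (free - margin) / 4
    return max(0.0, (free_gb - safety_margin_gb) / 4)

for free_gb in (1000, 500, 100):  # hypothetical free space on a 1TB drive
    print(f"{free_gb} GB free -> ~{slc_cache_estimate_gb(free_gb):.0f} GB of SLC cache")
# 1000 GB free -> ~225 GB (same ballpark as the ~210 GB seen in the test)
#  500 GB free -> ~100 GB
#  100 GB free -> ~0 GB   (only the small static/overprovisioned cache is left)
```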