Give it a full read, which you should do anyway to check your backups?
Uptalking?
Yeah, I probably should. An idea I had was to run a manual check of the latest Time Machine backup against the data partition. This is on a Mac.
That would work. Actually, if it's a constellation that supports TRIM (OS, filesystem, whatever it sees on the USB bridge - see this to get an idea of how complex things can get), reading the saved backup might be equivalent to reading the whole SSD. Even if you've used only 64 GB of a 1 TB drive, if the rest is TRIMed, nothing more would be "really" read even if you do a full badblocks pass (or dd to /dev/null, or any other full-read test). Sure, it'll take a while to feed 900+ GB of zeroes (or whatever the TRIMed sectors return) over USB, but not much will really be read from the SSD.
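Whether TRIM is even in play can be checked on the Mac side, at least for SATA/NVMe-attached drives - USB bridges usually don't pass TRIM through unless they support UASP + UNMAP, so an external SSD often just won't report it:

    # look for "TRIM Support: Yes/No" in the drive's entry
    system_profiler SPSerialATADataType SPNVMeDataType | grep -i trim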
That's really complex. So dd isn't really reading the whole thing after all? Gosh!
Sorry, what is a full read? On macOS specifically.
There's no GUI way of doing this AFAIK; I'd guess it involves doing some kind of dd > /dev/null.
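Something like this, assuming the backup SSD shows up as disk4 (check with diskutil list first; the raw rdisk node bypasses the buffer cache and is much faster, and hitting Ctrl+T while it runs prints progress):

    # full read of the whole device, discarding the data
    sudo dd if=/dev/rdisk4 of=/dev/null bs=1m

dd aborts with a non-zero exit status if it hits a read error, so a clean finish is a (crude) pass.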
That depends on the controller firmware.
But if you want to be sure, just do a read-only badblocks test (if you use Linux). That'll force the controller to read all blocks and (hopefully) rewrite those it finds to be weak.
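Something like this (badblocks' default mode is read-only, nothing gets written; sdX is a placeholder for the actual device):

    # -s shows progress, -v reports errors; -b 4096 avoids the 32-bit
    # block-count limit badblocks hits on big drives at its 1 KiB default
    sudo badblocks -b 4096 -sv /dev/sdX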
It's a Mac actually, maybe I should have mentioned that. Not sure what the best way is here to "read all blocks" of a drive. Maybe a dd command > /dev/null?
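If you do go the dd route, first make sure you have the right device node (assuming the SSD is attached over USB):

    # list only external physical disks so you don't read the boot drive
    diskutil list external physical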
It would be impossible to guess without knowledge of the internal workings of a particular SSD. For a NAND-specific file system I've implemented (not an SSD, but a device using raw SLC NAND), there was a block refresh immediately after an ECC error was detected on read, and also a background process slowly checking all the pages in use (one week for a full cycle). The background scan started from a randomized point each time after power-on.
Makes sense. I guess leaving it powered on and idle for some time should be part of the routine. Then again, there's a limit to how far one can go. If the routine ended up being "power up the drive and use it actively for at least 4 weeks", it would just become too much.
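A middle ground might be a tiny script run every month or so. Rough sketch, assuming the same hypothetical /dev/rdisk4 from above:

    #!/bin/sh
    # monthly-ish "scrub": full read so the controller touches every block
    DISK=/dev/rdisk4    # adjust to your drive (see diskutil list)
    if dd if="$DISK" of=/dev/null bs=1m 2>/tmp/scrub-dd.log; then
        echo "$(date): full read of $DISK OK" >> "$HOME/scrub-history.log"
    else
        echo "$(date): full read of $DISK FAILED, see /tmp/scrub-dd.log" >> "$HOME/scrub-history.log"
    fi

(Run it as root or with sudo, since reading the raw device needs it.)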
I wish there were just a simple feature to click, and a progress bar showing that it did all this, without us having to figure things out ourselves.