Having been burned many times in the past, I won't even trust 40 GB to a Seagate drive let alone 40 TB.
Even in enterprise arrays where they're basically disposable when they fail, I'm still wary of them.
Still, it's a good thing if it means energy savings at data centers.
For home and SMB use there's already a notable absence of backup and archival technologies to match available storage capacities. Developing one without the other seems short sighted.
I still wonder, what's stopping vendors from producing "chonk store" devices. Slow, but reliable bulk storage SSDs.
Just in terms of physical space, you could easily fit 200 micro SD cards in a 2.5" drive, have everything replicated five times and end up with a reasonably reliable device (extremely simplified, I know).
I just want something for lukewarm storage that doesn't require a datacenter and/or 500 W of continuous power draw.
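Rough back-of-the-envelope, just to sanity-check the idea; the card size, failure rate, and 5x replication below are made-up numbers for illustration, not anything from the article:

```python
# Sketch of the "200 microSD cards in a 2.5-inch shell" idea.
# All numbers are assumptions for illustration only.
num_cards = 200          # cards that might physically fit in a 2.5" enclosure
card_capacity_tb = 1.0   # assumed capacity per card
replication = 5          # every block stored on 5 different cards

raw_tb = num_cards * card_capacity_tb
usable_tb = raw_tb / replication
print(f"raw: {raw_tb:.0f} TB, usable with {replication}x replication: {usable_tb:.0f} TB")

# A given block is lost only if every one of its replicas dies.
p_card_fails = 0.05      # assumed annual failure probability per card
p_block_lost = p_card_fails ** replication
print(f"annual chance a given replica group is lost: {p_block_lost:.2e}")
```

With those assumed numbers you'd get about 40 TB usable out of 200 TB raw, and the odds of losing any one replica group in a year drop to roughly 3 in 10 million.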
Cost. The speed of flash storage is an inherent quality and not something manufacturers are selecting for typically. I assure you if they knew how to make some sort of Super MLC they absolutely would.
It's not inherent in terms of "more store=more fast".
You could absolutely use older, more established production nodes to produce higher quality, longer lasting flash storage. The limitation is hardly ever space, it's heat. So putting that kind of flash, with intentionally slowed-down controllers, into regular 2.5" or even 3.5" form factors should be possible.
Cost could be an issue because the market isn't seen as very large.
They make bulk storage SSDs with QLC for enterprise use.
The reason they're not used for consumer use cases yet is that raw NAND chips are still more expensive than hard drives. People don't want to pay $3k for a 50TB SSD if they can buy a $500 50TB HDD and they don't need the speed.
For what it's worth, 8TB TLC PCIe 3 U.2 SSDs are only $400 used on eBay these days, which is a pretty good option if you're trying to move away from noisy, slow HDDs. Four of those in RAID 5 plus a DIY NAS would get you 24TB of formatted, super fast Nextcloud/Immich storage for ~$2k.
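Sketching that RAID 5 math out, using only the numbers quoted above (drive count, size, and used price are that poster's figures, not a build guide):

```python
# RAID 5 capacity and cost sketch using the figures quoted above.
drives = 4
drive_tb = 8            # 8TB U.2 SSDs
drive_price = 400       # used eBay price quoted above

raid5_usable_tb = (drives - 1) * drive_tb   # one drive's worth of capacity goes to parity
total_cost = drives * drive_price           # drives only; NAS hardware not included

print(f"usable: {raid5_usable_tb} TB for ${total_cost} in drives "
      f"(~${total_cost / raid5_usable_tb:.0f}/TB before the NAS itself)")
```

That works out to 24 TB usable for $1600 in drives, or roughly $67 per usable TB before you add the NAS.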
My first Seagate HDD started clicking as I was moving data to it from my older drive, just after I purchased it. This was way back in the '00s. In a panic, I started moving data back to my older drive (because I was moving instead of copying), and then THAT one started having issues too.
Turns out when I overclocked my CPU I had forgotten to lock the PCI bus, which resulted in an effective overclock of the HDD interfaces. It was ok until I tried moving mass amounts of data and the HDD tried to keep up instead of letting the buffer fill up and making the OS wait.
I reversed the OC and despite the HDDs getting so close to failure, both of them lasted for years after that without further issue.
Same here. Been burned by SSDs too, though: a Samsung Evo Pro drive crapped out on me just months after buying it. It was under warranty and replaced at no cost, but I still lost all my data and config/settings.
Any disk can and will fail at some point in time. Backup is your best friend. Some sort of disk redundancy is your second best friend.
I feel the exact same about WD drives and I'm quite happy since I switched to Seagate.
Don’t look at Backblaze drive reports then. WD is pretty much all good, Seagate has some good models that are comparable to WD, but they have some absolutely unforgivable ones as well.
Not every Seagate drive is bad, but nearly every chronically unreliable drive in their reports is a Seagate.
Personally, I’ve managed hundreds of drives in the last couple of decades. I won’t touch Seagate anymore due to their inconsistent reliability from model to model (and when it’s bad, it’s bad).
Don’t look at Backblaze drive reports then
I have.
But after personally having suffered 4 complete disk failures of WD drives in less than 3 years, it's really more of a "fool me once" situation.
It used to be important to check the color of WD drives. I can't remember all of them, but off the top of my head I remember Blue dying the most. They used to have Black, Red, and maybe a Green model; now they have Purple and Gold as well. Each was designated for certain purposes / reliability levels.
Source: Used to be a certified Apple/Dell/HP repair tech, so I was replacing hard drives daily.
Gold is the enterprise ones. Black is enthusiast, blue is desktop, red is NAS, purple is NVR, green is external. Green you almost certainly don't want (they do their own power management), red is likely to be SMR. But otherwise they're not too different. If you saw a lot of blues failing, it's probably because the systems you supported used blue almost exclusively.
That's pretty impressive. A couple of those and you could probably download the next Call of Duty.
Incoming 1TB video games. Compression? Who the fuck needs compression.
Black ops 6 just demanded another 45 GB for an update on my PS5, when the game is already 200 GB. AAA devs are making me look more into small indie games that don’t eat the whole hard drive to spend my money on, great job folks.
Edit: meant to say that instead of buying a bigger hard drive, I'll support a small dev instead.
That is absolutely egregious. A 200GB game with a 45GB update? You'd be lucky to see me install anything over 20-30GB these days, because I consider that the maximum acceptable amount of bloat for a game.
I arrived at that point a few years ago. You're in for a world of discovery. As an fps fan myself I highly recommend Ultrakill. There's a demo so you don't have to commit.
Oh, they'll do compression alright, they'll ship every asset in a dozen resolutions with different lossy compression algos so they don't need to spend dev time actually handling model and texture downscaling properly. And games will still run like crap because reasons.
Optimizations are relics of the past!
I don't know about that. These are spinning disks so they aren't exactly going to be fast when compared to solid state drives. Then again, I wouldn't exactly put it past some of the AAA game devs out there.
Why in the world does this seem to use an inaccurate depiction of the Xbox Series X expansion card for its thumbnail?
This picture: brought to you by some bullshit AI
I’ll finally have enough space for my meme screenshots.
Or the 8k photos of vacation dinners.
Oh wow does it come with glowing green computery looking stuff like in the picture
I do like that the picture on an article about a 40 TB drive is clearly labelled as 1 TB. Like couldn't they have edited the image?
I've been buying computer stuff for like 30 years and never once has any of it had any weird glowing stuff like on the box
I remember bragging when my computer had 40GB of storage.
I bought my first HDD second hand. It was advertised as 40MB. But it was 120MB. How happy was young me?
Upgrading from 20MB to 40MB was so fucking boss.
I remember switching away from floppies to a much faster, enormous 80MB hard drive. Never did come close to filling that thing.
Today, my CPU's cache is larger than that hard drive.
That's a lot of ~~porn~~ high quality videos.
I deal with large data chunks, and 40TB drives are an interesting idea... until you consider one failing.
RAIDs and arrays for these large data sets still make more sense than putting all the eggs in a smaller number of baskets.
These are literally only sold by the rack to data centers.
What are you going on about?
You'd still put the 40TB drives in a raid? But eventually you'll be limited by the number of bays, so larger size is better.
They're also ignoring how many times this conversation has been had...
We never stopped raid at any other increase in drive density, there's no reason to pick this as the time to stop.
RAID 5 is becoming less viable due to increasing rebuild times, pushing people toward RAID 1 instead. But new drives have better IOPS too, so maybe it's not as severe as predicted.
Of course, because you don't want to lose the data if one of the drives dies. And backing up that much data is painful.
The main issue I see is that the gulf between capacity and transfer speed is now so vast with mechanical drives that restoring the array after drive failure and replacement is unreasonably long. I feel like you'd need at least two parity drives, not just one, because letting the array be in a degraded state for multiple days while waiting for the data to finish copying back over would be an unacceptable risk.
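To put a rough number on that rebuild concern, here's a minimal sketch assuming a 40 TB drive and a sustained sequential rate of about 275 MB/s; the throughput figure is an assumption (the article doesn't quote HAMR write speeds), and real rebuilds under array load will be slower than this best case:

```python
# Best-case, purely sequential resilver time for one 40 TB drive.
capacity_bytes = 40e12        # 40 TB drive
throughput_bps = 275e6        # assumed ~275 MB/s sustained rate

hours = capacity_bytes / throughput_bps / 3600
print(f"best-case full-drive rebuild: ~{hours:.0f} hours")
# Real-world rebuilds add random I/O and ongoing array load on top of this,
# so multi-day degraded windows are plausible, which is why a second parity
# drive starts to look necessary at these capacities.
```

Even the best case comes out around 40 hours of the array running degraded.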
I upgraded my 7-year-old 4TB drives to 14TB drives (both setups RAID 1). A week later, one of the 14TB drives failed. It was a tense time waiting for a new drive and the 24 hours or so of resilvering. No issues since, but boy was that an experience. I've since added some automated backup processes.
I guess the idea is you'd still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.