this post was submitted on 02 Jun 2025
462 points (96.6% liked)

Technology

(page 4) 50 comments
[–] [email protected] 108 points 1 week ago (6 children)

Having been burned many times in the past, I won't even trust 40 GB to a Seagate drive, let alone 40 TB.

Even in enterprise arrays where they're basically disposable when they fail, I'm still wary of them.

[–] [email protected] 53 points 1 week ago (4 children)

Still, it's a good thing if it means energy savings at data centers.

For home and SMB use, there's already a notable absence of backup and archival technologies to match available storage capacities. Developing one without the other seems short-sighted.

[–] [email protected] 27 points 1 week ago (3 children)

I still wonder what's stopping vendors from producing "chonk store" devices: slow but reliable bulk-storage SSDs.

Just in terms of physical space, you could easily fit 200 micro SD cards in a 2.5" drive, have everything replicated five times and end up with a reasonably reliable device (extremely simplified, I know).

I just want something for lukewarm storage that doesn't require a datacenter and/or 500 W of continuous power draw.
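A quick back-of-envelope check on the replication idea above (a sketch with hypothetical numbers: a 5% annual failure rate per card, and independent failures, which co-packaged cards wouldn't really have):

```python
def replica_loss_probability(p_card: float, replicas: int) -> float:
    """Chance that every copy of a block dies within the same year,
    assuming independent failures (optimistic for cards in one enclosure)."""
    return p_card ** replicas

# Hypothetical 5% annual failure rate per microSD card, 5 replicas:
print(f"{replica_loss_probability(0.05, 5):.2e}")
```

The point survives the simplification: replication drives the chance of losing all copies down geometrically, so correlated failure modes (shared controller, power, firmware) would dominate long before the raw math does.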

[–] aBundleOfFerrets 13 points 1 week ago (1 children)

Cost. The speed of flash storage is an inherent quality, not something manufacturers typically select for. I assure you, if they knew how to make some sort of Super MLC, they absolutely would.

[–] [email protected] 13 points 1 week ago

It's not inherent in terms of "more store=more fast".

You could absolutely use older, more established production nodes to produce higher-quality, longer-lasting flash storage. The limitation is hardly ever space; it's heat. So putting that kind of flash storage, with intentionally slowed-down controllers, into regular 2.5" or even 3.5" form factors should be possible.

Cost could be an issue because the market isn't seen as very large.

[–] [email protected] 7 points 1 week ago (1 children)

They make bulk-storage SSDs with QLC for enterprise use.

https://youtu.be/kBTdcdJC_L4

The reason they're not used for consumer use cases yet is that raw NAND chips are still more expensive than hard drives. People don't want to pay $3k for a 50 TB SSD when they can buy a $500 50 TB HDD and they don't need the speed.

For what it's worth, 8 TB TLC PCIe 3.0 U.2 SSDs are only $400 used on eBay these days, which is a pretty good option if you're trying to move away from noisy, slow HDDs. Four of those in RAID 5 plus a DIY NAS would get you 24 TB of formatted, super-fast Nextcloud/Immich storage for ~$2k.
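The capacity arithmetic in that last suggestion can be sanity-checked with the standard RAID 5 formula (usable space is n-1 drives; filesystem overhead ignored):

```python
def raid5_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID 5 spends one drive's worth of capacity on parity."""
    return (n_drives - 1) * drive_tb

# Four 8 TB drives, as in the comment above:
print(raid5_usable_tb(4, 8))  # 24
```

So four 8 TB drives do give 24 TB usable, with one drive's worth of parity to survive a single failure.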

[–] [email protected] 17 points 1 week ago

My first Seagate HDD started clicking as I was moving data to it from my older drive, just after I purchased it. This was way back in the 00s. In a panic, I started moving data back to my older HDD (because I was moving instead of copying), and then THAT one started having issues also.

Turns out when I overclocked my CPU I had forgotten to lock the PCI bus, which resulted in an effective overclock of the HDD interfaces. It was ok until I tried moving mass amounts of data and the HDD tried to keep up instead of letting the buffer fill up and making the OS wait.

I reversed the OC, and despite the HDDs coming so close to failure, both of them lasted for years after that without further issue.

[–] [email protected] 11 points 1 week ago (1 children)

Same here. Been burned by SSDs too, though: a Samsung Evo Pro drive crapped out on me just months after I bought it. It was under warranty and replaced at no cost, but I still lost all my data and config/settings.

[–] [email protected] 19 points 1 week ago

Any disk can and will fail at some point in time. Backup is your best friend. Some sort of disk redundancy is your second best friend.

[–] [email protected] 10 points 1 week ago* (last edited 1 week ago) (1 children)

I feel the exact same about WD drives and I'm quite happy since I switched to Seagate.

[–] [email protected] 19 points 1 week ago* (last edited 1 week ago) (1 children)

Don’t look at Backblaze drive reports then. WD is pretty much all good; Seagate has some good models that are comparable to WD, but they have some absolutely unforgivable ones as well.

Not every Seagate drive is bad, but nearly every chronically unreliable drive in their reports is a Seagate.

Personally, I’ve managed hundreds of drives in the last couple of decades. I won’t touch Seagate anymore due to their inconsistent reliability from model to model (and when it’s bad, it’s bad).

[–] [email protected] 10 points 1 week ago (1 children)

Don’t look at Backblaze drive reports then

I have.

But after personally having suffered 4 complete disk failures of WD drives in less than 3 years, it's really more like a "fool me once" situation.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago) (1 children)

It used to be pertinent to check the color of WD drives. I can't remember all of them, but off the top of my head I remember Blue dying the most. They used to have Black, Red, and maybe a Green model; now they have Purple and Gold as well. Each was designated for certain purposes / reliability.

Source: Used to be a certified Apple/Dell/HP repair tech, so I was replacing hard drives daily.

[–] [email protected] 6 points 1 week ago (3 children)

Gold is the enterprise line. Black is enthusiast, Blue is desktop, Red is NAS, Purple is NVR, Green is external. Green you almost certainly don't want (they do their own power management), and Red is likely to be SMR. But otherwise they're not too different. If you saw a lot of Blues failing, it's probably because the systems you supported used Blue almost exclusively.

[–] [email protected] 85 points 1 week ago (5 children)

So all the other hard drives will be cheaper now, right? Right?

[–] [email protected] 64 points 1 week ago

That's pretty impressive. A couple of those and you could probably download the next Call of Duty.

[–] [email protected] 61 points 1 week ago (4 children)

Incoming 1 TB video games. Compression? Who the fuck needs compression.

[–] [email protected] 40 points 1 week ago* (last edited 6 days ago) (11 children)

Black ops 6 just demanded another 45 GB for an update on my PS5, when the game is already 200 GB. AAA devs are making me look more into small indie games that don’t eat the whole hard drive to spend my money on, great job folks.

E) meant to say instead of buying a bigger hard drive I’ll support a small dev instead.

[–] [email protected] 19 points 1 week ago (1 children)

That is absolutely egregious. A 200 GB game with a 45 GB update? You'd be lucky to see me install a game that's more than 20-30 GB these days, because that's the most bloat I consider acceptable.

[–] [email protected] 7 points 1 week ago (1 children)

I arrived at that point a few years ago. You're in for a world of discovery. As an FPS fan myself, I highly recommend Ultrakill. There's a demo, so you don't have to commit.

[–] sugar_in_your_tea 13 points 1 week ago (4 children)

Oh, they'll do compression all right: they'll ship every asset in a dozen resolutions with different lossy compression algos so they don't need to spend dev time actually handling model and texture downscaling properly. And games will still run like crap, because reasons.

[–] [email protected] 5 points 1 week ago

Optimizations are relics of the past!

[–] [email protected] 5 points 1 week ago (3 children)

I don't know about that. These are spinning disks so they aren't exactly going to be fast when compared to solid state drives. Then again, I wouldn't exactly put it past some of the AAA game devs out there.

[–] [email protected] 57 points 1 week ago (1 children)

Why in the world does this seem to use an inaccurate depiction of the Xbox Series X expansion card for its thumbnail?

[–] [email protected] 12 points 1 week ago

This picture: brought to you by some bullshit AI

[–] [email protected] 25 points 1 week ago (1 children)

I’ll finally have enough space for my meme screenshots.

[–] [email protected] 12 points 1 week ago

Or the 8k photos of vacation dinners.

[–] [email protected] 24 points 1 week ago (3 children)

Oh wow, does it come with glowing green computery-looking stuff like in the picture?

[–] [email protected] 20 points 1 week ago* (last edited 1 week ago) (1 children)

I do like that the picture on an article about a 40 TB drive is clearly labelled as 1 TB. Like couldn't they have edited the image?

[–] [email protected] 5 points 1 week ago (5 children)

I've been buying computer stuff for like 30 years and never once has any of it had any weird glowing stuff like on the box

[–] [email protected] 15 points 1 week ago (2 children)

I remember bragging when my computer had 40 GB of storage.

[–] [email protected] 9 points 1 week ago (1 children)

I bought my first HDD second hand. It was advertised as 40MB. But it was 120MB. How happy was young me?

[–] [email protected] 3 points 1 week ago

Upgrading from 20MB to 40MB was so fucking boss.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

I remember switching away from floppies to a much faster, enormous 80 MB hard drive. Never did come close to filling that thing.

Today, my CPU's cache is larger than that hard drive.

[–] [email protected] 11 points 1 week ago

thats a lot of ~~porn~~ high quality videos

[–] [email protected] 8 points 1 week ago (4 children)

I deal with large data chunks, and 40 TB drives are an interesting idea... until you consider one failing.

RAIDs and arrays for these large data sets still make more sense than putting all the eggs in fewer, bigger baskets.

[–] [email protected] 0 points 6 days ago

These are literally only sold by the rack to data centers.

What are you going on about?

[–] [email protected] 16 points 1 week ago* (last edited 1 week ago) (3 children)

You'd still put the 40 TB drives in a RAID, but eventually you'll be limited by the number of bays, so larger drives are better.

[–] [email protected] 15 points 1 week ago (1 children)

They're also ignoring how many times this conversation has been had...

We never stopped raid at any other increase in drive density, there's no reason to pick this as the time to stop.

[–] [email protected] 4 points 1 week ago (5 children)

RAID 5 is becoming less viable due to increasing rebuild times, necessitating RAID 1 instead. But new drives have better IOPS too, so maybe it's not as severe as predicted.

[–] [email protected] 3 points 1 week ago

Of course, because you don't want to lose the data if one of the drives dies. And backing up that much data is painful.

[–] [email protected] 8 points 1 week ago (2 children)

The main issue I see is that the gulf between capacity and transfer speed is now so vast with mechanical drives that restoring the array after drive failure and replacement is unreasonably long. I feel like you'd need at least two parity drives, not just one, because letting the array be in a degraded state for multiple days while waiting for the data to finish copying back over would be an unacceptable risk.
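To put a rough number on that rebuild window (assumed figure: ~270 MB/s sustained sequential throughput, which is optimistic for an array under load):

```python
def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Best-case full-drive rewrite time: capacity / sequential speed.
    Real rebuilds with concurrent I/O are considerably slower."""
    seconds = capacity_tb * 1e12 / (throughput_mb_s * 1e6)
    return seconds / 3600

print(f"{rebuild_hours(40, 270):.1f} h")  # roughly 41 h best case
```

At that rate a single 40 TB member takes well over a day and a half to rewrite, during which a single-parity array has no redundancy left, which is the argument for a second parity drive.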

[–] [email protected] 4 points 1 week ago

I upgraded my 7-year-old 4 TB drives to 14 TB drives (both setups RAID 1). A week later, one of the 14 TB drives failed. It was a tense time waiting for a new drive and then the 24 hours or so of resilvering. No issues since, but boy, was that an experience. I've since added some automated backup processes.

[–] [email protected] 5 points 1 week ago

I guess the idea is you'd still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.
