[–] [email protected] 18 points 2 days ago* (last edited 2 days ago) (1 children)

Of course, Altman is referring to chonky enterprise-grade GPUs like those used in the Nvidia DGX B200 and DGX H200 AI platforms—the latter of which OpenAI was the first to take delivery of last year.

You wouldn't be using these for gaming (well, not of the 3D graphics sort).

They run in the tens of thousands of dollars each, as I recall.

Probably more correct to call them "parallel compute accelerator" cards than "GPUs". I don't think they even have a video out.

What they do have is a shit-ton of on-board RAM.
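You can actually see that on-board memory from any CUDA machine; here's a minimal sketch using PyTorch (assuming it's installed and at least one GPU is visible):

```python
import torch

# List each visible GPU with its name and total on-board memory.
# On an H200 this reports roughly 141 GB per GPU; a consumer gaming
# card is more like 8-24 GB.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
```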

EDIT: Oh, apparently those are whole servers containing multiple GPUs.

https://www.trgdatacenters.com/resource/nvidia-dgx-buyers-guide-everything-you-need-to-know/

The NVIDIA DGX B200 is a physical server containing 8 Blackwell GPUs offering 1440GB RAM and 4TB system memory. It also includes 2 Intel CPUs and consumes 14.3kW power at max capacity.

For comparison, the most powerful electric space heater I have draws about a tenth that.
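Back-of-the-envelope in Python, if you're curious (the heater wattage is my own assumed figure for a typical high-end consumer unit):

```python
# Rough power comparison; the heater wattage is an assumption, the DGX
# figure is the max draw quoted in the buyer's guide above.
dgx_b200_kw = 14.3
space_heater_kw = 1.5

print(f"ratio: {dgx_b200_kw / space_heater_kw:.1f}x")  # ~9.5x
print(f"per day: {dgx_b200_kw * 24:.0f} kWh")          # ~343 kWh at full tilt
```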

DGX H200 systems are currently available for $400,000 – $500,000. BasePOD and SuperPOD systems must be purchased directly from NVIDIA. There is a current waitlist for B200 DGX systems.

[–] [email protected] 32 points 2 days ago (2 children)

While the GPUs created aren't used for gaming, the wafers the dies are made from could've been allocated to produce dies for consumer-level graphics cards, right?

[–] [email protected] 2 points 1 day ago

So with datacenter GPUs ("accelerators" is the more accurate term, honestly): historically they were the exact same architecture as Nvidia's gaming GPUs (usually about half to a full generation behind). But in the last five years or so they've moved to their own dedicated architectures.

But more to your question: the actual silicon that got etched and burned into these datacenter GPUs could've been used for anything. It could've become cellular modems, networking ASICs, SDR controllers, mobile SoCs, etc. More importantly, though, these high-dollar datacenter GPUs are usually produced on the newest, most expensive process nodes, so whatever else got made with that wafer capacity would be similarly high-dollar, not basic logic controllers for dollar-store junk.
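To make that trade-off concrete, here's a rough sketch using the standard dies-per-wafer approximation (both die areas are illustrative guesses on my part, not official figures for any actual product):

```python
import math

# Approximate gross dies per 300 mm wafer, ignoring yield and scribe
# lines: pi*r^2/A minus an edge-loss correction term.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(800))  # big datacenter-class die: ~64 gross dies
print(dies_per_wafer(300))  # midrange consumer-class die: ~197 gross dies
```

So under those assumed die sizes, every wafer that goes to big accelerator dies gives up roughly three midrange consumer dies per datacenter die it yields.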

[–] [email protected] 4 points 2 days ago