this post was submitted on 21 May 2025
298 points (96.6% liked)


Absolutely needed to get high efficiency for this beast... as it gets better, we'll become too dependent.

"all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job..."

50 comments
[–] [email protected] 5 points 1 week ago

It's worth it for school essays and prawn jesus though.

[–] [email protected] 4 points 1 week ago (2 children)

Does the article answer the question of what is the footprint of a prompt?

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago)

Basically nothing worth getting angry about.

[–] mindbleach 4 points 1 week ago (1 children)

Local models cannot be worse than playing a video game.

There are low-VRAM models for video that approach one frame per second... on the kind of mid-range cards that'd have low VRAM. A 30 Hz clip lasting 10 seconds would take about five minutes. When was the last time you played a really fancy-looking game for less than five minutes?
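The five-minute figure in the comment above follows directly from the stated numbers; a quick back-of-envelope sketch (the 30 Hz and 10-second values come from the comment, the 1 frame/s generation rate is its assumption):

```python
# A 10-second clip at 30 Hz contains 300 frames.
clip_hz = 30
clip_seconds = 10
frames = clip_hz * clip_seconds        # 300 frames

# Generation speed on a low-VRAM card, per the comment: ~1 frame/s.
gen_rate_fps = 1.0
render_seconds = frames / gen_rate_fps # 300 s of generation time

print(render_seconds / 60)             # -> 5.0 (minutes)
```

So the estimate is just frames divided by generation rate: 300 frames at one frame per second is 300 seconds, i.e. five minutes.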

Now creating the models, yeah, that's still lumbering giants burning money. But mostly thanks to Jevons paradox. The watt-hours needed per hand-wavy unit of training have gone down, so they do a lot more of it. And the result is that laptop-sized models today beat datacenter-sized models a year ago.
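The Jevons-paradox point above can be made concrete with a toy calculation. All numbers here are made up purely for illustration (the comment gives none): per-unit training cost falls, but total consumption still rises because training volume grows faster than efficiency improves.

```python
# Hypothetical baseline: 100 Wh per training unit, 1,000 units trained.
wh_per_unit_old, units_old = 100.0, 1_000
# Hypothetical later state: 10x more efficient, but 50x more training done.
wh_per_unit_new, units_new = 10.0, 50_000

total_old = wh_per_unit_old * units_old   # 100,000 Wh
total_new = wh_per_unit_new * units_new   # 500,000 Wh

# Efficiency improved 10x, yet total energy use went up 5x.
print(total_new > total_old)              # -> True
```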

[–] WhyJiffie 3 points 1 week ago* (last edited 1 week ago) (2 children)

And the result is that laptop-sized models today beat datacenter-sized models a year ago.

That's hardly believable. Do you have any statistics on this? Is this some special edition of a heavy, high-performance gaming laptop with an external GPU attached, and a "datacenter" consisting of two racks filled almost to half?

[–] taladar 1 points 1 week ago

It is mainly due to the orders of magnitude advances in bullshitting people about AI's capabilities.
