I often see people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.
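For anyone who wants a concrete picture: work of this kind typically trains a sparse autoencoder on a model's internal activations so that individual learned features line up with human-interpretable concepts. Here's a toy sketch of that dictionary-learning setup; the dimensions, hyperparameters, and stand-in activation data are my own illustrative assumptions, not details from the paper.

```python
# Toy sketch of a sparse autoencoder (SAE) trained on LLM activations.
# Idea: encode activations into a much wider, sparse feature space so
# individual features tend to fire for one interpretable concept.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse feature codes
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder(d_model=512, d_features=4096)  # sizes are illustrative
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3  # sparsity pressure; a tunable assumption

# Stand-in data: a real run would capture these batches from a hooked
# layer (e.g. the residual stream) of an actual LLM.
activation_batches = [torch.randn(64, 512) for _ in range(100)]
for acts in activation_batches:
    features, recon = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```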

[–] [email protected] 6 points 3 months ago (1 children)

You'd be surprised at the level of unthinking hatred around them. But even setting that aside, I've often seen it said that LLMs have no internal model of what they're talking about, that they're just next-word generators. This research quite clearly contradicts that interpretation.
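One concrete way researchers test for internal representations, as opposed to mere surface word statistics, is to train a linear probe on a frozen model's hidden activations: if a simple classifier can read a concept out of the hidden states, that concept is represented internally. A minimal sketch with synthetic stand-in data (a real probe would use activations captured from an actual LLM layer):

```python
# Minimal linear-probe sketch. Synthetic data: concept-present examples
# are shifted along one hidden direction, mimicking a linearly encoded
# concept in a model's hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model, n = 512, 2000

concept_direction = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)
hidden_states = rng.normal(size=(n, d_model)) + np.outer(labels, concept_direction)

# Train on the first 1500 examples, evaluate on the rest.
probe = LogisticRegression(max_iter=1000).fit(hidden_states[:1500], labels[:1500])
print("probe accuracy:", probe.score(hidden_states[1500:], labels[1500:]))
```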

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago)

"concepts embedded in them"

"internal model"

You used both phrases in this thread, but those are two very different things. It's a stretch to say this research supports the latter.

Yes, LLMs are still next-token generators; that's a descriptive statement about how they operate. They just have embedded knowledge that lets them generate sometimes-meaningful text.
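"Next-token generator" is, literally, the following loop. A greedy-decoding sketch using Hugging Face transformers; the model and prompt are placeholders, and real deployments sample with temperature/top-p rather than a plain argmax:

```python
# Next-token generation, literally: feed tokens in, get a distribution
# over the next token, pick one, append, repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits[:, -1, :]           # scores for the next token
        next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick
        ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))
```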