this post was submitted on 13 Oct 2024
196 points (100.0% liked)

TechTakes

[–] [email protected] 62 points 2 months ago (6 children)

Did someone not know this like, pretty much from day one?

Not the idiot executives who blew all their budget on AI and made up for it with mass layoffs - the people actually interested in it. Was it not clear that there was no “reasoning” going on?

[–] [email protected] 37 points 2 months ago* (last edited 2 months ago) (7 children)

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of "emergent").

So I think this research is useful as a response to these, although I think "fuck off, promptfondler" is pretty good too.

[–] [email protected] 21 points 2 months ago (1 children)

“Language is a virus from outer space”

[–] [email protected] 9 points 2 months ago

I thought it came from Babylonian writing that recoded the brains and planted the languages.

[–] [email protected] 29 points 2 months ago (1 children)

there are a lot of people (especially here, but not only here) who have had the insight to see that this is the case, but there have also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking

so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

[–] [email protected] 13 points 2 months ago

No, they do not, I’m afraid. Hell, I didn’t even know that ELIZA caused people to think it could reason (which worried its creator) until a few years ago.

[–] [email protected] 13 points 2 months ago (3 children)

Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

[–] [email protected] 14 points 2 months ago (1 children)

they do say that, yes. it’s as bullshit as all the other claims they’ve been making

[–] [email protected] 8 points 2 months ago

Which is my point, and, forgive me, I believe is also the point of the research publication.

[–] [email protected] 12 points 2 months ago

They say a lot of stuff.

[–] conciselyverbose 11 points 2 months ago* (last edited 2 months ago)

Yes.

But the lies around them are so excessive that it’s a lot easier for executives of a publicly traded company to make reasonable decisions when they have concrete support for doing so.
