
Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
[–] ricecooker 4 points 1 day ago (1 children)

People need to understand it's a really well-trained parrot that has no idea what it is saying. That's why it can give you chicken recipes and software code: it's seen them before. Then it uses statistics to put together words that usually appear together. It's not thinking at all, despite LLMs using words like "reasoning" or "thinking".
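
For a rough sense of what "statistics of words that usually appear together" means in code, here's a toy bigram sketch in Python; the corpus and the babble function are made up for illustration, and real LLMs work on tokens with vastly more context:

```python
# Toy illustration only: a bigram (word-pair) model that picks each next word
# from the words that followed the previous one in a tiny made-up corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# Count which word tends to follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(word, length=8):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # a statistically likely next word
    return " ".join(out)

print(babble("the"))
```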

[–] mindbleach 1 points 1 day ago (1 children)

It’s not thinking at all

That's a philosophical claim. There's enough abstraction and complexity in these neural networks that their output isn't merely statistically average; it usually makes sense. It's autocomplete that can interpret a garden-path sentence. Or mimic the writing style of HP Lovecraft. Or, occasionally, answer riddles.

There is some set of rules being applied to concepts, here. It's not a Markov chain babbling away without meaning. These models construct whole relevant responses, even when they're wrong.
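
To make "picking the next word from the whole context" concrete, here's a minimal sketch using GPT-2 through the Hugging Face transformers library; the thread doesn't name any particular model or library, so treat this purely as an illustration:

```python
# Minimal sketch of context-conditioned next-token prediction (GPT-2 is an
# assumed stand-in, not a model named in this thread). Unlike the bigram toy
# above, each candidate's score depends on the entire prompt, not just the
# last word.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The old man the"  # start of a classic garden-path sentence
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {float(p):.3f}")
```

Every probability in that printout is conditioned on the full prompt, which is the sense in which this is more than a word-pair chain; whether that amounts to thinking is the point in dispute here.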

[–] ricecooker 1 points 14 hours ago (1 children)

We should define "thinking" before we discuss any further. Thinking is the self-directed process of reasoning, problem-solving, and generating ideas, involving awareness, evaluation, and adaptation beyond reactive pattern recognition.

That said, LLMs predict responses based on context and learned information.

[–] mindbleach 1 points 10 hours ago

That kitchen-sink definition is degreeless. You're drawing a line so distant and steep that every Philosophy 101 question gets a clear answer and that answer is "nope." Brain in a jar? No awareness. Chinese room? Not self-directed. This may overdefine thought to such an extent that being wrong doesn't count. Like if someone has to think twice, the first time was something else.

The second time might not count either, depending on how hard we examine "reactive pattern recognition." Only an explanation of consciousness in terms of unconscious events could possibly explain it.

Thinking is the ability to reason about things. Concrete tools, abstract concepts, whatever. It's all the same process. It differs considerably from person to person. We flat-out do not understand it well enough to pin down how it happens. We have to infer that it has happened, from observed results, the same way using a calculator demonstrates that it's doing math.

LLMs occasionally demonstrate that they're doing thought. The context by which they pick the next word can require reasoning. As a concrete example, they can be given an answer that is wrong, figure it must be a joke, and deliberately make it wrong-er. That's evaluation and adaptation. The model, at runtime, spotted bullshit and inferred a reason for bullshit. Failure modes that merely satisfy grammar rules include trying to justify it anyway, or "Yes, by which I mean no."