[–] [email protected] 7 points 5 months ago (1 children)

You’ve been speaking with your chest this whole time, and now that we’re into the nitty-gritty you’ve really just said “The AI does... something!” It’s so general a description that, by your measure, automated thermostats are engaging in human reasoning when they make it a little cooler on a hot day.

You might’ve been oversimplifying on purpose. I just can’t help but think you have no idea how LLMs work outside of this inherently flawed comparison to human thought.

[–] [email protected] 1 points 5 months ago (1 children)

Not OP, but speaking from a fairly deep layman's understanding of how LLMs work: all anyone really knows is that capabilities of a fundamentally higher order (like deception, which requires a theory of mind) emerged simply from training larger networks. Since we don't have a great understanding of how our own intelligence emerges from our wetware, we're only guessing.

[–] [email protected] 4 points 5 months ago (1 children)

Something that looks like higher-order reasoning emerged from training larger networks, but at the end of the day it’s still just spicy autocomplete. In theory you could give it a large enough dataset to “predict” almost anything with really high accuracy, yet all it’s doing is pattern recognition. One could argue that that’s all humans do too, but that gets more into philosophy and skips a lot of nuance.
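As a toy illustration of what “spicy autocomplete” means here (just a sketch; the corpus and function names are made up, and a real LLM learns continuous representations rather than a lookup table): a model that has only counted which word tends to follow which can already “predict” the next word by pattern frequency alone.

```python
# Toy "spicy autocomplete": a bigram model that predicts the next word
# purely from observed pattern frequency. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most frequent pattern after "the"
print(predict_next("cat"))  # "sat" - ties broken by first-seen order
```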

I’m not like, trying to argue with you by the way. Just having a fun time with this line of thought ^^

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

What makes the "spicy autocomplete" framing incomplete is also what makes LLMs work. The "Attention is All You Need" paper that introduced the transformer architecture describes self-attention: the model relates every token in its context to every other token, a kind of self-referential processing that is necessary to predict the next word at all. In the process of writing the next word of an essay, it navigates a semantic space with tens of thousands of dimensions. And the similarity to the way humans process language is more than philosophical: the advances in LLMs have sparked a wave of new research in neuroscience.
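To make that concrete, here is a minimal sketch of the scaled dot-product self-attention computation from the paper. The shapes are toy-sized and the random matrices stand in for learned weights; real models stack many such layers with multiple heads and far larger dimensions, so this is an illustration of the math, not an implementation of any actual model.

```python
# Minimal sketch of scaled dot-product self-attention
# (the core operation from "Attention is All You Need").
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """X: (seq_len, d_model) token embeddings -> context-mixed embeddings."""
    d = X.shape[-1]
    rng = np.random.default_rng(0)
    # Learned projections in a real model; random here just to run the math.
    W_q, W_k, W_v = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v

    # Each token scores every other token: how relevant is it to me?
    scores = Q @ K.T / np.sqrt(d)                      # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row

    # New representation of each token = weighted mix of all tokens' values.
    return weights @ V

# Four "tokens" embedded in an 8-dimensional space (a tiny stand-in for the
# tens-of-thousands-of-dimensions semantic space mentioned above).
tokens = np.random.default_rng(1).normal(size=(4, 8))
print(self_attention(tokens).shape)  # (4, 8): same shape, now context-aware
```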