this post was submitted on 11 Oct 2023
Technology
[–] [email protected] 1 points 11 months ago (1 children)

Perhaps, but at best it's still a very basic form of AI, and maybe shouldn't even be called AI. Before things like ChatGPT, the term "AI" meant a full-blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to human thought: something beyond following mere pre-programmed instructions. Machine learning doesn't really learn anything; it's just an algorithm that repeatedly measures and then iterates toward an ideal set of values for the desired variables. It's very clever, but it doesn't really think.
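That "measure and iterate" loop can be made concrete with a minimal sketch: gradient descent tuning a single weight until predictions match the data. The function name and the toy data here are made up for illustration; real ML systems do this over millions of parameters, but the loop is the same shape.

```python
def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Find the weight w that minimizes the squared error of y ~ w * x."""
    w = 0.0
    for _ in range(steps):
        # measure: gradient of the squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        # iterate: nudge w toward values that reduce the error
        w -= lr * grad
    return w

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]   # generated by y = 2x
w = fit_slope(xs, ys)
print(round(w, 3))  # converges toward 2.0
```

Nothing in the loop "knows" what a slope is; it just measures error and adjusts, which is the point being made above.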

[–] [email protected] 1 points 11 months ago (1 children)

I have to disagree with you on the machine learning definition. Sure, the machine doesn't think in those circumstances, but it's definitely learning, if we go by your description of what it does.

Learning is a broad concept, sure. But say a kid is learning to draw apples, and later successfully draws apples without help; we could say that the kid achieved "that ideal set of values."

[–] [email protected] 1 points 11 months ago (1 children)

Machine learning is a simpler type of AI than an LLM like ChatGPT or an AI image generator. LLMs incorporate machine learning.

In terms of learning to draw something: after a child learns to draw an apple, they will reliably draw an apple every time. When AI "learns" to draw an apple, it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. The result fits the parameters it has learned about apples, parameters prescribed by its programming, but it hasn't truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might fail to understand it altogether.

A human being can think and interconnect their thoughts much more intricately; we go beyond our basic programming and often apply knowledge learned in one area to something completely different. Our understanding of things is much more expansive than AI's. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the full web of interconnections between different pieces and types of knowledge that human beings develop.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

Thanks. I understood all that. But my point is that machine learning is still learning, just like machine walking is still walking. Can a human being be much better at walking than a machine? Sure. But that doesn't mean that the machine isn't walking.

Regardless, I appreciate your comment. Interesting discussion.