this post was submitted on 23 Dec 2023
194 points (86.7% liked)

Technology
[–] [email protected] 2 points 1 year ago (1 children)

In principle, subsystems that aren't part of awareness can also be T3 systems. I suspect this of at least my motor cortex: it seems to have gotten more effective at learning from moment to moment, meaning it learned how to learn better, and that's T3. Then again, maybe I've just learned not to micromanage it as much; it's very hard to be sure about any of this, too many intersecting possibilities.

From the cybernetic/information-theory side, we don't really know how these kinds of systems work in the first place; we're barely getting started on understanding T2 systems. All the AI tech we have is basically a way to breed fruit flies to fly left or right when they see certain patterns, with enough computing power thrown at it to look impressive. We already had that kind of tech in the 1950s (the first implementations were 1954 for genetic algorithms and 1957 for the perceptron), just less impressive.
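To put that last point in perspective, here is a minimal sketch (my own illustration, not from the comment) of a Rosenblatt-style 1957 perceptron: a handful of weights nudged toward correct answers on a linearly separable pattern, in this case logical AND. The function and variable names are mine.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each correct label.

    samples: list of (input_vector, 0-or-1 label) pairs.
    Converges only when the data is linearly separable.
    """
    n = len(samples[0][0])
    w = [0.0] * n  # one weight per input
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the perceptron can learn it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, x)` reproduces the AND table. The same rule famously cannot learn XOR, which is part of why this line of work stalled until multi-layer networks and more computing power arrived.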