But this is a deliberate decision, not an inherent limitation. The model could get feedback from the outside world; in fact, that's how it's trained (well, data is fed back into the model to update it). Of course we limit it to words, rather than the whole slew of inputs a human gets, but keep in mind we have music and image generation AI as well, so it's not like it can't also be trained on those things. Again, a deliberate decision rather than an inherent limitation.
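To illustrate the distinction, here's a minimal sketch using a toy PyTorch model (nothing like a real LLM, just hypothetical stand-ins): during training, new data actually updates the weights; during ordinary chat, the same data passes through without changing anything.

```python
# Toy illustration of "data is fed back into the model to update it".
# The model, data, and labels are all hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                      # stand-in for a real language model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

new_feedback = torch.randn(4, 8)             # stand-in for fresh outside-world input
targets = torch.tensor([0, 1, 1, 0])         # stand-in labels for that input

# Training mode: the feedback actually changes the weights.
optimizer.zero_grad()
loss = loss_fn(model(new_feedback), targets)
loss.backward()
optimizer.step()                             # weights updated from the new input

# Inference mode: the same input changes nothing.
with torch.no_grad():
    _ = model(new_feedback)                  # no gradient, no update, no lasting "memory"
```

The point being: nothing in the architecture forbids the update step, it's just switched off when the model is deployed.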
We both even agree it's true that it can learn from interacting with the world; you just insist that because it isn't persisting, it doesn't actually count. But it does persist, just not the new inputs from users, and that's done deliberately to protect the models from what would inevitably happen. That said, it's also been fed arguably more input than a human gets in their whole life, just condensed into a much smaller period of time. So if the measure is "total input", the AI wins hands down.
I'm not ignoring this. I understand it's the whole argument; it gets repeated around here enough. But just saying it doesn't make it true. It may be true, again I'm not sure, but stating it and adding "full stop" doesn't amount to a convincing argument.
It's not as open and shut as you wish it to be. If anyone is ignoring anything here, it's you ignoring the fact that it went from, as you said, basically just randomly stacking the objects it was told to stack, to actually stacking them in a way that could work stably and explaining why you would do it that way. There's another case where they asked GPT-4 to draw a unicorn using an obscure programming language. And you know what? It did it. It was rudimentary, but it was clearly a unicorn, from a model that wasn't trained on images at all. They even messed with the code, turning the unicorn around and removing the horn, fed it back in, and asked it to replace the horn, and it put it back correctly. It seemed to understand not only what a unicorn looked like, but what the horn was and where it should go once removed.
So "it just generates more words" is something you could accuse us of as well, and it's arguably overly reductive of what it's capable of even now.
There are all kinds of problems with human memory; we imagine things all the time. Ever taken acid? If so, you've seen how unreliable our brains are at interpreting reality. And if you really want to trip: eyewitness testimony is basically garbage. I exaggerate a bit, but people remember things that didn't happen, and false memories are so easy to create, that it's nowhere near as reliable as it's treated. Hell, it can even convict an innocent person.
Every shortcoming you've used to claim AI isn't really thinking is something we share. It might just be inherent to intelligence to be wrong sometimes.
It's exciting either way. Maybe it's equivalent to a certain lobe of the brain, and we're judging it for not being integrated with all the other parts.