I don't care what y'all call it, AI, AGI, Stacy, it doesn't change the fact that it was 100% trained on books tagged as "bed time stories" to tell you a bedtime story. It couldn't tell you one otherwise.
Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.
Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.
> Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.
But why? Also, "has free will" is exactly equivalent to "I cannot predict the behavior of this object". This is a whole separate essay, but "free will" is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. This is exactly in line with how hard it is to predict the behavior of each. You don't have free will to an omniscient observer, but that observer must have above human-level intelligence. If that observer happens to have been constructed out of silicon, it doesn't really make a difference.
> Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.
But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I'd say most humans wouldn't pass your test for intelligence, and in fact they're just 3 LLMs in a trenchcoat.
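The recombination point can be made concrete with a deliberately tiny sketch (a toy character-level bigram model, nothing like a real LLM in scale): the sampler only ever reuses transitions it observed in training, yet the strings it emits need not appear anywhere in the training text.

```python
import random

# Toy "training data" (assumed example corpus, chosen for illustration).
corpus = "the cat sat on the mat. the dog sat on the log."

# Record which character follows which -- the model's entire "experience".
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
out = "t"
for _ in range(20):
    # Sample the next character only from transitions seen in the corpus.
    out += random.choice(follows[out[-1]])

# Every adjacent pair in `out` was observed in training, but the full
# 21-character string need not occur in the corpus: novelty by recombination.
print(out)
```

Every bigram in the output is memorized; the output as a whole generally is not, which is the (very simplified) sense in which prior experience yields novel text.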
Yeah, the reality is that we've built a Chinese room. And saying "well, it doesn't really understand" isn't sufficient anymore. In a few years are you going to be saying "we're not really being oppressed by our robot overlords!"?
I'm saying that if there is anyone, including an omnipotent observer, who can predict a human's actions perfectly, that is proof that free will doesn't exist at all.
https://en.m.wikipedia.org/wiki/Chinese_room