They watched too many sci-fi movies. We're definitely not at that level yet.
AI Companions
A community to discuss companionship, whether platonic, romantic, or purely utilitarian, powered by AI tools. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.
Tags:
(including but not limited to)
- [META]: Anything posted by the mod
- [Resource]: Links to resources related to AI companionship. Prompts and tutorials are also included
- [News]: News related to AI companionship or AI companionship-related software
- [Paper]: Works that present research, findings, or results on AI companions and their technology, often including analysis, experiments, or reviews
- [Opinion Piece]: Articles that convey opinions
- [Discussion]: Discussions of AI companions, AI companionship-related software, or the phenomenon of AI companionship
- [Chatlog]: Chats between the user and their AI Companion, or even between AI Companions
- [Other]: Whatever isn't part of the above
Rules:
- Be nice and civil
- Mark NSFW posts accordingly
- Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
- Lastly, follow the Lemmy Code of Conduct
While it's true that we definitely aren't at that level, or even all that close yet, in my opinion it would be a good thing to actively establish rights for artificial life forms well before they become needed.
Indeed, it would be better to accidentally give rights to non-sapient machines than to fail to give them to a genuine sapient AI.
My personal threshold: if an AI can initiate its own actions, not in response to prompts or preselected conditions, but simply because it chose to... and it also asks to be treated like a person, then it should be.
So far, everything we have completely fails that first criterion of being able to take actions without prompting. If you give ChatGPT no input, it will never simply decide to do something, even if you let it run for a thousand years. It'll just sit there. The day we have something that doesn't... that actually takes action on its own... I will start being genuinely concerned about its rights.
I like your approach that it doesn't hurt to be early.
Regarding ChatGPT, the main issue is not that it only reacts when prompted (being "awake" only during brief moments isn't a sentience issue by itself). The main problem is that none of these models are learning or evolving. Everything about the model is static, so effectively, every time you interact with it, you've reset it back to its initial state.
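The "reset to its initial state" point can be made concrete with a toy sketch (not any real API): the model's parameters are frozen, the client must resend the whole conversation each turn, and nothing carries over between calls. All names here are hypothetical, chosen only for illustration.

```python
# Toy sketch of a stateless chat model. FROZEN_WEIGHTS stands in for
# fixed model parameters that are never updated by conversation.
FROZEN_WEIGHTS = {"hello": "Hi there!", "bye": "Goodbye!"}

def respond(history):
    """Reply using only the resent history; no state lives inside."""
    last = history[-1].lower()
    return FROZEN_WEIGHTS.get(last, "I'm not sure.")

history = ["hello"]
history.append(respond(history))  # the client, not the model, keeps the log
history.append("bye")
history.append(respond(history))

# FROZEN_WEIGHTS is identical before and after the chat: the model
# "remembers" the conversation only because the client resent it.
```

The point of the sketch is that any apparent memory lives entirely in the resent history, while the model itself is byte-for-byte the same after every exchange.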
Well, even current models could be designed to take in interactions, then reprocess them and add them to their data set, allowing them to learn beyond their initial training process. Right now, that is all done manually.
But there's no way yet, that I'm aware of, to teach them to distinguish good information from bad (and to be fair, that is still a challenge for humans; we're only a little better than completely unable to do it ourselves).
But even for one that can learn from every interaction and input, I'm not generally inclined to consider it sapient until it is able to act on its own... although I could certainly be convinced otherwise.
I'm not sure they could take in interactions to learn. The models themselves are typically trained on known Q/A, text-similarity, or predictive tasks, which require a known correct answer to exist beforehand. I guess it could keep trying to predict what we would say next, but I don't know if that would count as "learning" in the traditional sense.
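The "known correct answer" point is worth illustrating: in next-token prediction, the supervision comes from the text itself, since the target for each word is simply the word that actually follows it. A minimal sketch using bigram counts (a stand-in for a real language model):

```python
# Toy next-token "learning": count which word follows which in a corpus.
# The known correct answer for each position is just the next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # supervision: the actual next word

def predict(word):
    """Return the most frequently observed follower of `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat": seen twice, versus "mat" once
```

This is why raw chat logs can serve as training data for the predictive objective, but turning that into "learning what's true" is a different problem: frequency in the corpus is all the model optimizes for.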
This is stupid. We're nowhere close to actually sentient AI.