AI Companions
A community to discuss companionship powered by AI tools, whether platonic, romantic, or purely as a utility. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create the companions, or about the phenomenon of AI companionship in general.
Tags:
(including but not limited to)
- [META]: Anything posted by the mod
- [Resource]: Links to resources related to AI companionship. Prompts and tutorials are also included
- [News]: News related to AI companionship or AI companionship-related software
- [Paper]: Works that present research, findings, or results on AI companions and their tech, often including analysis, experiments, or reviews
- [Opinion Piece]: Articles that convey opinions
- [Discussion]: Discussions of AI companions, AI companionship-related software, or the phenomenon of AI companionship
- [Chatlog]: Chats between the user and their AI Companion, or even between AI Companions
- [Other]: Whatever isn't part of the above
Rules:
- Be nice and civil
- Mark NSFW posts accordingly
- Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
- Lastly, follow the Lemmy Code of Conduct
I like your take that it doesn't hurt to be early.
Regarding ChatGPT, the main issue is not that it only reacts when prompted (being "awake" only in short moments isn't a sentience issue). The main problem is that none of these models are learning or evolving. Everything about them is static, so effectively every time you interact with one, you've reset it back to its initial state.
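To make that concrete, here's a minimal sketch; `generate()` is a hypothetical stand-in for whatever LLM API sits behind the companion, not a real library call:

```python
# Minimal sketch, assuming a hypothetical generate() callable that
# stands in for any LLM API. The model's weights never change between
# calls, so the only "memory" is the transcript the client resends.
history = []

def chat(user_message, generate):
    history.append({"role": "user", "content": user_message})
    # The whole conversation is replayed from scratch every turn;
    # lose this list, and the model is back to its initial state.
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```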
Well, even current models could be designed to take in interactions, reprocess them, and add them to their data set, allowing them to learn beyond their initial training. Right now all of that is done manually.
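Roughly, that loop would look something like the sketch below. The file name, record fields, and rating are my own invention, purely to illustrate the shape of it:

```python
import json
import time

LOG_PATH = "interactions.jsonl"  # hypothetical log location

def log_interaction(prompt, reply, rating=None):
    """Append each exchange so a later fine-tuning pass can consume it."""
    record = {"ts": time.time(), "prompt": prompt,
              "reply": reply, "rating": rating}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_interactions():
    """Today this is where the 'manual' part starts: someone curates
    these records into a dataset and kicks off a new training run."""
    with open(LOG_PATH) as f:
        return [json.loads(line) for line in f]
```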
But there's no way yet that I'm aware of to teach them to distinguish good info from bad (and to be fair, that is still a challenge for humans; we're only a little better than completely unable to do it ourselves).
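The usual stopgap, as far as I know, is some kind of feedback filter on top of those logs; something like this hypothetical gate (the `votes` field is made up):

```python
def filter_examples(records, min_votes=3, min_score=0.8):
    """Crude quality gate: keep only exchanges that several users
    independently rated highly. Note this proxies 'good info' with
    agreement, and agreement is not the same thing as correctness."""
    kept = []
    for r in records:
        votes = r.get("votes", [])  # e.g. [1, 1, 0, 1], hypothetical field
        if len(votes) >= min_votes and sum(votes) / len(votes) >= min_score:
            kept.append(r)
    return kept
```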
But even for one that can learn from every interaction and input, I'm not generally inclined to consider it sapient until it is able to act on its own... although I could certainly be convinced otherwise.
I'm not sure they could take in interactions to learn. The models themselves are typically trained on known Q/A pairs, text similarity, or predictive tasks, which require a known correct answer to exist beforehand. I guess it could keep trying to predict what we would say next, but I don't know if that would be "learning" in the traditional sense.
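For illustration, the predictive setup works because the "known correct answer" at each step is just whatever token actually came next, so a chat log effectively labels itself. A toy sketch of that supervision pattern (my own illustration, nothing like a real model):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """The 'label' for each position is simply the next token, so the
    text supervises itself -- no human-graded answer key needed."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    options = counts.get(token)
    return options.most_common(1)[0][0] if options else None

model = train_bigram("the cat sat on the mat and the cat slept".split())
print(predict_next(model, "the"))  # -> "cat" (seen twice, vs "mat" once)
```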