LLM-based chatbots have a lot of well-documented shortfalls, and more generally, what is being promised is not what is being observed. If you also consider that none of the big players in the AI space are offering their product at a price where they could ever hope to break even, it's reasonable to assume that a market correction is incoming.
Just like NFTs.
That does not follow. I can't speak for you, but I can tell if I'm involved in a conversation or not.
It allows us to conclude that an LLM doesn't "think" about what it is saying. Based on the mechanics, the LLM doesn't even know it's a participant in the conversation.
Well, the neural network is given a prefix (a series of tokens) and a token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for all known tokens, then picking one at random, weighted by the calculated probabilities.
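Here's a minimal sketch of that loop in Python, using a made-up bigram table in place of a real trained network (the tokens and probabilities are invented for illustration; a real LLM conditions on the whole prefix, not just the last token):

```python
import random

# Toy stand-in for a trained network: maps the last token to a
# probability for every known next token. All numbers are invented.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prefix: list[str], max_tokens: int = 10) -> list[str]:
    """Score all tokens, pick one at random weighted by probability, repeat."""
    out = list(prefix)
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS[out[-1]]
        token = random.choices(list(probs), weights=probs.values())[0]
        if token == "<end>":
            break
        out.append(token)
    return out

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat'] -- varies per run
```

That's the whole trick: there's no goal or belief in the loop, just repeated weighted dice rolls over a score table.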
The burden of proof is on those who say that LLMs do think.
Probably not quite what you're looking for, but there's Anki, a piece of flashcard software. It has some shared decks available for download, and you can make and potentially share your own. It can also be used to study things besides languages.
Maybe there were cookies from the original account on both devices?
To clarify something: I don't believe that current AI chatbots are sentient in any shape or form, and as they are now, they never will be. There's at least one piece missing before we have sentient AI, and until we have that, making the models larger won't make them sentient. LLM chatbots take the text and calculate how likely each word is to follow it. Then, based on these probabilities, a result is picked at random, which is the reason for the hallucinations that can be observed. It's also the reason why the hallucinations will never go away.
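To make that concrete, here's a toy single-step example (the prompt, tokens, and probabilities are all invented; the point is only that the model scores plausibility, not truth, so a common-but-wrong token can dominate):

```python
import random

# Hypothetical next-token distribution a model might assign after the
# prompt "The capital of Australia is". Numbers are made up.
probs = {"Sydney": 0.55, "Canberra": 0.30, "Melbourne": 0.15}

# Weighted sampling picks the wrong answer most of the time, and even
# greedy decoding (always take the top token) would answer wrongly here.
picks = random.choices(list(probs), weights=probs.values(), k=1000)
print(picks.count("Sydney") / 1000)  # roughly 0.55
print(max(probs, key=probs.get))     # 'Sydney' -- fluent, confident, false
```

Nothing in the sampling step can tell a well-grounded token apart from a merely plausible one; that distinction would have to come from somewhere else.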
The AI industry lives on speculative hype; all the big players are losing money on it. Hence, people saying that AI can become a god and kill us all help further that hype. After all, if it can become a god, then all we need to do is tame said god. Of course, the truth is that it currently can't become a god, and maybe the singularity is impossible. As long as no government takes the AI doomers seriously, they provide free advertisement.
Hence AI should be opposed on the basis that it's unreliable and wasteful, not that it's an existential threat. Claiming that current AI is an existential threat fosters hype, which increases investment, which in turn results in more environmental damage from wasteful energy usage.
Judge finds that Anthropic has to pay restitution to the Reddit users. Affirms that posts belong to users.
Well, I can dream.
Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that's lobbying to pause AI development and/or increase regulations on AI due to concerns around the environment, jobs, and safety. [emphasis added]
No, they're concerned about AI becoming sentient, taking over the world, and killing us all. This, in turn, makes them little different from the people pushing for unlimited AI development, as the only difference between those two groups is that the latter believes they'll be able to control the superintelligence.
If you look at their sources, they most prominently feature surveys of people who overestimate what we currently call AI. Other surveys are flat-out misrepresented. The survey behind the claimed 25% chance that we'll reach AGI, the 2025 State of AI Engineering, admits that for P(doom), they didn't define 'doom', nor the time frame of said doom. So, basically, if we die out because we all fap to AI images of titties instead of getting laid, that counts as AI-induced doom. Also, on said survey, 10% answered a 0% chance, with 0% being one of the only two precise options offered; most other options covered ranges of 25 percentage points each. The other precise option was 100%.
Basically, those guys are useful idiots for the AI industry, pushing a narrative not too dissimilar from the one pushed by the AI boosters. Don't support them.
It has no memory, for one. What makes you think that it knows it's in a conversation?
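A sketch of what "no memory" means in practice: the model itself is stateless, so the appearance of memory comes from the wrapper re-sending the whole transcript every turn (`generate` below is a hypothetical placeholder, not any real API):

```python
# The wrapper, not the model, holds the conversation. Each call starts
# from scratch and sees only the text it is handed.

def generate(prompt: str) -> str:
    return "..."  # placeholder: a real call would return sampled tokens

history = ""
for user_msg in ["Hi!", "What did I just say?"]:
    history += f"User: {user_msg}\nAssistant: "
    reply = generate(history)  # the model sees only this string, nothing else
    history += reply + "\n"
```

Drop the accumulated `history` and the "conversation" is gone; from the model's side, every turn is just another anonymous block of text to continue.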