this post was submitted on 21 Mar 2025
385 points (94.1% liked)
Greentext
This is a place to share greentexts and witness the confounding life of Anon. If you're new to the Greentext community, think of it as a sort of zoo with Anon as the main attraction.
Be warned:
- Anon is often crazy.
- Anon is often depressed.
- Anon frequently shares thoughts that are immature, offensive, or incomprehensible.
If you find yourself getting angry (or god forbid, agreeing) with something Anon has said, you might be doing it wrong.
you are viewing a single comment's thread
See, that's your problem. You're arguing, with me, about something that was said to you by someone else. Do you realize why I'm questioning your argumentative skills?
Here's a source: a study on AI's accuracy as a search engine. The main use case proposed for LLMs as a tool is indexing a bunch of text, then summarizing it and answering questions about it in natural language.
AI Search Has A Citation Problem
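For what it's worth, here is a rough sketch of what that "index text, then answer questions about it" workflow looks like. The documents, the keyword-overlap retrieval, and the answer_with_llm stub are all hypothetical placeholders standing in for a real index and a real model call, not anything taken from the study:

```python
# Minimal sketch of retrieval-plus-answering over indexed text.
# Everything here is a toy placeholder, not a real pipeline.

documents = {
    "citations": "The study tested AI search engines on how accurately they cite their sources.",
    "hallucinations": "A hallucination is output that sounds plausible but is not grounded in the source text.",
}

def retrieve(question: str, docs: dict) -> str:
    """Naive retrieval: return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs.values(), key=lambda text: len(q_words & set(text.lower().split())))

def answer_with_llm(question: str, context: str) -> str:
    """Stand-in for the actual LLM call; a real system would generate an answer from the context."""
    return f"Q: {question}\nA (from indexed text): {context}"

print(answer_with_llm("What is a hallucination?", retrieve("What is a hallucination?", documents)))
```

The study linked above is about how reliably the answering step stays tied to what was actually retrieved.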
Another use is creating or modifying text based on an input or prompt; however, LLMs are prone to hallucinations. Here's a deep dive into what they are, why they occur, and the challenges of dealing with them.
Decoding LLM Hallucinations: A Deep Dive into Language Model Errors
I don't know why I even bother. You are just going to ignore the sources and dismiss them as well.
I'm sorry? You came to me.
Here is how I see it:
--
I don't have the time to read the articles now, so I will have to do it later, but hallucinations can definitely be a problem. Asking for code is one such situation, where an LLM can just make up functions that don't exist.
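As a made-up illustration of that failure mode (the function name below is invented for the example, not something any particular model produced): a code-generating LLM might borrow JavaScript's JSON.parse and confidently emit json.parse(...), even though Python's json module has no such function.

```python
# Hypothetical example of a hallucinated API: "json.parse" sounds plausible,
# but Python's json module only provides loads/load for parsing.
import json

print(hasattr(json, "loads"))  # True  - the real function
print(hasattr(json, "parse"))  # False - plausible-sounding, but it doesn't exist
```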