Reject proprietary LLMs, tell people to "just llama it"
Ugh. Don’t get me started.
Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.
Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.
I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the ‘things to visit’, it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily tells you to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I’d remember that, as I’m older than said stadium.
The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.
AI has its uses. I’ve used it to rewrite a text that I already had and it does fine with tasks like that. Because you give it the correct info to work with.
Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.
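The ‘put words together that usually go together’ point really is the core of how these models work: predict a plausible next token from the ones before it. A toy bigram model sketches the idea (the corpus here is made up purely for illustration) and shows why fluent-sounding output carries no notion of truth:

```python
import random
from collections import defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram table: for each word, the words observed to follow it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Chain words that 'usually go together'; no idea what's true."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Prints statistically plausible, grammatical-looking word salad.
print(generate("the"))
```

A real LLM swaps the bigram table for a neural network with billions of parameters, but the training objective is the same shape: emit a likely next token, not a verified fact.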
I gave it a math problem to illustrate this and it got it wrong.
If it can’t do that, imagine adding nuance.
Well, math is not really a language problem, so it's understandable LLMs struggle with it more.
But it means it’s not “thinking” as the public perceives AI.
Hmm, yeah, AI never really did think. I can't argue with that.
It's really strange, now that I mentally zoom out a bit, that we have machines that are better at language-based reasoning than logic-based reasoning (like math or coding).
And then google to confirm the gpt answer isn't total nonsense
I've had people tell me "Of course, I'll verify the info if it's important", which implies that if the question isn't important, they'll just accept whatever ChatGPT gives them. They don't care whether the answer is correct or not; they just want an answer.
That is a valid tactic for programming or how-to questions, provided you know not to unthinkingly drink bleach if it says to.
Have they? I don't think I've heard that once, and I work with people who use ChatGPT themselves.
I'm with you. Never heard that. Never.
How long until ChatGPT starts responding "It's been generally agreed that the answer to your question is to just ask ChatGPT"?
I'm somewhat surprised that ChatGPT has never replied with "just Google it, bruh!" considering how often that answer appears in its data set.
just call it cgpt for short
Computer Generated Partial Truths
Sadly, partial truths are an improvement over some sources these days.
Which is still better than "elementary truths that will quickly turn into shit I make up without warning", which is where ChatGPT is and will forever be stuck at.
Meanwhile Google search results:
- AI summary
- 2x "sponsored" result
- AI copy of Stackoverflow
- AI copy of Geeks4Geeks
- Geeks4Geeks (with AI article)
- the thing you actually searched for
- AI copy of AI copy of stackoverflow
Should we put bets on how long until chatgpt responds to anything with:
Great question! Before I give you a response, let me show you this great video for a new product you'll definitely want to check out!
Nah, it'll be more subtle than that. Just like Brawndo is full of the electrolytes plants crave, responses will be full of the subtle product and brand references marketers crave. And A/B studies performed at massive scale in real time on unwitting users, evaluated with other AIs, will help them zero in on the most effective way to pepper those in for each personality type they can differentiate.
"Great question! Before I give you a response, let me introduce you to Raid Shadow Legends!"
Google search is literally fucking dogshit and the worst it has EVER been. I'm starting to think fucking duckduckgo (relies on Bing) gives better results at this point.
I have been using Duck for a few years now and I honestly prefer it to Google at this point. I'll sometimes switch to Google if I don't find anything on Duck, but that happens once every three or four months, if that.
We have new feature, use it!
No, it's broken and stupid, I prefer old feature.
... Fine!
breaks old feature even harder
I’ve used Google since 2004. I stopped using it this year because, as the parent comment points out, it’s all marketing and AI. I like Qwant; it’s not perfect, but it functions like an earlier version of Google.
I have tried a few replacements for Google but I've yet to find anything remotely as effective for searches about things close to me. Like if I'm looking for a restaurant near me, kagi, startpage, and DDG are not good. Is qwant good for a use case like that? Haven't heard about it before.
I’ve had some success, but it goes off of your ISP’s server location, so for me it’s not very useful.
Last night, we tried to use chatGPT to identify a book that my wife remembers from her childhood.
It didn’t find the book, but instead gave us a title for a theoretical book that could be written that would match her description.
At least it said “if it exists” instead of hallucinating a publication date for it.
Maybe it’s trying to motivate me to become a writer.
Did you chatgpt this title?
"Infinitively" sounds like it could be a music album for a techno band.
The infinitive is the form of a verb that in English is said “to [x]”
For example, “to run” is the infinitive form of “run.”
OP probably meant “infinitely” worse.
"Did you ChatGPT it?"
I wondered what language this would be an unintended insult in.
Then I chuckled when I ironically realized it's offensive in English, lmao.
Did you cat I farted it?
Both suck now.
I have to say, look it up online and verify your sources.
"Let's ask MULTIVAC!"
I say, "Just search it." Not interested in being free advertising for Google.
This is why so much research has been going into AI lately. The trend is already to not read articles or source material and to base opinions off clickbait headlines, so naturally relying on AI summaries and search results will come next. People will start to assume any generated response from a 'trusted search AI' is true, so there is a ton of value in getting an AI to give truthful and correct responses all of the time, and then being able to edit certain responses to inject whatever 'truth' you want. Then you effectively control what truth is, and can selectively shape public opinion by manipulating what people are told is true. Right now we're also being trained that AI may make things up and not be totally accurate, which gives those running the services a plausible excuse if they're caught manipulating responses.
I am not looking forward to arguing facts with people citing AI responses as their source for truth. I already know if I present source material contradicting them, they lack the ability to actually read and absorb the material.
GPT's natural language processing is extremely helpful for simple questions that have historically been difficult to Google because they aren't a concise concept.
The type of thing that is easy to ask but hard to create a search query for like tip of my tongue questions.
Google used to be amazing at this. You could literally search "who dat guy dat paint dem melty clocks" and get the right answer immediately.
Chat~gpt~ is this real
This is a story that's been rotating through the media since ChatGPT first released.
I have an unpopular opinion about this headline, after seeing the media cycle repeatedly downplay/ignore what Alphabet has been doing in response to OpenAI: Google the search engine is not in direct competition with ChatGPT, but Gemini is, and Alphabet is smart to keep simple, time-tested search functionality central to Google rather than overreact and scrap the keyword-based search bar that users understand and are comfortable using - especially older users. I think most people are starting to discover they have a use for both search and LLM chats.
I think there are two product categories here, which first looked like they were going to converge in 2022-2024, but which are now slowly changing course as customers start to comprehend how both are necessary for different purposes.
When I make chats in ChatGPT or Gemini or Claude etc, I am starting to plan them longitudinally so that I can use them over and over for a specific project or query type.
When I turn to a search bar, it's because I really want a proxy between me and whatever weird site has the answer to my specific question. It's not that I want a discussion or a chat about it; I just want Google's card-like results with a website index I can read, instead of that website's stylized, animated design, popups, or malware.
Every time I get sucked into a chat with Bing Copilot (ChatGPT) when I really only had a web search query, I regret wasting my time talking to the LLM. Almost as a reflex, I've started avoiding it for most things now.