this post was submitted on 03 Nov 2024
1268 points (99.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 8 months ago
[–] [email protected] 3 points 1 week ago (2 children)

I'm sorry, but this has been a thing since long before "AI"-based results. Scammers have always used tricks to end up at the top of search results.

[–] [email protected] 27 points 1 week ago* (last edited 1 week ago) (1 children)

Scammers have been a thing long before writing. That doesn't mean people shouldn't be made aware of new ways to be scammed.

[–] [email protected] 4 points 1 week ago (1 children)

At the top, but that isn't what this post is saying. This is saying that Google's AI gave the scammer answer. Not that they provided a link you could click on, but that Google itself said this is the number.

[–] [email protected] -1 points 1 week ago (1 children)

It's not an AI, it's just word prediction, which also just follows stupid algorithms, just like the ones that determine search results. Both can be tricked / manipulated if you understand how they work. It's still the same principle in both cases.

[–] [email protected] 2 points 1 week ago (1 children)

Regardless of what they call it, they're the ones presenting it. I'm not arguing they can't be tricked. I'm arguing they are fundamentally different concepts. One is offering you a choice of sources, the other is making a claim. That's a pretty big distinction in a whole mess of different ways. Not the least of which is legal.

[–] [email protected] 0 points 1 week ago

I'm sorry, but no. It's not Google making that claim; it's just the LLM replying in a confident way, because that's how they are expected to work. As I said, word prediction. You can install the tiniest, dumbest model on your local PC and ask the same question. It will give you some random hallucinated number and act like that's what you're looking for, because its default system prompt tells it to sound like an AI assistant.

In the case of search engines, the LLM is hooked directly into the search engine itself and just does the same thing you'd do: it searches for a hopefully fitting result. So scammers gaming those search algorithms to get a good spot end up becoming the recommendation the LLM passes on to the user. It's the same thing, just displayed slightly differently. All the cool AI-assistant stuff they present this as is just an illusion, a word-based roleplay. The only real benefit is that they can somewhat understand abstract questions, which helps for certain search queries, but in the end it is always the user's responsibility to check the actual search result.
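The flow described above can be sketched roughly like this. To be clear, this is a toy illustration, not Google's actual system: `fake_search` and `fake_llm` are hypothetical stand-ins, and the numbers and ranking are made up. The point is only that the model restates whatever retrieval ranks first, so gaming the ranking games the "answer":

```python
# Toy sketch of a search-augmented LLM answer pipeline.
# All names and data are hypothetical stand-ins, not any real API.

def fake_search(query):
    """Stand-in for a search index: returns results ranked by SEO score,
    which a scammer can inflate regardless of whether the content is true."""
    index = [
        {"title": "Official support line", "snippet": "Call 1-800-REAL-NUM", "seo_score": 40},
        {"title": "OFFICIAL support (scam)", "snippet": "Call 1-800-SCAMMER", "seo_score": 95},
    ]
    return sorted(index, key=lambda r: r["seo_score"], reverse=True)

def fake_llm(prompt):
    """Stand-in for the language model: it just restates the retrieved
    snippet in a confident tone, with no way to verify it."""
    snippet = prompt.split("\n")[1]  # snippet of the top-ranked result
    return f"The number you need is right here: {snippet}"

def answer(query):
    results = fake_search(query)
    # The model only sees what retrieval hands it; ranking, not truth,
    # decides what gets presented as the answer.
    prompt = f"Question: {query}\n" + results[0]["snippet"]
    return fake_llm(prompt)

print(answer("airline support phone number"))
```

Because the scam page carries the higher SEO score, its snippet is what reaches the model, and the confident phrasing is bolted on regardless of the content.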