this post was submitted on 14 May 2025
939 points (96.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Bluesky)

[–] [email protected] 27 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

Alright I don’t like the direction of AI same as the next person, but this is a pretty fucking wild stance. There are multiple valid applications of AI that I’ve implemented myself: LTV estimation, document summary / search / categorization, fraud detection, clustering and scoring, video and audio recommendations... "Using AI” is not the problem, "AI charlatan-ing" is. Or in this guy’s case, "wholesale anti-AI stanning". Shoehorning AI into everything is admittedly a waste, but to write off the entirety of a very broad category (AI) is just silly.

[–] [email protected] 49 points 3 weeks ago (3 children)

I don't think AI is actually that good at summarizing. It doesn't understand the text and is prone to hallucinate. I wouldn't trust an AI summary for anything important.

Also, search just seems like overkill. If I type in "population of London", I just want to be taken to a reputable site like Wikipedia. I don't want a guessing machine to tell me.

Other use cases maybe. But there are so many poor uses of AI, it's hard to take any of it seriously.

[–] [email protected] -3 points 3 weeks ago* (last edited 3 weeks ago)

I guess this really depends on the solution you’re working with.

I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller more manageable components, and pass those through the system. So one abstract, complex single query becomes a series of simpler asks with a higher chance of success. Is this system perfect? No, but I am not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for in at least one other LLM, so the system works pretty well.

I’ve also reduced the possible kinds of queries down to a much more limited subset, so testing and evaluation of results is easier / possible.

This system needs to evaluate the topic and sensitivity of millions of websites. This isn’t something I can do manually in any reasonable amount of time. A human will be reviewing websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
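A minimal sketch of that voting idea, with the LLM calls stubbed out as plain functions (every name, label, and keyword rule here is invented for illustration, not from the actual system):

```python
from collections import Counter

# Each "model" is a stand-in for a separate LLM answering the same
# simplified sub-query. The majority label wins, so one weak model's
# deficiency is outvoted by the others.

def classify_stub_a(query: str) -> str:
    return "sensitive" if "casino" in query else "benign"

def classify_stub_b(query: str) -> str:
    return "sensitive" if "casino" in query or "bets" in query else "benign"

def classify_stub_c(query: str) -> str:
    return "benign"  # a deliberately weak model the consensus can outvote

MODELS = [classify_stub_a, classify_stub_b, classify_stub_c]

def consensus(query: str) -> str:
    """Relay one small task to every model and return the majority label."""
    votes = Counter(model(query) for model in MODELS)
    return votes.most_common(1)[0][0]

print(consensus("online casino promo page"))  # majority says "sensitive"
print(consensus("recipe blog about bread"))   # all agree on "benign"
```

With real LLMs the stubs would be API calls with the same prompt, but the consensus logic stays this small.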

When I said search, I meant offline document search. Like "find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like elastic search would usually work well here too, but then I can dive further and get it to reason about results surfaced from the first query. I absolutely agree that AI powered search is a shitshow.
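The first-pass retrieval step of that offline search could look something like this crude lexical sketch (corpus and scoring are invented; a real setup would use Elasticsearch or embeddings, with the LLM reasoning only over the surfaced hits):

```python
# Stage 1: cheap lexical filter over a local document folder.
# Stage 2 (not shown): hand only the surfaced docs to an LLM to reason over.

PATENTS = {
    "p1": "fly-by-wire aircraft embedded control system with redundancy",
    "p2": "method for brewing coffee with a timed valve",
    "p3": "embedded flight control software for fly-by-wire actuation",
}

def lexical_search(query: str, docs: dict) -> list:
    """Rank documents by how many query terms they share."""
    terms = set(query.lower().split())
    scored = []
    for doc_id, text in docs.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

hits = lexical_search("fly-by-wire embedded control", PATENTS)
print(hits)  # the aviation patents surface; the coffee one does not
```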

[–] [email protected] -4 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

I don’t think AI is actually that good at summarizing.

It really depends on the type and size of text you want it to summarize.

For instance, it'll only give you a very, very simplistic overview of a large research paper that uses technical terms, but if you want it to compress down a bullet-point list, or take one paragraph and turn it into some bullet points, it'll usually do that without any issues.

Edit: I truly don't understand why I'm getting downvoted for this. LLMs are actually relatively good at summarizing small, low-context pieces of information into bullet points. They're quite literally built as models that predict the likelihood of text given an input. Giving one a small amount of text to rewrite or recontextualize plays to its strengths. That's why it was originally mostly deployed as a tool to reword small, isolated sections of articles, emails, and papers, before the technology was improved.

It's when they get to larger pieces of information, like meetings, books, wikipedia articles, etc, that they begin to break down, due to the nature of the technology itself. (context windows, lack of external resources that humans are able to integrate into their writing, but LLMs can't fully incorporate on the same level)

[–] [email protected] 7 points 3 weeks ago (2 children)

Our plant manager likes to use it (Copilot) to summarize meetings. It in fact does not summarize into a bullet-point list in any useful way. It breaks the notes into a header for each topic, then bullet points. The header is a brief summary. The bullet points? The exact same summary, but now broken into individual points by sentence. Truly stunning work. Even better with a "Please review the meeting transcript yourself as AI might not be 100% accurate" disclaimer.

Truly worthless.

That being said, I've a few vision systems using "AI" to recognize product that doesn't match the pre-taught pattern. It's very good at this.

[–] [email protected] 2 points 3 weeks ago

This is precisely why I don't think anybody should be using it for meeting summaries. I know someone who does at his job, and even he only uses it for the boring, never acted upon meetings that everyone thinks is unnecessary but the managers think should be done anyways, because it just doesn't work well enough to justify use on anything even remotely important.

Even just from a purely technical standpoint, the context windows of LLMs are so small relative to the scale of meetings, that they will almost never be able to summarize it in its entirety without repeating points, over-explaining some topics and under-explaining others because it doesn't have enough external context to judge importance, etc.

But if you give it a single small paragraph from an article, it will probably summarize that small piece of information relatively well, and if you give it something already formatted like bullet points, it can usually combine points without losing much context, because it's inherently summarizing a small, contextually isolated piece of information.
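The context-window limit described above is the mechanical reason long transcripts get chunked at all; a naive sketch of that splitting (the whitespace "tokens" and window size are stand-ins for a real tokenizer and a real model limit):

```python
# Each chunk gets summarized blind to the others, which is exactly where
# repeated points and uneven emphasis creep into meeting summaries.

def chunk_transcript(text: str, window: int) -> list:
    """Split text into consecutive chunks of at most `window` words."""
    words = text.split()
    return [" ".join(words[i:i + window]) for i in range(0, len(words), window)]

chunks = chunk_transcript("a b c d e", 2)
print(chunks)  # each chunk would be summarized with no view of the rest
```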

[–] [email protected] 0 points 3 weeks ago (2 children)

I think your manager has a skill issue if his output is being badly formatted like that. I'd tell him to include a formatting guideline in his prompt. It won't solve his issues but I'll gain some favor. Just gotta make it clear I'm no damn prompt engineer. lol

[–] [email protected] 6 points 3 weeks ago (1 children)

I didn't think we should be using it at all, from a security standpoint. Let's run potentially business-critical information through the plagiarism machine that Microsoft has unrestricted access to. So I'm not going to attempt to help make its use better at all. Hopefully if it's trash enough, it'll blow over once no one reasonable uses it. Besides, the man's derided by production operators and non-Kool-Aid-drinking salaried folk. He can keep it up. Lol

[–] [email protected] 0 points 3 weeks ago (1 children)

Okay, then self host an open model. Solves all of the problems you highlighted.

[–] [email protected] 7 points 3 weeks ago (1 children)

But if the text you're working on is small, you could just do it yourself. You don't need an expensive guessing machine.

Like, if I built a Rube Goldberg machine using twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it gets it right most of the time, that's impressive, but also kind of a stupid waste, because I could've just tied them with my hands.

[–] [email protected] 1 points 3 weeks ago

you could just do it yourself.

Personally, I think that wholly depends on the context.

For example, if someone's having part of their email rewritten because they feel the tone was a bit off, they're usually doing that because their own attempts to do so weren't working for them, and they wanted a secondary... not exactly opinion, since it's a machine obviously, but at least an attempt that's outside whatever their brain might currently be locked into trying to do.

I know I've gotten stuck for way too long wondering why my writing felt so off, only to have someone give me a quick suggestion that cleared it all up, so I can see how this would be helpful, while also not always being something they can easily or quickly do themselves.

Also, there are legitimately many use cases where an application can use an LLM to parse small pieces of data better than simple regex patterns could, for instance.

For example, Linkwarden, a popular open-source link management tool, uses LLMs (on an opt-in basis) to automatically tag your links based on the contents of each page. When I'm importing thousands of bookmarks for the first time, each individual task is quick and takes no real mental effort on its own, but I don't want to repeat it thousands of times when the LLM will get it done much faster, with accuracy that's good enough for my use case.
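As a rough stand-in for that tagging workload, with keyword matching in place of the LLM call (the tag mapping and page texts are invented, and this is not how Linkwarden itself is implemented):

```python
# Hypothetical auto-tagger: in a real setup an LLM would propose tags per
# page; the shape of the batch job over thousands of links is the same.

TAG_KEYWORDS = {
    "cooking": {"recipe", "bake", "oven"},
    "aviation": {"aircraft", "flight", "pilot"},
}

def auto_tag(page_text: str) -> list:
    """Return every tag whose keywords appear in the page text."""
    words = set(page_text.lower().split())
    return sorted(tag for tag, keys in TAG_KEYWORDS.items() if words & keys)

print(auto_tag("A sourdough recipe you can bake tonight"))  # ['cooking']
print(auto_tag("Pilot notes on aircraft trim"))             # ['aviation']
```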

I can definitely agree with you in a broader sense though, since at this point I've seen people write 2 sentence emails and short comments using AI before, using prompts even longer than the output, and that I can 100% agree is entirely pointless.

[–] [email protected] 18 points 3 weeks ago* (last edited 3 weeks ago)

It's just a statistics game. When 99% of stuff that uses or advertises the use of "AI" is garbage, having a mental heuristic that filters those out is very effective. Yes, you'll miss the 1% of useful things, but that's not really an issue for most people. If you need it, you can still look for it.

[–] [email protected] 2 points 3 weeks ago (1 children)

I have ADHD and I have to ask A LOT of questions to get my brain around concepts sometimes, often because I need to understand fringe cases before it "clicks". AI has been so fucking helpful for being able to just copy a line from a textbook and say "I'm not sure what they mean by this, can you clarify?" or "it says this, but also this, aren't these two conflicting?" and having it explain has been a game changer for me. I still have to keep my bullshit radar on, but that's solved by actually reading to understand and not just taking the answer as is. In fact, scrutinizing the answer against what I've learned and asking further questions has felt like it's made me more engaged with the material.

Most issues with AI are issues with capitalism.

[–] [email protected] 4 points 3 weeks ago

Congratulations to the person who downvoted this

They use a tool to improve their life?! Screw them!


Here’s hoping over the next few years we see little baby-sized language models running on laptops entirely devour the big tech AI companies, and that those models are not only open source but ethically trained. I think that will change this community here.

I get why they’re absolutist (AI sucks for many humans today) but above your post as well you see so much drive-by downvoting, which will obviously chill discussion.

[–] [email protected] -3 points 3 weeks ago

But what about me and my overly simplistic world views where there is no room for nuance? Have you thought about that?

[–] [email protected] -4 points 3 weeks ago* (last edited 2 weeks ago)

Edit for clarity: Don't hate the science behind the tech, hate the people corrupting the tech for quick profit.