this post was submitted on 02 Oct 2024
335 points (91.6% liked)
Technology
you are viewing a single comment's thread
See the rest of my post: the people who are making it and why they're making it.
I have no complaints about the people making LLMs that can spot tumors better than humans can, but I 100% agree with every single one of your points. The grifters and the venture-capital AI fad are ruining a useful technology, and ruining the world and society along with it, for a quick buck.
Are they though? LLMs specifically? Seems like a very strange use case for an LLM.
But yeah, we're mostly in agreement. I wanted to riff a little because, as a long-time tech worker, I actually do have some bones to pick with the tech itself. The inexactness of its output and the "let the prompter beware" approach to dealing with its obvious inadequacies pisses me off, and it seems like the perfect product for the current "test in production", "MVP (minimum viable product)", "pre-order the incomplete version" state software is in generally. The marketing and finance assholes are nearly fully running the show at this point, and it shows.
I think the usefulness of this particular technology (LLMs) is very overblown, and I found its early applications more harmful than helpful (e.g. autocorrect/autocomplete is wrong for me more often than it is right). It has decent applicability in some areas (machine translation, for instance, is pretty good), but the marketing department got hold of it, and now everything is AI this and AI that.
I think it's basically just another over-hyped technology that will eventually shake out to be used only where it's useful enough to justify its cost. If the company has to show profits at some point, it will either go the surveillance-capitalism ad route, or it'll have to charge more per query than the gibberish it generates is really worth. I don't see most people paying for ChatGPT long-term, so they'll probably have to enshittify further beyond their current (already kind of shitty) state.