[email protected] 5 points 11 hours ago

> The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.

I'd say it's worse, because we have an actual physical presence. We can think it's raining, look outside, and realize somebody is spraying water on the windows, so we were wrong. An LLM can only react to its input: after a correction it will apologize, and then there's still a high chance it will keep talking about how it's raining.

We can also actually count and understand things, rather than just predicting the most likely next word.
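To make that concrete, here's a deliberately crude sketch of the failure mode. This is nothing like a real transformer; the predictor and the toy "conversation" are entirely made up. It just picks whatever word most often follows in the context so far, so one correction doesn't outweigh the earlier repeated claim:

```python
# Toy "next word" predictor: pure frequency over the context.
# Illustrates why a single correction buried in the history can
# lose out to an earlier, repeated claim.
from collections import Counter

def predict_next(context: str, after: str) -> str:
    """Pick the word that most often follows `after` in the context."""
    words = context.lower().split()
    followers = Counter(
        words[i + 1] for i in range(len(words) - 1) if words[i] == after
    )
    return followers.most_common(1)[0][0] if followers else "?"

conversation = (
    "it rains outside . yes it rains hard . "
    "correction : it does not rain , that is a window cleaner ."
)
# The one correction is outnumbered by the repeated claim:
print(predict_next(conversation, "it"))  # -> "rains"
```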

But yes, from a security perspective I don't get why people include LLMs in things, especially with the whole "your data flows back into the LLM for training" thing that a lot of the LLM providers are probably doing.
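If you're stuck wiring one in anyway, the least you can do is scrub anything sensitive before it leaves your boundary. A minimal sketch, assuming whatever reaches a hosted LLM may be retained and trained on; `send_to_llm` and the regex patterns here are hypothetical placeholders, not any real provider's API:

```python
# Scrub obvious secrets from a prompt before it leaves your boundary.
# The patterns below are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),           # card-like numbers
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),  # API keys
]

def scrub(text: str) -> str:
    """Replace anything matching a redaction pattern with a placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def send_to_llm(prompt: str) -> str:
    """Hypothetical provider call; stands in for whatever API you'd use."""
    raise NotImplementedError

prompt = "User [email protected] paid with 4111 1111 1111 1111, api_key=abc123"
print(scrub(prompt))
# -> "User [EMAIL] paid with [CARD], [API_KEY]"
```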