282
this post was submitted on 19 Jan 2025
282 points (92.2% liked)
Technology
60725 readers
4024 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each another!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, to ask if your bot can be added please contact us.
- Check for duplicates before posting, duplicates may be removed
Approved Bots
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
What happens if someone (OpenAI) makes an LLM inject spyware into the code? Who would be able to read the code and figure that out if you have no coders?
You wouldn't even have to reach as far as malware. All software has bugs. To think that AI will produce perfect bugfree code because "it's a computer" is laughable. So inevitably there will be a need to debug the code, across servers, filesystems, databases, APIs, you name it. In tens if not hundreds of thousands of lines of code, which might even be compiled. Surprise, an LLM can't do that.
Nope!