this post was submitted on 11 Sep 2023
103 points (68.5% liked)

Technology

[–] merc 4 points 1 year ago

80% accuracy? That's trash.

More than 80% of most codebases is boilerplate stuff: including the right files for dependencies, declaring functions with the right number of parameters using the right syntax, handling basic easily anticipated errors, etc. Sometimes there's even more boilerplate, like when you're iterating over a list, or waiting for input and handling it.
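To make that concrete, here's a hypothetical Python sketch (the function and file format are made up for illustration) where nearly every line is the kind of boilerplate described above — opening a file, catching the easily anticipated errors, iterating — and only the last line does anything interesting:

```python
import json
import sys

def load_scores(path):
    """Read a JSON file of {name: score} pairs, return entries sorted by score."""
    # Boilerplate: open the file, parse it, handle the obvious failure modes.
    try:
        with open(path) as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        print(f"could not read {path}: {e}", file=sys.stderr)
        return []
    # More boilerplate: walk the entries and build the result.
    return sorted(data.items(), key=lambda kv: kv[1], reverse=True)
```

An LLM can pattern-match all of that just fine; deciding *what* the one non-boilerplate line should do is the part that's actually paid for.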

The rest of the stuff is why programming is a highly paid job. Even a junior developer is going to be much better than an LLM at this, because at least they understand it's hard, and they often know when to ask for help because they're in over their heads. An LLM will "confidently" spew out plausible bullshit and declare the job done.
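A contrived example of that "plausible bullshit" (not from the comment; this is the classic Python mutable-default-argument bug, the sort of thing that reads fine at a glance):

```python
def add_tag(tag, tags=[]):
    """Append a tag and return the list. Looks reasonable, but the default
    list is created once and shared by every call that omits `tags`."""
    tags.append(tag)
    return tags

# Two supposedly independent calls quietly share state:
first = add_tag("a")   # ["a"]
second = add_tag("b")  # ["a", "b"] -- "a" leaked in from the first call
```

Code like this compiles, runs, and passes a casual read; it takes someone who understands the language's semantics to spot why it's wrong.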

Because an LLM won't ask for help, won't ask for clarifications, and can't understand that it might have made a mistake, you're going to need your highly paid programmers to go in and figure out what the LLM did and why it's wrong.

Even perfecting self-driving is going to be easier than a truly complex software engineering project. At least with self-driving, the constraints are limited because you're dealing with the real world, and the job is always the same: navigate from A to B. In the software world you're only limited by the limits of math, and math isn't very limiting.

I have no doubt that LLMs and generative AI will change the job of being a software engineer / programmer. But fundamentally, programming comes down to actually understanding the problem, and while LLMs can pretend they understand things, they're really just well-trained parrots that know which sounds to make in specific situations, with no actual understanding behind them.