this post was submitted on 22 Jun 2025
477 points (99.0% liked)

Programming

21093 readers
79 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community, cross post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos, try to add some form of TL;DR for those who don't want to watch them

Wormhole

Follow the wormhole through a path of communities [email protected]



founded 2 years ago
[–] [email protected] 48 points 2 days ago (4 children)

For instance, if an AI model could complete a one-hour task with 50% success, it only had a 25% chance of successfully completing a two-hour task. This indicates that for 99% reliability, task duration must be reduced by a factor of 70.
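
A back-of-the-envelope check on where that "factor of 70" comes from, assuming the per-hour success rate compounds multiplicatively (my reading of the quoted numbers, not something spelled out in the article):

```python
import math

# Assumption: a task lasting t hours succeeds with probability 0.5 ** t,
# matching the quote (0.5 at 1 hour, 0.25 at 2 hours).
t_99 = math.log(0.99) / math.log(0.5)  # task length with 99% success: ~0.0145 h (~52 s)
print(1 / t_99)                        # ~69, i.e. shrink the task by roughly a factor of 70
```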

This is interesting. I have noticed this myself. Generally, when an LLM boosts productivity, it shoots back a solution very quickly, and after a quick sanity check, I can accept it and move on. When it has trouble, that's something of a red flag. You might get there eventually by probing it more and more, but there is good reason for pessimism if it's taking too long.

In the worst case scenario where you ask it a coding problem for which there is no solution—it's just not possible to do what you're asking—it may nevertheless engage you indefinitely until you eventually realize it's running you around in circles. I've wasted a whole afternoon with that nonsense.

Anyway, I worry that companies are no longer hiring junior devs. Today's juniors are tomorrow's elites and there is going to be a talent gap in a decade that LLMs—in their current state at least—seem unlikely to fill.

[–] [email protected] 5 points 1 day ago

In the worst case scenario where you ask it a coding problem for which there is no solution—it's just not possible to do what you're asking—it may nevertheless engage you indefinitely until you eventually realize it's running you around in circles.

Exactly this, and it's frustrating as a junior dev to be fed this BS when you're learning. I've had multiple scenarios where it blatantly told me wrong things, like using string interpolation in a Terraform file to try to set a dynamic module source. What it gave me looked totally viable. It wasn't until I dug around some more that I found out that terraform init can't use variables in the source field.

On the positive side, it helps give me some direction when I don't know where to start. I use it with a highly pessimistic and cautious approach. I understand that today is the worst it's going to be, and that I will be required to use it as a tool in my job going forward, so I'm making an effort to get to grips with working with it.

[–] [email protected] 8 points 1 day ago (1 children)

I've noticed this too, and it's even weirder when you compare it to a physics question. It very consistently tells me when my recent brain fart of an idea is just plain stupid. But it will try eternally to help me find a coding solution, even if it just keeps going in circles.

[–] [email protected] 3 points 1 day ago

I think part of this comes down to the format. Physics can often be analogized and can be very conversational when it comes to demonstrating ideas.

Most code also looks pretty similar if you don’t know how to read it, and unlike natural language, the syntax is absolute, with no room for interpretation or translation.

I’ve found it’s consistently good if you treat it like a project specification: include all of your requirements in list format in the very first message, have it pseudocode a draft, and have it list which libraries it wants to use so you can make sure they work the way you expect.

There’s some screening that goes into utilizing it well and that only comes with already knowing roughly how to code what you’re trying to make.

[–] [email protected] 11 points 2 days ago

Sadly, the lack of junior devs means my job is probably safe until I am ready to retire. I have mixed feelings about that. On the one hand, yay for me. On the other, sad for the new grads. And sad for software as a whole. But software truly sucks, and has only been enshittifying worse and worse. Could a shake-up like this somehow help? I don't see how, but who knows.

[–] [email protected] 6 points 2 days ago

Sucks for today's juniors, but that gap will bring them back into the fold with higher salaries eventually.

[–] [email protected] 43 points 2 days ago (2 children)

AI is basically just the worst answer on Stack Exchange.

[–] merc 4 points 1 day ago

It's literally the most common answer on Stack Exchange.

[–] [email protected] 22 points 2 days ago (1 children)

It's a rubber ducky that talks back. If you don't take it seriously, it can reach the level of usefulness just above a wheezing piece of yellow rubber.

[–] Saledovil 3 points 1 day ago (1 children)

They aren't as cute as actual rubber ducks, though.

[–] [email protected] 2 points 1 day ago (1 children)

Actual rubber ducks don't randomly spew bullshit either

[–] [email protected] 3 points 1 day ago

The bullshit is good: it triggers Cunningham's Law in my brain.

Sometimes it's easier to come up with a solution correcting something blatantly wrong than doing it from scratch.

[–] [email protected] 34 points 2 days ago (1 children)

Please babe! Just one more parameter, then it will be AGI!

[–] [email protected] 7 points 2 days ago

Just 1 more kiloton of Uranium.
It will be ready by the time that's depleted.

[–] [email protected] 41 points 2 days ago

I don't think that's a surprise to anyone that has actually used them for more than a few seconds.

[–] atzanteol 31 points 2 days ago (12 children)

The claims that AI will be surpassing humans in programming are pretty ridiculous. But let's be honest - most programming is rather mundane.

[–] [email protected] 14 points 2 days ago (3 children)

Never have I had to implement any kind of ridiculous algorithm to pass tests with huge amounts of data in the least amount of memory, the way the competitive-programming websites feature.

It has been mostly about:

  • Finding the correct library for a job and understanding it well, to prevent footguns and blocking future features
  • Design patterns for better build times
  • Making sane UI options and deciding resource alloc/dealloc points that would match user interaction expectations
  • CMake

But then again, I haven't worked at FinTech or Big Data companies, nor have I built an SQL server.

[–] [email protected] 7 points 2 days ago (1 children)

Because actually writing code is the least important part of programming.

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago)

I mean, not the least important; it is an important part. But it's way less important than most people think.

[–] [email protected] 6 points 2 days ago (7 children)

Well, this kind of AI won't ever be useful as a programmer. It doesn't think. It doesn't reason. It cannot make decisions besides using a ton of computational power and enormous deep neural networks to shit out a series of words that seem like they should follow your prompt. An LLM is just a really, really good next-word guesser.

So when you ask it to solve the Tower of Hanoi problem, great, it can do that, because it saw someone else's answer. But if you ask it to solve it for a tower that is 20 disks high, it will fail, because no one ever talks about going that far, and it flounders. It's not actually reasoning to solve the problem; it's regurgitating answers it has ingested from stolen internet conversations. It's not even attempting to solve the general case, because it's not trying to solve the problem, it's responding to your prompt.
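
For reference, the general case is a tiny textbook recursion that handles 20 disks (or any other height) without complaint; a rough Python sketch, not something from the comment above:

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way with the smaller stack
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # stack the smaller disks back on top

moves = []
hanoi(20, "A", "C", "B", moves)
print(len(moves))  # 2**20 - 1 = 1,048,575 moves
```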

That said - an LLM is also great as an interface to allow natural language and code as prompts for other tools. This is where the actually productive advancements will be made. Those tools are garbage today but they'll certainly improve.

[–] [email protected] 54 points 2 days ago* (last edited 2 days ago) (4 children)

In the ‘Medium’ difficulty category, OpenAI’s o4-mini-high model scored the highest at 53.5%.

This fits my observation of such models. o4-mini-high is able to help me with 80-90% of the problems at work. For the remaining problems, it would come up with a nonsensical solution, and no matter how much I prompted it, it would tunnel-vision on that specific approach. It could never second-guess itself, realise that its initial solution was completely off the mark, and try an entirely different approach. That's where I usually step in and do the work myself.

It still saves me time with the trivial stuff though.

I can't say the same for the rest of the LLMs. They are simply no good at coding and just waste my time.

[–] [email protected] 13 points 2 days ago (1 children)

I didn’t see Claude 4 Sonnet in the tests, and that’s the one I use. From my experience it looks to be in about the same category as o4-mini.

It is a nice tool to have in my belt. But these LLM-based agents are still very far from being able to do advanced and hard tasks. To me it is probably more important to communicate and learn about the limitations of these tools, so as not to lose time instead of gaining it.

In fact, I am not even sure they are good enough to be used to generate production-ready code. But they are nice for pre-reviewing, building simple scripts that don’t need to be highly reliable, analysing a project, asking specific questions, etc. The game changer for me was Clojure-MCP. Having a REPL at its disposal really enhances the quality of most answers.

[–] [email protected] 4 points 1 day ago

For me, it’s Claude Code where everything finally clicked. For advanced stuff, sure, they’re shit when they’re left alone. But as long as I approach it as a junior developer (breaking tasks down into easy bites, having a clear plan at all times, steering it away from pitfalls), I find myself enjoying other stuff while it does the monkey work. Just be sure you provide it with tools, MCP, RAG, and some patience.

[–] [email protected] 23 points 2 days ago (7 children)

They have their uses. For instance, the other day I needed to read some assembly and decompiled C; you know how fun that can be. The LLM proved quite good at translating it to English, and it really sped up the process.

Writing it back the other way wasn't as good, though; just good enough to point in a direction, and I still ended up writing the patcher mostly by myself.

[–] [email protected] 7 points 2 days ago (1 children)

Fortunately, 90% of coding is not hard problems. We write the same crap over and over. How many different create-an-account and sign-in flows do we really need? Yet there seems to be an infinite number of them, each with its own bugs.

[–] [email protected] 14 points 1 day ago* (last edited 1 day ago) (1 children)

The hard problems are the only reason I like programming. If 90% of my job was repetitive boilerplate, I'd probably be looking elsewhere.

I really dislike how LLMs are flooding the internet with a seemingly infinite amount of half-broken TODO-app style programs with no care at all for improving things or doing something actually unique.

[–] [email protected] 7 points 2 days ago (1 children)

I've found that AI is only good at solving programming problems that are relatively "small picture", or that deal with the basics of a language. Anything else it provides a solution for, you will have to rewrite completely once you consult the language's standards and best practices.

[–] [email protected] 5 points 1 day ago

Well, I recently did kind of an experiment: writing a kids' game in Kotlin without ever having used it. It was surprisingly easy to do. I guess it helps that I'm fluent in ~5 other programming languages, because I could tell what looked obviously wrong.

My conclusion kinda is that it's a really great help if you know programming in general.

[–] [email protected] 13 points 2 days ago (17 children)

About all they are good for is generating boilerplate code. Just far less efficiently than a snippet library.
