Offtopic, but when I was a kid I was obsessed with the complex subway rail system in NYC. I kept trying to draw and map it out.
The key is identifying how to use these tools and when.
Local models like Qwen are a good example of how these can be used, privately, to automate a bunch of repetitive non-deterministic tasks. However, they can spit out some crap when used mindlessly.
They are great for sketching out software ideas though, i.e. try 20 prompts for 4 versions, get some ideas, and then move over to implementation.
God, seriously. Recently I was iterating with Copilot for like 15 minutes before I realized that its complicated code changes could be reduced to an if statement.
Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.
Engineers love moving parts, known for their reliability and vigor
"Might" is the important word here.
It gives you the picture on the right when you asked the prompt for a single straight track. Now you have to spend 10 hours debugging code and fixing hallucinations of functions that don't exist in libraries it doesn't even need to import.
The one on the right prints “hello world” to the terminal
While being more complex and costly to maintain
Depends on the use case. It's most likely at a train yard or train station.
If you know what you're doing, AI is actually a massive help. You can make it do all the repetitive shit for you. You can also have it write the code and you either clean it up or take the pieces that work for you. It saves soooooo much time and I freaking love it.
I knocked off an android app in Flutter/Dart/Supabase in about a week of evenings with Claude. I have never used Flutter before, but I know enough coding to fix things and give good instructions about what I want.
It would even debug my Android test environment for me and write automated tests to debug the application, as well as spit out the compose files I needed to set up the Supabase docker container and the SQL queries to prep the database and authentication backend.
That was using 3.5 Sonnet, and from what I've seen of 3.7, it's way better. I think it cost me about $20 in tokens. I've never used AI to code anything before, this was my first attempt. Pretty cool.
I turned on copilot in VSCode for the first time this week. The results so far have been less than stellar. It's batting about .100 in terms of completing code the way I intended. Now, people tell me it needs to learn your ways, so I'm going to give it a chance. But one thing it has done is replaced the normal auto-completion which showed you what sort of arguments a function takes with something that is sometimes dead wrong. Like the code will not even compile with the suggested args.
It also has a knack for making me forget what I was trying to do. It will show me something like the left-side picture with a nice rail stretching off into the distance when I had intended it to turn, and then I can't remember whether I wanted to go left or right. I guess it's just something you need to adjust to. Like you need to have a thought fairly firmly in your mind before you begin typing so that you can react to the AI code in a reasonable way? It may occasionally be better than what you have in mind, but you need to keep the original idea in your head for comparison purposes. I'm not good at that yet.
Try Roocode or Cline with the Claude 3.7 model. It's pretty slick, way better than Copilot. Turn on Memory Bank for larger projects to reduce the cost of tokens.
I don't mess with any of those in-IDE assistants. I find them very intrusive and they make me less efficient. So many suggestions pop up and I don't like that, and like you said, I get confused. The only time I thought one of them (Codeium) was somewhat useful is when I asked it to make tests for the file I was on. It did get all the positive tests correct, but all the negative ones wrong. Lol. So, I naturally default to the AI in the browser.
Thanks, it makes me feel relieved to hear I'm not the only one finding it a little overwhelming! Previously, I had been using chatgpt and the like where I would be hunting for the answer to a particularly esoteric programming question. I've had a fair amount of success with that, though occasionally I would catch it in the act of contradicting itself, so I've learned you have to follow up on it a bit.
That's the thing, it's a useful assistant for an expert who will be able to verify any answers.
It's a disaster for anyone who's ignorant of the domain.
Tell me about it. I teach a Python class. Super basic, super easy. Students are sometimes idiots, but if they follow the steps, most of them should be fine. Sometimes I get one who thinks they can just do everything with ChatGPT. They'll be working on their final assignment and they'll ask me what a for loop is for. Then I look at their code and it looks like Sanskrit. They probably haven't written a single line of code in those weeks.
If you're having to do repetitive shit, you might reconsider your approach.
I've tried this, to convert a large json file to simplified yaml. It was riddled with hallucinations and mistakes even for this simple, deterministic, verifiable task.
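For the record, a conversion like that doesn't need an LLM at all; here's a minimal deterministic sketch, assuming Python with PyYAML installed and hypothetical file names:

```python
import json

import yaml  # PyYAML, assumed installed via `pip install pyyaml`

# Load the large JSON file...
with open("data.json") as src:          # hypothetical input path
    data = json.load(src)

# ...and re-emit it as block-style ("simplified") YAML.
with open("data.yaml", "w") as dst:     # hypothetical output path
    yaml.safe_dump(data, dst, default_flow_style=False, sort_keys=False)
```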
Depending on the situation, repetitive shit might be unavoidable
Usually you can solve the issue by using regex, but regex can be difficult to work with as well
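For the simple cases it's basically a pattern-based search and replace; a rough sketch, assuming Python's built-in re module and hypothetical file/function names:

```python
import re

# Read the source file to edit (hypothetical path).
with open("handlers.py") as f:
    source = f.read()

# Rename every call like get_user_v1(...) to get_user(...),
# using a word boundary so similar identifiers are left alone.
updated = re.sub(r"\bget_user_v1\(", "get_user(", source)

with open("handlers.py", "w") as f:
    f.write(updated)
```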
Nah, I'm good the way I do things. I have a good pace that has been working out very well for me :)
I've been trying to use aider for this; it seems really cool, but my machine and wallet cannot handle the sheer volume of tokens it consumes.
It's taken me a while to learn how to use it and where it works best but I'm coming around to where it fits.
Just today I was doing a new project. I wrote a couple lines about what I needed and asked for a database schema. It looked about 80% right. Then I asked for all the models for the ORM I wanted and it did that. Probably saved an hour of tedious typing.
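The models it generates are exactly the kind of boilerplate you'd otherwise type by hand; a hypothetical example of one such model, assuming SQLAlchemy as the ORM and a made-up users table:

```python
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    """Hypothetical table; stands in for the models the LLM drafted."""
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)
    display_name = Column(String(100))
    created_at = Column(DateTime, server_default=func.now())
```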
Shhhh! You're not supposed to rock the AI hate boat.
I hate the ethics of it, especially the image models.
But frankly it's here, and lawyers were supposed to have figured out the ethics of it.
I use hosted Deepseek as an FU to OpenAI and GitHub for stealing my code.
I don't understand how build times magically decrease with AI. Or did they mean built?
They mean time to write the code, not compile time. Let's be honest, the AI will write it in Python or Javascript anyway
When it comes to AI, I'd sooner picture planes taking off from those railroads. It tends to hallucinate API calls that don't exist. If you don't go check the docs yourself, you will have a hard time debugging what went wrong.
You can instantly get whatever you want, only it’s made from 100% technical debt
That estimate seems a little low to me. It's at least 115%.
Even more. The first 100% of the tech debt is just understanding "your own" code.
I'm looking forward to the next 2 years, when AI apps are in the wild and I get to fix them lol.
As a senior dev, the wheel just keeps turning.
I'm being pretty resistant about AI code Gen. I assume we're not too far away from "Our software product is a handcrafted bespoke solution to your B2B needs that will enable synergies without exposing your entire database to the open web".
It has its uses. For templating and/or getting a small project off the ground it's useful. It can get you 90% of the way there.
But the meme is SOOO correct. AI does not understand what it is doing, even with context. The things junior devs are giving me really make me laugh. I legit asked why they were throwing a very old version of React on the front end of a new project and they stated they "just did what ChatGPT told them" and that it "works". That's from just last month or so.
The AI that is out there is all based on old posts and isn't keeping up with new stuff. So you get a lot of same-ish looking projects that have some very strange/old decisions to get around limitations that no longer exist.
Holdup! You've got actual, employed, working, graduated juniors who are handing in code that they don't even understand?
The AI also enables some very bad practices.
It does not refactor and it makes writing repetitive code so easy you miss opportunities to abstract. In a week when you go to refactor you're going to spend twice as long on that task.
As long as you know what you're doing and guide it accordingly, it's a good tool.
Yeah, personally I think LLMs are fine for writing a single function, or to rubber duck with for debugging or thinking through some details of your implementation, but I'd never use one to write a whole file or project. They have their uses, and I do occasionally use something like ollama to talk through a problem and get some code snippets as a starting point. Trying to do too much more than that is asking for problems though. It makes it way harder to debug because it becomes reading code you haven't written, it can make the code style inconsistent, and a not-insignificant amount of the time, even in short code segments, it will hallucinate a nonexistent function or implement something incorrectly, so using it to write massive amounts of code makes that way more likely.
without exposing your entire database to the open web until well after your payment to us has cleared, so it's fine.
Lol.
And then 12 hours spent debugging and pulling it apart.
And it still doesn't work. Just "mostly works".
A bunch of superfluous code that you find does nothing.
And if you need anything else, you have to use a new prompt which will generate a brand new application, it's fun!
That's not really how agentic AI programming works anymore. Tools like Cursor automatically pick files as "context", and you can manually add them or the whole codebase as well. That obviously uses way more tokens though.
It depends. AI can help writing good code. Or it can write bad code. It depends on the developer's goals.
LLMs can be great for translating pseudocode into real code, creating boilerplate, or automating tedious stuff, but ChatGPT is terrible at actual software engineering.