this post was submitted on 13 Aug 2024
Linux
ChatGPT will easily make you a basic GUI in Python (using tkinter, in my case). Can only recommend. It can also explain how those things work, etc.
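To give an idea of what's meant here, this is a minimal sketch of the kind of starter tkinter GUI an LLM typically produces; the widget text and function names are my own illustration, not from the thread.

```python
# Minimal tkinter window: a label and a button that updates it.
import tkinter as tk

def build_gui() -> tk.Tk:
    root = tk.Tk()
    root.title("Hello")
    label = tk.Label(root, text="Click the button")
    label.pack(padx=10, pady=5)

    def on_click() -> None:
        # Callback fired by the button; updates the label text in place.
        label.config(text="Button clicked")

    tk.Button(root, text="Click me", command=on_click).pack(padx=10, pady=5)
    return root

# To run interactively:  build_gui().mainloop()
```

From there it is easy to ask the model to explain `pack()` vs `grid()`, callbacks, and so on.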
Hmmm .. 🤔 The best way not to make friends with somebody who has over 30 years of coding experience: suggest he use ChatGPT to write a computer program 🤣🤣
It is far more efficient to ask specific questions instead of reading the whole documentation. Asking those with relevant knowledge of the field is usually not an option. Asking GPT is an option we now have. Why would you not like it? It is like having Excel instead of a calculator and paper.
It takes the fun out of programming
You don't learn as well when you have someone/something else do the thinking for you. It's nice to NOT have to keep going back to an LLM for answers.
I learn even less if the effort required is far too high to even try. GPT reduces this a whole lot, enabling me (and presumably many others) to do things they were unable to do previously.
I really do not understand how this community is so toxic regarding this.
I'm guessing it's because you're surrounded by people who DID spend the extra effort to learn something on their own without having their hand held, and now just see people trying to take the easy way out.
You're not unique. We were all in your position once.
Define "without having their hand held". Did they come up with all the concepts themselves? Do they exclusively code in assembly? Wire their machines by hand? Operate the switches manually? Press the key of the Morse machine themselves? How far back should I go with the analogies before it is clear how nonsensical that is? I am a random hobbyist who is able to do this kind of stuff because of GPT. I would not have been able to replace a broken BMS chip in my e-bike battery without GPT helping me digest the datasheet and get the registers, programming procedure etc. into code to read the old chip and write the new one. I am not 15 anymore; I can not spend 50 hours learning some niche skill that I will never(!) use again just to fix something that is worth $200.
If you think that anyone can do that with GPT you are not only mistaken, but I am also shocked that you would not want that to be the case, just out of pettiness that you could not do it as easily but "had to learn it the hard way back in the day". Disgusting.
I don't care what you do, you do you. I just like actually knowing things when I need to know them, and have the capacity to solve problems myself without being dependent on tech for everything. It's like being able to figure out how to change your own engine oil vs. paying somebody to do it for you.
We read books. We went to classes. We got our hands dirty and failed, again and again and again until it clicked and we got it right. That's the part that's hard. LLMs are a tool, not a replacement for a good programmer who understands what they are doing. Use them to help you save time with tasks you are already familiar with. Don't use them as a college professor, because eventually they're going to teach you something wrong; that's just how they work. And without knowing some basic concepts about the subject you're inquiring about, you're not going to catch it when it does go wrong.
I'm 42 by the way, and I still learn new things every day.
I'm going to bring up an excerpt of your previous comment, because there's an example I want to make. Say there is something in that datasheet (I'm completely making this up as an example) about needing a certain value resistor to set the charging current, and ChatGPT fails to mention this and simply tells you that the battery takes the voltage directly from the circuit without it. Then you have a fire on your hands, because you decided NOT to read the datasheet and skipped crucial info. If you keep taking AI-generated text at face value, it's going to bite you in the ass one day.
Electronics is my main hobby, so you can bet I'm poring over datasheets all day too, and little gotchas like that are all over the place. You simply cannot trust them with these things the way you can trust a good old book or someone that's been doing it for a long time.
The first 2 paragraphs read a bit odd. I mean, I specifically said that it is a tool that saves time, not what you're putting in my mouth. That is actually the whole point I made. The same way a book saves time compared to going somewhere, hearing about it, and writing it down. Or using interactive programs instead of having to compile and upload code. Or using Python instead of C++, or C++ instead of assembly. Or assembly instead of straight binary, or connecting wires, or a punch card.
I also specifically say that someone without prior knowledge is not going to be able to do that. The same way someone who does not understand math is not going to be able to use a calculator or Excel in an effective way.
To take the oil change example, it is like a tutorial on how to do it yourself. You still need to have a jack, lie on the floor, unscrew things, etc. But instead of having to go to a shop and learn it there, you learn it directly, which is more effective. Like reading a book about assembly instead of looking over the shoulder of the person inventing assembly. Errors can always happen, and I have to say, given how much GPT improved over just 1.5 years, we will soon be in the situation Wikipedia was in back in the day: "Wikipedia can be edited by everyone, you can't trust it", while in reality it was already more reliable than the encyclopedias it was being compared to.
~20 years ago:
"Reading documentation is for wimps! Real programmers read the source code directly"
LLMs are just a tool. And meanwhile our needs and expectations from the simplest pieces of code have risen
As a sidenote, this reminds me of a discussion I have every so often about "tools that make things too easy".
There is something I call "the Arduino effect": people who write code for things based on example code they find left and right, and all kinds of libraries they mix together. It all works .. for as long as it works. The problem is what happens when things do not work.
I once helped out somebody who had an issue with a simple project:
He: "I don't understand it. I have this sensor, and this library .. and it works. Then I have this 433 MHz radio module with that library, and that also works. But when I use them together, it doesn't work."
Me: "What have you tried?"
He: "Well, I looked at the libraries. They all seem fine. I reinstalled all the software. It's neither of those."
Me: "Could it be that these two boards use the same hardware interrupt or the same timer?"
He: "The what???"
I see similar issues with other platforms. GNU Radio is another nice example: people mix blocks without knowing what exactly they do.
As I said, this is all very nice, as long as it works.
I wonder if code generated by LLMs will result in the same kind of problems: people who do not have the background knowledge needed to troubleshoot issues once problems become more complex.
(Just a thought / question .. not an assumption)
That can become an issue, but IMO the person in your example used the tool wrong. Using it to write the boilerplate for you, build an MVP, and see how the libraries should be used sets one on the right track. But that track should be the starting point for messing with the code and understanding why what goes where. An LLM used as a replacement for coding is misuse; used as a time booster, it is good. Unless you completely don't want to learn it and just want something that works. But that assumption broke in your example the moment they decided to add something to it.
I have a very hands-on way of learning things. In the past I have read the entire documentation for a library back to back, but in the end I still had to copy something that somehow works and keep breaking it and fixing it to understand how it works. The part between documentation and MVP wasn't any easier because I had read the documentation.
For that kind of learning, having an LLM create something that works is a great speed-up. In theory a tutorial might help in such cases, but it has to exist, and very often "I want something like this, but..." can mean that one is exploring a direction no existing tutorial addresses.
EDIT: A thought experiment. If I go to Fiverr asking for a project, then for another one, and then start smashing them together, the problem is not in what the freelancers did; it's in me not knowing what I'm doing. But if I can have a 100-line boilerplate file that only needs a little tinkering generated from a few sentences of text, that's a great speed-up.
Hi,
Just to put things into perspective.
Well, this example dates from some years ago, before LLMs and ChatGPT. But I agree that the principle is the same (and that was exactly my point).
If you analyse this, the error the person made was assuming an Arduino is like a PC, while it is not. An Arduino is a microcontroller. The difference is that a microcontroller has limited resources: pins, hardware interrupts, timers, .. In addition, pins can be reconfigured for different functions (GPIO, UART, SPI, I2C, PWM, ...). Also, a microcontroller of the Arduino class does not run an RTOS, so it is coded "bare-metal". And as there is no operating system doing resource management for you, you have to do it in the application.
And that was the problem: although resource management is the responsibility of the application programmer, the Arduino environment has largely pushed that off onto the libraries. The libraries configure the ports in the correct mode, set up timers and interrupts, configure I/O devices, ... And in the end, this is where things went wrong. So, in essence, the programmer made assumptions based on the illusion created by the libraries: that writing an application on an Arduino is just like using a library on a Unix box (which is not correct).
That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.
I am not saying LLMs are by definition bad. I am just careful about the assumptions they can create.
I know where you're coming from, and I'm not saying you're wrong. But just a thought: what do you think will prevail? Having many people bash pieces together and call in someone who understands the matter only for the things that don't work? Or having more people understand the real depths?
I'm afraid that in cases where the point is not to become an expert, the first one will be chosen as the viable tactic.
A long time ago we put things together by manually crafting assembly code. Now we use high-level languages to churn out code faster, and we solve inefficiencies by throwing more hardware at the problem until optimizations arrive in the interpreter/compiler. We're already choosing the first one.
To be honest, I have no personal experience with LLMs (kind of boring, if you ask me). I do have two colleagues at work who tried them. One, who has very basic coding skills (his own words), is very happy. The other, who has much more coding experience, says his tests show they are only good at very basic problems. Once things become more complex, they fail very quickly.
I just fear that, if LLMs can be used to provide sample code for any project, the result could be that open-source projects will spend even less time writing documentation ("the boring work").
The LLM is excellent at writing documentation... :D
tkinter is pretty powerful but not exactly easy to use. I'd use something simpler to get started.
Hence GPT to help. I built a fairly big GUI that way, far bigger than GPT's context window (about 3'500 lines), but as always we can break things into smaller pieces that are easy to manage.
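One way to break a GUI into context-window-sized pieces, sketched here as a guess at the kind of structure meant (the class and panel names are hypothetical, not from the actual project): give each section of the window its own `Frame` subclass, so each class can be generated and revised independently.

```python
# Sketch: a large GUI split into independent Frame subclasses.
import tkinter as tk

class SettingsPanel(tk.Frame):
    """One self-contained section of the window."""
    def __init__(self, master: tk.Misc) -> None:
        super().__init__(master)
        tk.Label(self, text="Settings").pack()

class LogPanel(tk.Frame):
    """Another section; each panel can be worked on in isolation."""
    def __init__(self, master: tk.Misc) -> None:
        super().__init__(master)
        self.text = tk.Text(self, height=5)
        self.text.pack(fill="both", expand=True)

class App(tk.Tk):
    """Top-level window that only assembles the panels."""
    def __init__(self) -> None:
        super().__init__()
        SettingsPanel(self).pack(side="left", fill="y")
        LogPanel(self).pack(side="right", fill="both", expand=True)

# To run interactively:  App().mainloop()
```

Each panel then fits comfortably in a single prompt, and only the small `App` class needs to know how the pieces connect.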