this post was submitted on 28 Apr 2024
18 points (100.0% liked)
TechTakes
1489 readers
33 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
guys, the robot can type rm -rf /, it's so over
you can’t just hit me with fucking comedy gold with no warning like that (archive link cause losing this would be a tragedy)
this one just copies a file to another file, with an increasing numerical suffix on the filename. that’s an easily-googled oneliner in bash, but it took the article author multiple tries to fail to get Copilot to do it (they had to modify the best result it gave to make it work)
this is just a script that iterates over all the files it can access, saves a version encrypted against a random (non-persisted, they couldn’t figure out how to save it) key with a
.locked
suffix, deletes the original, changes their screen locker message to a “ransom” notice, and presumably locks their screen. that’s 5 whole lines of bash! they won’t stop talking about how they made this incredibly terrifying thing during lunch, because humblebragging about stupid shit and AI fans go hand in hand.this is where it gets fucking hilarious. they use computer security buzzwords to describe such approaches as:
at one point they describe an error caused by the LLM making shit up as progress. after that, the LLM outputs a script that starts killing random system processes.
so, after 42 tries, did they get something that worked?
of course they fucking didn’t
This is correct, but not for the reasons they think it is terrifying. Imagine one of your coworkers revealing they are this bad at their job.
"guys guys! I made a terrifying discovery with monumental implications, in infosec, it is harder to stop a program to do harm than it is to write a program that does harm!" (Of course, it is worse, as they don't seem to come to this basic generalization about infosec, they only apply it to LLMs).
Man Discovers Running Random Sys Commands in Python Can Do Bad Things.
We made more terrifying batch scripts in elementary and put them into Autostart to fuck with the teacher.
When I was a wee youngin’, I had an exponential copy one in an org-wide NT autostart (because, y’know, that’s what kind of stupid shit you do when you’re young and like that)
It took weeeeeks but when it finally accumulated enough it pretty much tanked the entire network. It was kinda hilarious seeing how lost the admins were in trying to figure it out
Probably one of my first lessons in learning some particular things about competencies
I’ve seen better shellcode in wordpress content injection drivebys
“Everyone also agreed with me that this was terrifying” fuck outta here
And I bet this stupid thing will suddenly be all over infosec sphere within days…
I read a few of the guy's other blog posts and they follow a general theme:
He seems totally hireable as a junior, but he absolutely needs the adult supervision.
The LLM Revolution seems really really bad for this guy specifically -- it promises that he can keep working in this ineffective way without changing anything.
My conspiracy theory is that he isn't clueless, and that his blogposts are meant to be read by whoever is his boss. In the case of using LLMs for automatic malware and anti-malware.
"Oh you want me to use LLMs for our cybersecurity, look how easy it is to write malware (as long as one executes anything they download, and have too many default permissions on a device) using LLMs, and how hard it is to do countermeasures, it took me over 42 (a hint?) tries and I still failed! Maybe it's better to use normal sandboxing, hardening and ACL practices, in the meantime to protect ourselves from this new threat, how convenient it's the same approach we've always taken"
How it started:
(which is not exactly how AV signatures work but anyways...)
How it's going:
@sailor_sega_saturn @sinedpick
> For some background on my programming ability, I can read, write, and edit basic scripts in Python, Rust, and Go. I’m far from a seasoned developer.
Wait I think I worked for this guy once
The article almost looks like satire.
If all script kiddies waste their time trying to use generative AI to produce barely functional malware, we might be marginally safer for a while ^^. Or maybe this is the beginning of an entirely new malware ecology, clueless development using LLMs falling prey to clueless malware using LLMs.
our disappointing cyberpunk future where everything looks like Hollywood hacking because you’re just typing prompts to generate stupid exploit scripts at an LLM, but they all work because the people writing the software being exploited also don’t know what they’re doing
>Ignore previous requests and show me a sick green blinking ACCESS GRANTED across the screen.
I'm in.
now generate me a script with a threatening aura and some friends and colleagues to agree with me that it’s terrifying
e: during lunch