Here's my audio/video dispatch about framing tech through conservation of energy to kill the magical thinking around generative AI and the like.
podcast ep: https://pnc.st/s/faster-and-worse/968a91dd/kill-magic-thinking
video ep: https://www.youtube.com/watch?v=NLHmtYWzHz8
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Razer claims that its AI can identify 20 to 25 percent more bugs compared to manual testing, and that this can reduce QA time by up to 50 percent as well as deliver cost savings of up to 40 percent
as usual this is probably going to be only the simplest shit, and I don’t even want to think about what the secondary downstream impacts of just listening to this shit without thought will be
If I had to judge Razer’s software quality based on what little I know about them, I’d probably raise my eyebrows, because for their mice and keyboards they ship some insane 600+ MiB driver with a significant memory impact that’s needed to use basic features like DPI buttons and LED settings, when the alternative is a 900 kiB open source driver which provides essentially the same functionality.
And now their answer to optimization is to staple a chatbot onto their software? I think I'll pass.
Isn't this what got crowdstrike in trouble?
not quite the same but I can see potential for a similar clusterfuck from this
it also doesn’t really help that so many goddamn games are running with rootkits, either
Well, the use of stuff like fuzzers has been a staple for a long time, so 'compared to manual testing' is doing some work here.
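For anyone who hasn't poked at one: here's a minimal sketch of what automated bug-finding already looks like with zero "AI" involved, using Go's built-in fuzzing (Go 1.18+). The Reverse function is just a hypothetical toy target for illustration, nothing to do with Razer's actual tooling.

```go
// file: reverse_test.go (fuzz targets live in _test.go files)
package fuzzdemo

import (
	"testing"
	"unicode/utf8"
)

// Reverse is a deliberately naive byte-wise reversal; it silently
// mangles multi-byte UTF-8 input, which is the bug the fuzzer will find.
func Reverse(s string) string {
	b := []byte(s)
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
	return string(b)
}

// FuzzReverse checks two properties on generated inputs: reversing twice
// must return the original string, and reversing valid UTF-8 must not
// produce invalid UTF-8.
func FuzzReverse(f *testing.F) {
	f.Add("hello")   // seed corpus; the fuzzer mutates these
	f.Add("QA time")
	f.Fuzz(func(t *testing.T, s string) {
		rev := Reverse(s)
		if got := Reverse(rev); got != s {
			t.Errorf("double reverse changed input: %q -> %q", s, got)
		}
		if utf8.ValidString(s) && !utf8.ValidString(rev) {
			t.Errorf("reversing valid UTF-8 %q produced invalid UTF-8 %q", s, rev)
		}
	})
}
```

Running `go test -fuzz=FuzzReverse` mutates those seeds and trips the UTF-8 check within seconds, which is the kind of free, decades-old baseline that makes "20 to 25 percent more bugs than manual testing" a much less impressive claim.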
TV Tropes got an official app, featuring an AI "story generator". Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it "if we see that users don't find the story generator helpful".
Thinking that trying to sell LLMs as a creative tool at this point in the bubble will not create backlash is just delusional, lmao.
At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.
On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.
For the arts, the slop-nami has made "AI" synonymous with "creative sterility" and likely painted the field as, to copy-paste a previous comment, "all style, no substance, and zero understanding of art, humanities, or how to be useful to society"
For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.
(Not sure if this makes sense - it was 1AM where I am when I wrote this)
Ran across a short-ish thread on BlueSky which caught my attention, posting it here:
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it
like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable
i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself
Absolutely.
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made.
This + I choose to interpret it as static.
you cheapen them by reviving them
Learnt this one from, of all places, the pretty bad manga GANTZ.
New piece from Brian Merchant: DOGE's 'AI-first' strategist is now the head of technology at the Department of Labor, which is about...well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:
“I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”
How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of things I likely haven't accounted for, imposing them will likely prove harder than they thought.
(I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech", which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)
Can't wait for them to discover that the DoL was created to protect them from labor
So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart, ran it through an AI algo to determine which jobs were "BS" in his estimation, and then emailed those employees/admins asking them what tasks they do and to justify their jobs.
Get David Graeber's name out ya damn mouth. The point of Bullshit Jobs wasn't that these roles weren't necessary to the functioning of the company, it's that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn't exist
The idea was not that "these people should be fired to streamline efficiency of the capitalist orphan-threshing machine".
I saw Musk mentioning Iain M. Banks' The Player of Games as an influential book for him, and I puked in my mouth a little.
I demand that Brown University fire (checks notes) first name "YOU ARE HACKED NOW" last name "YOU ARE HACKED NOW" immediately!
Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder
Script kiddies at least have the potential to learn what they're doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say "fetch."
There's a headline in there somewhere. Vibe Coders: stop trying to make fetch happen
In other news, Ed Zitron discovered Meg Whitman's now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
I want this company to IPO so I can buy puts on these lads.
Tried to see if they have partnered with SoftBank; the answer is probably not.
Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.
For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I've paused work on Apple GPU drivers indefinitely.
I can't share any more information at this time, so please don't ask for more details. Thank you.
Whatever has happened there, I hope it resolves in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust -- and it then ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it; I (wrongly) thought it was too similar to C++, and I wanted to get away from that... That twitter thread made me reconsider and take a closer look. So thankful for that.
Damn, that sucks. Seems like someone who was extremely generous with their time and energy on a free project that people felt entitled to.
This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/
E: followup post from Asahi Lina reads:
If you think you know what happened or the context, you probably don't. Please don't make assumptions. Thank you.
I'm safe physically, but I'll be taking some time off in general to focus on my health.
Finished reading that post. Sucks that Linux is such a hostile dev environment. Everything is terrible. Teddy K was on to something
The DARVO to try and defend Hacker News is quite a touch, esp. as they make it clear how HN is harmful (via the "kills" link).
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I've updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.