this post was submitted on 25 May 2025
27 points (100.0% liked)

TechTakes

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 3 points 2 hours ago

I hate that I'm so terminally online that I found out about the rumor that Musk and Stephen Miller's wife are bumping uglies through a horror-fic parody account

https://mastodon.social/@[email protected]/114593332907413196

[–] [email protected] 8 points 18 hours ago

OT: Welp. Think the interview went well. Just waiting for them to check references (oh god) and I should know what's what by Monday.

[–] [email protected] 6 points 22 hours ago

Got two major pieces to share that caught my attention:

[–] [email protected] 5 points 1 day ago (1 children)

In a completely unprecedented turn of events, the word-prediction machine has a hard time predicting numbers.

https://www.wired.com/story/google-ai-overviews-says-its-still-2024/

[–] [email protected] 2 points 1 day ago

Hey, mentally some people are still in 2019/early 2020. /s

[–] [email protected] 5 points 1 day ago

Further evidence is emerging that the effort to replace government employees with the Great Confabulatron is well underway, and the presumed first-order goal of getting a yes-man to sign off on whatever bullshit is going well.

Now we wait for the actual policy implications and the predictable second-order effects. Which is to say dead kids.

[–] [email protected] 9 points 1 day ago* (last edited 1 day ago) (3 children)

Am I the only person not impressed by veo3? Yeah, there are more details, yada yada, but the details are still wrong.

The view of the garbage fractal isn't improved by zooming deeper into the Bullshit-Mandelbrot Set.

[–] [email protected] 6 points 1 day ago

I saw an ad for a local gin festival generated with veo3 and now I’ve sworn off gin

[–] [email protected] 4 points 1 day ago

Looks like shit, and it's almost entirely static, because anything with a little more movement would look like complete piss.

[–] [email protected] 6 points 1 day ago

Holy hell all the examples I found made me seasick. I am apparently physically incapable of watching veo3 videos.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago) (1 children)

currently reading https://arxiv.org/abs/2404.17570

this is PsiQuantum, who are hot prospects to build an actually-quantum computer

no, they have not yet factored 35

but they are seriously planning qubits on a wafer and they think they can make a chip with 1M noisy qubits

anyone know more about this? does that preprint (from last year) pass sniff tests?

(my interest is journalistic, and also the first of these companies to factor 35 gets all the VC money ever)
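
No idea if these numbers survive contact with reality, but as a hedged napkin sketch: the 1,000:1 error-correction overhead below is a generic surface-code ballpark rather than PsiQuantum's own figure, and the 2n+3 logical-qubit count is Beauregard's Shor circuit, ignoring everything else that would actually dominate.

```python
# Hedged napkin math: every constant below is an assumed ballpark,
# not a figure from the preprint.
PHYSICAL_QUBITS = 1_000_000   # the "1M noisy qubits" chip claim
OVERHEAD = 1_000              # assumed physical-per-logical error-correction ratio

logical = PHYSICAL_QUBITS // OVERHEAD
print(f"~{logical:,} logical qubits")  # ~1,000

# Beauregard's circuit runs Shor's algorithm on an n-bit number
# using 2n + 3 logical qubits, so invert that:
def max_bits(logical_qubits: int) -> int:
    return (logical_qubits - 3) // 2

print(f"enough for numbers up to ~{max_bits(logical)} bits")  # ~498
print(35 == 5 * 7)  # meanwhile, 35 remains classically factorable: True
```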

[–] [email protected] 8 points 1 day ago (1 children)

I unfortunately know jack shit about quantum computing, so I can't really weigh in, but I am rooting for them to pull it off, because PsiQuantum getting all the VC money ever means AI stops getting all the VC money ever

[–] [email protected] 6 points 1 day ago

Hey, if we boil the oceans via quantum research, at least we might get some new physics out of it.

[–] [email protected] 6 points 2 days ago

New Bluesky post from Baldur Bjarnason:

What’s missing from the now ubiquitous “LLMs are good for code” is that code is a liability. The purpose of software is to accomplish goals with the minimal amount of code that’s realistically possible

LLMs may be good for code, but they seem to be a genuine hazard for collaborative software dev

[–] [email protected] 5 points 2 days ago (1 children)

New article from Brian Merchant: An 'always on' OpenAI device is a massive backlash waiting to happen

Giving my personal thoughts on the upcoming OpenAI Device^tm^, I think Merchant's correct to expect mass-scale backlash against the Device^tm^ and public shaming/ostracisation of anyone who decides to use it - especially considering it's an explicit repeat of the widely clowned-on Humane AI Pin.

Headlines of Device^tm^ wearers getting their asses beaten in the street will follow soon afterwards. As Brian's noted, a lot of people would see wearing an OpenAI Device^tm^ as an open show of contempt for others, and between AI's public image becoming utterly fouled by the bubble and Silicon Valley's reputation going into the toilet, I can see someone treating a Device^tm^ wearer as an opportunity to take their well-justified anger at tech corps out on someone who openly and willingly bootlicks for them.

[–] [email protected] 3 points 15 hours ago (2 children)

Part of me wonders if this is even supposed to be a profitable hardware product or if they're sufficiently hard-up for training data that "put always-on microphones in as many pockets as possible" seems like a good strategy.

It's not, both because it's kinda evil and because it's definitely stupid, but I can more easily see it as a way to solve the data problem than I can see anyone thinking this is actually a good or useful product to create.

[–] [email protected] 1 points 42 minutes ago* (last edited 40 minutes ago)

When I get a minute, I intend to do a back-of-the-napkin calc to figure out how many words 100 million of these things would hear on an average day.

100 million sounds like a target that was naively pooped out by some other requirement, like "How much training data do we need to scale to GPT-5 before the money runs out, assuming the dumbest interpolation imaginable?"
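
In the meantime, a first pass where every input is a made-up assumption rather than a measured figure (speech-rate and exposure numbers vary wildly depending on who you ask):

```python
# Back-of-the-napkin sketch: every constant below is an assumption, not data.
DEVICES = 100_000_000         # the 100 million figure from above
WORDS_PER_MINUTE = 150        # rough conversational speech rate
AUDIBLE_HOURS_PER_DAY = 2     # assumed hours of speech within earshot per day

words_per_device = WORDS_PER_MINUTE * 60 * AUDIBLE_HOURS_PER_DAY
fleet_words_per_day = DEVICES * words_per_device

print(f"{words_per_device:,} words per device per day")    # 18,000
print(f"{fleet_words_per_day:,} words per day fleetwide")  # 1,800,000,000,000
# ~1.8 trillion words a day; GPT-3 was reportedly trained on ~300 billion
# tokens, so under these assumptions the fleet "hears" that much in well
# under a week.
```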

[–] [email protected] 2 points 11 hours ago (1 children)

What is solving the data problem supposed to look like, exactly? A somewhat higher score on their already incredibly suspect benchmarks?

The data part of the whole hyperscaling thing seems predicated on the belief that the map will magically become the territory if only you map hard enough.

[–] [email protected] 3 points 8 hours ago

I fully agree, but as data availability is one of the primary limits that hyperscaling is running up against I can see the true believers looking for additional sources, particularly sources that aren't available to their competitors. Getting a new device in people's pockets with a microphone and an internet link would be one such advantage, and (assuming you believe the hyperscaling bullshit) would let OpenAI rebuild some kind of moat to keep themselves ahead of the competition.

I don't know, though. Especially after the failure of at least 2 extant versions of the AI companion product I just can't imagine anyone honestly believing there's enough of a market for this to justify even the most ludicrously optimistic estimate of the cost of bringing it to market. It's either a data thing or a straight-up con to try and retake the front page for another few news cycles. Even the AI bros can't be dumb enough for it to be a legit effort.

[–] [email protected] 14 points 3 days ago (4 children)

Veering semi-OT: the guy behind the godawful Windows 11 GUI has revealed himself:

Looking at his Twitter profile, it's clear he's a general dumpster fire of a human being - most of his feed's just him retweeting AI garbage or fash garbage.

[–] [email protected] 12 points 2 days ago

this one is a joke, i think. he is definitely on the fashy bullshit though

[–] [email protected] 15 points 2 days ago (1 children)

It's not healthy for me to have my biases confirmed like this.

[–] [email protected] 8 points 2 days ago (1 children)

But it lets you adjust your priors so pleasantly!

[–] [email protected] 11 points 2 days ago

It also means you can update your priors about your own ~~biases~~ predictive instincts being good, allowing you to be more confident in literally everything you've ever believed or thought about for half a second. Superpredictors unite!

[–] [email protected] 8 points 2 days ago

Not advocating violence, but Achewood did demonstrate one possible set of reactions to discovering a Microsoft designer at large in public.

https://achewood.com/2007/07/05/title.html

[–] [email protected] 11 points 3 days ago (2 children)

@BlueMonday1984 lol @ "I try not to let [performance] considerations get in the way"
Also why do you even put a React Dev on that task 🤡

[–] [email protected] 5 points 1 day ago

“I try not to let [performance] considerations get in the way”

You could show me this without any context whatsoever and my first thought would've been "did a React dev say that"

[–] [email protected] 27 points 3 days ago (1 children)

OT: just got a job interview and wanted to pass the good vibes on!

[–] [email protected] 10 points 3 days ago

time amplifying the nonsense around saltman’s orb grift

features a helluva lot of words while at multiple points remaining entirely incurious about the claims it amplifies

[–] [email protected] 7 points 3 days ago

Pretty good summary of why Alex Karp is as much a horrible fucking shithead as Thiel.

https://www.thenation.com/article/culture/alex-karp-palantir-tech-republic/tnamp/

[–] [email protected] 14 points 3 days ago* (last edited 3 days ago)

I was trying out free github copilot to see what the buzz is all about:

It doesn't even know its own settings. This one little useful thing that isn't plagiarism, providing a natural-language interface to its own bloody settings, it couldn't do.

[–] [email protected] 10 points 3 days ago* (last edited 3 days ago)

I don't get it: how is every one of the most touted people in the AI space among the least credible people in the industry?

Like literally every time it's a person whose name I recognize from something else they've done, that something else is something I hate.

[–] [email protected] 9 points 3 days ago

In the collection of links to what Ive has done in recent years, there's one to an article about a turntable redesign he worked on, and from that article:

The Sondek LP12 has always been entirely retrofittable and Linn has released 50 modular hardware upgrades to the machine, something that Ive said he appreciates. "I love the idea that after years of ownership you can enjoy a product that's actually better than the one you bought years before," said Ive.

I don't know whether I should laugh or scream that it's Ive, of all people, saying that.

[–] [email protected] 15 points 4 days ago (4 children)

New piece from Iris Meredith: Keeping up appearances, about the cultural forces that gave us LLMs and how best to defeat them

[–] [email protected] 6 points 2 days ago

In a world that chases status, be prestigious

I'll keep that in mind....

[–] [email protected] 11 points 4 days ago

Reminds me of something F.D. Signifier said on a music podcast.

Progressives are losing the culture war in a lot of ways, but they'll always need us, because we're the ones pushing the boundaries on art, and it turns out, no matter how ghoulish people want to act, everyone has a genuine love of fucking awesome art. The true loss condition is being captured by the tools of the master.
