this post was submitted on 16 Sep 2024
26 points (100.0% liked)

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 5 points 1 hour ago* (last edited 50 minutes ago)

I signed up for the Urbit newsletter many moons ago when I was a little internet child. Now, it's a pretty decent source of sneers. This month's contains: "The First Wartime Address with Curtis Yarvin". In classic Moldbug fashion, it's Two Hours and Forty Fucking Five minutes long. I'm not going to watch the whole thing, but I'll try to mine the transcript for sneers.

26:23 --

Simplicity in them you know it runs on a virtual machine who specification Nock [which] fits on a T-shirt and uh you know the goal of the system is to basically take this kind of fundamental mathematical simplicity of Nock and maintain that simplicity all the way to user space so we create something that's simple and easy to use that's not a small amount of of work

Holy fucking shit, does this guy really think building your entire software stack on brainfuck makes even a little bit of sense at all?
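(For anyone who hasn't had the pleasure: the "fits on a T-shirt" claim is that Nock is about a dozen reduction rules over binary trees of unsigned integers, and literally everything above it is built from that. Here's a toy sketch of the first few opcodes in Python, with 2-tuples standing in for cells; this is my own illustrative subset, not the full Nock 4K spec, which also has opcodes 6 through 11.)

```python
# Toy Nock-ish evaluator: a noun is an atom (int) or a cell (2-tuple of nouns).

def slot(n, noun):
    # Tree addressing: 1 is the whole noun, 2 is the head, 3 is the tail,
    # and /[2n] = /[2 /[n]], /[2n+1] = /[3 /[n]] recurse down the tree.
    if n == 1:
        return noun
    if n == 2:
        return noun[0]
    if n == 3:
        return noun[1]
    return slot(2 + (n & 1), slot(n >> 1, noun))

def nock(subject, formula):
    op, arg = formula
    if isinstance(op, tuple):  # autocons: [a [b c] d] -> [*[a b c] *[a d]]
        return (nock(subject, op), nock(subject, arg))
    if op == 0:                # slot lookup into the subject
        return slot(arg, subject)
    if op == 1:                # constant
        return arg
    if op == 2:                # eval: compute a new subject and a new formula
        b, c = arg
        return nock(nock(subject, b), nock(subject, c))
    if op == 3:                # cell test: 0 means "is a cell", 1 means atom
        return 0 if isinstance(nock(subject, arg), tuple) else 1
    if op == 4:                # increment
        return nock(subject, arg) + 1
    if op == 5:                # equality test: 0 means equal
        b, c = arg
        return 0 if nock(subject, b) == nock(subject, c) else 1
    raise ValueError(f"opcode {op} not implemented in this sketch")

# [4 0 1]: increment the whole subject.
print(nock(41, (4, (0, 1))))  # 42
```

Mathematically cute, sure. But "increment and tree lookup are your only primitives" is exactly the brainfuck energy I'm talking about: everything else, including addition, has to be laboriously reconstructed on top.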

30:17 -- a diatribe about how social media can only get worse and how Facebook was better than myspace because its original users were at the top of the social hierarchy. Obviously, this bodes well for urbit because all of you spending 3 hours of your valuable time listening to this wartime address? You're the cream of the crop.

~2:00:00 -- here he addresses concerns about his political leanings, caricaturing the concern as "oh, Yarvin wants to make this a monarchy" and responding with "nuh uh, urbit is decentralized." Absent from all this is any meaningful analysis of how decentralized systems (such as the internet itself) eventually tend toward centralization under certain incentive structures. Completely devoid of substance.

[–] [email protected] 7 points 2 hours ago* (last edited 2 hours ago) (2 children)

The robots clearly want us dead -- "Delivery Robot Knocked Over Pedestrian, Company Offered ‘Promo Codes’ to Apologize" (404 media) (archive)

And here rationalists warned us that AI misalignment would stay hidden from us until the "diamondoid bacteria".

[–] [email protected] 4 points 1 hour ago

AI misalignment leads to spinal misalignment.

[–] [email protected] 7 points 2 hours ago

If only we had paid attention to the roomba hitting us in the leg. It wasn't adorable, it was a murder attempt!

[–] [email protected] 13 points 9 hours ago (2 children)

Timnit Gebru on Twitter:

We received feedback from a grant application that included "While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,"

https://xcancel.com/timnitGebru/status/1836492467287507243

[–] [email protected] 7 points 5 hours ago

navigating this space without alignment with larger corporate players

stares into middle distance, hollow laugh

[–] [email protected] 6 points 5 hours ago

No need for xcancel, Gebru is on actually-social media: https://dair-community.social/@timnitGebru/113160285088058319

[–] [email protected] 7 points 8 hours ago (2 children)

Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paragraphs, which I'll subcomment on below.)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

Sure, "religion" is on a sliding scale, but Big Yud-flavored Rationality ticks more of the boxes on the "Religion or not" checklist than feminism or climate change. In fact, treating the latter as a religion is often a way to denigrate them, and never used in good faith.

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

Citation very much needed, bub.


[1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

[2] link and username withheld to protect the guilty. Suffice to say that They Are On My List.

[–] [email protected] 4 points 1 hour ago (1 children)

nobody in the community is actually interested in the Basilisk

except the ones still getting upset over it, but if we deny their existence as hard as possible they won't be there

[–] [email protected] 3 points 37 minutes ago* (last edited 37 minutes ago)

The reference to the Basilisk was literally one sentence and not central to the post at all, but this big-R Rationalist couldn't resist singling it out and loudly proclaiming it's not relevant anymore. The m'lady doth protest too much.

[–] [email protected] 8 points 8 hours ago* (last edited 8 hours ago)

nobody in the community is actually interested in the Basilisk

But you should care: y'all created an idea which some people do take seriously, and it is causing them mental harm. In fact, Yud took it so seriously in a way that shows he either believes in potential acausal blackmail himself, or that enough people in the community believe it that the idea would cause harm.

A community he created to help people think better, which now has a mental minefield somewhere that nobody talks about, because they want to look sane to outsiders. (They also pretend the people whose minds it already blew up don't exist.) This is bad.

I get that we put them in a no-win situation. Either they take their own ideas seriously enough to talk about acausal blackmail, and then either help people by disproving the idea, or help people by going 'this part of our totally Rational way of thinking is actually toxic and radioactive and you should keep away from it (a bit like Hegel, am I right?(*))'. That makes them look a bit silly for taking it seriously (of which you could say: who cares?), or a bit openly culty if they go with the secret-knowledge route. Or they could pretend it never happened, never was a big deal, and isn't a big deal, in an attempt to not look silly. Of course, we know what happened, and that it still is causing harm to a small group of (proto-)Rationalists. This option makes them look insecure, potentially dangerous, and weak to social pressure.

That they do the last one, while having also written a lot about acausal trading, just shows they don't take their own ideas that seriously. Or, if it is an open secret not to talk openly about acausal trade due to acausal blackmail, that's just more cult signs. You have to reach level 10 before they teach you about the Lord Xenu type stuff.

Anyway, I assume this is a bit of a problem for all communal worldbuilding projects: eventually somebody introduces a few ideas which have far-reaching consequences for the roleplay but which people would rather not have included. It gets worse when the non-larping outside world then notices you, and the first reaction is to pretend larping isn't that important for your group because the incident was a bit embarrassing. Own the lightning-bolt tennis ball, it is fine. (**)

*: I actually don't know enough about philosophy to know if this joke is correct, so apologies if Hegel is not hated.

**: I admit, this joke was all a bit forced.

[–] [email protected] 9 points 14 hours ago* (last edited 14 hours ago) (1 children)

Via Timnit Gebru's mastodon, I just learned that Emily Bender (like Gebru, of On the Dangers of Stochastic Parrots fame) has a podcast: "Mystery AI Hype Theater 3000." Looking forward to checking it out tomorrow at the gym!

https://www.buzzsprout.com/2126417/episodes

Summary: Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction to political economy to art made by machines.

[–] [email protected] 4 points 8 hours ago

It's pretty great. The closing bits of improv from her cohost are a bit meh imo but the AI hype assessments are legitimately amazing.

[–] [email protected] 10 points 19 hours ago (3 children)
[–] [email protected] 5 points 7 hours ago* (last edited 7 hours ago) (1 children)

From the comments: "Putting my conspiracy theory hat on, the dental hygiene industry in the US is for-profit, like the pharmaceutical, and would rather sell you a treatment than a cure."

Have these people ever BEEN to the dentist? While I know that certain dental procedures (tooth straightening in kids, whitening, etc.) are way overused in the US, no dentist worth their salt will let a check-up go by without a stern lecture on preventing future trouble. And if they don't, the hygienist most certainly will...

[–] [email protected] 4 points 7 hours ago

Here in Sweden the hygienist is definitely the Bad Cop in this scenario. I got sternly talked to by someone fresh out of school, so I don't doubt there's a retired Master Sergeant on the staff of the college they go to...

[–] [email protected] 11 points 16 hours ago (3 children)

Sometimes you read an article and you think "this article doesn't want me to do X, but all its arguments against X are utterly terrible. If that's the best they could find, X is probably alright."

that thread is an unholy combination of two of my least favorite types of guys: techbros willfully misunderstanding research they disagree with, and homeopaths

[–] [email protected] 4 points 7 hours ago* (last edited 7 hours ago)

I'd think 'we don't know the side effects, it probably doesn't work, and they are trying to sidestep the FDA' would be good arguments against it. Especially since, in the US, Thalidomide (yes, very much a dead horse) (mostly) wasn't a problem because the FDA stopped it.

Anyway, it seems the full-scale FDA trial stalled for lack of volunteers, so I suggest the HN people mad about this help out. Hey, it might turn out to actually work.

[–] [email protected] 13 points 16 hours ago

this article doesn’t want me to drink a shitload of colloidal silver, but all its arguments against drinking colloidal silver (it doesn’t do anything for your health, it might turn you blue, it tastes like ass) are utterly terrible. If that’s the best they could find, drinking a shitload of colloidal silver is probably alright.

[–] [email protected] 6 points 13 hours ago

What a terrible argument. Anything that involves messing around with your teeth needs to have good reasons to do it, rather than just good arguments against doing it.

[–] [email protected] 8 points 15 hours ago* (last edited 14 hours ago)

Absolutely unhinged. Are these people from the As-Seen-On-TV dimension where it's common for folks to burn their house down every time they try to fry an egg?

[–] [email protected] 11 points 22 hours ago (2 children)

Pulling out a pretty solid Tweet @ai_shame showed me:

countersneer

To pull out a point I've been hammering since Baldur Bjarnason talked about AI's public image, I fully anticipate tech's reputation cratering once the AI bubble bursts. Precisely how the public will view the tech industry at large in the aftermath I don't know, but I'd put good money on them being broadly hostile to it.

[–] [email protected] 8 points 22 hours ago (3 children)

If you're against unrestricted genAI then you're also transphobic

What. Wait has anyone claimed this? Because that's absurd.

[–] [email protected] 6 points 10 hours ago (2 children)

Oh, I wonder if they are referring to this shit, where someone came to r/lgbt fishing for compliments for the picture they'd asked Clippy for, and was completely clowned on by the entire community, which then led to another subreddit full of promptfans claiming that artists are transphobic because they didn't like a generated image which had a trans flag in it.

[–] [email protected] 5 points 9 hours ago

remembering the NFT grifter who loudly asserted that if you weren't into NFTs then you must be a transphobe

(it was Fucking Thorne)

[–] [email protected] 5 points 9 hours ago

It warms the cockles of my heart that all across the web people find AI as annoying as I do.

[–] [email protected] 9 points 15 hours ago

Considering how much the AI hype feels like the cryptocurrency hype, during which every joke you made had already been seriously used to make a coin and been pumped and dumped already, I wouldn't be surprised at all.

[–] [email protected] 9 points 21 hours ago

Dunno, but why not, after NaNoWriMo claimed that opposing "AI" means you're classist and ableist. Why not also make objecting sexist, racist, etc.? I'm going to be ahead of the curve by predicting that being against ChatGPT will also be a red flag that you're a narcissistic sociopath manipulator, because uhh, because abused women need ChatGPT to communicate with their toxic exes /s
