SneerClub

989 readers
20 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
201

Does anyone here know what exactly happened to LessWrong to make it so cult-y? I hadn't seen or heard anything about it for years; back in my day it was seen as that funny website full of strange people posting weird shit about utilitarianism. Nothing cult-y, just weird. The article on TREACLES and this sub's mentions of LessWrong made me very curious about how it went from people talking out of their ass for the sheer fun of "thought experiments" to a straight-up doomsday cult.
The one time I read LessWrong was probably in 2008 or so.

202

you have to read down a bit, but really, I'm apparently still the Satan figure. awesome.

204

First, let me say that what broke me from the herd at LessWrong was specifically the calls for AI pauses: the idea that 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they need to commit any violent act necessary to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying "fuck 'em". This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. That seems deeply flawed, and seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve (robotics, continuous learning, module reuse: the things needed to reach a general level of capability and for AI to do many but not all human jobs) are near-term. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, and since building robots is a task human technicians and other workers can already do, this does mean a form of Singularity is possible. Maybe not the breathless utopia promised by Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect 300k to edit JavaScript and drive Teslas*.

I've also noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robots, do you believe a form of Singularity will happen? By this I mean hard exponential growth in the number of robots, scaling past all industry on Earth today by at least an order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition will happen before 2040, in which most human jobs we have now are replaced by AI systems?

  5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given that these will be advanced decision-making systems controlling robots, will they require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*"epistemic status": I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas..

205

How far are parents willing to go to give their children the best chance at life?
What do you think would happen if you asked the redheaded couple about race and IQ?

206

Someone posted this on ssc with a warning about talking to cops, but really, just marvel at what's going on here.

Aaronson manages to turn a story in which he is briefly arrested for a theft (which he did commit, on video!) into paragraphs and paragraphs of indulging in his persecution fantasies.

Zero empathy on display for the people he stole from or the people just doing their jobs, and no reflection on the fact that it wasn't a simple little mistake anyone could make but rather... a fairly weird move? Do people usually put change in cups?

208

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new phrases of "shut it all down", "AI alignment is too hard", and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally some actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.

210

Bit of a rant, but I genuinely hate decision theory. At first it seemed like a useful tool for making the best long-term decisions in economics and such; then LessWrong, EA, GPI, FHI, MIRI and co. took what was essentially a tool and turned it into the biggest philosophical disaster since Rand. I'm thinking of moral uncertainty, wagers, hedging, AGI, priors, Bayesianism and all the shit that's grown out of this cesspit of rationalism.

What's funny about all this is that there's no actual way to argue against these people unless you've already been indoctrinated into the cult of Bayes, and even if you manage to get through one of their arguments, they'll just pull out some other bullshit principle that they either made up or saw in some massively obscure book, essentially to say 'nuh uh'.

What's more frustrating is that there's now evidence that people make moral judgements using a broadly Bayesian approach, which I hope just stays in the descriptive realm.

But yeah, I hate decision theory, that is all.

219

Taleb dunking on IQ “research” at length. Technically a seriouspost, I guess.

221

yes really, that’s literally the title of the post. (archive copy, older archive copy) LessWrong goes full Motte.

this was originally a LW front-page post, and was demoted to personal blog when it proved unpopular. it peaked at +10, dropped to -6 and is +17 right now.

but if anyone tries to make out this isn’t a normative rationalist: this guy, Michael “Valentine” Smith, is a cofounder of CFAR (the Center for Applied Rationality), a LessWrong offshoot that started out being about how to do rational thinking … and finally admitted it was about “AI Risk”.

this post is the Rationalist brain boys, the same guys who did FTX and Effective Altruism, going full IQ-Anon and wondering how the market could fail so badly as to not care what weird disaster assholes think. this is the real Basilisk.

when they’re not spending charity money on buying themselves castles, this is what concerns the modern rationalist

several commenters answered “uh, the customers” and tried to explain the concept of markets to OP, and how corporations like selling stuff to normal people and not just to barely-crypto-fash. they were duly downvoted to -20 by valiant culture warriors who weren’t putting up with that sort of SJW nonsense.

comment by author, who thinks “hard woke” is not only a thing, but a thing that profit-making corporations do so as not to make a profit: “For what it’s worth, I wouldn’t describe myself as leaning right.” lol ok dude

right-wingers really don’t believe in, or even understand, capitalism or markets at all. they believe in hierarchy. that’s what’s offended this dipshit.

now, you might think LessWrong Rationalists, Slate Star Codex readers, etc. tend towards behaving functionally indistinguishably from Nazis, but that’s only because they work so hard at learning from their neoreactionary comrades to reach that stage

why say in 10,000 words what you can say in 14

222
17
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Video games also have potential legal advantages over IQ tests for companies. You could argue that "we only hire people good at video games to get people who fit our corporate culture of liking video games" but that argument doesn't work as well for IQ tests.

yet again an original post title that self-sneers

223

[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish, like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.

225
11
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

it got good reviews on the discord!
