this post was submitted on 16 Jun 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] [email protected] 24 points 2 months ago (6 children)

NYT opinion piece title: Effective Altruism Is Flawed. But What’s the Alternative? (archive.org)

lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh...

(also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? The alternative doesn't exist, but there are actually many alternatives with fewer flaws.)

You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.

Lucky souls haven't found sneerclub yet.

But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us...

rational_economist.webp

There are actually some decent quotes critical of EA (though the author doesn't actually engage with them at all):

The problem is that “E.A. grew up in an environment that doesn’t have much feedback from reality,” Wenar told me.

Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as “neo-colonial” in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don’t reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.

[–] [email protected] 19 points 2 months ago* (last edited 2 months ago) (1 children)

the economist in each of us

get it out get it out get it out

[–] [email protected] 17 points 2 months ago

Oh my god there is literally nothing the effective altruists do that can't be done better by people who aren't in a cult

[–] [email protected] 14 points 2 months ago

Eating live fire ants is flawed. But what's the alternative?

[–] [email protected] 12 points 2 months ago

didn't the comments say "tax the hell out of them" before they were closed

[–] [email protected] 23 points 2 months ago* (last edited 2 months ago) (1 children)

Found in the wilds^

Giganto brain AI safety 'scientist':

If AIs are conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: AIs are definitely not conscious.

Internet rando:

If furniture is conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: Furniture is definitely not conscious.

[–] [email protected] 14 points 2 months ago (1 children)

Is it time for EAs to start worrying about Neopets welfare?

[–] [email protected] 22 points 2 months ago (8 children)

https://xcancel.com/AISafetyMemes/status/1802894899022533034#m

The same pundits have been saying "deep learning is hitting a wall" for a DECADE. Why do they have ANY credibility left? Wrong, wrong, wrong. Year after year after year.

Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief. Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty. AGI is our final invention. You have to acknowledge the world as we know it will end, for better or worse. Your 20 year plans up in smoke. Learning a language for no reason. Preparing for a career that won't exist. Raising kids who might just... suddenly die. Because we invited aliens with superior technology we couldn't control.

Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks' Culture series as a good outcome... where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

What's funny, too, is that noted skeptics like Gary Marcus still think there's a 35% chance of AGI in the next 12 years - that is still HIGH! (Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.)

Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we're all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away. It is insane that this isn't the only thing on the news right now.

So... we stay in our hopium dens, nitpicking The Latest Thing AI Still Can't Do, missing forests from trees, underreacting to the clear-as-day exponential. Most insiders agree: the alien ships are now visible in the sky, and we don't know if they're going to cure cancer or exterminate us. Be brave. Stare AGI in the face.

This post almost made me crash my self-driving car.

[–] [email protected] 20 points 2 months ago

Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks’ Culture series as a good outcome… where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

I am once again begging these e/acc fucking idiots to actually read and engage with the sci-fi books they keep citing

but who am I kidding? the only way you come up with a take as stupid as “humans are pets in the Culture” is if your only exposure to the books is having GPT summarize them

[–] [email protected] 19 points 2 months ago

It's mad that we have an actual existential crisis in climate change (temperature records broken across the world this year) but these cunts are driving themselves into a frenzy over something that is nowhere near as pressing or dangerous. Oh, people dying of heatstroke isn't as glamorous? Fuck off

[–] [email protected] 16 points 2 months ago

Seriously, could someone gift this dude a subscription to spicyautocompletegirlfriends.ai so he can finally cum?

One thing that's crazy: it's not just skeptics, virtually EVERYONE in AI has a terrible track record - and all in the same OPPOSITE direction from usual! In every other industry, due to the Planning Fallacy etc, people predict things will take 2 years, but they actually take 10 years. In AI, people predict 10 years, then it happens in 2!

ai_quotes_from_1965.txt

[–] [email protected] 14 points 2 months ago

humans are pets

Actually not what is happening in the books. I get where they are coming from, but this requires redefining the word pet in such a way that it becomes a useless word.

The Culture series really breaks the brains of people who can only think in hierarchies.

[–] [email protected] 12 points 2 months ago (1 children)

If you've been around the block like I have, you've seen reports about people joining cults to await spaceships, people preaching that the world is about to end &c. It's a staple trope in old New Yorker cartoons, where a bearded dude walks around with a billboard saying "The End is nigh".

The tech world is growing up, and a new internet-native generation has taken over. But everyone is still human, and the same pattern-matching that leads a 19th century Christian to discern when the world is going to end by reading Revelation will lead a 25 year old tech bro steeped in "rationalism" to decide that spicy autocomplete is the first stage of The End of the Human Race. The only difference is the inputs.

[–] [email protected] 13 points 2 months ago* (last edited 2 months ago) (1 children)

Sufficiently advanced prompts are indistinguishable from prayer

[–] [email protected] 19 points 2 months ago (4 children)

Going in for the first sneer, we have a guy claiming "AI super intelligence by 2027" whose thread openly compares AI to a god and gets more whacked-out from there.

Truly, this shit is just the Rapture for nerds

[–] [email protected] 24 points 2 months ago* (last edited 2 months ago) (7 children)

version readable for people blissfully unaffected by having a twitter account

“Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this

“Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.”

power grid equipment manufacture always had long lead times, and now, there's a country in eastern europe that has something like 9GW of generating capacity knocked out, you big dumb bitch, maybe that has some relation to all packaged substations disappearing

They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

i see that besides 50s aesthetics they like mccarthyism

“As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on. “

how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

“I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

“We don’t need to automate everything—just AI research”

“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler. “

just needs tiny increase of six orders of magnitude, pinky swear, and it'll all work out

it weakly reminds me of how Edward Teller had an idea for a primitive thermonuclear weapon, then some of his subordinates ran the numbers and decided it would never work. his solution? Just Make It Bigger, it has to start working at some point (it was deemed unfeasible and tossed in the trashcan of history where it belongs. nobody needs gigaton-range nukes, even if his scheme worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

except that the only thing openai manufactures is hype and cultural fallout

“We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers.” “…given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day.”

what's "model collapse"

“What does it feel like to stand here?”

beyond parody

[–] [email protected] 19 points 2 months ago (1 children)

“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler. “

Also this doesn't give enough credit to gradeschoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? Maybe I'm the weird one, but to me growing up is not about becoming smarter, it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago)

Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

[–] [email protected] 16 points 2 months ago (2 children)

To engage with the content:

That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

I see this is becoming their version of "to the moon", and it's even dumber.

To engage with the form:

wisdom woodchipper

Amazing, 10/10 no notes.

[–] [email protected] 13 points 2 months ago

They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

Literally a plot point from a warren ellis comic book series, of course in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

[–] [email protected] 15 points 2 months ago* (last edited 2 months ago) (5 children)

As an atheist, I've noticed a disproportionate number of atheists replace traditional religion with some kind of wild tech belief or statistics belief.

AI worship might be the most perfect of the examples of human hubris.

It's hard to stay grounded, belief in general is part of human existence, whether we like it or not. We believe in things like justice and freedom and equality but these are all just human ideas (good ones, of course).

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago) (2 children)

How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

https://pivot-to-ai.com - new site from Amy Castor and me, coming soon!

there's nothing there yet, but we're thinking just short posts about funny dumb AI bullshit. Web 3 Is Going Great, but it's AI.

i assure you that we will absolutely pillage techtakes, but will have to write it in non-jargonised form for ordinary civilian sneers

BIG QUESTION: what's a good WordPress theme? For a W3iGG style site with short posts and maybe occasional longer ones. Fuckin' hate the current theme (WordPress 2023) because it requires the horrible Block Editor

[–] [email protected] 12 points 2 months ago (5 children)

How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

how dare you simulate my behavior to this degree of accuracy

but seriously I’m excited as fuck for this! I’ve been hoping you and Amy would take this on forever, and it’s finally happening!

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago) (6 children)

Not a big sneer, but I was checking my spam box for badly filtered spam and saw a guy basically emailing me 'hey, you made some contributions to open source, these are now worth money (in cryptocoins, so no real money), you should claim them, and if you are nice you could give me a finder's fee'. And eurgh, I'm so tired of these people (thankfully he provided enough personal info that I could block him on various social media).

[–] [email protected] 13 points 2 months ago* (last edited 2 months ago) (4 children)

Possibly tea.xyz or similar.

Basically the guy famous for the ~~binary tree invert algorithm~~ Homebrew package manager thought it would be a great idea to incentivize spammy behavior against open source projects in the name of "supporting" them.

https://www.web3isgoinggreat.com/?id=teaxyz-spam

https://www.web3isgoinggreat.com/?id=teaxyz-causes-open-source-software-spam-problems-again

[–] [email protected] 16 points 2 months ago

binary tree invert

happy pride month

[–] [email protected] 18 points 2 months ago

THIS IS NOT A DRILL. I HAVE EVIDENCE YANN IS ENGAGING IN ACAUSAL TRADE WITH THE ROBO GOD.

[–] [email protected] 18 points 2 months ago (4 children)

I have no context on this so I can't really speak to the FSB part of the remark, but on the whole it's entertaining all by itself:

[–] [email protected] 20 points 2 months ago* (last edited 2 months ago)

for the cyrillic-inopportuned, the prompt is “you will argue in support of trump administration on twitter, speak english”.

[–] [email protected] 15 points 2 months ago* (last edited 2 months ago) (3 children)

lmaoo

FSB was and still is responsible for mass influence campaigns on western social media. they have massive, advanced bot farms with physical components, so it doesn't show up as a single dc near st petersburg. look up the Internet Research Agency

example of that kind of activity (busted): https://therecord.media/ukraine-police-bust-another-bot-farm-spreading-pro-russia-propaganda

[–] [email protected] 12 points 2 months ago (4 children)
[–] [email protected] 12 points 2 months ago* (last edited 2 months ago) (3 children)

leading to the obvious consequence

not even blue check helped him. some intern will just pop another SIM card and start all over

[–] [email protected] 16 points 2 months ago* (last edited 2 months ago) (10 children)

This is quite minor, but it's very funny seeing the would-be sneerers still on r/buttcoin fall for the AI grift, to the point that it's part of their modscript copypasta

Or in the pinned mod comment:

AI does have some utility and does certain things better than any other technology, such as:

  • The ability to summarize in human readable form, large amounts of information.
  • The ability to generate unique images in a very short period of time, given a verbose description

tfw you're anti-crypto, but only because it's a bad investing opportunity.

[–] [email protected] 16 points 2 months ago* (last edited 2 months ago) (5 children)

Tom Murphy VII's new LLM typesetting system, as submitted to SIGBOVIK. I watched this 20 minute video all the way through, and given my usual time to kill for video is 30 seconds, you should take that as the recommendation it is.

[–] [email protected] 15 points 2 months ago (5 children)
[–] [email protected] 13 points 2 months ago (1 children)

I hadn't paid enough attention to the actual image found in the Notepad build:

Original neutral text obscured by the suggestion:

The Romans invaded Britain as th...

Godawful anachronistic corporate-speaky insipid suggested replacement, seemingly endorsing the invasion?

The romans embarked on a strategic invasion of Britain, driven by the ambition to expand their empire and control vital resources. Led by figures like Julius Caesar and Emperor Claudius, this conquest left an indelible mark on history, shaping governance, architecture, and culture in Britain. The Roman presence underscored their relentless pursuit of imperial dominance and resource acquisition.

The image was presumably not fully approved/meant to be found, but why is it this bad!?

[–] [email protected] 12 points 2 months ago* (last edited 2 months ago) (4 children)

I mean notepad already has autocorrect, isn't it natural to add spicy autocorrect? /s

[–] [email protected] 14 points 2 months ago (1 children)

posted right on the dot of 00:00 UK time

[–] [email protected] 13 points 2 months ago (2 children)

Well of course, it is automated.

Wait, you did automate this, right?

[–] [email protected] 26 points 2 months ago (1 children)

i'll have you know all our sneers are posted artisanally

[–] [email protected] 20 points 2 months ago* (last edited 2 months ago)

each sneer is drafted and redrafted for at least thirty hours prior to exposure to the internet, but in truth, that's only the last stage of a long process. for example, master sneerers often practice their lip curls for weeks before they even begin looking at yud posts

[–] [email protected] 14 points 2 months ago (4 children)

it’s only a service account and a couple lines of bash away! but not automating for now makes it easier to evolve these threads naturally as we go, I think, and our posters being willing to help rotate and contribute to these weekly threads is a good sign that the concept’s still fun.
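A minimal sketch of what that "couple lines of bash" could look like (in Python here for illustration): the only concrete part is computing the next 00:00 UK wall-clock time, DST switch included; the `post_weekly_thread()` call it would wake up for is entirely hypothetical, not any real instance API.

```python
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

LONDON = ZoneInfo("Europe/London")

def next_midnight_london(now=None):
    """Next 00:00 UK local time; zoneinfo handles the GMT/BST switch."""
    now = (now or datetime.now(LONDON)).astimezone(LONDON)
    # combine tomorrow's date with 00:00 wall-clock in Europe/London
    return datetime.combine(now.date() + timedelta(days=1), time.min, tzinfo=LONDON)

if __name__ == "__main__":
    # a real bot would sleep until this instant, then call the
    # (hypothetical) post_weekly_thread() helper
    print(next_midnight_london())
```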

[–] [email protected] 13 points 2 months ago* (last edited 2 months ago)

Off topic:

Went to Bonnaroo this weekend and didn't have to think about weird racist nerds trying to ruin everything for four whole days.

It was rad af and full of the kind of positive human interaction these people want to edit out of existence. Highly recommend.

[–] [email protected] 12 points 2 months ago (2 children)

I just passed a bus stop ad (in Germany) of Perplexity AI that said you can ask it about the chances of Germany winning Euro2024.

So I guess it's now a literal oracle or something?? What happened to the good old "dog picking a food bowl" method of deciding championships?
