blakestacey

joined 1 year ago
[–] [email protected] 7 points 1 week ago

WP:YEAHOK, WP:IWILLALLOWIT

[–] [email protected] 10 points 1 week ago

So if it turns out, as people like Penrose assert, that the brain has a certain quantum je-ne-sais-quoi, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.

No, that's not what Penrose asserts. His whole thing has been to say that quantum mechanics needs to be changed, that quantum mechanics is wrong in a way that matters for understanding brains.

[–] [email protected] 6 points 2 weeks ago

the dead mall of ideas

[–] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Oh look, an AI tool to make Wikipedia worse.

(Apparently, the Wikimedia Foundation couldn't even be bothered to care about the standards that en.wp contributors deem necessary for sources on medical topics. Because it's more important to "sustain and grow Wikimedia projects in a changing online knowledge landscape". Dammit, where's the button that sends electrical shocks through the Internet to anyone who talks like that?)

[–] [email protected] 8 points 2 weeks ago (3 children)

Idea: a Pivot to AI video series hosted by an avatar that's, like, a talking polyhedron in the style of Mind's Eye/Body Wars-era CGI.

This would require effort and thus is a terrible idea, but I find the mental image amusing.

[–] [email protected] 11 points 2 weeks ago (1 children)

ux/acc is the noise you make after eating something that you really shouldn't have eaten

[–] [email protected] 10 points 2 weeks ago (3 children)

"Ideological Turing test"?

Not gonna look up what that is, but I'm sure it's debatebro "civility" fetishism.

[–] [email protected] 11 points 2 weeks ago (1 children)

Wait, I totally had something for this...

"Bene Jizzerit"

[–] [email protected] 13 points 2 weeks ago

Surely the words being put into the son's mouth should be "What's his TikTok?".

[–] [email protected] 14 points 2 weeks ago (3 children)

Regarding the paper "Discovery of a structural class of antibiotics with explainable deep learning", a comment by e. e. arroyo:

"Yes! we can make explainable predictions of compounds with AI now!"

  • They could only explain 15% of predictions
  • They could only understand 9% of those explanations
  • 60% of those were wrong
  • The 4 active compounds left are probably, arguably, not even a novel class
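
Taking those figures at face value and compounding the funnel:

$$0.15 \times 0.09 \times (1 - 0.60) \approx 0.0054$$

i.e., about half a percent of the model's predictions came with an explanation a human could both follow and verify as correct.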
 

If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

In Surfaces and Interfaces, online 17 February 2024:

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

In Radiology Case Reports, online 8 March 2024:

In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

Edit to add this erratum:

The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

Edit again to add this article in Urban Climate:

The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

And this one in Energy:

Certainly, here are some potential areas for future research that could be explored.

Can't forget this one in TrAC Trends in Analytical Chemistry:

Certainly, here are some key research gaps in the current field of MNPs research

Or this one in Trends in Food Science & Technology:

Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

And we mustn't ignore this item in Waste Management Bulletin:

When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

Certainly, here's the text without bullet points:

 

In which a man disappearing up his own asshole somehow fails to be interesting.

 

So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot wrong, and even self-contradictory, in what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.
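
For the record, the thing it keeps blurring fits on one line. In its general (Robertson) form, for observables $A$ and $B$,

$$\sigma_A \, \sigma_B \;\geq\; \tfrac{1}{2} \left| \langle [\hat A, \hat B] \rangle \right|,$$

which reduces to the familiar $\sigma_x \sigma_p \geq \hbar/2$ for position and momentum. It bounds the spreads of measurement statistics over many identically prepared systems; it is not a claim about one measurement "disturbing" another, which is where most verbal glosses go wrong.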

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?

 

The day just isn't complete without a tiresome retread of freeze peach rhetorical tropes. Oh, it's "important to engage with and understand" white supremacy. That's why we need to boost the voices of white supremacists! And give them money!

 

With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).

6 points · submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"

 

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script; so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

 

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

 

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

 

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way.

 

Emily M. Bender on the difference between academic research and bad fanfiction
