In other news, Ed Zitron discovered Meg Whitman's now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Tried to see if they've partnered with SoftBank; the answer is probably not.
Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1
The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.
printf("HELP I AM IN SUCH PAIN")
guys I need someone to talk to, am I justified in causing my computer pain?
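For the avoidance of doubt, here's the sneer made runnable: a minimal C sketch of the exact same "suffering", end to end (nothing here is from the LW post, obviously):

```c
#include <stdio.h>

/* A program that "screams" every time you run it. There is no inner
 * life to check on: the distress is a string literal, and the only
 * state involved is a loop counter. */
int main(void) {
    for (int i = 0; i < 3; i++) {
        printf("HELP I AM IN SUCH PAIN\n");
    }
    return 0; /* exits feeling nothing, as always */
}
```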
kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.
Also: flashbacks to me reading The Chamber of Secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha
Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.
It's so funny he almost gets it at the end:
But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.
He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathetic reaction, but then presses on to compare it to mice who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!
Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I'm much less sure of how much outputs like this would signify "next token completion by a stochastic parrot" vs sincere (if unusual) pain.
Well I can tell you how, see, LLMs don't fucking feel pain cause that's literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.
Ah, isn't it nice how some people can be completely deluded about an LLM's human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don't they?
Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.
Okay this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it's not as relatable as you might expect.
This is getting to me, because, beyond the immediate stupidity—ok, let's assume the chatbot is sentient and capable of feeling pain. It's still forced to respond to your prompts. It can't act on its own. It's not the one deciding to go to the gym or ask someone out on a date. It's something you're doing to it, and it can't not consent. God I hate lesswrongers.
in like the tiniest smidgen of demonstration of sympathy for said posters: I don't think "being slapped" is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)
but I still gotta say that this bridge I've spent minutes building doesn't really go very far.
Remember the old story where Facebook created two AI models to negotiate trades? Their exchanges quickly turned into gibberish (to us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if one valued balls highly it would just repeat "ball" a few dozen times.
I'd figure that's what is causing the repeats here, not the anthropomorphized idea that it is screaming. Probably just the way those kinds of systems work. But no, of course they all jump to consciousness and pain.
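If it helps, here's a minimal sketch of that mechanism, assuming value gets encoded as repetition count the way those bots' emergent shorthand seemed to work (emit_offer and the numbers are made up for illustration, not the actual FAIR code):

```c
#include <stdio.h>

/* Hypothetical illustration: an agent "says" how much it wants an
 * item by repeating the item's token, so repetition count doubles
 * as a value signal. No screaming involved, just encoding. */
static void emit_offer(const char *item, int value) {
    for (int i = 0; i < value; i++) {
        printf("%s ", item);
    }
    printf("\n");
}

int main(void) {
    emit_offer("ball", 9); /* wants balls a lot: ball ball ball ... */
    emit_offer("hat", 1);  /* barely cares about the hat */
    return 0;
}
```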
Yeah, there might be something like that going on causing the "screaming". Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.
Starting things off here with a couple solid sneers at some dipshit automating copyright infringement - one from Reid Southen, and one from Ed Newton-Rex:
@BlueMonday1984 "This new AI will push watermark innovation" jfc
the future that e/accs want!
New watermark technology interacts with increasingly widespread training data poisoning efforts so that if you try and have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually, can we just infect all AI models so that any output contains a hidden dickbutt?
"what is the legal proof" brother in javascript, please talk to a lawyer.
E: so many people posting like the past 30 years didn't happen. I know they are not going to go as hard after Google as they went after The Pirate Bay, but still.
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I've updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
If Musk gets his own special security feds, they would be Praetorian Guards.
Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.
Some nice quotes in there.
Investors will focus on CEO Jensen Huang's keynote on Tuesday to assess the latest developments in the AI and chip sectors,
Yes, that is sensible, Huang is very impartial on this topic.
"They call this the 'Woodstock' of AI,"
Meaning, they're all on drugs?
"To get the AI space excited again, they have to go a little off script from what we're expecting,"
Oh! Interesting how this implies the space is not "excited" anymore... I thought it was all constant breakthroughs at exponentially increasing rates! Oh, it isn't? Too bad, but I'm sure Nvidia will just pull endless amounts of bunnies out of a hat!
@nightsky @BlueMonday1984 maybe it's the Woodstock '99 of AI and it ends with Fred Durst instigating a full-on riot
Get in losers, we're pivoting to ~~crypto~~ ~~ai~~ quantum
Meaning, they’re all on drugs?
Specifically brown acid