OT: just got a job interview and wanted to pass the good vibes on!
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Girls think the "eu" in "eugenics" means EW. Don't get the ick, girls! It literally means good.
So if you're not into eugenics, that means you must be into dysgenics. Dissing your own genes! OMG girl what
... how is this man still able to post from inside the locker he should be stuffed in 24/7
Seeing Yarvin mansplain eugenics really does make one wonder how he doesn't just get sucker-punched whenever he opens his mouth at someone in public.
Not beating the sexism allegations.
sounds like he's posting from inside a dilapidated white panel van parked strategically just outside a legally-mandated exclusion radius surrounding a middle school
The eigenrobot thread he's responding to is characteristically bizarre and gross. You'd think eigenrobot being anti-eugenics would be a good thing, but he still finds a way to make it suspect. (He believes being unable to make babies is worse than death?)
A new "LLM plays Pokemon" run has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little bit cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including:

- completely making things up / seeing things that aren't on the screen (items in Viridian Forest);
- confused attempts at navigation (it went back and forth on whether the exit from Viridian Forest was in the NE or NW corner);
- repeating mistakes to itself (both the item and navigation issues I mentioned);
- confusing details from other generations of Pokemon (Nidoran learns Double Kick at level 12 in FireRed and LeafGreen, but not in the original Blue/Yellow);
- showing signs of being prone to completely batshit tangents (it briefly started getting derailed about sneaking through the trees in Viridian Forest... i.e. moving through completely impassable tiles).
I don't know how anyone can watch any of the attempts at LLMs playing Pokemon and think (viable) LLM agents are just around the corner... well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-plays-Pokemon thing is turning into less of a test of LLMs and more of an entertainment and advertising vehicle for the models, and the scaffolds are extensive enough, and different enough from each other, that they really aren't showing the models' raw capabilities (which are even worse than I complained about) or comparing them meaningfully.
I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that's the only part of the whole thing that actually works.
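For reference, the "pathfinding algorithm from the 60s" class of problem (Dijkstra is from 1959, A* from 1968) is textbook graph search. A minimal sketch of what such a navigation assist boils down to, using plain BFS on a toy tile map (the map and function names here are made up for illustration, not taken from any actual scaffold):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a tile grid ('.' walkable, '#' impassable) via BFS.

    Returns the path as a list of (row, col) tuples, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # walk back through predecessors to recover the path
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

# toy map: the pathfinder, unlike the LLM, will not try to walk through trees
tiles = ["....#",
         ".##.#",
         "....."]
route = bfs_path(tiles, (0, 0), (2, 4))
```

Forty lines of deterministic code, and it reliably outperforms the model at the one subtask it's responsible for.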
I'm sure this is fine https://infosec.exchange/@paco/114509218709929701
"Paco Hope #resist @[email protected]
OMG. #Microsoft #Copilot bypasses #Sharepoint #security so you don’t have to!
“CoPilot gets privileged access to SharePoint so it can index documents, but unlike the regular search feature, it doesn’t know about or respect any of the access controls you might have set up. You can get CoPilot to just dump out the contents of sensitive documents that it can see, with the bonus feature* that your access won’t show up in audit logs.”
The S in CoPilot stands for Security! https://pivotnine.com/the-crux/archive/remembering-f00fs-of-old/"
New piece from Iris Meredith: Keeping up appearances, about the cultural forces that gave us LLMs and how best to defeat them
Reminds me of something F.D. Signifier said on a music podcast.
Progressives are losing the culture war in a lot of ways, but they'll always need us because we're the ones pushing the boundaries on art, and it turns out, no matter how ghoulish people want to act, everyone has a genuine love of fucking awesome art. The true loss condition is being captured by the tools of the master.
Rekindled a desire to maybe try my own blog ^^.
I think beyond "Keeping up appearances" it's also the stereotype of fascists (and by extension LLM lovers) having trouble, or pretending to have trouble, distinguishing signifier and signified.
I was trying out the free GitHub Copilot to see what the buzz is all about:
It doesn't even know its own settings. The one little useful thing that isn't plagiarism, providing a natural-language interface to its own bloody settings, and it couldn't do it.
I don't get it: how is it that every one of the most touted people in the AI space is among the least credible people in the industry?
Like, literally every time it's a person whose name I recognize from something else they've done, that something else is something I hate.
In the collection of links about what Ive has done in recent years, there's one to an article about a turntable redesign he worked on, and from that article:
The Sondek LP12 has always been entirely retrofittable and Linn has released 50 modular hardware upgrades to the machine, something that Ive said he appreciates. "I love the idea that after years of ownership you can enjoy a product that's actually better than the one you bought years before," said Ive.
I don't know, should I laugh, or should I scream, that it's Ive, of all people, saying that.
Some quality sneers in Extropic's latest presentation about their thermodynamics hardware. My favorite part was the Founder's mission slide "e/acc maximizes the watts per civilization while Extropic maximizes intelligence per watt".
I'm not going to watch more than a few seconds but I enjoyed how awkward Beff Jezos is coming across.
Tante has a couple of questions for Anthropic:
Another critihype article from the BBC, with far too much credulousness toward the idea of supposed AI consciousness, at the cost of covering the actual harms of AI as things stand, e.g. the privacy, environmental, and data-set bias problems:
Tried to read it, ended up glazing over after the first or second paragraph, so I'll fire off a hot take and call it a day:
Artificial intelligence is a pseudoscience, and it should be treated as such.
Every AI winter, the label "AI" becomes unwanted and people switch to other terms (expert systems, machine learning, etc.)... and I've come around to thinking this is a good thing, as it forces people to specify what they actually mean, instead of hiding behind a nebulous label with a lot of science-fiction connotations that lumps decent approaches and paradigms together with complete garbage and everything in between.