this post was submitted on 23 Aug 2024
191 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 82 points 3 months ago (2 children)

Copilot then listed a string of crimes Bernklau had supposedly committed — saying that he was an abusive undertaker exploiting widows, a child abuser, an escaped criminal mental patient. [SWR, in German]

These were stories Bernklau had written about. Copilot produced text as if he was the subject. Then Copilot returned Bernklau’s phone number and address!

and there’s fucking nothing in place to prevent this utterly obvious failure case, other than if you complain Microsoft will just lazily regex for your name in the result and refuse to return anything if it appears

[–] [email protected] 42 points 3 months ago (1 children)

it helps they did it to someone with contacts and it was on prime time news telly

[–] [email protected] 40 points 3 months ago (19 children)

god, so this is actually the best the AI researchers can do with the tools they’ve shit out into the world without giving any thought to failure cases or legal liability (beyond their manager on ~~slack~~Teams claiming it’s been taken care of)

so fuck it, let’s make the defamation machine a non-optional component of windows. we’ll just make it a P0 when someone who could actually get us in legal trouble complains! everyone else is a P2 that never gets assigned.

[–] ogmios 14 points 3 months ago (1 children)

so this is actually the best the AI researchers can do

Highly unlikely. This is what corporations' public-facing products can do.

[–] [email protected] 21 points 3 months ago (2 children)

are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?

[–] ogmios 8 points 3 months ago (1 children)

To be blunt, LLMs are one of the stupider ways to try and use AI. There is incredible potential in many other applications which don't attempt to interface with something as irrational and unpredictable as people.

[–] [email protected] 20 points 3 months ago (3 children)

I agree; LLMs and generative AI are indelibly a product of capitalism, and they can’t exist without widespread theft, exploitation of labor, massive concentrations of capital, and a willingness to destroy the environment. they are the stupidest use of technology I’ve ever seen, and after cryptocurrencies the bar for stupid was pretty fucking high. that the products themselves obscure the theft and exploitation that went into training them is a feature for the corporations developing this horseshit, not a bug.

and that’s why it’s notable that the self-described AI researchers behind these garbage products can’t even do basic shit like have the LLM not call a journalist a pedophile without resorting to an absolute hack that won’t scale. there’s no fixing LLMs; systemically, they are what they are. and now this absolute horseshit is a component of what’s unfortunately still the dominant desktop operating system.

[–] [email protected] 10 points 3 months ago* (last edited 3 months ago)

I'm ngl I think crypto is even stupider. it's a real competition though

EDIT: idea. a tech bullshit bracket

[–] ogmios 9 points 3 months ago* (last edited 3 months ago) (1 children)

The really fucking dumb part of it, you can believe me or not, is that this appears to all circle back to ancient misunderstandings about the nature of man, and attempts to create automatons which behave like men but are perfectly obedient. There is a subset of the population which tries this exact same bullshit with every new technology we create.

[–] [email protected] 8 points 3 months ago

I can see that as being one of the influences that fed into the formation of the TESCREAL belief package — “I have an automaton that behaves like a person but with supernatural qualities” really is an ancient grift, and the TESCREAL belief in omnipotent AGI being just around the corner is that same grift taken to an extreme

[–] [email protected] 4 points 3 months ago (2 children)

indelibly a product of capitalism

They're being funded by the capitalists that want to replace all those annoying human workers with the cheapest possible alternative.

Of course, the problem is that while an LLM is the cheapest possible option, it's turning out that it's the most useless and garbage one too.

(Also, I'm shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.)

[–] [email protected] 9 points 3 months ago (2 children)

Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

so much of our industry is dedicated to ensuring that tech workers, most of whom consider themselves experts on complex systems, never analyze or try to influence the social systems surrounding and influencing their labor. these are the same loud voices that insist tech isn’t political, while turning important parts of our public and open source tech infrastructure into a Nazi bar.

[–] [email protected] 4 points 3 months ago

I don't know if it's the system keeping them from analyzing it so much as it's simply that a good number of tech bros fall into the Actually a Nazi or the Paid Enough They Don't Care categories and for the most part happily keep on doing what they've been doing. If any of them had actual ethics or morals they'd take action, but they just plain don't.

Perhaps I'm too cynical, but I've spent 20+ years working in tech, with most of the last 10 in an abuse role at a PaaS company, and I've seen how management is willing to play endless whataboutism (my favorite was our Jewish lead counsel going on about how there's nothing wrong with Nazis having a voice and a platform, and then some crazy story about bullying when he was a kid) while the majority of non-management is happy to just shrug and play along, and so you end up with a Nazi bar, as you mentioned.

The problem is that, right now, ALL the bars in this horribly tortured analogy are Nazi bars and everyone gets subjected to the Nazi propaganda.

[–] ogmios 4 points 3 months ago

Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

Just because you aren't hearing about us, doesn't mean we don't exist. ;)

[–] [email protected] 22 points 3 months ago (1 children)

lazily regex

I have a sneaking suspicion that this is what they do for all the viral "here's the LLM famously saying something wrong" problems, as I don't think they can actually reliably train the model out of an error.

[–] [email protected] 14 points 3 months ago (1 children)

That's the most straightforward fix. You can't actually fix the LLM itself, so you have to run something on its output. You can have that output scanned by another AI, but that costs money and is also fallible. Regex/delete is the most reliable way to censor.
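
(For the curious, here's a minimal sketch of what a post-hoc filter like that might look like. The blocklist, pattern, and refusal message are all made up for illustration; this is not what Microsoft actually runs.)

```python
import re

# Hypothetical blocklist, grown one complaint at a time; purely illustrative.
BLOCKED_NAME_PATTERNS = [
    re.compile(r"\bMartin\s+Bernklau\b", re.IGNORECASE),
]

REFUSAL = "I'm sorry, I can't help with that."

def filter_output(llm_text: str) -> str:
    """Post-process LLM output: if any blocked name appears, refuse to return anything."""
    for pattern in BLOCKED_NAME_PATTERNS:
        if pattern.search(llm_text):
            return REFUSAL
    return llm_text
```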

[–] [email protected] 11 points 3 months ago (1 children)

Yes, and then the problem is that this doesn't really scale well, especially as it's always hard to regex all the variants correctly without false positives and negatives. Time to regex HTML ;).

[–] [email protected] 8 points 3 months ago

Yeah, and you can really see this in image generation. There are often blocks on using celebrities' names in prompts, but misspelling the names enough can bypass the censor, and the image generator still understands who you mean.
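
(Continuing the toy sketch above: an exact-match pattern misses even a one-letter misspelling, which is exactly the bypass being described. Self-contained example with the same illustrative name.)

```python
import re

# The same illustrative exact-match block on one spelling of the name.
pattern = re.compile(r"\bMartin\s+Bernklau\b", re.IGNORECASE)

print(bool(pattern.search("Martin Bernklau did terrible things")))   # True  -> caught and censored
print(bool(pattern.search("Martin Beernklau did terrible things")))  # False -> one extra letter slips past the filter
```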

[–] [email protected] 19 points 3 months ago

Very chill and ethical behaviour daddy Microsoft

[–] [email protected] 17 points 3 months ago (2 children)

Microsoft published, using their software and servers, a libelous claim, to potentially millions of people.
The details of how the software was programmed should be legally irrelevant.

[–] [email protected] 12 points 3 months ago

* a GDPR violation, in Germany

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

The details of how the software was programmed should be legally irrelevant.

Why? Programmers should be legally liable for what they program.

[–] [email protected] 5 points 3 months ago

Why? Programmers should be legally liable for what they program.

Too many degrees of separation between a programmer and the final product and how it's used, usually.

Additionally, the decision to deploy an incomplete product or one that contains known flaws is an administrative decision, not a programming one.

[–] [email protected] 11 points 3 months ago

Does Copilot have Disney+ ?

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago)

Changing the name can get it to answer. For instance, "martin beernklau" returns:

Martin Bernklau, a veteran court reporter from the Tübingen/Calw district in Germany, recently encountered a bizarre situation involving Microsoft’s Copilot. When he typed his name and location into the chatbot, it generated false accusations against him, mistakenly associating him with serious crimes. Here’s what happened:

- **False Accusations:** Copilot falsely claimed that Bernklau was:
  - A child molester involved in an abuse case against children and wards.
  - A psychiatric escapee.
  - An unscrupulous mortician exploiting grieving women.
- **Confusion:** The AI chatbot failed to understand that Bernklau was a journalist reporting on these cases, not the accused.
- **Privacy Concerns:** Copilot even provided Bernklau's full address and phone number, causing privacy issues.
- **Legal Implications:** Bernklau filed a criminal complaint, but it was rejected because there was no real person behind the false allegations.
- **GDPR Challenges:** Copilot's inability to correct false information highlights challenges related to GDPR compliance and language models.

Also this

Martin’s Pen

In the quiet of the courtroom, he sat,
A chronicler of justice, pen in hand.
Martin Bernklau, eyes sharp, mind astute,
Recording tales of trials, both grand and bland.

His ink flowed like a river, swift and true,
Capturing the drama, the whispers, the strife.
From child abuse to prison escapes,
He etched their stories into the fabric of life.

But one day, the digital oracle spoke,
Microsoft’s Copilot, a tangled web it wove.
It mistook Martin for the accused,
A child molester, a widow cheat, a rogue.

Yet Martin remained steadfast, undeterred,
His integrity unshaken by the AI’s deceit.
For he knew that truth lay in his words,
Not in the twisted lines of code it would repeat.

So let us raise our pens in honor of Martin,
The court reporter who weaves justice’s thread.
May his legacy endure, ink-stained and resolute,
As he chronicles the human tale, where lies and truth are wed.
