this post was submitted on 15 Feb 2024
614 points (99.0% liked)

Not The Onion


Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Comments must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

founded 2 years ago
top 50 comments
[–] InEnduringGrowStrong 172 points 10 months ago (1 children)

This was already budgeted for when they decided to use a chatbot instead of paying employees to do that job.
Trying to blame the bot is just lame.

[–] [email protected] 25 points 10 months ago (1 children)

Corporate IT here. You're assuming they're smart enough to budget for this. They aren't. They never are. Things are rarely, if ever, implemented with any thought for a scenario that isn't the happy path.
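
For anyone who hasn't run into the term: "happy path" code assumes every call succeeds. A minimal sketch of what that looks like (the refund endpoint and response fields here are invented for illustration):

```python
import requests  # assumes the third-party 'requests' package is installed

def issue_refund(booking_id: str) -> float:
    # Happy path only: no timeout, no HTTP status check,
    # no handling for an error-shaped or malformed response.
    resp = requests.post(
        "https://api.example.com/refunds",  # hypothetical endpoint
        json={"booking": booking_id},
    )
    return resp.json()["amount"]  # raises the moment anything is off
```

Everything outside that path, a 500, a timeout, a changed schema, becomes somebody's unbudgeted incident.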

[–] Patches 11 points 10 months ago* (last edited 10 months ago) (1 children)

As a corporate IT person also. Hello.

But we do put thought into what can go wrong. We just don't budget for it, and as far as the business is concerned, a 99% success rate is 100% correct 100% of the time. Never mind that a 1% failure rate on 7 billion transactions a year is a fuck ton of failures.
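
Back-of-the-envelope, since people rarely do the multiplication (the 7 billion figure is the hypothetical above, not a real stat):

```python
# What a "99% success rate" means at scale.
transactions_per_year = 7_000_000_000  # hypothetical volume from the comment above
success_rate = 0.99

failures_per_year = transactions_per_year * (1 - success_rate)
print(f"{failures_per_year:,.0f} failures per year")  # prints: 70,000,000 failures per year
```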

[–] [email protected] 5 points 10 months ago

Amen. FWIW, at my work we have an AI steering committee. No idea what they're doing, though, because you'd think the steady stream of articles and lawsuits against OpenAI and Microsoft over shady practices, most recently allowing AI to be used by militaries, potentially to kill people, would have prompted some action by now. I love knowing my org supports companies that enable the war machine.

[–] [email protected] 137 points 10 months ago (1 children)

Great! Please make sure that your server system is un-racked and physically present in court for cross-examination.

[–] [email protected] 16 points 10 months ago (1 children)

Better put Ryan Gosling on standby in case he needs to "retire" the rouge Air Canada chatbot Blade Runner style.

[–] [email protected] 40 points 10 months ago* (last edited 10 months ago) (2 children)

Rogue*. I'm not usually that guy, but this particular typo makes me see red.

[–] [email protected] 3 points 10 months ago (1 children)

I know what you mean, except for me it makes me see rouge ever since I spent some time in France.

[–] [email protected] 17 points 10 months ago (1 children)

Yeah, that is indeed the joke I was making.

[–] [email protected] 5 points 10 months ago

This is all very funny because, despite your not being aware of it, the French word for red is rouge!

[–] [email protected] 115 points 10 months ago (5 children)

Why would Air Canada even fight this? He got a couple hundred bucks, and they paid at least 50k in lawyer fees fighting it. They could have just given him what they spent on lawyers and been done with it.

[–] [email protected] 111 points 10 months ago (1 children)

Because now they have to stop using the chatbot or take on the liability of having to pay out whenever it fucks up.

[–] [email protected] 43 points 10 months ago (1 children)

Which is fascinating: that they themselves thought there was any doubt about it, or that they could argue such a doubt.

It's like arguing, "It wasn't me who shot the mailmen dead. It was my automated home self-defense system."

[–] [email protected] 15 points 10 months ago

Agree 100%. I mean, who are you gonna fine, the bot? The company that sold you the bot? This is a simple case of garbage in, garbage out; if they had set it up properly and vetted its operation, they wouldn't be trying to make such preposterous objections. I'm glad this went to court, where it was definitively shut down.

Fuck Air Canada. The guy already lost a loved one, and now they want to drag him through all this over a pittance? To me, this is the corporate mindset: going to absolutely any length necessary to hoover up more money, even the smallest of scraps.

[–] [email protected] 24 points 10 months ago (1 children)

Most likely to fight the precedent of being held liable for an AI chatbot that gives faulty information.

[–] [email protected] 5 points 10 months ago

A settlement would have cost less, could have been kept private, and wouldn't have set a precedent. Now they have an actual court judgment, and that does set a precedent.

[–] [email protected] 14 points 10 months ago

I think some companies have a policy of fighting every lawsuit and making everything take as long as possible, simply to discourage more lawsuits.

[–] [email protected] 11 points 10 months ago (1 children)

Because there is something far nastier in the world than self-interest. This airline seems to me like it was operating from a place of spite.

[–] Patches 3 points 10 months ago

It's a corporation. Of course it's operating from a place of spite.

[–] [email protected] 10 points 10 months ago

Just how Air Canada does things now. I think it largely stemmed from the pandemic, when people gave them leeway on things being a bit messed up. But now they've fallen into a habit of not taking responsibility for anything.

[–] andrew_bidlaw 103 points 10 months ago

That's an important precedent. Many companies have turned to LLMs to cut costs and dodge liability for whatever the model says. It's great that they get rekt in court.

[–] [email protected] 87 points 10 months ago (1 children)

Lol. “It wasn’t us - it was the bot! The bot did it! Yeah!”

[–] [email protected] 31 points 10 months ago (1 children)

"See officer, we didn't make these deepfakes, the AI did. Arrest the AI instead"

[–] [email protected] 83 points 10 months ago (1 children)

That seems like a stupid argument?

Even if a human employee did that, aren't organisations normally vicariously liable?

[–] [email protected] 74 points 10 months ago (1 children)

That's what I thought of, at first. Interestingly, the judge went with the angle of the chatbot being part of their web site, and they're responsible for that info. When they tried to argue that the bot mentioned a link to a page with contradicting info, the judge said users can't be expected to check one part of the site against another part to determine which part is more accurate. Still works in favor of the common person, just a different approach than how I thought about it.

[–] [email protected] 25 points 10 months ago (1 children)

I like this. LLMs are powerful tools, but being rebranded as "AI" and crammed into ~everything is just bullshit.

The more rulings like this, where the entity deploying the bot is responsible for its (lack of) accuracy, the better. At some point they'll notice they cannot guarantee the correct information is the only thing provided, because that's not how LLMs work; they're stochastic parrots. Then they'll stop using them for a lot of things. Hopefully sooner rather than later.
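
A toy sketch of why, not any vendor's actual model or API, just weighted random sampling over made-up completions:

```python
import random

# An LLM generates text by sampling from a probability distribution,
# so the same prompt can yield different answers on different runs.
# These completions and weights are invented for the example.
completions = {
    "bereavement fares cannot be claimed after booking.": 0.6,
    "you may apply for a bereavement refund within 90 days.": 0.3,  # confidently wrong
    "please consult our fare rules page.": 0.1,
}

for _ in range(3):
    answer = random.choices(list(completions), weights=list(completions.values()))[0]
    print("Per our policy,", answer)
```

However the text comes out, it's still the company's website saying it, which is exactly the angle the judge took.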

[–] [email protected] 2 points 10 months ago

This is actually a very good outcome, if achievable: leave LLMs to be used where there's nothing important on the line, or have humans supervise them.

[–] [email protected] 62 points 10 months ago

A computer can never be held responsible, so a computer must never make management decisions.

  • IBM in the 80s and 90s

A computer can never be held responsible, so a computer must make all management decisions.

  • Corporations in 2025

[–] [email protected] 42 points 10 months ago

Hey dumbasses, maybe don't let a loose LLM represent your company if you can't control what it's saying. It's not a real person; you can't shift blame onto a non-sentient being.

[–] [email protected] 38 points 10 months ago (3 children)

If you type "biz" instead of "business" in the first couple of lines, surely you're not expecting me to actually keep reading?!

[–] [email protected] 23 points 10 months ago

I went ahead and read it anyway. I actually had to Google the last word of the article: natch. It's slang for "naturally". We're living in interesting times. Glad the guy got compensated after going through that ordeal.

[–] [email protected] 10 points 10 months ago

Funnily enough, I thought the article was written by AI. I guess they trained it off something, lol

[–] doofusmagoo 6 points 10 months ago (1 children)

That's just "El Reg's" style; they've been that way for years. Don't let the pseudo-informality fool you, though; they know their stuff.

[–] [email protected] 36 points 10 months ago (2 children)

Oh good, we've entered the "we can't be held responsible for what our machines do" age of late-stage capitalism.

[–] [email protected] 7 points 10 months ago (1 children)

Nice that the legal precedent is now "yes, you can be," though.

[–] [email protected] 4 points 10 months ago (1 children)
[–] [email protected] 3 points 10 months ago (1 children)

Conveniently I live in Canada :D

But yeah, a similar US ruling would be nice

[–] [email protected] 3 points 10 months ago

Not just the U.S.; I see this as something corporations will argue the world over, especially with AI.

[–] [email protected] 22 points 10 months ago

Par for the course for this airline, in my experience. They're allergic to responsibility.

[–] [email protected] 16 points 10 months ago (1 children)

I can’t wait for something like this to hit SCOTUS. We’ve already declared corporations are people and money is free speech, so why wouldn’t we declare chatbots solely responsible for their own actions? Lmao 😂😂💀💀😭😭

[–] [email protected] 2 points 10 months ago (1 children)

money is free speech

Can someone explain this to me? I assume this is in relation to campaign finance, but what was the actual argument that makes "(spending/accepting/?) money is free speech"?
