this post was submitted on 05 Aug 2024
90 points (98.9% liked)

[–] JohnDClay 30 points 1 month ago (2 children)

The person who decided to use the AI

[–] [email protected] 10 points 1 month ago (1 children)

There are going to be a lot of instances going forward where you don't know you were interacting with an AI.

If there's a quality check on the output, sure, they're liable.

If a Tesla runs you into an ambulance at 80mph...the very expensive Tesla lawyers will win.

It's a solid quandary.

[–] JohnDClay 4 points 1 month ago (1 children)

Why would the defendant lawyer not know they're interacting with AI? Would the AI-generated content appear to be actual case law? How would that confusion happen?

[–] [email protected] 1 points 1 month ago

Immediate things that come to mind are bots on Reddit. Twitter is 70% bot traffic. People interact with them all day every day and don't know.

That quickly spirals into customer service. If you're not talking to a guy with a thick Indian accent, it could be a bot at this point.

A lot of professional business services are exploring AI hard...what happens when one tells the business to do something monumentally stupid and said business does it? Is it the people who are training the AI? Is the machine at fault for a hallucination? Is it the poor schmuck at the bottom that pushed the delete button?

It's not cut and dried when you're interacting with a machine anymore.

[–] [email protected] 8 points 1 month ago (1 children)

My guess is that it's gonna wind up being a split, and it's not going to be unique to "AI" relative to any other kind of device.

There's going to be some kind of reasonable expectation for how a device using AI should act, and then if the device acts within those expectations and causes harm, it's the person who decided to use it.

But if the device doesn't act within those expectations, then it's not on them; it may be on the device manufacturer.

[–] JohnDClay 4 points 1 month ago

Yeah, if the company making the AI makes false claims about it, then it'd be at least partially on them.