this post was submitted on 25 Nov 2023
775 points (96.7% liked)

Technology

[–] [email protected] -4 points 1 year ago (4 children)

If you program an AI drone to recognize ambulances and medics and forbid it from blowing them up, then you can be sure it will never intentionally blow them up. That alone makes it superior to having a Mk. I Human holding the trigger, IMO.
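
A minimal sketch of what that hard prohibition could look like, assuming a hypothetical upstream detector that emits (label, confidence) pairs; the class names, threshold, and helper are placeholders for illustration, not anything from a real system:

```python
# Toy sketch of a hard "never engage" rule layered on top of a detector.
# Everything here is hypothetical: the class names, the threshold, and the
# assumption that detections arrive as (label, confidence) pairs from some
# upstream image-recognition model.

PROTECTED_CLASSES = {"ambulance", "medic"}

def engagement_allowed(detections, confidence_threshold=0.5):
    """Return False if any sufficiently confident detection is a protected class."""
    for label, confidence in detections:
        if label in PROTECTED_CLASSES and confidence >= confidence_threshold:
            return False  # hard prohibition: never engage near protected objects
    return True

# The guarantee only holds as far as the detector is right:
print(engagement_allowed([("truck", 0.81), ("ambulance", 0.92)]))  # False
print(engagement_allowed([("truck", 0.81), ("ambulance", 0.31)]))  # True: the rule never fires if the ambulance isn't recognized
```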

[–] [email protected] 10 points 1 year ago (1 children)

Unless the operator decides that hitting exactly those targets fits their strategy and that they can blame it on a software bug.

[–] Chuckf1366 7 points 1 year ago

It's more that we're giving the machine more opportunities to go off accidentally, or potentially encouraging more use of civilian camouflage to try to evade our hunter-killer drones.

[–] [email protected] 3 points 1 year ago

Did you know that "if" is the middle word of life?

[–] [email protected] 3 points 1 year ago (1 children)

Right, because self-driving cars have been great at correctly identifying things.

And those LLMs have been following their rules to the letter.

We really need to let go of our projected concepts of AI in the face of what's actually been arriving, and one of the things we need to let go of is the idea of immutable rule-following and accuracy.

In any real-world deployment of killer drones, there's going to be an acceptable false-positive rate that somebody has signed off on.
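
To make the "signed-off false-positive rate" point concrete, here's a toy sketch; the scores and labels are invented and the helper is purely illustrative:

```python
# Toy illustration of the trade-off behind an "acceptable" false-positive rate.
# The scores and labels below are made up for the example.

def false_positive_rate(scores, labels, threshold):
    """Fraction of non-targets (label 0) that score above the firing threshold."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    flagged = [s for s in negatives if s >= threshold]
    return len(flagged) / len(negatives) if negatives else 0.0

scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.40, 0.35, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0,    0,    0]  # 1 = genuine target

for t in (0.3, 0.5, 0.7):
    print(f"threshold {t}: FPR = {false_positive_rate(scores, labels, t):.2f}")

# For this data, no threshold drives the false-positive rate to zero without
# also missing genuine targets, so some nonzero rate ends up being accepted.
```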

[–] [email protected] 1 points 1 year ago

We're talking about technology that's still in development, not existing tech.

And actually, machines have become quite adept at image recognition. For some things they're already better at it than we are.