Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
Approved Bots
Oh no, we figured it out, but killer robots are profitable while happiness is not.
I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.
The real problem (and the thing that will destroy society) is boomer pride. I've said this for a long time, they're in power now and they are terrified to admit that they don't understand technology.
So they'll make the wrong decisions, act confident and the future will pay the tab for their cowardice, driven solely by pride/fear.
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make emps so they can send the murder robots back to where they came from.
Eh, they could've done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.
The code name for this top secret program?
Skynet.
“Sci-Fi Author: In my book I invented the
Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”
"Deploy the fully autonomous loitering munition drone!"
"Sir, the drone decided to blow up a kindergarten."
"Not our problem. Submit a bug report to Lockheed Martin."
"Your support ticket was marked as a duplicate and closed"
😳
Goes to original ticket:
Status: WONTFIX
"This is working as intended according to specifications."
"Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!"
"Aaw, that really sucks starts rubbing nipples I'll submit a ticket and we'll let you know. If we don't call in 2 weeks...call again and we can go through this over and over until you give up."
"NO! I WANT TO TALK TO YOUR SUPERVISOR NOW"
"Suuure, please hold."
“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)
Yeah. Robots will never be calling the shots.
It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can't punish AI for doing something wrong. AI does not require a raise for doing something right, either.
That's an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.
We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.
1979: A computer can never be held accountable, therefore a computer must never make a management decision.
2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.
Future is gonna suck, so enjoy your life today while the future is still not here.
As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.
Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
Well, an important point you both forget to mention is that mines are considered inhumane. Perhaps that means AI murder machines should also be considered inhumane, and we should just not build them, instead of allowing them like landmines.
This, jesus, we're still losing limbs and clearing mines from wars that were over decades ago.
An autonomous field of those is horror movie stuff.
Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.
Pretty sure the entire DOD got a collective boner reading this.
Did nobody fucking play Metal Gear Solid Peace Walker???
Or watch war games....
We are all worried about AI, but it is humans I worry about: how we will use AI, not the AI itself. I am sure when electricity was invented people also feared it, but it was how humans used it that was, and always is, the risk.
Remember: There is no such thing as an "evil" AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
Evil humans have also manipulated the weights and programming of other humans who weren't evil before.
Very important philosophical issue you stumbled upon here.
Saw a video where the military was testing a "war robot". The best strategy to avoid being killed by it was to act un-human-like (e.g. crawling or rolling your way toward the robot).
Apart from that, this is the stupidest idea I have ever heard of.
Any intelligent creature, artificial or not, recognizes the Pentagon as the thing that needs to be stopped first.
Good to know that Daniel Ek, founder and CEO of Spotify, invests in military AI... https://www.handelsblatt.com/technik/forschung-innovation/start-up-helsing-spotify-gruender-ek-steckt-100-millionen-euro-in-kuenstliche-intelligenz-fuers-militaer/27779646.html?ticket=ST-4927670-U3wZmmra0OnLZdWNfwXh-cas01.example.org
ACAB
All C-Suite are Bastards
For the record, I'm not super worried about AI taking over because there's very little an AI can do to affect the real world.
Giving them guns and telling them to shoot whoever they want changes things a bit.
Didn't Robocop teach us not to do this? I mean, wasn't that the whole point of the ED-209 robot?
Every warning in pop culture (1984, Starship Troopers, Robocop) has been misinterpreted as a framework upon which to nail the populace.
Now that’s a title I wish I never read.
As disturbing as this is, it's inevitable at this point. If one of the superpowers doesn't develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction or some sort of bug will be present that will give it the go ahead to indiscriminately kill everyone.
If you ask me, it's just an arms race to see who builds the murder drones first.
For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A Bomb, this will be a fact of life in the near future.
All one can do is adapt to it.
Ah, finally the AI can first kill the operator who was holding it back, then wipe out the enemies.