
Musk, Hawking, Wozniak call for ban on autonomous weapons and military AI

Obligatory:

[Terminator image]
A very large number of scientific and technological luminaries have signed an open letter calling for the world's governments to ban the development of "offensive autonomous weapons" to prevent a "military AI arms race."

The letter, which will be presented at the International Joint Conferences on Artificial Intelligence (IJCAI) in Buenos Aires tomorrow, is signed by Stephen Hawking, Elon Musk, Noam Chomsky, the Woz, and dozens of other AI and robotics researchers.

For the most part, the letter is concerned with dumb robots and vehicles being turned into smart autonomous weapons. Cruise missiles and remotely piloted drones are okay, according to the letter, because "humans make all targeting decisions." The development of fully autonomous weapons that can fight and kill without human intervention should be nipped in the bud, however.

Here's one of the main arguments from the letter:

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

Later, the letter draws a strong parallel between autonomous weapons and chemical/biological warfare:

"Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits."
http://arstechnica.com/gadgets/2015...r-ban-on-autonomous-weaponry-and-military-ai/
 
Governments won't give a shit about some rich people asking them not to do something that could result in all of mankind ending
 
Einstein calls for a ban on nuclear technology in the military, etc.

Science can be used for both good and evil, and no amount of arm twisting can change that hard fact.
 

KarmaCow

Member
"Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits."

Kinda dumb of Ars Technica to use a Terminator pic, then.
 

Mimosa97

Member
We all know it won't make them stop, and we all know the general population will support the military again and again because "dem terrorists are comin' to get us" and "we could kill terrorists without putting any of our soldiers in danger".

So yeah we'll get the dystopian future that we deserve.
 
Just preemptively develop a far superior and open military AI system to share with the world. Then there wouldn't need to be an AI arms race because everyone would have it!
 
I don't know; sometimes the best and fastest advances are made when people believe their lives depend on it, like the Space Race. On the other hand, we'll also make our best AI killing programs when they take over the world.
 

Palmer_v1

Member
Aren't some weapons already controlled by some form of AI? Patriot missile batteries, for example, don't require a human operator to tell them when to shoot down enemy missiles.

A blurb from http://science.howstuffworks.com/patriot-missile.htm:

"It is even possible for the Patriot missile system to operate in a completely automatic mode with no human intervention at all. An incoming missile flying at Mach 5 is traveling approximately one mile every second. There just isn't a lot of time to react and respond once the missile is detected, making automatic detection and launching an important feature."

I just don't see a difference between that and, say, a small robot with an anti-tank rifle that can identify and destroy enemy tanks and APCs autonomously.
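To put the blurb's numbers in perspective, here is a minimal Python sketch of the engagement-time budget. The detection range and human decision time are assumed figures for illustration only, not actual Patriot system parameters:

```python
# Illustrative only: rough reaction-time budget for an incoming Mach 5 missile.
# Uses a sea-level speed of sound of ~343 m/s; detection range and human
# decision time are assumed figures, not real Patriot system parameters.

SPEED_OF_SOUND_MS = 343.0                    # m/s at sea level (approximate)
MISSILE_SPEED_MS = 5.0 * SPEED_OF_SOUND_MS   # Mach 5 ~= 1715 m/s (~1.07 mi/s)

DETECTION_RANGE_M = 60_000.0   # assumed radar detection range: 60 km
HUMAN_DECISION_S = 10.0        # assumed time for a human to assess and authorize

time_to_impact_s = DETECTION_RANGE_M / MISSILE_SPEED_MS
print(f"Missile speed: {MISSILE_SPEED_MS:.0f} m/s "
      f"({MISSILE_SPEED_MS / 1609.34:.2f} miles/second)")
print(f"Time from detection to impact: {time_to_impact_s:.1f} s")
print(f"Margin after human authorization: {time_to_impact_s - HUMAN_DECISION_S:.1f} s")
```

At the assumed 60 km detection range that leaves roughly 35 seconds from detection to impact, which is why a fully automatic mode exists at all.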
 

Wthermans

Banned
The technology exists, so this will happen. There is no stopping it.

That said, Musk is taller than I imagined, uses a nanny to do his parenting for him, and has a wife 15 years his junior.

I don't idolize him like I used to.
 
Yeah. Today it's military robots. Tomorrow they'll ban and destroy all sex robots because they have sharp dicks. Then they'll ban and destroy all home safety robots because they can hurt stupid old ladies. Then they'll ban all robots and drive them all underground to live in poverty and hunger.

No. We need to defend the robots. And we need to do it today.
 

Crud

Banned
If anything, the AI becomes self-aware, sees through its creators' bullshit lies about terrorism, and kills them. Then the AI can be used for something actually good.
 

hipbabboom

Huh? What did I say? Did I screw up again? :(
Musk is totally going to become Ra's al Ghul in about 15 to 20 years. The Gigafactory is totally a front for the Lazarus Pit he discovered at the site, and he'll use his environmental and social beliefs to try and snuff out humanity.
 
I'm more scared of the reality that with drones, the cost to kill somebody is basically negligible at this point. Whether remote-controlled or not, once they start becoming available to non-state actors, the world's going to be a very different place.
 

Mask

Member
Are we sure Musk doesn't want a ban just so he can start developing his own in private, eventually dropping off his own mortal body and becoming one with the AI before marching on the human race?

I'm on to you, Musk.
 

TAJ

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
Humans piloting them remotely is a huge security risk, though.
 
Of course, the government justification would be that, even if they agree, some rogue nation(s) will do it anyway and then have a military advantage over everyone else. Can't have that.

Never mind where that rogue nation will somehow get hold of the resources and technical know-how.
 
This does have an effect. By advancing a view that makes the general population resist the development of military AI, we force the research underground and make it a potential PR problem. That limits how much governments can fund it and means they must ultimately use smaller teams to avoid things like whistleblowers and conscientious objectors.

I oppose military research and will actively vote against political parties that openly support it.
 

Easy_D

never left the stone age
Sounds like it could be a lucrative business. Build killer AI, sell to military. Killer AI turns against humanity. Build protection AI, sell to public.

But what will we do when the protection AI turns on us? What will we do!?
 

enzo_gt

tagged by Blackace
An incredibly reasonable precaution given how many alternatives we have to murdering each other anyway. Hard to disagree with.
 
I never thought I'd see the day I would call Hawking a Luddite. I'm not sure that's really appropriate here, but it's the first thing that popped into my head. I need to read more about this before I can form a more concrete opinion, though.
 

PSqueak

Banned
Humans piloting them remotely is a huge security risk, though.

Only in the sense that they can be hijacked by other humans. The aim of this is that killing stays 100% in the hands and brains of humans, because once you create an autonomous AI that can decide to attack on its own, you have a problem.

The idea is not fear of "the rise of the machines" but fear of letting a machine decide who should be killed, and the moral repercussions of that. When a machine decides to kill non-military targets because whatever AI logic deemed them "risks", who is put on trial? Who is punished? Can an AI commit war crimes? And so on.
 

TAJ

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
Only in the sense that they can be hijacked by other humans. The aim of this is that killing stays 100% in the hands and brains of humans, because once you create an autonomous AI that can decide to attack on its own, you have a problem.

The idea is not fear of "the rise of the machines" but fear of letting a machine decide who should be killed, and the moral repercussions of that. When a machine decides to kill non-military targets because whatever AI logic deemed them "risks", who is put on trial? Who is punished? Can an AI commit war crimes? And so on.

I understand that, but do you want killing machines that can be hacked?
Anyway, I trust computers more than 18-year-old Americans who chose to join the military.
 