Like Sarah Connor, I was born around 1965. But unlike Sarah Connor, I don’t plan on saving the world from killer robots. That’s something we all have to do.

Doing this won’t involve time travel. Or any rocket science. We simply have to agree that building machines that decide who lives or dies is morally unacceptable.

The killer robots I’m talking about aren’t T-101 Terminators. It’s stupid AI that worries me most: much simpler technologies that are just a few years away. Think of the drones we see in the skies above Afghanistan or Iraq. These are only semi-autonomous weapons. Such a drone can fly itself most of the time, but a soldier still makes the final life-or-death decision to fire its Hellfire missile.

It is, however, a small technical leap to replace that soldier with a computer. The UK’s Ministry of Defence claims this is possible today. I would agree. And the BAE Systems Taranis drone, a prototype of this future, has been flying for several years now.

How then do we prevent a future full of killer robots? The technologies that go into a fully autonomous drone are going to be invented regardless; they are pretty much the same technologies that go into autonomous cars. An algorithm that identifies, tracks and avoids pedestrians can easily be changed to identify, track and target combatants.
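To make that dual-use point concrete, here is a minimal sketch of such a pipeline. Every name in it is a hypothetical stand-in rather than any real system’s API: the sensing and tracking stages are shared, and a single swapped-in decision function is all that separates the two applications.

```python
# Minimal sketch of a generic detect-track-decide loop. Every name here
# (detect_people, update_tracks, run_pipeline, avoid, target) is a
# hypothetical stand-in, not any real library's API.

def detect_people(frame):
    """Pretend to run a person detector on one camera frame (stub)."""
    return [{"box": (0, 0, 10, 10), "frame": frame}]

def update_tracks(tracks, detections):
    """Pretend to associate new detections with existing tracks (stub)."""
    return tracks + detections

def run_pipeline(frames, decide):
    """The same perception pipeline serves both applications;
    only the injected `decide` callback differs."""
    tracks = []
    for frame in frames:
        tracks = update_tracks(tracks, detect_people(frame))
        decide(tracks)

def avoid(tracks):
    # An autonomous car plans a path around every tracked person.
    print(f"planning a path around {len(tracks)} tracked people")

def target(tracks):
    # A weapon would merely swap in a different final step.
    # Deliberately left unimplemented: this one-line swap is
    # exactly what a ban on autonomous weapons forbids.
    raise NotImplementedError

run_pipeline(frames=["frame-1", "frame-2"], decide=avoid)
```

The hard engineering lives in the stubbed-out detection and tracking stages, which is exactly why the civilian and military versions of the technology sit so close together.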

And unlike nuclear weapons, autonomous weapons are going to be cheap and, eventually, very effective. The arms companies of the world will make a killing (pun very much intended) selling them to all sides of every conflict.

Cheap. Effective. And easily available. This hardly sounds like a good recipe for a ban. However, there’s a good historical precedent that suggests we can be hopeful.

Chemical weapons are cheap, just as autonomous weapons will be. And like autonomous weapons, they use technologies that are widely available and impossible to ban.

Now, while we didn’t ban chemistry, we did ban chemical weapons. And even if chemical weapons are occasionally used, it’s hard to argue that the world would be a better place if arms companies could sell them freely. Or if evil despots could launch chemical attacks on their populations without fear of censure, or of a distant date in a court in The Hague.

And that’s precisely how we are going to prevent a future full of killer robots. We agree that it is morally unacceptable to let machines make life-or-death decisions. We insist there is always a human in the loop, just as we decided it was morally unacceptable to use chemical weapons.

The rest follows from there. Arms companies don’t manufacture autonomous weapons; there are plenty of more profitable activities that won’t get them put on a UN blacklist. As a result, it’s not easy to get your hands on autonomous weapons. And people hesitate before using them.

It goes beyond ethics. By removing humans from the loop entirely, autonomous weapons will become weapons of mass destruction: one programmer will be able to do what previously took an army of humans. They will industrialise warfare. This is why they have been called the third revolution in warfare, after the invention of gunpowder and of nuclear weapons.

They will also be weapons of terror. We don’t yet know how to build robots that can follow international humanitarian law. And even when we do, there are plenty of bad actors out there, terrorists and rogue states among them, who will strip out any ethical safeguards.

Autonomous weapons will also destabilise an already unsettled world order. You will no longer need to be a superpower to field a powerful army. A couple of 3D printers and a modest bank balance will suffice.

And when someone attacks you, it will be very difficult to know who it was. Capture a killer robot. Open it up. All it will say is “Intel Inside”. That will only encourage those with evil intent.

This brings me to earlier this week, when we took a small step down the road to a world where killer robots are banned. More than 50 of my colleagues, leading AI and robotics researchers from 30 countries around the world, declared a boycott of KAIST, the MIT of South Korea.

The boycott was a response to KAIST opening an AI weapons lab in collaboration with Hanwha Systems, a company that has attracted considerable negative publicity for manufacturing cluster munitions, contrary to a UN treaty. A decade ago, Norway’s sovereign wealth fund put Hanwha on its ethical blacklist.

The Korea Times reported that KAIST was “joining the global competition to develop autonomous arms”. It described some of the initial projects, such as an unmanned submarine, for which it is hard to imagine any sort of meaningful human control.

The president of KAIST responded swiftly to our boycott. He declared that KAIST would not develop autonomous weapons. Indeed, he went further and affirmed that meaningful human control would always be maintained. This was an overnight success.

It sets a very clear precedent. I now hope that KAIST will lobby the Republic of Korea to call for meaningful human control at next week’s UN meeting on killer robots. Such small steps will eventually lead to an outright ban.

Toby Walsh is Scientia Professor of Artificial Intelligence at the University of New South Wales
