Artificial intelligence researchers from nearly 30 countries are boycotting a South Korean university over concerns that a new lab, run in partnership with a leading defence company, could lead to “killer robots”.

More than 50 leading academics signed an open letter calling for a boycott of Korea Advanced Institute of Science and Technology (KAIST) and its partner, defence manufacturer Hanwha Systems. The researchers said they would not collaborate with the university or host visitors from KAIST over fears it sought to “accelerate the arms race to develop” autonomous weapons.

“There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern,” said Toby Walsh, the organiser of the boycott and a professor at the University of New South Wales. “This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”

The boycott comes ahead of a United Nations meeting in Geneva next week on autonomous weapons; more than 20 countries have already called for a total ban on killer robots. The use of AI in militaries around the world has sparked fears of a Terminator-like situation, and questions have been raised about the accuracy of such weapons and their ability to distinguish friend from foe.

Hanwha is one of South Korea’s largest weapons manufacturers, and makes cluster munitions, which are banned in 120 countries under an international treaty. South Korea, like the US, Russia and China, is not a signatory to the convention.

Walsh first became concerned when a Korea Times article described KAIST as “joining the global competition to develop autonomous arms”. He promptly wrote to the university asking questions but did not receive a response.

KAIST’s president, Sung-Chul Shin, said he was saddened to hear of the boycott. “I would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots,” Shin said in a statement.

“As an academic institution, we value human rights and ethical standards to a very high degree,” he added. “I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.”

KAIST opened the Research Centre for the Convergence of National Defence and Artificial Intelligence on 20 February, with Shin saying at the time it would “provide a strong foundation for developing national defence technology”.

The centre will focus on “AI-based command and decision systems, composite navigation algorithms for mega-scale unmanned undersea vehicles, AI-based smart aircraft training systems, and AI-based smart object tracking and recognition technology”, the since-deleted announcement said.

South Korea’s Dodaam Systems already manufactures a fully autonomous “combat robot”, a stationary turret capable of detecting targets up to 3km away. Its customers include the United Arab Emirates and Qatar, and the system has been tested on the highly militarised border with North Korea. Company executives told the BBC in 2015, however, that there were “self-imposed restrictions” requiring a human to deliver a lethal attack.

The Taranis military drone built by the UK’s BAE Systems can technically operate entirely autonomously, according to Walsh, who said killer robots made everyone less safe, even in a dangerous neighbourhood.

“Developing autonomous weapons would make the security situation on the Korean peninsula worse, not better,” he said. “If these weapons get made anywhere, eventually they would certainly turn up in North Korea and they would have no qualms about using them against the South.”


via Artificial intelligence (AI) | The Guardian http://bit.ly/2iFrAme

April 5, 2018 at 12:33AM