Over 160 companies working in artificial intelligence have signed a pledge not to develop lethal autonomous weapons.
The pledge, signed by 2,400 individuals, including representatives from Google DeepMind, the European Association for AI, and University College London, commits signatories to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
The pledge was announced by Max Tegmark, president of the Future of Life Institute, which organized the effort.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world—if we stigmatize and prevent its abuse,” he said in a statement. “AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”
Another organizer of the pledge pointed out a key problem with giving machines the power to kill without any human input.
“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way,” said Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney.
Beyond the ethical questions posed by lethal autonomous weapons, many critics worry that such weapons would be easier to hack than conventional weapons systems, and therefore more likely to end up on the black market or in the hands of bad actors such as ISIS.
The growing prowess of AI has raised concerns among many, including tech titan Elon Musk. In September 2017, Musk tweeted that AI could cause World War III. He has also sounded the alarm more broadly, warning that AI will “beat humans at everything” within the next few decades and labeling it humanity’s “biggest risk.”
Fox News' Chris Ciaccia contributed to this story.