When people see machines that respond like humans, or computers that perform feats of strategy and cognition mimicking human ingenuity, they sometimes joke about a future in which humanity will need to accept robot overlords.
But buried in the joke is a seed of unease. Science-fiction writing and popular movies, from "2001: A Space Odyssey" (1968) to "Avengers: Age of Ultron" (2015), have speculated about artificial intelligence (AI) that exceeds the expectations of its creators and escapes their control, eventually outcompeting and enslaving humans or targeting them for extinction.
Conflict between humans and AI is front and center in AMC's sci-fi series "Humans," which returned for its third season on Tuesday (June 5). In the new episodes, conscious synthetic humans face hostile people who treat them with suspicion, fear and hatred. Violence roils as Synths find themselves fighting not only for basic rights but for their very survival, against those who view them as less than human and as a dangerous threat.
Even in the real world, not everyone is ready to welcome AI with open arms. In recent years, as computer scientists have pushed the boundaries of what AI can accomplish, leading figures in technology and science have warned about the looming dangers that artificial intelligence may pose to humanity, even suggesting that AI capabilities could doom the human race.
But why are people so unnerved by the idea of AI?
An "existential threat"
Elon Musk is one of the most prominent voices to raise red flags about AI. In July 2017, Musk told attendees at a meeting of the National Governors Association, "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it."
"I keep sounding the alarm bell," Musk added. "But until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."
Earlier, in 2014, Musk had labeled AI "our biggest existential threat," and in August 2017, he declared that humanity faced a greater risk from AI than from North Korea.
Physicist Stephen Hawking, who died March 14, also expressed concerns about malevolent AI, telling the BBC in 2014 that "the development of full artificial intelligence could spell the end of the human race."
It's also less than reassuring that some programmers — particularly those at the MIT Media Lab in Cambridge, Massachusetts — seem determined to prove that AI can be terrifying.
A neural network called "Nightmare Machine," introduced by MIT computer scientists in 2016, transformed ordinary photos into ghoulish, unsettling hellscapes. An AI that the MIT group dubbed "Shelley" composed scary stories, trained on 140,000 tales of horror that Reddit users posted in the forum r/nosleep.
"We are interested in how AI induces emotions — fear, in this particular case," Manuel Cebrian, a research manager at MIT Media Lab, previously told Live Science in an email about Shelley's scary stories.
Fear and loathing
Negative feelings about AI can generally be divided into two categories: the idea that AI will become conscious and seek to destroy us, and the notion that immoral people will use AI for evil purposes, Kilian Weinberger, an associate professor in the Department of Computer Science at Cornell University, told Live Science.
"If super-intelligent AI — more intelligent than us — becomes conscious, it could treat us like lower beings, like we treat monkeys," he said. "That would certainly be undesirable."
However, fears that AI will develop awareness and overthrow humanity are grounded in misconceptions of what AI is, Weinberger noted. AI operates under very specific limitations defined by the algorithms that dictate its behavior. Some types of problems map well to AI's skill sets, making certain tasks relatively easy for AI to complete. "But most things do not map to that, and they're not applicable," he said.
This means that, while AI might be capable of impressive feats within carefully delineated boundaries — playing a master-level chess game or rapidly identifying objects in images, for example — that's where its abilities end.
"AI reaching consciousness — there has been absolutely no progress in research in that area," Weinberger said. "I don't think that's anywhere in our near future."
The other worrisome idea — that an unscrupulous human would harness AI for harmful ends — is, unfortunately, far more likely, Weinberger added. Pretty much any machine or tool can be used for good or bad purposes, depending on the user's intent. The prospect of weapons harnessing artificial intelligence is certainly frightening, Weinberger said, and would benefit from strict government regulation.
Perhaps, if people could put aside their fears of hostile AI, they would be more open to recognizing its benefits, Weinberger suggested. Enhanced image-recognition algorithms, for example, could help dermatologists identify moles that are potentially cancerous, while self-driving cars could one day reduce the number of deaths from auto accidents, many of which are caused by human error, he told Live Science.
But in the "Humans" world of self-aware Synths, fears of conscious AI spark violent confrontations between Synths and people, and the struggle between humans and AI will likely continue to unspool and escalate — during the current season, at least.
Editor's note: This is the final feature in a three-part series of articles related to AMC's "Humans." The third season debuted June 5 at 10 p.m. EDT/9 p.m. CDT.
Original article on Live Science.