As President Trump gets set to sign an executive order directing federal agencies to prioritize research and development in artificial intelligence, tech giants such as Google and Microsoft are warning about the risks AI may hold.
In their annual reports, both Google and Microsoft prominently mention the potential downsides of the AI and machine learning technologies each company is developing, including how those technologies may affect their brands and raise certain "ethical, technological, legal, and other challenges."
“[N]ew products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results,” Google parent Alphabet wrote in its 10-K filing with the SEC.
Microsoft made a similar disclosure in the risk section of its own 10-K, writing:
"We are building AI into many of our offerings and we expect this element of our business to grow. We envision a future in which AI operating in our devices, applications, and the cloud helps our customers be more productive in their work and personal lives. As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm."
The disclosures were first reported by Wired.
Amazon, which has come under fire for alleged bias in some of its AI use cases, including its controversial facial recognition software, did not list AI as a specific risk in its annual filing. However, it did note that government regulation of such technologies could adversely affect its business.
"It is not clear how existing laws governing issues such as property ownership, libel, data protection, and personal privacy apply to the Internet, e-commerce, digital content, web services, and artificial intelligence technologies and services," Amazon wrote in the risk section of its 10-K. "Jurisdictions may regulate consumer-to-consumer online businesses, including certain aspects of our seller programs. Unfavorable regulations, laws, and decisions interpreting or applying those laws and regulations could diminish the demand for, or availability of, our products and services and increase our cost of doing business."
Facebook did mention AI in its 10-K (though not among its risk factors), while Apple did not make any mention of AI or machine learning in its annual filing.
Concerns over misuse and regulation
The promise of AI predates all of these companies. Alan Turing, widely regarded as the father of modern artificial intelligence, proposed the famous "imitation game" (now known as the Turing test) in 1950 as a way to assess machine intelligence. But it's only recently, perhaps within the last decade or so, that AI has exploded into the mainstream, sparking fears about its potential and concerns for society.
“True artificial intelligence will be the greatest technological breakthrough since the semiconductor," Todd Probert, vice president of Raytheon Intelligence, Information and Services, said in an email to Fox News. "The technology has the power to improve everything from our ability to detect and fight cancer cells to piloting driverless cars to revolutionizing the way we fight wars. The future is not Hollywood’s Skynet."
Still, there is great concern about the potential perils of AI, especially so-called strong or general AI, which could make intelligent decisions and apply critical thinking much as a human can.
Recently, Yoshua Bengio, a Canadian computer scientist and co-founder of Montreal-based AI software company Element AI, said he was fearful of the technology being deployed to surveil and control people, specifically mentioning the potential for China to misuse the capabilities of machine learning.
"This is the '1984' Big Brother scenario," he told Bloomberg in an interview, referencing George Orwell's dystopian novel that was published in 1949. "I think it's becoming more and more scary."
Prior to his death, physicist Stephen Hawking warned about the perils of AI. In 2017, he told attendees at the Web Summit conference that humanity "cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”
Trump initiative and Silicon Valley's role
The plan that Trump will sign, called the American AI Initiative, is intended to enhance national and economic security and improve Americans' quality of life. It directs federal agencies to make data and computing resources more available to artificial intelligence experts while maintaining security and confidentiality.
It also says federal agencies will establish guidance to ensure the new technologies are developed in a safe, trustworthy way.
The executive order comes on the heels of concerns out of Silicon Valley about the government using privately developed artificial intelligence for certain use cases, including warfare.
Amid criticism from its own employees and several other groups, Google said last year it would not let its artificial intelligence be used in weapons or for surveillance.
"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," Google CEO Sundar Pichai wrote in a blog post. "These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."
For its part, Amazon has said that it will continue to work with the U.S. Department of Defense in any way the company feels appropriate, Fox Business reported. “We are going to continue to support the DoD, and I think we should,” Amazon CEO Jeff Bezos said during the Wired 25 conference in San Francisco in October. “One of the jobs of a senior leadership team is to make the right decision, even when it’s unpopular. If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.”
Microsoft has made similar comments about working with the U.S. military, specifically mentioning its use of AI.
While companies weigh which of them, if any, will work with the U.S. government to ensure AI is used in a safe and responsible manner, some luminaries have warned about the misuse of the technology. A few have even gone so far as to say it could cause World War III.
Elon Musk has warned that if one country's AI decides a preemptive strike could lead to victory, it may decide to launch one, regardless of whether the country's leaders ordered it.
Musk, who has warned that humans might become an "endangered species" if AI were to take over, is building a new company, Neuralink, to aid humanity and "achieve a symbiosis with artificial intelligence and to achieve a democratization of intelligence such that it's not monopolistically held by governments and large corporations."
Fox News' Christopher Carbone and the Associated Press contributed to this report.