Scores of technology experts and college professors from a range of academic backgrounds have signed an open letter calling for a six-month pause on the development of rapidly evolving AI technology, which they say threatens humanity and society.
At the heart of the argument for the pause is the need to give policymakers space to develop safeguards that would allow researchers to keep developing the technology without the threat of upending lives across the world with disinformation.
"The federal government needs to play a central role using legislation and regulations to require the companies to impose much stricter safety measures and guardrails. However, legislation and regulations take time, moving at bureaucratic speed, while generative AI is evolving at exponential speed," Geoffrey Odlum, a retired 28-year diplomat who currently serves as president of Odlum Global Strategies, which advises the government and corporations on national security and tech policy issues, told Fox News Digital.
Odlum is one of the more than 1,000 signatories of an open letter calling for all AI labs to pause their research for at least six months, arguing "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
The Elon Musk-backed letter specifically calls for AI labs to pause training systems that are more powerful than GPT-4, the latest deep learning model from OpenAI, which "exhibits human-level performance on various professional and academic benchmarks," according to the lab.
After the letter was released Wednesday, some critics dismissed it as "just dripping with AI hype," including the authors behind a study cited in the letter.
"They basically say the opposite of what we say and cite our paper," said computer scientist Timnit Gebru on Twitter. Gebru is an author behind a study cited in the letter as alleged proof that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."
Gebru was joined by her co-author Emily Bender in lambasting the letter, saying their research was not about AI being "too powerful," but instead focused on the risks of AI and its "concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem," the Economist reported.
"Legislation and regulations take time, moving at bureaucratic speed, while generative AI is evolving at exponential speed. That's why I support the call for a 6-month pause in further developments[.]"
Those who signed on, however, describe AI technology as having essentially morphed into a dangerous Wild West in need of a governor.
Such technology, supporters of the letter say, could be used to create disinformation, including by U.S. adversaries looking to sow chaos stateside. Odlum pointed to AI tools such as DALL-E 2, which can create realistic images depicting a phony arrest of former President Trump or President Biden kneeling to Chinese President Xi Jinping.
"It's clearly fake, but it looks photorealistic. So the average American would see that and freak out," Odlum told Fox News Digital.
Jonathan D. Moreno, a professor of medical ethics and health policy at the University of Pennsylvania, told Fox News Digital he has similar concerns.
"This specific danger at the moment is our inability to know with confidence whether an AI platform has created a document or even an image - a moving image or a stationary image. We don't know what the system is doing," he said.
A handful of AI-related bills are currently before Congress, and some states have also tried to tackle the issue. However, the lack of firm federal rules has reportedly left some consumers and corporations in limbo, which is why Odlum is calling for the highest echelons of government to roll out uniform regulations.
"The White House does have an AI research office, and they have released what they called an AI Bill of Rights. Which called for the tech industry to develop AI responsibly and to protect data and to make sure algorithms aren't discriminatory," Odlum said, adding the document is "a useful starting point."
AI labs that create technology that could be used by bad actors to spread disinformation or sow chaos do not currently face consequences for violating guidelines put forth by the White House or government agencies. To create enforceable rules, the government needs to act swiftly, the retired diplomat said.
"Legislation and regulations take time, moving at bureaucratic speed, while generative AI is evolving at exponential speed. That's why I support the call for a 6-month pause in further developments, to allow the government time to examine the risks and engage the technology industry and civil society in a collaborative way to produce laws and regulations, safety measures and guardrails, to make sure that generative AI is not used by adversaries to create disinformation that divides us any further," Odlum said.
"It's not enough for one company to decide what the rules are, and not have a public conversation about it, try to get a sense of how to prevent bad actors. Although this horse may be out of the barn already."
Moreno told Fox News Digital that "there's really no review at all" regarding researchers’ work to make computers smarter, saying it is "something that I think we've kind of let go of without asking industry to do a little more public consideration."
Moreno has written about AI extensively in recent years, highlighting the question of regulating the industry back in 2019.
"There is a great deal of regulation concerning biological experiments that could inadvertently create a ‘smart’ laboratory animal—like putting human-sourced neurons into a non-human primate embryo—but none concerning engineering developments that could lead to the singularity," Moreno wrote at the time in The Regulatory Review.
"Singularity" in this context is defined as when a computer reaches superhuman intelligence, and was coined by mathematician Vernor Vinge 30 years ago.
"Should some agency like the U.S. Consumer Product Safety Commission be empowered to verify that the standards are being administered? By the time the singularity has been achieved, a recall may be beside the point," Moreno wrote.
He warned, "At that point, in the words of the Borg in 'Star Trek,' 'resistance is futile.'"
Fast-forward to 2023, when AI has become "human-competitive at general tasks," according to the letter. Moreno said he wishes he were "optimistic" about creating industry-wide rules on AI.
"Am I optimistic that we can actually create some rules that would be industry-wide? I wish I were. But I think at least, It's not enough for one company to decide what the rules are, and not have a public conversation about it, try to get a sense of how to prevent bad actors. Although this horse may be out of the barn already."