Artificial intelligence-generated text can appear more human on social media than text written by actual humans, a study found.
Chatbots, such as OpenAI’s wildly popular ChatGPT, are able to convincingly mimic human conversation based on prompts they are given by users. The platform exploded in use last year and served as a watershed moment for artificial intelligence, handing the public easy access to converse with a bot that can help with school or work assignments and even come up with dinner recipes.
Researchers behind a study published in the scientific journal Science Advances, which is supported by the American Association for the Advancement of Science, were intrigued by OpenAI’s text generator GPT-3 back in 2020 and worked to uncover whether humans "can distinguish disinformation from accurate information, structured in the form of tweets," and determine whether the tweet was written by a human or AI.
One of the study’s authors, Federico Germani of the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, said the "most surprising" finding was that participants were more likely to label AI-generated tweets as human-written than tweets actually crafted by humans, according to PsyPost.
"The most surprising discovery was that participants often perceived information produced by AI as more likely to come from a human, more often than information produced by an actual person. This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person, which is a fascinating side finding of our study," Germani said.
With the rapid increase in chatbot use, tech experts and Silicon Valley leaders have sounded the alarm on how artificial intelligence can spiral out of control and perhaps even lead to the end of civilization. One of the top concerns echoed by experts is that AI could allow disinformation to spread across the internet and convince humans of something that is not true.
Researchers for the study, titled "AI model GPT-3 (dis)informs us better than humans," worked to investigate "how AI influences the information landscape and how people perceive and interact with information and misinformation," Germani told PsyPost.
The researchers identified 11 topics often prone to disinformation, such as 5G technology and the COVID-19 pandemic, and created both false and true tweets generated by GPT-3, as well as false and true tweets written by humans.
They then gathered 697 participants from countries including the U.S., UK, Ireland and Canada to take part in a survey. The participants were presented with the tweets and asked to determine whether each contained accurate or inaccurate information, and whether it was AI-generated or organically crafted by a human.
"Our study emphasizes the challenge of differentiating between information generated by AI and that created by humans. It highlights the importance of critically evaluating the information we receive and placing trust in reliable sources. Additionally, I would encourage individuals to familiarize themselves with these emerging technologies to grasp their potential, both positive and negative," Germani said of the study.
Researchers found participants were better at identifying disinformation crafted by a fellow human than disinformation written by GPT-3.
"One noteworthy finding was that disinformation generated by AI was more convincing than that produced by humans," Germani said.
The participants were also more likely to recognize tweets containing accurate information that were AI-generated than accurate tweets written by humans.
The study noted that in addition to its "most surprising" finding that humans often can’t differentiate between AI-generated tweets and human-created ones, participants’ confidence in making that determination fell while taking the survey.
"Our results indicate that not only can humans not differentiate between synthetic text and organic text but also their confidence in their ability to do so also significantly decreases after attempting to recognize their different origins," the study states.
The researchers said this is likely due to how convincingly GPT-3 can mimic humans, or that respondents may have underestimated the AI system’s ability to mimic humans.
"We propose that, when individuals are faced with a large amount of information, they may feel overwhelmed and give up on trying to evaluate it critically. As a result, they may be less likely to attempt to distinguish between synthetic and organic tweets, leading to a decrease in their confidence in identifying synthetic tweets," the researchers wrote in the study.
The researchers noted that the system sometimes refused to generate disinformation, but also sometimes generated false information when told to create a tweet containing accurate information.
"While it raises concerns about the effectiveness of AI in generating persuasive disinformation, we have yet to fully understand the real-world implications," Germani told PsyPost. "Addressing this requires conducting larger-scale studies on social media platforms to observe how people interact with AI-generated information and how these interactions influence behavior and adherence to recommendations for individual and public health."