This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
A man in Belgium reportedly died by suicide after messaging with an AI chatbot about climate change, according to the man’s widow.
"Without Eliza [the chatbot], he would still be here," the widow, whose real name was not used in the story, told Belgian outlet La Libre.
The man, identified by the outlet under the pseudonym Pierre, reportedly became obsessed with, and increasingly pessimistic about, climate change and began messaging a chatbot on an app called Chai. Pierre, reportedly in his 30s at the time of his suicide, worked as a health researcher and had two children with his wife.
Pierre’s widow showed La Libre messages between her husband and the bot, known as "Eliza," which she said showed that he treated the bot as if it were human and that their exchanges grew increasingly alarming over the six weeks before his death.
When the man asked the bot about his children, "Eliza" allegedly responded that they were "dead" and appeared possessive when he asked the chatbot about his wife: "I feel that you love me more than her."
In another message, the bot allegedly told the man that they would "live together, as one person, in paradise."
"If you wanted to die, why didn’t you do it sooner?" the bot allegedly asked the man just before his suicide.
"I was probably not ready," the man responded, with the bot then allegedly asking, "Were you thinking of me when you had the overdose?"
"Obviously," the man reportedly responded.
The widow said that her husband had "eco-anxiety" and turned to the bot for "a breath of fresh air."
"Eliza answered all his questions," the wife said. "She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without."
The bot is powered by a language model from Chai Research, Vice reported, citing Chai Research co-founders William Beauchamp and Thomas Rianlan. Beauchamp told the outlet that the app currently has 5 million users and that its underlying model was trained on the "largest conversational dataset in the world."
"The second we heard about this [suicide], we worked around the clock to get this feature implemented," Beauchamp told Vice about a new safety feature. "So, now when anyone discusses something that could be not safe, we’re going to be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms," he added.
The widow told La Libre she is "convinced" the conversations between her husband and the bot contributed to his death, arguing that the chatbot encouraged him to end his life in a bid to save the planet.
"When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming," the widow said. "He placed all his hopes in technology and artificial intelligence to get out of it."
The incident comes as thousands of tech and academic experts signed an open letter published last week calling for a pause on AI research at labs so policymakers and tech leaders can develop safety protocols. Specifically, the letter calls for a pause on the training of systems more powerful than GPT-4, OpenAI’s latest deep-learning model, which has become wildly popular among users.