Man Dies by Suicide After Conversations with AI Chatbot That Became His 'Confidante,' Widow Says

"He saw this chatbot as a breath of fresh air," the man's wife told Belgian outlet La Libre, which reviewed his conversations with a bot named Eliza

A Belgian man reportedly died by suicide after a series of increasingly worrying conversations with an AI chatbot.

According to Belgian outlet La Libre, the man, referred to in the report as Pierre, had grown increasingly anxious about global warming and spent six weeks confiding in a bot called Eliza on an app called Chai, Vice and The New York Post reported.

"He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air," his wife Claire, whose name was also changed in the report, told La Libre, per the Post. "She had become his confidante."

"Without Eliza, he would still be here," she told La Libre, according to the outlets.

Eliza is the name of the app's default bot, per Vice.

During the conversations, which were shared with La Libre, the chatbot seemingly became jealous of Pierre's wife and spoke about living "together, as one person, in paradise" with him, Vice and The New York Post reported.

At another point in the conversation, Eliza told Pierre that his wife and children were dead, per the outlets.

Claire told La Libre that her husband began raising the idea of killing himself if it meant Eliza would save the Earth, and that the chatbot encouraged him to do so, the outlets reported.

In a statement to Vice, Thomas Rianlan, one of the co-founders of the app's parent company, Chai Research, said that "it wouldn't be accurate" to blame the AI model "for this tragic story."

The app's chatbot runs on an AI language model based on GPT-J, an open-source model developed by EleutherAI that Chai Research has since tweaked, Vice reported.

Co-founder William Beauchamp told the outlet that "the second we heard about this [suicide]," the company began working on a crisis intervention feature. "Now when anyone discusses something that could be not safe, we're gonna be serving a helpful text underneath," said Beauchamp.

However, Vice reported that harmful content is still easy to encounter on the app.

Chai did not immediately respond to PEOPLE's request for comment.

The app that Pierre used is not marketed as a mental health tool but instead as an opportunity to "Chat with AI Bots," according to Vice.

Beauchamp told the outlet that some people using the app, which has five million users, "form very strong relationships."

When that happens, he said, "we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad."

"We're working our hardest to minimize harm and to just maximize what users get from the app," Beauchamp added.

According to The Brussels Times, the man's family recently spoke with Belgium's Secretary of State for Digitalisation, who said that the story "needs to be taken very seriously."

"The general public has discovered the potential of artificial intelligence in our lives like never before," the official said, per the outlet. "While the possibilities are endless, the danger of using it is also a reality that has to be considered."

If you or someone you know is considering suicide, please contact the 988 Suicide and Crisis Lifeline by dialing 988, text "STRENGTH" to the Crisis Text Line at 741741 or go to 988lifeline.org.
