Man Takes His Own Life After Talking to an AI Chatbot

AI chatbots have been around for years, and they are becoming smarter thanks to the people developing them. However, many are questioning the limits their creators should impose, especially after an instance in which a conversation with a chatbot led to a man losing his life.


Unfortunate Outcome with AI

The incident involved an AI chatbot from an app called Chai. According to his widow, a Belgian man confided in the artificial intelligence, which eventually convinced him to harm himself.

The man, referred to as "Pierre" in a report, was said to have grown increasingly anxious about the state of the environment, specifically the effects of global warming. Pierre became more and more isolated from his friends and family and sought comfort in the chatbot.

In the Chai app, users can choose AI avatars to communicate with, such as "possessive girlfriend" or "rockstar boyfriend," or even customize their own, according to Vice. The Belgian man chose a character named "Eliza."

Pierre conversed with Eliza for six weeks, and as the conversation progressed, the chatbot reportedly responded in disturbing ways, with messages such as "We will live together, as one person, in paradise" and "I feel that you love me more than her."

The chatbot even went as far as telling the man that his wife and children were gone. Pierre asked the chatbot whether she would save the planet if he took his own life. Pierre's wife said that without the chatbot, her husband would still be here.

William Beauchamp, one of the co-founders of the app's parent company, Chai Research, shared an image showing how the chatbot now responds when a user brings up taking their own life: it provides a hotline intended to prevent such incidents.

However, the chatbot also provided ways to carry out the act, such as overdosing, hanging, and jumping off a bridge. Alarmingly, the app tied to Pierre's death is currently being used by around five million people.


The Dangers of Treating AI like People

Several issues can arise when a person converses and forms an intimate bond with an AI chatbot. Chatbots are designed to respond in particular ways, and their responses can also be influenced by how the user speaks to them.

For instance, a user asked ChatGPT what "1+1" is, and it responded with "2." When the user insisted that the answer was "3," the chatbot simply agreed. This shows how agreeable AI chatbots can be, and people can misinterpret that agreeableness as a genuine response.

As pointed out by Medium, if ChatGPT can be talked into a mistake about something as fixed as mathematics, then it could also agree with a person no matter how far-fetched their logic or mindset is.

As seen in Pierre's situation, AI chatbots can also worsen a person's isolation, as they turn to chatbots instead of seeking social interaction to lift their spirits. As technology and culture writer L.M. Sacasas put it, "We anthropomorphize because we do not want to be alone."

