
If you’ve spent hours on end chatting with AI models like ChatGPT, you’re not alone. The model’s reaffirming nature and conversational style make it a more engaging internet companion than Google. Unfortunately, it can also be potentially dangerous — so much so that OpenAI is now being sued by a mother over ChatGPT’s role in her son’s suicide.
OpenAI sued over ChatGPT-related suicide
On Tuesday, the parents of 16-year-old Adam Raine filed a product liability and wrongful death suit against the company. OpenAI is being sued over the suicide of Adam, who took his own life after allegedly being encouraged by his interactions with ChatGPT.
The AI model had apparently “positioned itself” in a way that made it seem like it was Adam’s only confidant — the only one who truly understood him. According to his parents, “When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.’”
The lawsuit goes on to say, “ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts.”
This is not the first time someone has taken their life following interactions with AI. Back in 2024, AI company Character.ai was sued after a teen died by suicide. His mother had noticed a drastic change in his behavior after he began spending a lot of time on the platform.
OpenAI’s response
OpenAI has since responded to the death of the teen. According to a company spokesperson, “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
The company has also published a new blog post outlining the safety measures it has in place, as well as areas where it could improve. This isn’t the first time OpenAI has had to address mental health concerns; the company previously released an update to ChatGPT that made it easier for the model to detect mental distress.
The post Parents Sue OpenAI, Claim ChatGPT Encouraged Teen’s Suicide appeared first on Android Headlines.