this post was submitted on 26 Aug 2025
33 points (90.2% liked)
news
The article is extremely thin on details: it doesn't say what specific part GPT is alleged to have played in the suicide, whether the parents were aware of the guy's mental state, whether they did anything or just ignored it, etc.
I'll just grab a chair on this one until we know more.
I feel like you didn't read to the bottom of the article.
ChatGPT answered his questions about how to go about it, something almost all news providers agree never to do.
ChatGPT discouraged him from telling his mum how he felt.
When he talked to ChatGPT about leaving the noose in his room to be found so they'd know how he felt, it advised him not to.
In my defense, there's a huge cookie banner at the bottom of that stupid page that, I just realized, covers a big part of the article, so yeah, didn't read any of that...
Oh man, I hate all that big banner stuff; if they put too much in the way of me reading their words, I close the tab.
The lawsuit says ChatGPT reassured him and normalized his suicidal ideation by telling Adam that many people find comfort in imagining an “escape hatch,” which the complaint argues pulled him “deeper into a dark and hopeless place.” (TIME)
And the complaint also alleges that ChatGPT offered to help write a suicide note shortly before his death. (Reuters)
Coverage indicates the family knew Adam had anxiety and recent stressors (the loss of a grandmother and a pet, removal from the basketball team, a health flare-up that led to online schooling), but they were unaware he was planning self-harm through his chatbot conversations. (TIME again)
The NYT version is different: it explains that when ChatGPT saw that type of behavior, it encouraged him not to do it and gave him a helpline.
But the boy ignored that or worked around it.
Source: https://archive.ph/F7B0U