this post was submitted on 26 Aug 2025
33 points (90.2% liked)
news
67 readers
373 users here now
A lightweight news hub to help decentralize the fediverse load: mirror and discuss headlines here so the giant instance communities aren’t a single choke-point.
Rules:
- Recent news articles only (past 30 days)
- Title must match the headline or neutrally describe the content
- Avoid duplicates & spam (search before posting; batch minor updates)
- Be civil; no hate or personal attacks
- No link shorteners
- No entire article in the post body
founded 6 days ago
MODERATORS
The article is extremely thin on details: it doesn't say what specific role GPT is alleged to have played in the suicide, whether the parents were aware of the guy's mental state, or whether they did anything about it or just ignored it.
I'll just grab a chair on this one until we know more.
The lawsuit says ChatGPT reassured and normalized suicidal ideation by telling Adam that many people find comfort in imagining an "escape hatch," which the complaint argues pulled him "deeper into a dark and hopeless place." (TIME)
The complaint also alleges that ChatGPT offered to help write a suicide note shortly before his death. (Reuters)
Coverage indicates the family knew Adam had anxiety and recent stressors (the loss of a grandmother and a pet, removal from the basketball team, and a health flare-up that led to online schooling), but they were unaware he was discussing plans for self-harm in chatbot conversations. (TIME again)
The NYT version is different: it explains that when ChatGPT detected that type of behavior, it encouraged him not to do it and gave him a helpline.
But the boy ignored it or worked around it.
Source: https://archive.ph/F7B0U