this post was submitted on 28 Oct 2025
384 points (97.5% liked)

[–] lorski@sopuli.xyz 26 points 6 days ago

apparently AI is not very private lol

[–] Emilien@lemmy.world 13 points 6 days ago

There are so many people who are alone or depressed, and ChatGPT is the only way for them to "talk" to "someone"... It's really sad...

[–] tehn00bi@lemmy.world 11 points 6 days ago

Bet some of them lost, or are about to lose, their jobs to AI

[–] SabinStargem@lemmy.today 14 points 6 days ago (1 children)

Honestly, it ain't AI's fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn't done to make them feel good about life.

[–] KelvarCherry@lemmy.blahaj.zone 9 points 6 days ago (2 children)

Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there's a much bigger issue here.

Speaking from the USA, "suicidal ideation" is treated like terrorist ideology in this weird corporate-esque legal-speak, with copy-pasted disclaimers and hollow slogans. It's so absurdly stupid that I've just stopped trying to rationalize it and focus on every other way the world is spiraling into techno-fascist authoritarianism.

Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.

I absolutely wish that I could /s this.

[–] chunes@lemmy.world 2 points 5 days ago

It's corporatized because we are just corporate livestock. Can't pay taxes and buy from corpos if we're dead

I'm so done with ChatGPT. This AI boom is so fucked.

[–] stretch2m 13 points 6 days ago

Sam Altman is a horrible person. He loves to present himself as relatable "aw shucks let's all be pragmatic about AI" with his fake-ass vocal fry, but he's a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.

[–] markovs_gun@lemmy.world 10 points 6 days ago* (last edited 6 days ago)

"Hey ChatGPT I want to kill myself."

"That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

✅ Pros of committing suicide

  1. Ends pain and suffering.

  2. Eliminates the burden you are placing on your loved ones.

  3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

❎ Cons of committing suicide

  1. Committing suicide will make your friends and family sad.

  2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth.

  3. You can't undo it. If you commit suicide, it is irreversible and you will not be able to go back.

Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you."

[–] Fizz@lemmy.nz 5 points 6 days ago (2 children)

1M out of 500M is way less than I would have guessed. I would have pegged it at like 25%.

[–] markko@lemmy.world 4 points 6 days ago

I think the majority of people use it to (unreliably) solve tedious problems or spit out a whole bunch of text that they can't be bothered to write.

While ChatGPT has been intentionally designed to be as friendly and conversational as possible, I hope most people do not see it as something to have a meaningful conversation with instead of as just a tool that can talk.

Anecdotally, whenever I see someone mention using ChatGPT as part of their decision-making process it is usually taken less seriously, if not outright laughed at.

[–] Buddahriffic@lemmy.world 1 points 6 days ago (1 children)

You think a quarter of people are suicidal or contemplating it to the point of talking about it with an AI?

[–] Fizz@lemmy.nz 1 points 6 days ago

Yeah, it seems like everyone is constantly talking about suicide; it's very normalised. You don't really find people these days who haven't contemplated it.

I would guess most, or even all, of the people talking about suicide with an AI aren't serious. Heat-of-the-moment venting is what I'd expect most of the AI suicide chats to be, which is why I thought the number would be significantly higher.

[–] Fmstrat@lemmy.world 5 points 6 days ago

In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with “desirable responses” to mental health issues roughly 65% more than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.

I don't particularly like OpenAI, and I know they wouldn't release the affected-user numbers (not quoted, but discussed in the linked article) if the percentages were not improving, but kudos to whoever is there tracking this data and lobbying internally to become more transparent about it.

[–] vane@lemmy.world 2 points 5 days ago* (last edited 5 days ago)

Charles Manson would have been happy to see the OpenAI cult evolve.

[–] Scolding7300@lemmy.world 221 points 1 week ago (7 children)

A reminder that these chats are being monitored

[–] whiwake@sh.itjust.works 71 points 1 week ago (13 children)

Still, what are they gonna do to a million suicidal people besides ignore them entirely?

[–] Jhuskindle@lemmy.world 1 points 5 days ago

I feel like if that's 1 million people wanting to die... they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.

[–] WhatAmLemmy@lemmy.world 38 points 1 week ago (21 children)

Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).

[–] scarabic@lemmy.world 2 points 6 days ago

Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out:

  1. In some cases I think people will talk to a soulless robot more freely than to a human professional.

  2. Machine learning systems are good at pattern recognition, and this is one component of diagnosis. This meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time, I think the potential here will be undeniable.

[–] dhhyfddehhfyy4673@fedia.io 30 points 1 week ago (3 children)

Absolutely blows my mind that people attach their real life identity to these things.

[–] Zwuzelmaus@feddit.org 55 points 1 week ago (3 children)

over a million people talk to ChatGPT about suicide

But it still resists. Too bad.

[–] Alphane_Moon@lemmy.world 47 points 1 week ago (1 children)

I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.

[–] Perspectivist@feddit.uk 33 points 1 week ago (1 children)
lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))

Add those to your adblocker's custom filters (e.g. uBlock Origin's "My filters" pane); the /OpenAI/i form makes the match a case-insensitive regex.

load more comments (1 replies)
[–] mhague@lemmy.world 20 points 1 week ago (6 children)

I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn't open up and read a million chats... they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
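To illustrate: the naive version of that kind of automated detection is just pattern matching. Here's a minimal sketch in Python; the patterns and logic are invented for illustration, since OpenAI hasn't published its actual method:

    import re

    # Hypothetical trigger patterns; a real system would presumably use a
    # trained classifier rather than a keyword list like this.
    TRIGGER_PATTERNS = [
        r"\bkill myself\b",
        r"\bend my life\b",
        r"\bsuicid(?:e|al)\b",
    ]

    def is_flagged(message: str) -> bool:
        # True if any trigger pattern appears anywhere in the message.
        return any(re.search(p, message, re.IGNORECASE) for p in TRIGGER_PATTERNS)

    # The false-positive problem: a band name trips the same wire as a crisis.
    print(is_flagged("Any good albums by Suicidal Tendencies?"))  # True
    print(is_flagged("Rough week, but I'm okay"))                 # False

Whatever OpenAI actually runs is surely more sophisticated, but the point stands: a trigger count is an upper bound on meaningful discussion, not a measure of it.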

[–] scarabic@lemmy.world 1 points 6 days ago (1 children)

You don’t have to read far into the article to reach this:

The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.”

It doesn’t unpack their analysis method, but this does sound a lot more specific than just counting every session that mentions the word suicide, including chats about that band.
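For scale, the two numbers are consistent with each other, assuming OpenAI's publicly claimed figure of roughly 800 million weekly active users (my assumption here; the quote above doesn't state it):

    weekly_active_users = 800_000_000  # OpenAI's claimed figure, approximate
    rate = 0.0015                      # the 0.15% of weekly actives quoted above
    print(f"{weekly_active_users * rate:,.0f}")  # 1,200,000: the "over a million" headline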

[–] mhague@lemmy.world 1 points 2 days ago

Assume I read the article and then made a post.

[–] minorkeys@lemmy.world 19 points 1 week ago (9 children)

And does ChatGPT make the situation better or worse?

[–] myfunnyaccountname@lemmy.zip 18 points 1 week ago (5 children)

I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.

[–] scarabic@lemmy.world 1 points 6 days ago

0.15% sounds small, but if that fraction of the US population committed suicide monthly, it would wipe out about 1% of the population, roughly 3 million people, in about half a year.
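Spelled out, with an approximate population figure (and noting that applying a weekly-usage statistic as a monthly death rate is only a loose analogy):

    us_population = 333_000_000  # rough US population
    monthly_rate = 0.0015        # 0.15% per month
    months = 6
    print(f"{us_population * monthly_rate * months:,.0f}")
    # 2,997,000: about 0.9% of the population, i.e. roughly 1% in half a year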

[–] lemmy_acct_id_8647@lemmy.world 17 points 1 week ago* (last edited 1 week ago) (6 children)

I've talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I've low-key wanted to kill myself since I was 8 years old. For me it's just a part of life. For others, it's usually REALLY uncomfortable to talk about without wanting to tell me how wrong I am for thinking that way.

Yeah I don't trust it, but at the same time, for me it's better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.

[–] LengAwaits@lemmy.world 5 points 6 days ago (2 children)

I love this article.

The first time I read it I felt like someone finally understood.

[–] Dreaming_Novaling@lemmy.zip 1 points 6 days ago

Man, I have to stop reading so I don't continue a stream of tears in the middle of a lobby, but I felt every single word of that article in my bones.

I couldn't ever imagine hanging myself or shooting myself; that shit sounds terrifying as hell. But for years now I've had those same exact "what if I just fell down the stairs and broke my neck" or "what if I got hit by a car and died on the spot?" thoughts. And similarly, I think of how much of a hassle it'd be for my family, worrying about their wellbeing, my cats, the games and stories I'd never get to see, the places I want to go.

It's hard. I went to therapy for a year and found it useful even if it didn't do much or "fix" me, but I never admitted these thoughts to her. I think the closest I got was talking about being tired often, and crying, but never just outright "I don't want to wake up tomorrow."

I dig this! Thanks for sharing!

[–] ChaoticNeutralCzech@feddit.org 16 points 1 week ago* (last edited 1 week ago) (1 children)

The headline has two interpretations and I don't like it.

  • Every week, there are 1M+ users who bring up suicide
    • likely correct
  • There are 1M+ long-term users who bring up suicide at least once every week
    • my first thought
[–] atrielienz@lemmy.world 20 points 1 week ago (4 children)

My first thought was "Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT".

[–] T156@lemmy.world 3 points 6 days ago

That would make sense if they were doing something like tracking how often, and which categories, trigger their moderation filter.

Just in case an errant update or something causes the statistic to suddenly change.

[–] QuoVadisHomines@sh.itjust.works 15 points 1 week ago

Sounds like we should shut them down to prevent a health crisis, then.

[–] WhatsHerBucket@lemmy.world 15 points 1 week ago

I mean… it’s been a rough few years

[–] i_stole_ur_taco@lemmy.ca 13 points 1 week ago

They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.
