There are so many people who are alone or depressed, and ChatGPT is the only way for them to "talk" to "someone"... It's really sad...
Bet some of them lost, or are about to lose, their jobs to AI.
Charles Manson would have been happy to see the OpenAI cult evolve.
A reminder that these chats are being monitored
Still, what are they gonna do to a million suicidal people besides ignore them entirely
I feel like if that's a million people wanting to die... they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.
Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).
Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out that might have potential:
- In some cases I think people will talk to a soulless robot more freely than to a human professional.
- Machine learning systems are good at pattern recognition, and this is one component of diagnosis. This meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time I think it's undeniable that there is potential here.
Advertise drugs to them perhaps, or take advantage in some other way. If this sort of data is in the hands of an ad network, that is.
Absolutely blows my mind that people attach their real life identity to these things.
over a million people talk to ChatGPT about suicide
But it still resists. Too bad.
I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.
lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))
Add those to your adblocker custom filters.
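(For anyone unfamiliar: these are uBlock Origin procedural cosmetic filters; paste them into the dashboard under "My filters". Note the `/OpenAI/i` form has to be unquoted for uBlock to treat it as a case-insensitive regex rather than literal text, and the selector assumes lemmy.world still wraps posts in `div.post-listing`.)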
apparently ai is not very private lol
I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn't open up and read a million chats... they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
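For illustration, here's the difference in toy form (completely made-up patterns, nothing to do with whatever OpenAI actually runs):

```python
import re

# Naive keyword flag: fires on any mention of the substring,
# including the band name.
def keyword_flag(text: str) -> bool:
    return "suicid" in text.lower()

# Toy "intent" flag: requires first-person phrasing.
# Illustrative only; a real classifier is far more involved.
INTENT = re.compile(
    r"\bi\s+(want|plan|am going)\s+to\s+(kill myself|end my life)",
    re.IGNORECASE,
)

def intent_flag(text: str) -> bool:
    return bool(INTENT.search(text))

print(keyword_flag("play Suicidal Tendencies on repeat"))  # True (false positive)
print(intent_flag("play Suicidal Tendencies on repeat"))   # False
print(intent_flag("I want to end my life"))                # True
```

Same word, very different signals.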
You don’t have to read far into the article to reach this:
The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.”
It doesn’t unpack their analysis method, but this does sound a lot more specific than just counting every session that mentions the word suicide, including chats about that band.
Assume I read the article and then made a post.
And does ChatGPT make the situation better or worse?
The anti-AI hivemind here will hate me for saying it but I'm willing to bet $100 that this saves a significant number of lives. It's also indicative of how insufficient traditional mental health institutions are.
I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.
0.15% sounds small, but that's over a million people a week. If that many people died by suicide monthly, it would wipe out 1% of the US population, about 3.3 million people, in under three months.
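Back-of-the-envelope, assuming ~800 million weekly active users (the ballpark that makes 0.15% come out to "over a million"; both inputs are rough assumptions, not exact figures):

```python
# Rough scale check with ballpark numbers.
weekly_active_users = 800_000_000
us_population = 330_000_000

flagged = weekly_active_users * 0.0015        # ~1.2 million people
months_to_one_percent = (us_population * 0.01) / flagged

print(f"{flagged:,.0f} people flagged per week")
print(f"At that rate monthly, 1% of the US (~3.3M) gone in {months_to_one_percent:.1f} months")
```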
in the US
It’s not just the US, it’s like that in most of the world.
At least in the rest of the world, trying to get mental healthcare doesn't leave you with crippling debt that stresses you out to the point of suicide.
I've talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I've low-key wanted to kill myself since I was 8 years old. For me it's just a part of life. For others it's usually REALLY uncomfortable for them to talk about without wanting to tell me how wrong I am for thinking that way.
Yeah I don't trust it, but at the same time, for me it's better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.
Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, "Yeah, but only in the fun ways", so he checked no. His wife got concerned that he joked about that and asked him what that meant. "Don't worry about it - it's not a problem."
Yeah, I learned the hard way that it's easier to lie on those forms when you're already in therapy. I've had GPs try to play psychologist rather than treat the reason I came in. The last time it happened, I accused the doctor of being a mechanic who just talked about the car and its history instead of changing the oil, which is what she was hired to do. I fired her in that conversation.
The headline has two interpretations and I don't like it.
- Every week, 1M+ users bring up suicide
  - likely correct
- 1M+ long-term users bring up suicide at least once every week
  - my first thought
My first thought was "Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT".
Sounds like we should shut them down to prevent a health crisis, then.
I mean… it’s been a rough few years
Honestly, it ain't AI's fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn't done to make them feel good about life.
Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there's a much bigger issue here.
Speaking from the USA, "suicidal ideation" is treated like terrorist ideology in this weird corporate-esque legal-speak with copy-pasted disclaimers and hollow slogans. It's so absurdly stupid I've just mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.
Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.
I absolutely wish that I could /s this.
It's corporatized because we are just corporate livestock. Can't pay taxes and buy from corpos if we're dead
They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.
So they're playing the strategy of showing they're still relevant.
Sam Altman is a horrible person. He loves to present himself as relatable "aw shucks let's all be pragmatic about AI" with his fake-ass vocal fry, but he's a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.
I'm so done with ChatGPT. This AI boom is so fucked.