This post was submitted on 27 Mar 2026
347 points (96.5% liked)

[–] FosterMolasses@leminal.space 14 points 3 days ago (1 children)

This explains a lot, honestly.

Everyone keeps telling me how "addictive" and "convincing" and "personal feeling" ChatGPT is.

Meanwhile, I'm over here like

"Can you stop saying skrrrt after every sentence while I'm trying to research a serious topic, it's annoying"

"Understood, skrrrt 💥🌴🚗💨"

[–] SnotFlickerman@lemmy.blahaj.zone 210 points 4 days ago (41 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like FUD.

[–] XLE@piefed.social 57 points 4 days ago (1 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] sukhmel@programming.dev 19 points 4 days ago (1 children)

If that's only 19 users, that's around 250 conversations per user 🤔

[–] SnotFlickerman@lemmy.blahaj.zone 9 points 4 days ago (1 children)

...and about 82 messages per conversation. Also, roughly half of the messages are from the user to the AI and the other half are from the AI to the user, meaning around 41 messages from the user per conversation.
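
A quick back-of-the-envelope check of those figures (assuming the study's totals of 391,562 messages and 4,761 conversations, and assuming strict user/AI turn-taking, which the article doesn't actually confirm):

```python
# Rough per-user arithmetic from the study's reported totals.
messages = 391_562
conversations = 4_761
users = 19

print(conversations / users)         # ~250.6 conversations per user
print(messages / conversations)      # ~82.2 messages per conversation
print(messages / conversations / 2)  # ~41.1 user messages, if turns alternate
```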

[–] InternetCitizen2@lemmy.world 26 points 4 days ago (1 children)

I remember my old stats book saying that a minimum of 30 data points is needed to assume a normal distribution. Also, these small sets are typically about proof of concept, so yeah, you've still got a point.

[–] Buddahriffic@lemmy.world 2 points 2 days ago

It's about 300 samples for an estimate of the distribution with 95% confidence, IIRC. That assumes the samples are representative (unbiased). And 95% confidence doesn't mean the estimate is within 95% of reality; it means that 5% of tests run in such a way would be expected to be inaccurate. There's no way of knowing for sure which case this particular sample is, because even a meta-study will have such an error rate. You can increase the confidence with more samples or studies, just never to 100% unless you study every possible sample, including future ones.
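
If you want to see where numbers like that come from, here's a minimal sketch of the standard sample-size formula for estimating a proportion; the 95% z-score (1.96) and worst-case p = 0.5 are conventional defaults, not anything taken from this study:

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Samples needed to estimate a proportion to within +/- margin_of_error,
    at the confidence level implied by z (1.96 ~ 95%), assuming an unbiased
    sample and worst-case variance p = 0.5."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))    # 385 samples for a 5% margin of error
print(sample_size(0.0566))  # ~300 samples, roughly the rule of thumb above
```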

[–] A_norny_mousse@piefed.zip 12 points 4 days ago

Thanks, you saved me a click 😐

[–] ExLisper@lemmy.curiana.net 41 points 4 days ago (10 children)

I think what we're seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can't digest it and get sick. The problem is there's no way to determine who can handle AI and who can't.

When I read about people developing AI delusions, their experiences sound completely alien to me. I've played with LLMs the same as anyone, and I never treated them as anything other than a tool that generates responses to my prompts. I never thought, "wow, this thing feels so real." Some people clearly have a predisposition to jump over the "it's a tool" reaction straight to "it's a conscious thing I can connect with." I think the next step should be developing a test that can predict how someone will react to it.

[–] wonderingwanderer@sopuli.xyz 16 points 3 days ago (4 children)

I suspect that the difference is to no small degree correlated with a person's isolation/social-integration.

People who aren't socially integrated have always been more vulnerable to predatory cults and scams. It's because human interaction is a psychological need that's been hardcoded into us by evolution.

Some people say "I don't need human interaction, I enjoy my time alone!" But that's because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It's well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person's well-being.

When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That's why solitary confinement is considered a method of torture in some circumstances, when it's used to depersonalize and destroy a person's sense of self-identity.

So that's why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as "just a tool."

But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and maybe their warmest interaction all week was with the clerk at the grocery store, then yeah I'd say it's predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.

It might be superficial, but it's a way of patching a hole. It's an expedient means to fulfill a need that they're not getting from anywhere else.

If we don't want this sort of stuff happening to people, then maybe we shouldn't ostracize them for being "weird" in the first place. Because nobody learns how to be "normal" by being alone all the time.

[–] ChunkMcHorkle@lemmy.world 6 points 3 days ago (1 children)

This is really good. Thank you for taking the time to write it.

[–] wonderingwanderer@sopuli.xyz 9 points 3 days ago (4 children)

Thank you for understanding. So many times when I discuss things that are adjacent to this topic, I get flamed in the comments with people accusing me of being some sort of redpiller from the manosphere.

Like, no, social isolation is a problem, and it's getting worse due to a variety of factors. There are social media algorithms designed to keep people dependent on their phones; there are the long-standing consequences of the pandemic and the collective trauma it caused, on top of the social skills that atrophied during quarantine; there's widespread political polarization, which keeps tensions high and makes it difficult to navigate new situations if you can't prove you know the right social scripts and avoid any faux pas; and there's the whole toxic influencer culture grifting on inflammatory rhetoric and ragebait content, exploiting people's vulnerabilities and radicalizing them (which is a vicious cycle, because they prey on people who are already isolated!). And that's just to name a few!

But if I summarize all that as a "loneliness epidemic," then people call me an incel and act like I'm trying to coerce women into having sex with me simply by acknowledging the fact that social interaction is a deeply-set human psychological need.

Like, using "incel" as an insult is part of the problem. It feeds into this culture where "if you're a man, you must get laid, or else you're worthless." That's literally promoting toxic masculinity!

And it forces these people who are already isolated and vulnerable to go identify with these groups of similarly ostracized people in echo chambers where they're insulated from those insults, where those predatory "influencers" then have fresh pickings of new losers to neg and radicalize.

But somehow, if I point out the problem here (because how can we solve a problem if we can't talk about it?), then in most people's view that makes me part of the problem! Even though, really, why would I be calling out the pattern if it was something I identified with?

The people radicalizing these vulnerable "losers," yes they should be torched. But the vulnerable "losers" being radicalized need to be treated with compassion if they're ever going to be redeemed. It should be pretty easy to identify who's who, seeing as they have an entire social structure based on hierarchies of dominance and submission...

[–] FosterMolasses@leminal.space 2 points 3 days ago

Some people say “I don’t need human interaction, I enjoy my time alone!” But that’s because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone.

Oh shit, I was about to contest this logic but no you're absolutely right.

I'm definitely one of those people, but I also have never gone seeking validation online and missed the bus for a lot of those social media trends (outside of the ones that friends at the time ended up bullying me into using, which I would briefly check out then quickly bail on).

Maybe it is more an issue of identity and self-image, as I've never once felt emotionally "connected" to ChatGPT or any of these clunky LLMs people seem to swear by. I do admit that sometimes talking out a problem instead of marinating on it on your own can be useful... but I view it more as an extension of journaling than anything else. There's always a clear line for me where it's like "Okay, I got what I needed out of this interaction, and now it's clearly suggesting additional prompts I don't need, to try to keep me engaging with it"...

It's crazy to me that other people don't seem to register that line at all lol, it seems so clearly artificial to me.

[–] thedeadwalking4242@lemmy.world 15 points 4 days ago (5 children)

I bet it's correlated with low education, like most things

[–] Tiresia@slrpnk.net 12 points 4 days ago (6 children)

Cults and toxic self-help literature existed long before LLMs copied them. I don't know whether LLMs are getting people who couldn't have been gotten by human scammers.

Scams have many different vectors and people can be vulnerable to them depending on their mood or position in life. Testing people on LLM intolerance would be more like testing them on their susceptibility to viruses.

People can be immunocompromised for various reasons, temporarily or permanently, so as a society public hygiene standards (and the material conditions to produce them) are a lot more valuable. Wash your hands after interacting, keep public spaces clean, that sort of stuff.

[–] baaaaaah@hilariouschaos.com 7 points 4 days ago (3 children)

Surprisingly, the people who have these issues with it aren't the ones who connect to it emotionally; it's the people who offload their decision-making to AI.

It's more like a codependence spiral than anything else

[–] FosterMolasses@leminal.space 2 points 3 days ago

Ah, that's a fascinating possible factor. That's also basically the one thing I never use LLMs for lol

If you're not even making your own decisions, then what's the point? Isn't life deterministic enough as it is lmao

[–] FosterMolasses@leminal.space 2 points 3 days ago

I think what we’re seeing is similar to lactose intolerance

For real. Good to know I can handle both my cheese and shitty LLM bots without bodily consequences lmao

[–] amgine@lemmy.world 46 points 4 days ago (6 children)

I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.

[–] Tollana1234567@lemmy.today 14 points 4 days ago

It's like the AI BF/GFs the subs are posting about.

[–] givesomefucks@lemmy.world 39 points 4 days ago (7 children)

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

There's a certain irony in all the alt-right techbros really just wanting to be told they were "stunning and brave" this whole time.

[–] HeyThisIsntTheYMCA@lemmy.world 8 points 3 days ago* (last edited 3 days ago) (1 children)

okay how many of these "delusional" people in the study are making fun of the LLM tho

i don't know because I don't use the LLM i only see the screenshots. I am the control group. kinda. my nut is already off.

[–] MangoCats@feddit.it 4 points 3 days ago (2 children)

What I read in the first lines of the article is: "they go down the rabbit hole, just like social media echo chambers..." which are filled with bots and trolls, and have been for years - and that's the dataset that a lot of chatbots are trained on.

[–] Hackworth@piefed.ca 15 points 4 days ago* (last edited 4 days ago) (1 children)

Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn't been implemented in any models, I assume because of the cost of scaling it up.
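
For context, "activation capping" is in the family of activation-steering techniques: you find a direction in the model's hidden space associated with a trait and clamp how far activations can move along it. A toy sketch of the idea (this is my illustration, not Anthropic's implementation; the trait direction and cap value are hypothetical):

```python
import numpy as np

def cap_activation(h: np.ndarray, trait_dir: np.ndarray, cap: float) -> np.ndarray:
    """Clamp the component of hidden state h along a 'trait' direction so it
    never exceeds cap; everything orthogonal to that direction is untouched."""
    d = trait_dir / np.linalg.norm(trait_dir)  # normalize to a unit vector
    overshoot = max(0.0, float(h @ d) - cap)   # how far past the cap we are
    return h - overshoot * d                   # subtract only the excess

# Toy example with a hypothetical 8-dim hidden state and random trait direction.
rng = np.random.default_rng(0)
print(cap_activation(rng.normal(size=8), rng.normal(size=8), cap=0.5))
```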

[–] porcoesphino@mander.xyz 14 points 4 days ago* (last edited 4 days ago) (6 children)

When you talk to a large language model, you can think of yourself as talking to a character

But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don't fully know

Fuck me that's some terrifying anthropomorphising for a stochastic parrot

The study could also be summarised as "we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models to, and would you believe they align along a spectrum of being useful assistants!?" They built the thing to be that way and then act shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

To be fair, I'm only about a third of the way through and struggling to keep reading, so I haven't gotten to the interesting research yet, but the intro is, I think, terrible.
