this post was submitted on 29 Oct 2025
Technology

From the maybe-we-should-have-done-that-to-start dept:

The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

[–] GammaGames@beehaw.org 6 points 6 days ago* (last edited 6 days ago) (1 children)

Drop the wild speculation; there is zero reason to play devil’s advocate. If you cared to do any reading, there are myriad examples of this company’s LLMs pushing harmful behavior.

Yes, there are probably other factors. There always are. It might not be what you meant, but you are saying that the companies selling these products should get off scot-free because they “would’ve done it anyway.”

[–] thingsiplay@beehaw.org 3 points 6 days ago (1 children)

but you are saying that the companies selling these products should get off for free because they “would’ve done it anyway”

I am not saying that. Did you not read the last part of my reply?

[–] GammaGames@beehaw.org 2 points 5 days ago (1 children)

I did, it was full of speculation based on something you admitted you had no idea about

[–] thingsiplay@beehaw.org 4 points 5 days ago* (last edited 5 days ago) (1 children)

It is not just speculation; it is a warning not to simply believe alleged accusations. We have seen this many times with politicians too, who point to AI to hide the real problems. So I ask you: do you have proof that all of the accusations are true, that the kid died because of the AI and had no suicidal problems beforehand?

But yes, it’s easy to say “you have no clue” instead of coming up with facts. It’s easy that way to point the finger and believe what you want to believe. Plus, I said that if it’s true at all, then I am for regulation. You instead ignore all of my points and say “you have no clue.” I wonder if you have any clue what you are talking about.

Edit: And then you put words in my mouth that I did not say at all. Just delusional. Believe what you want then, and ignore the real problems. Not worth my time here.

[–] GammaGames@beehaw.org 1 points 5 days ago

Apologies for replying so strongly 👍