this post was submitted on 03 Aug 2025
279 points (98.6% liked)

Technology

73602 readers
3073 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] truthfultemporarily@feddit.org 33 points 1 day ago (1 children)

It's not better than nothing - it's worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.

Try this, instead of asking "I am thinking xyz", ask " my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.

[–] bob_lemon@feddit.org 1 points 20 hours ago (1 children)

I'm fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.

I'm also very confident that there'll be lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there's a ton of regulation and oversight.

[–] truthfultemporarily@feddit.org 1 points 19 hours ago

I'm not sure LLMs can do this. The reason is context poisoning. There would need to be an overseer system of some kind.