this post was submitted on 11 Feb 2025
110 points (97.4% liked)
[Migrated, see pinned post] Casual Conversation
To preface, I don't know a whole lot about AI bots. But we already see posts about the limits of what AI can do or will allow, like bots refusing to repeat a given phrase. But what about actual critical thinking? If most bots are trained on human behavior, and most people don't run on logical arguments, doesn't that create a gap?
Not that it would be impossible to program such a bot (again, my knowledge here is limited), but applying critical thought to arguments doesn't seem to be the aim of current LLMs. They can repeat what others have said, or remix words into something similar to what others have said, but are any bots actively questioning anything?
If there are bots that question societal narratives, they risk being unpopular with both the ruling class and the masses that interact with them. As long as those who design and promote AI aim for popular traction, their bots will probably act like most humans do and "not rock the boat."
If the AI we interact with were instead to push critical thinking, without the biases that keep people from applying it perfectly, that would be awesome. I'd love to see logic bots that take the side of reason in arguments - it's something a bot could do all day, but a human can only do for so long.
Which is why, when I see a comment that argues a cogent point against a popular narrative, I'm more likely to believe the author is human. For now.