I think this runs into the problem of consciousness as a philosophical question.
Humans have a habit of anthropomorphizing animals and even objects that they interact with routinely. My sister has named every car she's owned, for instance. And she occasionally talks to them, particularly when they're acting up or she's stressed in traffic. Does the car have "consciousness" because she sees a pattern of function that she interprets as human behaviors?
At the same time, we tend to dehumanize real human beings who are outside our immediate social circle. We can be much ruder to phone support or sales callers than to family members, close friends, or known coworkers engaging with us on the same terms. Racism and bigotry often lead people to strip those of different ethnicities, or those speaking foreign languages, of their humanity.
The Turing Test would suggest the proof of consciousness is merely whether another human believes the thing they're interacting with is conscious. But humans are terrible at making this kind of objective evaluation. If you program a computer to respond like a human, humans will sincerely engage with the computer as a human until the computer exhibits enough non-human flaws to dispel the illusion. At the same time, because humans regularly don't treat other humans as human, what this means in practice could be outright sadistic behavior towards the computer.
All that is to say, it's pretty clear that we're gaslighting each other with the "AI is conscious" line, from a strict technical perspective. But from a practical perspective, it's really going to boil down to whether the interactions we have with an AI system are the kind that form sympathetic bonds or the kind that provoke an uncanny valley or ethno-nationalist response.
As to your second paragraph, I have the explanation right here. It's a really old link and the formatting is all jacked up, but bear with it. This is the most important thing I've ever read, and it explains much of human behavior:
https://www.cracked.com/article_14990_what-monkeysphere.html
Good old Dunbar's number.
Incidentally, Jason Pargin has a pen name, David Wong, under which he writes the horror series "John Dies at the End". He incorporates a lot of these themes in his books. "This Book Is Full of Spiders", in particular, just bludgeons you over the head with the idea of in-groups and out-groups being used to manipulate society.