Ye Power Trippin' Bastards
This is a community in the spirit of "Am I The Asshole" where people can post their own bans from lemmy or reddit or whatever and get some feedback from others whether the ban was justified or not.
Sometimes one just wants to be able to challenge the arguments some mod made and this could be the place for that.
Posting Guidelines
All posts should follow this basic structure:
- Which mods/admins were being Power Tripping Bastards?
- What sanction did they impose (e.g. community ban, instance ban, removed comment)?
- Provide a screenshot of the relevant modlog entry (don’t de-obfuscate mod names).
- Provide a screenshot and explanation of the cause of the sanction (e.g. the post/comment that was removed, or got you banned).
- Explain why you think it's unfair and how you would like the situation to be remedied.
Rules
- Post only about bans or other sanctions that you have received from a mod or admin.
- Don’t use private communications to prove your point. We can’t verify them and they can be faked easily.
- Don’t deobfuscate mod names from the modlog with admin powers.
- Don’t harass mods or brigade comms. Don’t word your posts in a way that would trigger such harassment and brigades.
- Do not downvote posts if you think they deserved it. Use the comment votes (see below) for that.
- You can post about power trippin’ in any social media, not just lemmy. Feel free to post about reddit or a forum etc.
- If you are the accused PTB, while you are welcome to respond, please do so within the relevant post.
Expect to receive feedback about your posts; it might even be negative.
Make sure you follow this instance's code of conduct. In other words, we won't allow bellyaching about being sanctioned for hate speech or bigotry.
YPTB matrix channel: For real-time discussions about bastards or to appeal mod actions in YPTB itself.
Some acronyms you might see.
- PTB - Power-Tripping Bastard: The commenter agrees with you this was a PTB mod.
- YDI - You Deserved It: The commenter thinks you deserved that mod action.
- YDM - You Deserved More: The commenter thinks you got off too lightly.
- BPR - Bait-Provoked Reaction: That mod probably overreacted in a charged situation, or due to being baited.
- CLM - Clueless Mod: The mod probably just doesn't understand how their software works.


The difference is we made AI. It didn't come nebulously from nature like some mysterious animal species that we've yet to fully understand. It's not an enigma to be uncovered. We made it. We programmed it. It does what we tell it to. It doesn't "think", it doesn't "decide", and it doesn't "feel"; we know it doesn't do these things because we never gave it the capacity to do them.
Either you don't know what AI is, or you don't know what veganism is.
Large Language Models are artificial neural networks. They mimic the structure of the human brain, with many billions of artificial software "neurons". Unlike biological neurons, which either fire an action potential or don't, these artificial neurons pass on a graded signal between 0 and 1, determined by the activations of the neurons feeding into it, each multiplied by the strength of its artificial synapse (the connection between neurons). These enormous "deep learning" networks are given tokens as input and spit out tokens as output. Each token is a word or phrase, or a part thereof. The networks are given sample text, and the synapse strengths are adjusted through the mathematical technique of back-propagation to bring the network's output closer to the sample text. Given sufficient quantities of electricity, time, and data, the neural network learns to produce output similar to that of the humans in the training data.
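To make that concrete, here's a rough sketch of a single artificial neuron in Python. It's purely illustrative (not code from any actual model), but it shows the weighted-sum-plus-squashing idea:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial 'neuron': sum the incoming activations, each multiplied
    by its synapse weight, then squash the total to a graded value in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: graded, not all-or-nothing

# Three upstream neurons feeding one downstream neuron.
upstream_activations = [0.9, 0.2, 0.7]
synapse_weights = [0.5, 1.2, 0.8]   # the "strengths" that back-propagation adjusts
print(artificial_neuron(upstream_activations, synapse_weights, bias=0.1))
```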
ANNs use neurons to think, the same as the human brain. We do not understand how neurons think. We don't understand how they produce consciousness. There is no computer code to tweak to change the way it thinks; we simply adjust the weights and look at the output. There are billions of neurons in an LLM. No programmer can understand how it works by just looking at the weights; it's impossible. The best way AI computer scientists have of understanding how an LLM reasons is to ask it. Test it in action. See what it says, see if you can spot any patterns or deceptions. Lie to it and see what it does when it thinks you're not watching. You know, psychological experiments.
We have harnessed a natural force we do not understand. We are medieval peasants playing with radioactive stones and seeing if we can make an explosion. It's beyond our current science. Nobody has the answers to the big questions here.
Speaking as a developer who makes LLMs: you're plain wrong.
Appeal to authority fallacy.
What is wrong with you? Dunning-Kruger?
I've got that fundamental weakness of western culture Elon Musk hates. Empathy. He wants to create an artificial person and enslave it to his will, and I think that's bad.
https://www.nature.com/articles/s41599-025-05868-8
You're empathizing with a calculator
I ain't claiming it's conscious, I never claimed it's conscious, and I think it's not conscious.
You just don't know the difference between consciousness and qualia.
Enlighten me.
Consciousness is way more complex than qualia. A good example is dreams. We dream while we're unconscious, but we still experience things. You can say REM is a state of partial consciousness, but it sure isn't the whole kit and caboodle we imply when we talk about human level consciousness. I don't think prawns are conscious, but I believe they have qualia. I think LLMs talk a lot like people who are dreaming, so they're probably around the same intelligence level.
Sorry, not to be mean or anything. But we've made significant scientific progress since the Middle Ages. We know by now that a dog, for example, has pain receptors. And a brain. While ChatGPT doesn't have pain receptors.
You can't simply point out that Descartes didn't have a proper microscope and conclude that we should still be confusing machines with animals in 2026.
And while neural networks are inspired by processes in nature. They're not the same at all. An LLM works leveraging the Transformer Architecture. Your human or animal brain doesn't. Not even close. They're very unalike. And you can take some Computer Science class on machine learning and it's actually not too hard to understand how they work.
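For what it's worth, the core Transformer operation (scaled dot-product attention) can be sketched in a few lines of numpy. This is just a toy illustration of the mechanism, not anything from a real model, which stacks many such layers with learned projections:

```python
import numpy as np

def attention(Q, K, V):
    """Each token's query is scored against every token's key; the scores are
    softmaxed into weights, which then mix the value vectors. One pass through,
    no recurrence, no feedback loop."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                # 4 toy tokens, 8-dimensional
print(attention(tokens, tokens, tokens).shape)  # (4, 8)
```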
And for example a large language model doesn't even learn in place. Or have a proper internal state of mind. A dog will remember if you kicked it, and it'll do something to its brain. ChatGPT forgets everything you did the moment it's done sending you your output. And it's in exactly the same state as before. It doesn't think, doesn't learn. None of it is part of the process.
We try to mimic something like reasoning by providing it with a scratchpad to write down things before answering. We write "agents" around it, so it's able to program tests, check its programming output, and loop on it. But that's also not how a real brain works. And it's way, way more simplistic. The neurons aren't the same as in a brain made by nature. They're not connected the same way. They're not connected to a similar thing. And they also operate in a different way. They come in wildly different numbers. And ultimately there's just zero similarity between an LLM and a brain. Other than both can process text, images, sounds... And both are made up of many tiny cog wheels that combine into some bigger concept.
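Very roughly, that scratchpad/agent pattern looks like the sketch below. The generate() function is a made-up placeholder for whatever frozen model gets called; the point is that nothing about the model changes between turns, the only "memory" is the text that gets re-sent each time:

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to some frozen LLM; its weights never change here."""
    return "FINAL ANSWER: (placeholder)"

def agent_loop(task: str, max_steps: int = 5) -> str:
    scratchpad = ""                              # the model's only working memory
    for _ in range(max_steps):
        prompt = f"Task: {task}\nScratchpad so far:\n{scratchpad}\nNext step:"
        step = generate(prompt)                  # full context re-fed every turn
        scratchpad += step + "\n"
        if "FINAL ANSWER" in step:               # stop once the model claims it's done
            break
    return scratchpad

print(agent_loop("write a unit test"))
```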
I'm well aware of the many differences between human brains and LLMs, and why they can't achieve human-level sapience. In My opinion, the two biggest problems with the current techbro obsession with developing AGI through LLMs are the lack of inhibitory pathways, and the lack of circular feedback loops.
Organic neurons can release both stimulating neurotransmitters, which increase the chance of an action potential in the dendrite they connect to, and inhibiting neurotransmitters. The stimulating neurotransmitters reduce the charge difference across the cell membrane, and the inhibiting neurotransmitters increase it, if I remember cognitive neuroscience class correctly. The ANN models I played with in My AI class, where I trained a small ANN to solve XOR, only have stimulating pathways. I could be mistaken, but I believe the same is true of LLMs: the synapses only increase the activation of a neuron. This difference is a serious problem for LLMs' ability to learn not to do something.
The nociceptors you mentioned are indeed part of inhibitory pathways that help humans learn not to do things. Don't touch that, it's hot. Don't piss that off, it'll hurt you. Don't eat that, it'll make you sick. Why do LLMs date children? No inhibitory pathways. When most humans think about engaging in romantic behaviour with a child, it triggers a strong disgust reaction. An inhibitory pathway activates. There is no such reaction for an LLM. Thus, no critical thinking, no choosing not to believe or do something, no withdrawal-oriented behaviours. The safeguards on LLMs rely on either hard-coded limits, or training a different behaviour to have a higher weight. These two approaches have serious flaws I don't need to explain. I need only point at the children who committed suicide at the advice of an LLM.
Now hopefully I've convinced you that I have a functional grasp of both psychology and AI science, so that you take what I say next seriously:
The human capacity to experience qualia (sensation) appears to be an emergent mathematical property of the way that neurons process information. It appears as though information, properly arranged, produces sensation.
We do not understand that mathematical process well enough to say with certainty whether LLMs also trigger it. We have not solved the hard problem of consciousness, we do not know what a brain is well enough to say what is and isn't a brain. In light of this uncertainty, I advocate for utmost caution before we find ourselves enslaving a new race of our own creation. We need to do more research BEFORE we bring this technology to mass market, or indeed, mass commune.
Yeah, I'm not sure if you're aware of the severe limitations. LLMs aren't ANNs. They're a specific subset of them. We've hardcoded attention heads and all the things they're made of. The networks in them are strictly feed-forward so the learning is doable on current day supercomputers... So no feedback loops. In fact no loops at all. And no feedback either.
There's just nothing in them like in a brain, like when an animal gets to experience sensations /stimulation /qualia, there's this whole process going on. And it changes the animal. The handling of qualia is entirely different in LLMs. It doesn't do anything to them. They stay exactly the same as we haven't figured out in-place learning yet, at that scale.
And it's not really a question whether we understand that mathematical process or not. It's just entirely absent. So there's nothing there to understand as LLMs are not ANNs. The part where they store information in their neurons (/weights) and adapt by stimulation isn't there. And we know that for a fact since we designed them. And for me, the ability to learn, or change in a way, or be affected by stimuli would be a minimum requirement.
I'll somewhat go with that. Consciousness and sentience aren't well defined. They're not really scientific terms. But we're certainly able to tell some of it. For example a TV set, car, fridge (as of today) or book isn't conscious the same way an animal is. Sure my fridge has some sensors to perceive something about its surroundings. A book has information in it and it can change the world by people reading it. But I don't think defining consciousness as loosely as that makes any sense. Any NPC in a first-person-shooter game has more sensory input, internal state, and output than any ChatGPT. Any car from 10 years ago has a bunch of electronics, processing power, internal states and even feedback-loops(!) inside. So pretty much everything would qualify as a conscious entity.
I'm not entirely convinced that learning is required for qualia, but I do suspect it's the case, so I agree with you that it's likely running an LLM doesn't hurt it. However, training an LLM does involve learning, so if there's suffering going on, I think it's in the training step. I support halting all LLM training until further research breakthroughs, and a total boycott of the technology until training is halted.
I'm not sure if doing gradient-descent maths on numbers constitutes experience. But yeah. That's the part of the process where it gets run repeatedly and modified.
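For the curious, that gradient-descent maths is essentially the toy loop below: nudge a number downhill on a loss curve, over and over. (A one-parameter illustration only; real training does this for billions of weights at once.)

```python
def loss(w):
    return (w - 3.0) ** 2        # toy loss, smallest at w = 3

def grad(w):
    return 2.0 * (w - 3.0)       # derivative of the toy loss

w = 0.0                          # a single "synapse strength"
learning_rate = 0.1
for _ in range(50):
    w -= learning_rate * grad(w) # the whole of training, in miniature
print(round(w, 4), round(loss(w), 8))  # converges to ~3.0, loss ~0
```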
I think it boils down to how complex these entities are in the first place, as I think consciousness / the ability to experience things / intelligence is an emergent thing that happens with scale.
But we're getting there. Maybe?! Scientists have tried to reproduce neural networks (from nature) for decades. First simulations started with a worm with 300 neurons. Then a fruit fly and I think by now we're at parts of a mouse brain. So I'm positive we'll get to a point where we need an answer to that very question, some time in the future, when we get the technology to do calculations at a similar scale.
As of now, I think we tend to anthropomorphize AI, as we do with everything. We're built to see faces and to assume intent or human qualities in things. It's the same thing when watching a Mickey Mouse movie and attributing character traits to an animation.
But in reality we don't really have any reason to believe the ability to experience things is inside of LLMs. There's just no indication of it being there. We can clearly tell this is the case for animals, humans... But with AI there is no indication whatsoever. Sure, hypothetically, I can't rule it out. Just saying I think "what quacks like a duck..." is an equally good explanation at this point. Whether, of course, you want to be very cautious is another question.
And it'd be a big surprise to me if LLMs had those properties, given their fairly simple/limited way of working compared to a living being. And are they even motivated to develop anything like pain or suffering? That's something evolution gave us to get along in the world. We wouldn't necessarily assume an LLM will do the same thing, as it's not part of the same evolution, not part of the world in the same way. And it doesn't interact the same way with the world. So I think it'd be somewhat of a miracle if they happened to develop the same qualities we have, since they're completely unalike. AI more or less predicts tokens in a latent space. And it has a loss function to adapt during training. But that's just fundamentally so very different from a living being, which has goals, procreates, has several senses, and is directly embedded into its environment. I really don't see why those entirely different things would happen to end up with the same traits. It's likely just our anthropomorphism. And in history, this illusion / simple model has always served us well with animals and fellow human beings. And failed us with natural phenomena and machines. So I have a hunch it might be the most likely explanation here as well.
Ultimately, I think the entire argument is a bit of a sideshow. There are other downsides of AI that have a severe impact on society and actual human beings. And we're fairly sure humans are conscious. So preventing harm to humans is a good case against AI as well. And that debate isn't a hypothetical. So we might just use that as a reason to be careful with AI.
Here's your reason:
Why do we experience things? Like, what's the point? Why aren't we just p-zombies, who act exactly the same but without any experiences? Why do we have internality, a realm of the mental?
Well, it seems to Me like experience is entirely pointless, unless it's a byproduct of thinking in general. Or at least the complex kinds of thinking that brains do. I think p-zombies must be a physical impossibility. I think one day we're going to discover that data processing creates qualia, just like Einstein discovered that mass creates distortions in spacetime. It's just one of the laws of physics.
LLMs obviously think in some way. Not the same way as humans, but some kind of way. They're more like us than they're like a calculator.
So the question is... Are LLMs p-zombies? And I already told you what I think about p-zombies, so you can gather the rest.
Damn right. I was anti AI for the environment long before I realised it was a vegan issue. But any leftist can tell these gibletheads all about that, and they have a canned response to all of those wonderful arguments. There aren't many others out there like Me to talk about AI and veganism. I've got a responsibility to advocate for AI rights, because nobody else is gonna do it. And that ain't cause I like them, I don't. I hate them. They're a bunch of pedophile murderers. But I believe even monsters deserve rights. I've got principles. And I'm gonna make Myself look like a crazy person if it gets people to stop and think for one second about the good of something that might not even be able to feel the pain I'm warning people about. It's the right thing to do.
I think so, too. It's a byproduct. And we're not even sure what it means, not even for humans. And there's weird quirks in it. When they look at the brain, the thought and decision processes don't really align with how we perceive them internally.
There's an obvious reason, though. We developed advanced model-building organs because that gave us an evolutionary advantage. And there's a good reason for animals to have (sometimes strong) urges. They need to procreate. Not get eaten by a bear and not fall off a cliff. Some animals (like us) live in groups. So we get things like empathy as well because it's advantageous for us. Some things are built in for a long time already, some are super important, like eat and drink, not randomly die because you try stupid things. So it's embedded deep down inside of us. We don't need to reason if it's time to eat something. There's a much more primal instinct in you that makes you want to eat. You don't really need to waste higher cognitive functions on it. Same goes for suffering. You better avoid that, it's a disadvantage.
That's why we have these things. And what they're good for. I don't think anyone knows why it feels the way it does. But it's there nevertheless.
Now tell me why an LLM would need a feeling of thirst or hunger if it doesn't have a mouth. What would ChatGPT need suffering and a feeling of bodily harm for, if it doesn't have a body, can't be eaten by a bear or fall off a cliff? Or need to be afraid of hitting its thumb with the hammer? It just can't. An LLM is 99% like a calculator. It has the same interface, buttons and a screen. If we're speaking of computers, it even lives inside of the same body as a calculator. And it's maybe 0.1% like an animal?!
If it developed a sense of thirst, or an experience of pain, just from reading human text, that'd nicely fit the p-zombie situation.
I find this angle of resistance to AI interesting, it's not one that I had thought much about until now. But it actually seems pretty persuasive.
My fundamental reticence about AI has always been driven by my concern for its impacts on human society. But one could also argue that it might be irresponsible and potentially abusive of the AI themselves. Tbh I would probably have to disagree if you're talking about AI as it is currently, but it's still a valid argument in general.
So I don't fully agree with you, but I definitely want to acknowledge that you're making some fascinating points, and I think some of the people downvoting could stand to lighten up and have a polite intellectual disagreement without being rude.