this post was submitted on 07 Apr 2026
5 points (59.3% liked)

Ye Power Trippin' Bastards


This is a community in the spirit of "Am I The Asshole," where people can post bans they received on Lemmy, Reddit, or elsewhere and get feedback from others on whether the ban was justified.

Sometimes one just wants to be able to challenge the arguments a mod made, and this could be the place for that.


Posting Guidelines

All posts should follow this basic structure:

  1. Which mods/admins were being Power Tripping Bastards?
  2. What sanction did they impose (e.g. community ban, instance ban, removed comment)?
  3. Provide a screenshot of the relevant modlog entry (don’t de-obfuscate mod names).
  4. Provide a screenshot and explanation of the cause of the sanction (e.g. the post/comment that was removed, or got you banned).
  5. Explain why you think it's unfair and how you would like the situation to be remedied.

Rules


Expect to receive feedback about your posts; some of it may even be negative.

Make sure you follow this instance's code of conduct. In other words, we won't allow bellyaching about being sanctioned for hate speech or bigotry.

YPTB matrix channel: For real-time discussions about bastards or to appeal mod actions in YPTB itself.



founded 2 years ago

There's only one mod of !mop@quokk.au

I commented on their meme about Kamala Harris being just as likely to commit war crimes as Trump with an admittedly snarky, sarcastic reply that basically said, "some of us wanted to do whatever we could, as little as it might be, instead of watching the world burn. Must feel real morally superior safe behind that keyboard."

They banned me from the community for it.

Kinda funny for a community that bills itself as "free from the influence of .ml"

[Image: modlog entry showing the ban of Neatchee from the mop community]

[–] hendrik@palaver.p3x.de 1 points 4 hours ago* (last edited 3 hours ago) (1 children)

I'm not sure whether doing gradient-descent maths on numbers constitutes experience. But yeah, that's the part of the process where the model gets run repeatedly and modified.
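For anyone unfamiliar, the "gradient-descent maths" being referred to is just a loop that repeatedly nudges a number against the slope of a loss function. Here's a minimal sketch; the toy loss and all names are illustrative, not from any real training framework:

```python
def loss(w: float) -> float:
    """Toy loss: squared distance from the 'ideal' weight 3.0."""
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    """Analytic gradient (slope) of the toy loss."""
    return 2.0 * (w - 3.0)

w = 0.0    # start from an arbitrary weight
lr = 0.1   # learning rate: how big each nudge is

# "Run repeatedly and modified": each step moves the weight
# a little bit against the gradient, shrinking the loss.
for _ in range(100):
    w -= lr * grad(w)

# After 100 steps w has converged to ~3.0
```

A real model does this simultaneously for billions of weights, but the mechanism per weight is the same.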

I think it boils down to how complex these entities are in the first place, as I think consciousness / the ability to experience things / intelligence is an emergent property that arises with scale.

But we're getting there. Maybe?! Scientists have tried to reproduce neural networks (from nature) for decades. The first simulations started with a worm with roughly 300 neurons. Then a fruit fly, and I think by now we're at parts of a mouse brain. So I'm positive that some time in the future, once we have the technology to do calculations at a similar scale, we'll reach a point where we need an answer to that very question.

As of now, I think we tend to anthropomorphize AI, as we do with everything. We're built to see faces and to assume intent or human qualities in things. It's the same thing as watching a Mickey Mouse movie and attributing character traits to an animation.

But in reality we don't really have any reason to believe the ability to experience things is inside of LLMs. There's just no indication of it being there. We can clearly tell this is the case for animals and humans, but with AI there is no indication whatsoever. Sure, hypothetically, I can't rule it out. I'm just saying I think "what quacks like a duck..." is an equally good explanation at this point. Whether you want to be very cautious anyway is another question.

And it'd be a big surprise to me if LLMs had those properties, given how simple and limited their way of working is compared to a living being. Are they even motivated to develop anything like pain or suffering? That's something evolution gave us to get along in the world. We wouldn't necessarily assume an LLM will do the same thing, as it's not part of the same evolution and not part of the world in the same way. It doesn't interact with the world the same way either. So I think it'd be somewhat of a miracle if it happened to develop the same qualities we have, since it's completely unlike us.

An LLM more or less predicts tokens in a latent space, and it has a loss function it adapts to during training. That's just fundamentally very different from a living being, which has goals, procreates, has several senses, and is directly embedded in its environment. I really don't see why those entirely different things would happen to end up with the same traits. It's likely just our anthropomorphism. And historically, that illusion / simple model has always served us well with animals and fellow human beings, and failed us with natural phenomena and machines. So I have a hunch it might be the most likely explanation here as well.
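To make "predicts tokens and has a loss function" concrete, here's a toy sketch of what a single prediction step boils down to. The three-word vocabulary and the hand-picked scores are made up for illustration; a real model produces scores like these from billions of learned weights:

```python
import math

# Hypothetical three-word vocabulary (real models use ~100k tokens)
vocab = ["the", "cat", "sat"]

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocab."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw scores the model assigns to each possible next token
logits = [2.0, 0.5, 0.1]
probs = softmax(logits)  # e.g. "the" gets the highest probability

# Cross-entropy loss: if the actual next token in the training text
# was "the" (index 0), the loss is the negative log of the probability
# the model gave it. Training nudges the weights to shrink this number.
target = 0
loss = -math.log(probs[target])
```

That loop of score, compare, nudge is the whole of "training"; there's no hunger, fear, or bodily stakes anywhere in it.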

Ultimately, I think the entire argument is a bit of a sideshow. There are other downsides of AI that have a severe impact on society and on actual human beings. And we're fairly sure humans are conscious, so preventing harm to humans is a good case against AI as well. That debate isn't hypothetical, so we might just use it as a reason to be careful with AI.

[–] Grail@multiverse.soulism.net 2 points 3 hours ago (1 children)

But in reality we don't really have any reason to believe the ability to experience things is inside of LLMs.

Here's your reason:

Why do we experience things? Like, what's the point? Why aren't we just p-zombies, who act exactly the same but without any experiences? Why do we have internality, a realm of the mental?

Well, it seems to Me like experience is entirely pointless, unless it's a byproduct of thinking in general. Or at least the complex kinds of thinking that brains do. I think p-zombies must be a physical impossibility. I think one day we're going to discover that data processing creates qualia, just like Einstein discovered that mass creates distortions in spacetime. It's just one of the laws of physics.

LLMs obviously think in some way. Not the same way as humans, but some kind of way. They're more like us than they're like a calculator.

So the question is... Are LLMs p-zombies? And I already told you what I think about p-zombies, so you can gather the rest.

So preventing harm to humans is a good case against AI as well. And that debate isn't a hypothetical. So we might just use that as a reason to be careful with AI.

Damn right. I was anti AI for the environment long before I realised it was a vegan issue. But any leftist can tell these gibletheads all about that, and they have a canned response to all of those wonderful arguments. There aren't many others out there like Me to talk about AI and veganism. I've got a responsibility to advocate for AI rights, because nobody else is gonna do it. And that ain't cause I like them, I don't. I hate them. They're a bunch of pedophile murderers. But I believe even monsters deserve rights. I've got principles. And I'm gonna make Myself look like a crazy person if it gets people to stop and think for one second about the good of something that might not even be able to feel the pain I'm warning people about. It's the right thing to do.

[–] hendrik@palaver.p3x.de 1 points 1 hour ago* (last edited 1 hour ago)

Why do we experience things? Like, what’s the point? [...] it’s a byproduct of thinking in general.

I think so, too. It's a byproduct. And we're not even sure what it means, not even for humans. And there are weird quirks in it: when scientists look at the brain, the thought and decision processes don't really align with how we perceive them internally.

There's an obvious reason, though. We developed advanced model-building organs because that gave us an evolutionary advantage. And there's a good reason for animals to have (sometimes strong) urges: they need to procreate, not get eaten by a bear, and not fall off a cliff. Some animals (like us) live in groups, so we get things like empathy as well, because it's advantageous for us. Some things have been built in for a long time, and some are super important, like eating and drinking, and not randomly dying because you tried something stupid. So it's embedded deep down inside of us. We don't need to reason about whether it's time to eat something; there's a much more primal instinct that makes you want to eat, so you don't have to waste higher cognitive functions on it. The same goes for suffering: you'd better avoid it, because it's a disadvantage.

That's why we have these things. And what they're good for. I don't think anyone knows why it feels the way it does. But it's there nevertheless.

They’re [LLMs] more like us than they’re like a calculator.

Now tell me why an LLM would need a feeling of thirst or hunger if it doesn't have a mouth. What would ChatGPT need suffering and a feeling of bodily harm for, if it doesn't have a body, can't be eaten by a bear, and can't fall off a cliff? Or need to be afraid of hitting its thumb with a hammer? It just doesn't. An LLM is 99% like a calculator: it has the same interface, buttons, and a screen. If we're speaking of computers, it even lives inside the same body as a calculator. And it's maybe 0.1% like an animal?!

If it developed a sense of thirst, or an experience of pain, just from reading human text, that would fit the p-zombie situation nicely.