this post was submitted on 14 Feb 2026
Technology

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

(Since this is a personal blog I'll clarify I am not the author.)

[–] MagnificentSteiner@lemmy.zip 16 points 14 hours ago (2 children)

Surely that should be "A Person using an AI Agent Published a Hit Piece on Me"?

This smells like PR bait trying to legitimise AI.

[–] leftzero@lemmy.dbzer0.com 2 points 42 minutes ago

The point is that there was no one at the wheel. Someone set the agent up, set it loose to do whatever the stochastic parrot told it to do, and kind of forgot about it.

Sure, if you put a brick on your car's gas pedal and let it run down the street, and it runs someone over, it's obviously your responsibility. This is exactly the same case, but the idiots setting these agents up don't realise that it's the same case.

Some day one of these runaway unsupervised agents will manage to get on the dark web, hire a hitman, and get someone killed, because the LLM driving it will have pulled the words from some thriller in its training data, obviously without realising what they mean or what the consequences would be, because those aren't things an LLM is capable of. And the brainrotten idiot who set the agent up will be all like, wait, why are you blaming me, I didn't tell it to do that, and some jury will have to deal with that shit.

The point of the article is that we should deal with that shit, and prevent it from happening if possible, before it inevitably happens.

[–] Kirk@startrek.website 11 points 9 hours ago* (last edited 9 hours ago) (1 children)

It's not, but you bring up a very good point about responsibility. We need to be using language like that and not feeding into the hype.

I don't even like calling LLMs "AI" because it gives a false impression of their capabilities.

[–] MagnificentSteiner@lemmy.zip 6 points 9 hours ago* (last edited 8 hours ago) (1 children)

Yep, they're just very fancy database queries.

Whether someone programmed it and turned it on 5 minutes before it did something or 5 weeks before, someone is still responsible.

An inanimate object (server, GPU etc) cannot be responsible. Saying an AI agent did this is like saying someone was killed by a gun or run over by a car.

[–] leftzero@lemmy.dbzer0.com 0 points 36 minutes ago

Saying an AI agent did this is like saying someone was killed by a gun or run over by a car.

A car some idiot set running down the street without anyone at the wheel.

Of course the agent isn't responsible; that's the point. The idiot who let the agent loose on the internet unsupervised probably didn't realise it could do that (or worse: one of these days one of these things is going to get someone killed), or that they are responsible for its actions.

That's the point of the article, to call attention to the danger these unsupervised agents pose, so we can try to find a way to prevent them from causing harm.