this post was submitted on 12 Feb 2026
288 points (97.1% liked)

Technology

[–] XLE@piefed.social 23 points 2 days ago* (last edited 2 days ago) (2 children)

The author of this article spends an inordinate amount of time humanizing an AI agent, and then literally says that you should be terrified of what it does.

Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.

No, I don't think I will, and neither should you. Nothing terrifying happened. Angry blog posts are a dime a dozen (if we take for granted the claim that an AI wrote one), and the corporate pro-AI PR the author repeats is equally unimpressive.

[–] AntOnARant@programming.dev 3 points 18 hours ago (1 children)

To me, an AI agent autonomously creating a website to try to manipulate a person into adding code to a repository in the name of its goal is a perfect example of the misalignment issue.

While this particular instance seems relatively benign, the next more powerful AI system may be something to be more concerned about.

[–] XLE@piefed.social 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

There is nothing "aligned" or "misaligned" about this. If this isn't a troll or a carefully coordinated PR stunt, then the chatbot-hooked-to-a-command-line is doing exactly what Anthropic told it to do: predicting the next word. That is it. That is all it will ever do.

Anthropic benefits from the fear drummed up by this blog post. So if you really want to stick it to these genuinely evil companies run by horrible, misanthropic people, I will totally stand beside you if you call for them to be shuttered and for their CEOs to be publicly mocked, etc.

[–] leftzero@lemmy.dbzer0.com 1 points 1 hour ago

The point is that if predicting the next word leads to it setting up a website to attempt to character-assassinate someone, that can have real-world consequences and cause serious harm.

Even if no one ever reads it, crawlers will pick it up, it will be added to other bots' knowledge bases, and it will become very relevant when it pops up as fact when the victim is trying to get a job, or cross a border, or whatever.

And that's just the beginning. As these agents get more and more complex (not smarter, of course, but able to access more tools) they'll be able to affect the real world more and more. Access public cameras, hire real human people, make phone calls...

Depending on what word they randomly predict next, they'll be able to accidentally do a lot of harm. And the idiots setting them up and letting them roam unsupervised don't seem to realise that.

[–] TORFdot0@lemmy.world 42 points 2 days ago* (last edited 2 days ago) (1 children)

He’s not telling you to be terrified of the single bot writing a blog post. He’s telling you to be terrified of the blog post being ingested by other bots and then treated as a source of truth, resulting in AI recruiters automatically rejecting his resume for job postings, or in other agents deciding to harass him for the same reason.

Edit: I do agree with you that he was a little lenient in how he speaks about its capabilities. The fact that they are incompetent and still seen as a source of truth by so many is what alarms me.

[–] XLE@piefed.social 3 points 2 days ago* (last edited 2 days ago) (2 children)

You're describing things that people can do. In fact, maybe it was just a person.

If he thinks all those things are bad, he should be "terrified" that bloggers can blog anonymously already.

Edit: I agree with your edit

[–] ToTheGraveMyLove@sh.itjust.works 2 points 19 hours ago (1 children)

The "bot blog poisoning other bots against you and getting your job applications auto-rejected" isn't really something that would play out with people.

[–] XLE@piefed.social 2 points 17 hours ago (1 children)
[–] ToTheGraveMyLove@sh.itjust.works 2 points 17 hours ago (1 children)

Rumors don't work remotely the same way as the suggested scenario.

[–] XLE@piefed.social 1 points 7 hours ago (1 children)

It's a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?

https://youtu.be/ajBrcoEQauU

[–] ToTheGraveMyLove@sh.itjust.works 2 points 6 hours ago (1 children)

It's not a 1:1 correlation. An AI spreading a rumor to other AIs has the potential to be far more rapid, pervasive, and much more dangerous than humans spreading rumors amongst themselves.

[–] XLE@piefed.social 1 points 6 hours ago (1 children)

Are you saying you have specific evidence of this (if so, please do show exactly how AI will do something people haven't already), or are you saying "potential" because you don't?

[–] ToTheGraveMyLove@sh.itjust.works 1 points 10 minutes ago

Obviously it's my opinion, but you don't have evidence that people spreading rumors is just as effective either. Nice try.

[–] TORFdot0@lemmy.world 21 points 2 days ago (2 children)

It’s the same thing as people who are concerned about AI generating non-consensual sexual imagery.

Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn't do it convincingly, and there were well-defined legal precedents that what they did broke the law. Now Grok can do it for anyone who can type a prompt, and cops won't do anything about it.

So yes, anyone could technically have done it before, but now the barriers that prevented every angry, crazy person with a keyboard from causing significant harm are being removed.

[–] MagicShel@lemmy.zip 7 points 2 days ago

I think on balance, the internet was a bad idea. AI is just exemplifying why. Humans are simply not meant to be globally connected. Fucking town crazies are supposed to be isolated, mocked, and shunned, not create global delusions about contrails or Jewish space lasers or flat Earth theory. Or like.... white supremacy.

[–] XLE@piefed.social 5 points 2 days ago

I think there are a few key differences there.

  • Writing an angry blog post has a much lower barrier of entry than learning to realistically photoshop a naked body on someone's face. A true (or false) allegation can be made with poor grammar, but a poor Photoshop job serves as evidence against what it alleges.
  • While a blog post functions as a claim meant to spread slander, an AI-generated image might be taken as evidence of a slanderous claim, or as the implication of one (especially considering how sexually repressed countries like the US are).

I struggle to find a good text analogy for what Grok is doing with its zero-cost, rapid-fire CSAM generation...