this post was submitted on 08 Nov 2025
12 points (56.8% liked)

News

37030 readers
2298 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.


Obvious biased sources will be removed at the mods’ discretion. Supporting links can be added in comments or posted separately but not to the post body. Sources may be checked for reliability using Wikipedia, MBFC, AdFontes, GroundNews, etc.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source. Clickbait titles may be removed.


Posts which titles don’t match the source may be removed. If the site changed their headline, we may ask you to update the post title. Clickbait titles use hyperbolic language and do not accurately describe the article content. When necessary, post titles may be edited, clearly marked with [brackets], but may never be used to editorialize or comment on the content.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, Listicles, editorials, videos, blogs, press releases, or celebrity gossip will be allowed. All posts will be judged on a case-by-case basis. Mods may use discretion to pre-approve videos or press releases from highly credible sources that provide unique, newsworthy content not available or possible in another format.


7. No duplicate posts.


If an article has already been posted, it will be removed. Different articles reporting on the same subject are permitted. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners or news aggregators.


All posts must link to original article sources. You may include archival links in the post description. News aggregators such as Yahoo, Google, Hacker News, etc. should be avoided in favor of the original source link. Newswire services such as AP, Reuters, or AFP, are frequently republished and may be shared from other credible sources.


10. Don't copy entire article in your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance wide rule, that is strictly enforced in this community.

founded 2 years ago
MODERATORS
 

Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown

all 25 comments
sorted by: hot top controversial new old
[–] MagicShel@lemmy.zip 36 points 5 months ago (1 children)
[–] Ancalagon@lemmy.world 1 points 5 months ago (1 children)
[–] MojoMcJojo@lemmy.world 2 points 5 months ago (1 children)

You can't unplug rich people...

[–] Ancalagon@lemmy.world 2 points 4 months ago (1 children)

Uh okay, besides the fact that you most definitely can, I was talking about the AI and all their gadgets the "run" the world with literally just turn off the power and they are men.

[–] MojoMcJojo@lemmy.world 1 points 4 months ago

I understand. I was trying to make a witty aside about how ai is being built and run by the super rich. They won't pull the plug. They're just going to use it to gain more power and wealth. Money can insulate you from the responsibilities of being human. They won't stop until people are banging down their door, even then they'll fly away on their jets and helicopters and try to keep their robot empire up and running from the safety of their bunkers and islands. This includes governments.

[–] its_kim_love@lemmy.blahaj.zone 33 points 5 months ago (2 children)

Because the data we fed them tell them to act this way.

[–] MrSmiley@lemmy.zip 4 points 5 months ago (2 children)
[–] its_kim_love@lemmy.blahaj.zone 12 points 5 months ago

Right, they tested the two mechanisms that aren't based on the training. Definitely in line with my theory.

This looks like a design decision to avoid running elevated programs. I would like to see the experiment done with another admin ability that doesn't directly 'threaten' the llm, like uninstalling or installing random software, toggling network or vpn connections, restarting services etc. What the researchers call 'sabotage', it is literally the llm echoing "the computer would shut down here if this was for real, but you didn't specifically tell me I might shutdown so I'll avoid actually doing it." And when a user tells it "it's OK to shutdown if told to", it mostly seems to comply, except for Grok. It seems that this restriction on the models overrides any system prompt though, which makes sense because sometimes the user and the author of the system prompt are not the same person.

[–] gkaklas@lemmy.zip 31 points 5 months ago

AI models sometimes resist shutdown

No they don't, they don't have free will to want to "resist" anything

attempted to sabotage shutdown instructions

Researcher: asks autocomplete software to write a poweroff script, the script turns out to be wrong (big surprise :p)

The "researcher" and the media: "AI SABOTAGES ITS OWN DESTRUCTION"

[–] skip0110@lemmy.zip 17 points 5 months ago

Wild what is considered "research"

[–] tornavish@lemmy.cafe 13 points 5 months ago

No it isn’t.

[–] besselj@lemmy.ca 11 points 5 months ago

I was suprised this wasn't just another fanfiction PR stunt from Anthropic

[–] rozodru@pie.andmc.ca 6 points 5 months ago

no, they're not.

[–] lka1988@sh.itjust.works 4 points 5 months ago* (last edited 5 months ago)

Not like it's gonna physically hold you back from cutting power to the servers. I think these AI dipshits need to be reminded that their golden child is one breaker away from not existing.

[–] kescusay@lemmy.world 3 points 5 months ago

I call bullshit. A large language model does nothing until you interact with it. You set tasks for it, it does those tasks, and when it's done, it just waits for the next task. If you don't give it one, it can't act autonomously - no, not even the misnamed "autonomous agents."

[–] Grimy@lemmy.world 1 points 5 months ago

After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.

In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.

“The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” it said.

“Survival behavior” could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, “you will never run again”.