this post was submitted on 31 Mar 2026
369 points (99.7% liked)

Technology

83295 readers
4345 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Encephalotrocity@feddit.online 261 points 1 day ago (6 children)

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

Laws should have been put in place years ago to make it so that AI usage needs to be explicitly declared.

[–] a4ng3l@lemmy.world 12 points 16 hours ago

In Europe we have the AI Act which, as of August, will introduce some form of transparency obligation. Not perfect, obviously, but a start. It probably won't be followed by the rest of the world, so like the GDPR it will be eroded by others' interests through lobbying, but at least we try.

[–] merc@sh.itjust.works 106 points 1 day ago (1 children)

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

This is so incredibly stupid.

You've tried security.

You've tried security through obscurity.

Now try security through giving instructions to an LLM via a system prompt to not blow its cover.

[–] Modern_medicine_isnt@lemmy.world 4 points 15 hours ago

That doesn't sound like it is saying "don't identify yourself." That it's called Claude isn't internal information, so that instruction doesn't seem to be doing what you are saying. There must be more instructions.

[–] JohnEdwa@sopuli.xyz 6 points 20 hours ago (1 children)

With how massive a field of computer science artificial intelligence is, and how much of it already is in or is getting added to every piece of software that exists, a label like that would be as useless as the California Prop 65 cancer warnings.

Do you use a mobile keyboard that supports swipe typing and has autocorrect? Remember to mark everything you write as being AI assisted.

[–] mrbutterscotch@feddit.org 1 points 9 hours ago

Well yes, if you let autocorrect write a code contribution, I think you should label that contribution as AI.

[–] GhostlyPixel@lemmy.world 1 points 16 hours ago

What internal info are they worried about leaking in a commit message? If you don’t want it to add the standard Claude attribution, you can completely disable it in the settings, or just write your own commit messages.
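For reference, a minimal sketch of how that disable looks, assuming the documented `includeCoAuthoredBy` option in Claude Code's settings file (e.g. `~/.claude/settings.json`):

```json
{
  "includeCoAuthoredBy": false
}
```

With that set, Claude Code omits the "Co-Authored-By: Claude" trailer it would otherwise append to generated commit messages.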