this post was submitted on 31 Mar 2026
338 points (99.7% liked)

Technology

top 46 comments
[–] Dentzy@sh.itjust.works 2 points 1 hour ago

I was like "Ha ha, nice April Fools'"... Then I kept reading the comments and... WTF‽

[–] captcha_incorrect@lemmy.world 9 points 4 hours ago
[–] Fmstrat@lemmy.world 8 points 5 hours ago

At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.

Actual project knowledge is distributed across "topic files" fetched on-demand, while raw transcripts are never fully read back into the context, but merely "grep’d" for specific identifiers.

This "Strict Write Discipline"—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.
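A minimal sketch of what an index-plus-topic-files scheme with that write discipline could look like. The `MEMORY.md` name and the ~150-character-per-line cap come from the description above; every other name and detail here is hypothetical, not the leaked implementation:

```python
import os

MEMORY_INDEX = "MEMORY.md"

def remember(topic: str, note: str, root: str = ".") -> None:
    """Append a note to a topic file, then update the index
    only after the write has verifiably succeeded."""
    topic_path = os.path.join(root, f"{topic}.md")
    # 1. Write the actual knowledge to the on-demand topic file.
    with open(topic_path, "a", encoding="utf-8") as f:
        f.write(note.rstrip() + "\n")
    # 2. Verify the write landed before touching the index
    #    ("strict write discipline": a failed write never
    #    leaves a dangling pointer in the index).
    if not os.path.exists(topic_path):
        return
    # 3. Record only a short pointer (capped at ~150 chars) in
    #    the index; the index stores locations, not data.
    pointer = f"- {topic}: see {topic_path} ({note[:100]})"[:150]
    with open(os.path.join(root, MEMORY_INDEX), "a", encoding="utf-8") as f:
        f.write(pointer + "\n")
```

The point of the discipline is visible in step 2: the perpetually loaded index can only ever reference writes that actually succeeded.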

For competitors, the "blueprint" is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.

Interesting to see if continue.dev takes advantage of this methodology. My only complaint has been context with it.

[–] WhyJiffie@sh.itjust.works 10 points 7 hours ago

In this mode, the agent performs "memory consolidation" while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.

this blog post reads like a marketing piece

[–] jivandabeast@lemmy.browntown.dev 45 points 12 hours ago
[–] CorrectAlias@piefed.blahaj.zone 54 points 14 hours ago (1 children)

Be careful not to introduce security vulnerabilities such as command injection, XSS, SQL injection, and other OWASP top 10 vulnerabilities. If you notice that you wrote insecure code, immediately fix it.

Lmao. I'm sure that will solve the problem of it writing insecure slop code.

[–] filcuk@lemmy.zip 21 points 12 hours ago

It doesn't fix it, but as stupid as it looks, it should actually improve the chances.
If you've seen how the reasoning works, they basically spit out some garbage, then read it again and consider whether or not it's garbage.
They do try to 'correct their errors', so to speak.

[–] Encephalotrocity@feddit.online 250 points 20 hours ago (6 children)

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

Laws should have been put in place years ago to make it so that AI usage needs to be explicitly declared.

[–] a4ng3l@lemmy.world 10 points 11 hours ago

In Europe we have the AI Act which, as of August, will introduce some form of transparency obligations. Not perfect, obviously, but a start. It probably won't be followed by the rest of the world, though, so like the GDPR it will be forcibly eroded by others' interests through lobbying, but at least we try.

[–] merc@sh.itjust.works 96 points 19 hours ago (1 children)

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

This is so incredibly stupid.

You've tried security.

You've tried security through obscurity.

Now try security through giving instructions to an LLM via a system prompt to not blow its cover.

That doesn't sound like it's saying "don't identify yourself". That it's called Claude isn't internal information, so that instruction doesn't seem to be doing what you're saying. There must be more instructions.

[–] JohnEdwa@sopuli.xyz 5 points 14 hours ago (1 children)

With how massive a computer-science field artificial intelligence is, and how much of it already is or is being added to every piece of software that exists, a label like that would be as useless as California's Prop 65 cancer warnings.

Do you use a mobile keyboard that supports swipe typing and has autocorrect? Remember to mark everything you write as being AI assisted.

[–] mrbutterscotch@feddit.org 1 points 4 hours ago

Well yes, if you let autocorrect write a code contribution, I think you should label that contribution as AI.

[–] GhostlyPixel@lemmy.world 1 points 11 hours ago

What internal info are they worried about leaking in a commit message? If you don’t want it to add the standard Claude attribution, you can completely disable it in the settings, or just write your own commit messages.

[–] itisileclerk@lemmy.world 8 points 12 hours ago

The best way to learn is from your own mistakes. So, Claude is still learning.

[–] rimu@piefed.social 101 points 20 hours ago* (last edited 19 hours ago) (2 children)

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
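The triage step above can be sketched as a quick scan over a project tree. This is a rough helper, not an official detection tool: the matching is a deliberately over-broad substring check, so a hit means "inspect this file by hand", and (as noted elsewhere in the thread) a clean result is not proof of safety if the malware scrubs the lockfiles:

```python
from pathlib import Path

# Versions and dependency named in the advisory above.
BAD_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}
BAD_DEP = "plain-crypto-js"
LOCKFILES = ("package-lock.json", "yarn.lock", "bun.lockb")

def scan(root: str) -> list[str]:
    """Return paths of lockfiles that warrant manual inspection."""
    hits = []
    for name in LOCKFILES:
        for lock in Path(root).rglob(name):
            try:
                # bun.lockb is binary, hence errors="ignore".
                text = lock.read_text(errors="ignore")
            except OSError:
                continue
            suspicious = BAD_DEP in text or (
                "axios" in text
                and any(v in text for v in BAD_AXIOS_VERSIONS)
            )
            if suspicious:
                hits.append(str(lock))
    return hits
```

If `scan(".")` returns anything, follow the advisory: treat the host as compromised, rotate secrets, and reinstall.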

Lol 😂

[–] ellen.kimble@piefed.social 13 points 15 hours ago (1 children)

This is because of an unrelated hack on npm's latest build. Anyone with this version of npm is affected.

[–] criss_cross@lemmy.world 4 points 11 hours ago

That axios supply chain attack was a bitch. There were extensions compromised from that shit.

[–] DacoTaco@lemmy.world 2 points 10 hours ago

It's bad advice too, because the malware removed itself from those files to remove traces of itself.

[–] NocturnalMorning@lemmy.world 15 points 17 hours ago (2 children)

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcasted the discovery on X (formerly Twitter).

Ha, by an intern

[–] djmikeale@feddit.dk 5 points 10 hours ago

Nice. One of the ways to write Chaofan in Chinese is 炒饭, which means fried rice. Amazing to be able to get that Twitter handle

[–] spez@sh.itjust.works 22 points 20 hours ago (3 children)

I mean, it's not that big a deal. However, it would be another thing if the model itself leaked. Now that would be something.

[–] MangoCats@feddit.it 7 points 17 hours ago

As they tell it, Claude Code is over 80% written by the models anyway...

[–] obbeel@lemmy.eco.br 3 points 15 hours ago (1 children)

Tool usage is very important. Qwen3.5 (135b) can already do wonderful things on OpenCode.

[–] cecilkorik@piefed.ca 9 points 14 hours ago* (last edited 14 hours ago) (1 children)

I dabble in local AI and this always blows my mind. How do people just casually throw 135b parameter models around? Are people like, renting datacenter hardware or GPU time or something, or are people just building personal AI servers with 6 5090s in them, or are they quantizing them down to 0.025 bits or what? what's the secret? how does this work? am I missing something? like the Q4 of Qwen3.5 122B is between 60-80GB just for the model alone. That's 3x 5090s minimum, unless I'm doing the math wrong, and then you need to fit the huge context windows these things have in there too. I don't get it.
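The back-of-envelope math above holds up. A quick sanity check (weights only, ignoring KV cache and activations, which only make the picture worse):

```python
def quantized_weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM/disk footprint of the weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 122B model at typical Q4 quantization (roughly 4-5 effective
# bits per weight once group scales are counted):
print(quantized_weights_gb(122, 4.0))  # 61.0 GB
print(quantized_weights_gb(122, 5.0))  # 76.25 GB
```

That lands in the 60-80 GB range cited above, i.e. three 32 GB cards just to hold the weights, before any context.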

Meanwhile I'm over here nearly burning my house down trying to get my poor consumer cards to run glm-4.7-flash.

[–] obbeel@lemmy.eco.br 4 points 13 hours ago

I pay for Ollama Cloud. As for the training of the big models, big companies do it using who-knows-what resources.

[–] lexiw@lemmy.world 7 points 19 hours ago

The harness is as important as the model

[–] RIotingPacifist@lemmy.world 3 points 13 hours ago

This is just the UI right? Or the models too?

[–] pelespirit@sh.itjust.works 19 points 20 hours ago* (last edited 20 hours ago) (1 children)

Like a healthy brain. And just like a healthy brain, it'll probably still hallucinate and make mistakes:

The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.

As analyzed by developers like @himanshustwts, the architecture utilizes a "Self-Healing Memory" system.

[–] Semi_Hemi_Demigod@lemmy.world 17 points 20 hours ago (3 children)

We’re gonna make AGI and realize that being stupid sometimes and making mistakes is integral to general intelligence.

[–] Didntdoit71@feddit.online 9 points 17 hours ago (1 children)

Actually, the people in the know...already knew this. We've known for years. Mistakes are required for learning.

[–] maplesaga@lemmy.world 2 points 12 hours ago* (last edited 12 hours ago)

A mistake is maybe just allowing room for evolution to take place?

[–] MangoCats@feddit.it 6 points 17 hours ago

being stupid sometimes and making mistakes is integral to general intelligence.

Smart people figured this out a long time ago.

https://www.amazon.com/s?k=nassim+taleb+antifragile&adgrpid=187118826460

https://www.goodreads.com/en/book/show/18378002-intuition-pumps-and-other-tools-for-thinking

[–] a4ng3l@lemmy.world 0 points 11 hours ago

That’s what makes us humans at least…