this post was submitted on 12 Sep 2025
1053 points (98.8% liked)


Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out code.

And AI-generated code hasn't just missed Amodei's benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, exposing never-before-seen vulnerabilities for hackers to exploit.
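The vulnerabilities in question are often mundane. As a hypothetical illustration (not an example from the cited research): assistants frequently emit string-interpolated SQL, which invites injection, where a parameterized query is safe:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # A common AI-generated pattern: user input interpolated into SQL.
    # An input like "' OR '1'='1" makes the WHERE clause always true
    # and leaks every row (classic SQL injection).
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver passes the value separately,
    # so it can never be interpreted as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns nothing
```

The two functions look nearly identical at a glance, which is exactly why this class of bug slips through a hurried review of machine-written code.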

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first of many dominoes to be knocked over by generative AI models, revolutionizing tech labor before coming for everyone else.

That AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy: the financial dream propelling unprecedented investment in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

top 50 comments
[–] reddig33@lemmy.world 224 points 5 days ago (10 children)

“Full self driving is just 12 months away.”

[–] floofloof@lemmy.ca 37 points 5 days ago

"I'm terrified our product will be just too powerful."

[–] Catoblepas@piefed.blahaj.zone 33 points 5 days ago

On Mars by the end of this year! I mean, next year!

[–] echodot@feddit.uk 18 points 5 days ago (1 children)

Yep along with Fusion.

We've had years of this. Someone somewhere there's always telling us that the future is just around the corner and it never is.

[–] Jesus_666@lemmy.world 19 points 5 days ago (2 children)

At least the fusion guys are making actual progress and can point to being wildly underfunded – and they predicted this pace of development with respect to funding back in the late 70s.

Meanwhile, the AI guys have all the funding in the world, keep telling us how everything will change in the next few months, actually trigger layoffs with that rhetoric, and deliver very little.

[–] chaosCruiser@futurology.today 125 points 5 days ago (2 children)

When the CEO of a tech company says that in x months this and that will happen, you know it’s just musk talk.

[–] greedytacothief@lemmy.dbzer0.com 14 points 3 days ago (3 children)

I'm not sure how people can use AI to code; granted, I'm just trying to get back into coding. Most of the time I've asked it for code, it's either been confusing or wrong. If I go through the trouble of writing out docstrings and then fixing what the AI has written, it becomes more doable. But don't you hate the feeling of not understanding what the code you've written does, or more importantly, why it's been done that way?

AI is only useful if you don't care about what the output is. It's only good at making content, not art.

I worked with someone that I later found out used AI to code her stuff. She knew how to code some, but didn't understand a lot of fundamentals.

Turns out, she would have AI write most of it, tweak it to work with her test cases, and call it good.

Half of my time was spent fixing her code, and when she was fired, our customer complaints went way down.

[–] cupcakezealot@piefed.blahaj.zone 54 points 4 days ago (5 children)

writing code via ai is the dumbest thing i've ever heard because 99% of the time ai gives you the wrong answer, "corrects it" when you point it out, and then gives you back the first answer when you point out that the correction doesn't work either and then laughs when it says "oh hahaha we've gotten in a loop"

[–] cows_are_underrated@feddit.org 25 points 4 days ago (5 children)

You can use AI to generate code, but from my experience it's quite literally what you said. What I have to admit, though, is that it's quite good at finding mistakes in your code. This is especially useful when you don't have that much experience and are still learning. Copy-paste the relevant code and ask why it's not working, and in quite a lot of cases you get an explanation of what isn't working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that doesn't guarantee an answer.

[–] poopkins@lemmy.world 64 points 5 days ago (21 children)

As an engineer, it's honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

[–] rozodru@piefed.social 15 points 5 days ago (1 children)

as someone who now does consultation code review focused purely on AI... nah, let them continue drilling holes in their ship. I'm booked solid for the next several months, multiple clients on the go, and I'm making more just being a digital janitor than I was as a regular consultant dev. I charge a premium to simply point said sinking ship to land.

Make no mistake though, this is NOT something I want to keep doing for the next year or two, and I honestly hope these places figure it out soon. Some have; some of my clients have realized that saving a few bucks by paying for an Anthropic subscription and paying a junior dev to be a prompt monkey, while firing the rest of their dev team, really wasn't worth it in the long run.

The issue now is they've shot themselves in the foot. The AI bit back. They need devs, and they can't find them, because putting out any sort of hiring ad results in hundreds upon hundreds of bullshit AI-generated resumes from unqualified people while the REAL devs get lost in the shuffle.

[–] Blackmist@feddit.uk 15 points 5 days ago

Rubbing their chubby little hands together, thinking of all the wages they wouldn't have to pay.

[–] RedFrank24@lemmy.world 32 points 4 days ago (9 children)

Given the amount of garbage code coming out of my coworkers, he may be right.

I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I'd written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn't perfect, by all means, but I can at least tell you what it's doing.

[–] Patches@ttrpg.network 20 points 4 days ago* (last edited 4 days ago) (11 children)

To be fair.

You could've asked some of those coworkers the same thing 5 years ago.

All they would've mumbled was "Something , something....Stack overflow... Found a package that does everything BUT... "

And delivered equal garbage.

[–] lustyargonian@lemmy.zip 7 points 3 days ago* (last edited 3 days ago)

I can say 90% of PRs at my company clearly look like, or are declared to be, AI-generated because of the random things that still slip by in the commits, so maybe he's not wrong. In fact, people are looked down upon if they aren't using AI, and celebrated for figuring out how to effectively make AI do the job right. But I can't say if that's the case at other companies.

[–] resipsaloquitur@lemmy.world 63 points 5 days ago (3 children)

Code has to work, though.

AI is good at writing plausible BS. Good for scams and call centers.

[–] Salvo@aussie.zone 34 points 5 days ago (1 children)
[–] Treczoks@lemmy.world 18 points 5 days ago

Parrot with a dictionary.

[–] vane@lemmy.world 43 points 5 days ago (1 children)

It is writing 90% of code, 90% of code that goes to trash.

[–] Dremor@lemmy.world 15 points 5 days ago (7 children)

Writing 90% of the code, and 90% of the bugs.

[–] zarkanian@sh.itjust.works 16 points 4 days ago (1 children)

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

You can't tell me these things don't have a sense of humor. This is beautiful.

[–] ThePowerOfGeek@lemmy.world 43 points 5 days ago (2 children)

It's almost like he's full of shit and he's nothing but a snake oil salesman, eh.

They've been talking about replacing software developers with automated/AI systems for a quarter of a century. Probably longer than that, in fact.

We're definitely closer to that than ever. But there's still a huge step between some rando vibe coding a one-page web app (or developers augmenting their work with AI) and someone building a complex, business-rule-heavy, high-load, scalable real-world system. The chronic under-appreciation of engineering and design experience continues unabated.

Anthropic, Open AI, etc? They will continue to hype their own products with outrageous claims. Because that's what gets them more VC money. Grifters gonna grift.

[–] merc@sh.itjust.works 37 points 5 days ago (3 children)

Does it count if an LLM is generating mountains of code that then gets thrown away? Maybe he can win the prediction on a technicality.

[–] Catoblepas@piefed.blahaj.zone 34 points 5 days ago (4 children)

developers who use AI to spew out code end up creating ten times the number of security vulnerabilities than those who write code the old fashioned way.

I’m going to become whatever the gay version of Amish is.

[–] psycho_driver@lemmy.world 30 points 5 days ago (4 children)

The good news is that AI is at a stage where it's more than capable of doing the CEO of Anthropic's job.

[–] scarabic@lemmy.world 16 points 4 days ago (4 children)

These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

[–] leftzero@lemmy.dbzer0.com 30 points 5 days ago* (last edited 5 days ago) (2 children)

I'm fairly certain it is writing 90% of Windows updates, at least...

[–] PieMePlenty@lemmy.world 24 points 5 days ago* (last edited 5 days ago) (1 children)

It's to hype up stock value. I don't even take it seriously anymore. Many businesses like these are mostly smoke and mirrors: oversell and underdeliver. It's not even exclusive to tech; it's just easier to do in tech. Musk says FSD is one year away. The company I worked for "sold" things we didn't even make and promised revenue that wasn't even economically possible. It's all the same spiel.

[–] Aceticon@lemmy.dbzer0.com 26 points 5 days ago

It's almost as if they shamelessly lie...

[–] EldenLord@lemmy.world 4 points 3 days ago

Well, 90% of code, of which only 3% works. That sounds about right.

[–] clif@lemmy.world 13 points 4 days ago

Oh, it's writing 100% of the code for our management-level people who are excited about """"AI""""

But then us plebes are rewriting 95% of it so that it will actually work (decently well).

The other day somebody asked me for help on a repo that a higher-up had shit-coded, because they couldn't figure out why it "worked" but also logged a lot of critical errors. ... It was starting the service twice (for no reason), binding it to the same port, so the second instance crashed and burned. That's something a novice would probably know not to do. And if not, they'd immediately see the problem, research, understand, and fix it, instead of "I *cough* built *cough* this thing, good luck fuckers."
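(For anyone curious, the double-start bug above is trivial to reproduce in miniature: two listeners can't bind the same TCP port, so the second instance dies exactly as described. A hypothetical sketch, not the actual service:)

```python
import socket

def start_listener(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port, keeping the sketch self-contained
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen()
    return s

first = start_listener()              # first instance: fine
port = first.getsockname()[1]

second_failed = False
try:
    start_listener(port=port)         # second instance on the same port
except OSError:                       # EADDRINUSE: address already in use
    second_failed = True

print("second bind failed:", second_failed)
first.close()
```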

[–] panda_abyss@lemmy.ca 4 points 3 days ago

Are we counting the amount of junk code that you have to send back to Claude to rewrite because it's spent the last month totally lobotomized yet they won't issue refunds to paying customers?

Because if we are, it has written a lot of code. It's just awful code that frequently ignores the user's input and rewrites the same bug over and over and over until you get rate limited or throw more money at Anthropic.

[–] Xed@lemmy.blahaj.zone 13 points 4 days ago

these tech bros just make up random shit to say to make a profit

[–] melfie@lemy.lol 1 points 2 days ago

I use Copilot at work and overall enjoy using it. I’ve seen studies suggesting that it makes a dev maybe 15% more productive in the aggregate, which tracks with my own experience, assuming it’s used with a clear understanding of its strengths and weaknesses. No, it’s not replacing anyone, but it’s good for rubber ducking if nothing else.

AI writes 100% of my code, but this is only a small percent of the overall development effort.

[–] melsaskca@lemmy.ca 15 points 4 days ago (1 children)

Everyone throughout history, who invented a widget that the masses wanted, automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what is best for us. Fucking capitalism. Fucking billionaires.
