this post was submitted on 17 Mar 2026
717 points (98.0% liked)

Programming


Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos


Excerpt:

"Even within the coding, it's not working well," said Smiley. "I'll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven't engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence."

Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
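Those are essentially the DORA metrics. As a rough sketch of how two of them fall out of plain deployment records (the record format below is invented for illustration, not a real API):

```python
from datetime import datetime

# Hypothetical deployment log: (timestamp, whether the change caused a failure).
deployments = [
    (datetime(2026, 3, 1), False),
    (datetime(2026, 3, 4), True),
    (datetime(2026, 3, 9), False),
    (datetime(2026, 3, 12), False),
]

span_days = (max(t for t, _ in deployments) - min(t for t, _ in deployments)).days
deploys_per_week = len(deployments) / max(span_days / 7, 1)
change_failure_rate = sum(failed for _, failed in deployments) / len(deployments)

print(f"{deploys_per_week:.1f} deploys/week, {change_failure_rate:.0%} change failure rate")
```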

"We don't know what those are yet," he said.

One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That's the kind of thing that needs to be assessed to determine whether AI helps an organization's engineering practice.
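A back-of-the-envelope version of that metric might look like this (the record format is made up for illustration; note that tokens spent on abandoned PRs still count toward the burn):

```python
# Hypothetical records: tokens spent driving each PR, and whether it was approved.
pr_runs = [
    {"pr": 101, "tokens": 48_000, "approved": True},
    {"pr": 102, "tokens": 310_000, "approved": False},  # abandoned after review
    {"pr": 103, "tokens": 95_000, "approved": True},
]

total_tokens = sum(r["tokens"] for r in pr_runs)        # burn includes the failures
approved_count = sum(r["approved"] for r in pr_runs)
print(f"{total_tokens / approved_count:,.0f} tokens per approved PR")
```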

To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.

"It passed all the unit tests, the shape of the code looks right," he said. It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."

All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.

"Coding works if you measure lines of code and pull requests," he said. "Coding does not work if you measure quality and team performance. There's no evidence to suggest that that's moving in a positive direction."

top 50 comments
[–] Thorry@feddit.org 117 points 2 weeks ago (4 children)

Yeah these newer systems are crazy. The agent spawns a dozen subagents that all do some figuring out on the code base and the user request. Then those results get collated, then passed along to a new set of subagents that make the actual changes. Then there are agents that check stuff and tell the subagents to redo stuff or make changes. And then it gets a final check like unit tests, compilation etc. And then it's marked as done for the user. The amount of tokens this burns is crazy, but it gets them better results in the benchmarks, so it gets marketed as an improvement. In reality it's still fucking up all the damned time.
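A skeleton of that fan-out/collate pattern might look something like this (purely illustrative: every name below is made up, and each stub stands in for one or more LLM calls):

```python
from concurrent.futures import ThreadPoolExecutor

def spawn_subagents(task: str, angles: list[str]) -> list[str]:
    """Fan the task out to N subagents; each stub stands in for an LLM call."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda a: f"analysis of {task!r} from angle {a!r}", angles))

def collate(results: list[str]) -> str:
    """Merge subagent findings into one plan (another LLM call in real systems)."""
    return " | ".join(results)

def verify(change: str) -> bool:
    """Checker stage: run tests, compile, lint; stubbed to always pass here."""
    return True

task = "rename the config loader"
plan = collate(spawn_subagents(task, ["find usages", "check tests", "read docs"]))
for attempt in range(3):                 # redo loop: checkers can bounce work back
    change = f"edit derived from: {plan}"
    if verify(change):
        break                            # only now is it marked done for the user
# Every stage is one or more model calls, which is where the token burn comes from.
```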

Coding with AI is like coding with a junior dev, who didn't pay attention in school, is high right now, doesn't learn and only listens half of the time. It fools people into thinking it's better, because it shits out code super fast. But the cognitive load is actually higher, because checking the code is much harder than coming up with it yourself. It's slower by far. If you are actually going faster, the quality is lacking.

[–] Flames5123@sh.itjust.works 30 points 2 weeks ago

I code with AI a good bit for a side project, since I need to use my work AI and get my stats up to show management that I'm using it. The "impressive" thing is learning new software and how to use it quickly in your environment. When I was setting up my homelab with automatic git pull, it quickly gave me some commands and showed me what to add to my docker container.

Correcting issues is exactly like coding with a high junior dev though. The code bloat is real and I’m going to attempt to use agentic AI to consolidate it in the future. I don’t believe you can really “vibe code” unless you already know how to code though. Stating the exact structures and organization and whatnot is vital for agentic AI programming semi-complex systems.

[–] chunkystyles@sopuli.xyz 22 points 2 weeks ago (1 children)

This is very different from my experience, but I've purposely lagged behind in adoption and I often do things the slow way because I like programming and I don't want to get too lazy and dependent.

I just recently started using Claude Code CLI. With how I use it: asking it specific questions and often telling it exactly what files and lines to analyze, it feels more like talking to an extremely knowledgeable programmer who has very narrow context and often makes short-sighted decisions.

I find it super helpful in troubleshooting. But it also feels like a trap, because I can feel it gaining my trust and I know better than to trust it.

[–] TehPers@beehaw.org 8 points 2 weeks ago

I've mentioned the long-term effects I see at work in several places, but all I can say is be very careful how you use it. The parts of our codebase that are almost entirely AI-written are unreadable garbage and a complete clusterfuck of coding paradigms. It's bad enough that I've said straight to my manager's face that I'd be embarrassed to ship this to production (and yes, I await my pink slip).

As a tool, it can help explain code, it can help find places where things are being done, and it can even suggest ways to clean up code. However, those are all things you'll also learn over time as you gather more and more experience, and it acts more as a crutch here because you spend less time learning the code you're working with as a result.

I recommend maintaining exceptional skepticism with all code it generates. Claude is very good at producing pretty code. That code is often deceptive, and I've seen even Opus hallucinate fields, generate useless tests, and misuse language/library features to solve a task.

[–] merc@sh.itjust.works 12 points 1 week ago

checking the code is much harder than coming up with it yourself

That's always been true. But, at least in the past when you were checking the code written by a junior dev, the kinds of mistakes they'd make were easy to spot and easy to predict.

LLMs are created in such a way that they produce code that genuinely looks perfect at first. It's stuff that's designed to blend in and look plausible. In the past you could look at something and say "oh, this is just reversing a linked list". Now, you have to go through line by line trying to see if the thing that looks 100% plausible actually contains a tiny twist that breaks everything.

[–] DickFiasco@sh.itjust.works 84 points 2 weeks ago (12 children)

AI is a solution in search of a problem. Why else would there be consultants to "help shepherd organizations towards an AI strategy"? Companies are looking to use AI out of fear of missing out, not because they need it.

[–] CubitOom 64 points 2 weeks ago

Generative models, which many people call "AI", have a much higher catastrophic failure rate than we have been led to believe. They cannot actually be used to replace humans, just as an inanimate object can't replace a parent.

Jobs aren't threatened by generative models. Jobs are threatened by a credit crunch due to high interest rates and a lack of lenders being able to adapt.

"AI" is a ruse, a useful excuse that helps make people want to invest, investors & economists OK with record job loss, and the general public more susceptible to data harvesting and surveillance.

[–] jimmux@programming.dev 59 points 2 weeks ago (1 children)

We never figured out good software productivity metrics, and now we're supposed to come up with AI effectiveness metrics? Good luck with that.

[–] Senal@programming.dev 19 points 2 weeks ago (1 children)

Sure we did.

"Lines Of Code" is a good one, more code = more work so it must be good.

I recently had a run-in with another good one: PRs/Dev/Month.

Not only is that one good for overall productivity, it's a way to weed out those unproductive devs who check in less often.

This one was so good, management decided to add it to the company-wide catchup slides, in a section espousing how the new AI-driven systems brought this number up enough to be above other companies.

That means other companies are using it as well, so it must be good.
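For what it's worth, the metric is trivial to compute, which is part of why management loves it. A sketch counting merge commits per author over the last month (assumes you're in a git repo whose merge commits roughly correspond to PRs):

```python
import subprocess
from collections import Counter

# Merge commits as a proxy for merged PRs, bucketed by author, last month only.
log = subprocess.run(
    ["git", "log", "--merges", "--since=1 month ago", "--pretty=%an"],
    capture_output=True, text=True, check=True,
).stdout

for dev, n in Counter(log.splitlines()).most_common():
    print(f"{dev}: {n} PRs/month")  # easily gamed: just split work into more PRs
```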

[–] SaharaMaleikuhm@feddit.org 16 points 2 weeks ago (2 children)

Why is it always the dumbest people who become managers?

[–] yabbadabaddon@lemmy.zip 9 points 2 weeks ago

The others are busy working, they don't have time to waste drinking coffee with execs

[–] gravitas_deficiency@sh.itjust.works 47 points 2 weeks ago (1 children)

Lmfao

Deeks said "One of our friends is an SVP of one of the largest insurers in the country and he told us point blank that this is a very real problem and he does not know why people are not talking about it more."

Maybe because way too many people are making way too much money and it underpins something like 30% of the economy at this point and everyone just keeps smiling and nodding, and they’re going to keep doing that until we drive straight off the fucking cliff 🤪

[–] AnUnusualRelic@lemmy.world 11 points 2 weeks ago (1 children)

But who's making money? All the AI corps are losing billions, only the hardware vendors are making bank.

Makers of AI lose money and users of AI probably also lose since all they get is shit output that requires more work.

[–] luciole@beehaw.org 43 points 2 weeks ago (1 children)

This is all fine and dandy but the whole article is based on an interview with "Dorian Smiley, co-founder and CTO of AI advisory service Codestrap". Codestrap is a Palantir service provider, and as you'd expect Smiley is a Palantir shill.

The article hits different considering it's more or less a world devourer zealot taking a jab at competing world devourers. The reporter is an unsuspecting proxy at best.

[–] calliope@piefed.blahaj.zone 13 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

People will upvote anything if it takes a shot at AI. Even when the subtitle itself is literally an ad.

Codestrap founders say we need to dial down the hype and sort through the mess

The cult mentality is really interesting to watch.

Keep replying! Maybe this is a good honeypot for stupid people. “I hate you!!” Lmao

[–] python@lemmy.world 42 points 2 weeks ago (10 children)

Recently had to call out a coworker for vibecoding all her unit tests. How did I know they were vibe coded? None of the tests had an assertion, so they literally couldn't fail.
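For anyone who hasn't seen this failure mode, a made-up example: pytest counts a test as passed as long as nothing raises, so an assertion-free test can only fail by crashing.

```python
def parse_price(text: str) -> float:
    return float(text.strip("$"))

def test_parse_price():
    parse_price("$19.99")  # no assert: this "test" passes no matter what is returned

def test_parse_price_properly():
    assert parse_price("$19.99") == 19.99  # what the test should have checked
```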

[–] ch00f@lemmy.world 28 points 2 weeks ago (1 children)

Vibe coding guy wrote unit tests for our embedded project. Of course, the hardware peripherals aren't available for unit tests on the dev machine/build server, so you sometimes have to write mock versions (like an "adc" function that just returns predetermined values in the format of the real analog-to-digital converter).

Claude wrote the tests and mock hardware so well that it forgot to include any actual code from the project. The test cases were just testing the mock hardware.
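A condensed, hypothetical version of what that looks like (names invented; the point is that no project code is ever imported):

```python
# Host-side stand-in for the ADC peripheral, for unit tests on the dev machine.
class MockAdc:
    def __init__(self, readings):
        self.readings = list(readings)

    def read(self) -> int:
        return self.readings.pop(0)  # canned samples in the real ADC's format

def test_adc_readings():
    adc = MockAdc([1023, 2047])
    # The test exercises only the mock; the project's actual filtering/scaling
    # code that consumes ADC samples is never touched.
    assert adc.read() == 1023
    assert adc.read() == 2047
```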

[–] 87Six@lemmy.zip 17 points 2 weeks ago

Not realizing that should be an instant firing. The dev didn't even glance at the unit tests...

[–] magiccupcake@lemmy.world 37 points 2 weeks ago

I love this bit especially

Insurers, he said, are already lobbying state-level insurance regulators to win a carve-out in business insurance liability policies so they are not obligated to cover AI-related workflows. "That kills the whole system," Deeks said. Smiley added: "The question here is if it's all so great, why are the insurance underwriters going to great lengths to prohibit coverage for these things? They're generally pretty good at risk profiling."

[–] melsaskca@lemmy.ca 33 points 2 weeks ago

Businesses were failing even before AI. If I cannot eventually speak to a human on a telephone then the whole human layer is gone and I no longer want to do business with that entity.

[–] Not_mikey@lemmy.dbzer0.com 32 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Guy selling an AI coding platform says other AI coding platforms suck.

This just reads like a sales pitch rather than journalism. No studies cited, just some anecdotes about what he hears "in the industry".

Half of it is:

You're measuring the wrong metrics for productivity, you should be using these new metrics that my AI coding platform does better on.

I know the AI hate is strong here but just because a company isn't pushing AI in the typical way doesn't mean they aren't trying to hype whatever they're selling up beyond reason. Nearly any tech CEO cannot be trusted, including this guy, because they're always trying to act like they can predict and make the future when they probably can't.

[–] yabbadabaddon@lemmy.zip 12 points 2 weeks ago

My take exactly. Especially the bits about unit tests. If you cannot rely on your unit tests as a first assessment of your code quality, your unit tests are trash.

And not every company runs GitHub. The metrics he's talking about are DevOps metrics, not development metrics. For example, in my work, nobody gives a fuck about mean time to production. We have a planning schedule, and we need the OK from our customers before we can update our product.

[–] drmoose@lemmy.world 28 points 1 week ago (7 children)

People delude themselves if they think LLMs are not useful for coding. People also delude themselves if they think all code will be AI-written in the next 2 years. The reality is that it's an incredibly useful tool, but with reasonable limits.

[–] turbofan211@lemmy.world 23 points 2 weeks ago (36 children)

So is this just early adoption problems? Or are we starting to find the ceiling for AI?

[–] riskable@programming.dev 67 points 2 weeks ago (3 children)

The "ceiling" is the fact that no matter how fast AI can write code, it still needs to be reviewed by humans. Even if it passes the tests.

As much as everyone thinks they can take the human review step out of the process with testing, AI still fucks up enough that it's a bad idea. We'll be in this state until actually intelligent AI comes along. Some evolution of machine learning beyond LLMs.

[–] otacon239@lemmy.world 58 points 2 weeks ago (2 children)

We just need another billion parameters bro. Surely if we just gave the LLMs another billion parameters it would solve the problem…

[–] Thorry@feddit.org 41 points 2 weeks ago (1 children)
[–] raman_klogius@ani.social 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

That's actually three 0s too short, at the very least

[–] PancakesCantKillMe@lemmy.world 26 points 2 weeks ago

One smoldering Earth later….

[–] Technus@lemmy.zip 18 points 2 weeks ago (2 children)

I realized the fundamental limitation of the current generation of AI: it's not afraid of fucking up. The fear of losing your job is a powerful source of motivation to actually get things right the first time.

And this isn't meant to glorify toxic working environments or anything like that; even in the most open and collaborative team that never tries to place blame on anyone, in general, no one likes fucking up.

So you double check your work, you try to be reasonably confident in your answers, and you make sure your code actually does what it's supposed to do. You take responsibility for your work, maybe even take pride in it.

Even now we're still having to lean on that, but we're putting all the responsibility and blame on the shoulders of the gatekeeper, not the creator. We're shooting a gun at a bulletproof vest and going "look, it's completely safe!"

[–] Feyd@programming.dev 12 points 2 weeks ago

fear of losing your job is a powerful source of motivation

I just feel good when things I make are good so I try to make them good. Fear is a terrible motivator for quality

[–] deadcream@sopuli.xyz 11 points 2 weeks ago (3 children)

So you double check your work, you try to be reasonably confident in your answers, and you make sure your code actually does what it's supposed to do. You take responsibility for your work, maybe even take pride in it.

In my experience, around 50% of (professional) developers do not take pride in their work, nor do they care.

[–] saltesc@lemmy.world 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

We'll be in this state until actually intelligent AI comes along. Some evolution of machine learning beyond LLMs.

Yep. The methodology of LLMs is effectively an evolution of Markov chains. If someone hadn't recently changed the definition of AI to include "the illusion of intelligence", we wouldn't be calling this AI. It's just algorithmic, with a few extra steps to try to keep the algorithm on-topic.
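To make the comparison concrete, the word-level Markov chain idea fits in a few lines (toy sketch; LLMs condition on far longer context with learned representations, but the "predict the next token" framing is the same):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Order-1 chain: table of possible next words, keyed on the previous word only.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap so every word has a successor
    chain[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(chain[word])  # sample the next word from one word of context
    output.append(word)
print(" ".join(output))
```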

These types of things we've had all the time in generative algorithms. I think LLMs being more publicly visible is why someone started calling it AI now.

So we've basically hit the ceiling straight out of the gate, and progress from here won't be quick. We'll have another step forward in predictive algorithms in the future, but not now. It's usually a once-a-decade thing, and it varies in how big an advance it is.

Edit: I have to point out that I initially had hope that this current iteration of "genAI" would be a very useful tool for getting us to actual AI faster, but no. The "hallucination" problem, a built-in, unavoidable issue with predictive algorithms trained on unfiltered mass data, means it's just not very capable. At the university I work at, we've been trying different things for the past two years, and so far there seems to be no hope. That said, genAI is good at summarising the mass output of our normal AI, which can produce a lot to comb through, but anything the genAI interprets still needs to be double-checked, even with closed-off training.

It's been unsurprisingly disappointing.

We're still at a point where logic is done with the same old method of mass iterations. Training is slow and complex. GenAI relies on being taught logic that already exists; it can't thoroughly develop its own. There is no logic in predictive algorithms outside of the algorithm itself, and those are logically closed and defined.

[–] CheeseNoodle@lemmy.world 23 points 2 weeks ago (2 children)

It's early adoption problems in the same way that putting radium in toothpaste was. There are legitimate, already growing uses for various AI systems, but as the technology is still new, there are a bunch of people just trying to put it in everything, which inevitably means a lot of places where it will never be good (at least not until it gets much better in a way that LLMs fundamentally never can, due to the underlying method by which they work).

[–] SpaceNoodle@lemmy.world 10 points 2 weeks ago

Those of us with eyes have already seen the ceiling of currently available GenAI "solutions": what gets called early adoption problems is the ceiling.

The technology will evolve, and the same basic problems will exist. The article has good points about how structured acceptance criteria will need to be more strictly enforced.

[–] Semi_Hemi_Demigod@lemmy.world 10 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

My job has me working on AI stuff and it reminds me a lot of Internet technology back in the 90s.

For instance: I’m creating a local model to integrate with our MCP server. It took a lot of fiddling with a Modelfile for it to use the tools the MCP has installed. And it needs 20GB of VRAM to give reasonably accurate responses.

The amount of fiddling and checking and rough edges feel like writing JavaScript 1.0, or the switchover to HTML4.

Companies get a lot of praise for having AI products, but the reality isn’t nearly as flashy as they make it out to be. I’m seeing some usefulness in it as I learn more, but it’s not nearly what the hype machine says.

[–] BrightCandle@lemmy.world 16 points 1 week ago (2 children)

I keep trying to use the various LLMs that people recommend for coding tasks, and they don't just get things wrong. I have been doing quite a bit of embedded work recently, and some of the designs they come up with would cause electrical fires; they're that bad. Where the earlier versions would go "oh yes, that is wrong, let me correct it..." and then often get it wrong again, the new ones will confidently tell you that you are wrong. When you tell them the design would catch fire, they just don't change.

I don't get it. I feel like all these people claiming success with them are just not very discerning about the quality of the code they produce, or worse, just don't know any better.

[–] Shayeta@feddit.org 9 points 1 week ago (1 children)

It is possible to get good results; the problem is that you yourself need a very good understanding of the problem and how to solve it, and then you have to accurately convey that to the AI.

Granted, I don't work on embedded, and I'd imagine there's less code available for AI to train on than in other fields.

[–] ironhydroxide@sh.itjust.works 10 points 1 week ago

Yes, I definitely want to train a new hire who is superlatively confident that they are correct, while also having to do my job correctly as well, while said new hire keeps putting shit in my work.

[–] Malgas@beehaw.org 12 points 2 weeks ago

This feels like an exercise in Goodhart's Law: Any measure that becomes a target ceases to be a useful measure.

[–] btsax@reddthat.com 11 points 2 weeks ago* (last edited 2 weeks ago)

These are starting to feel like those headlines "this is finally the last straw for Trump!" I've been seeing since 2015

[–] motruck@lemmy.zip 9 points 1 week ago

Hahaha. I'm guessing this guy works in developer tools. These types of metrics are great, but you rarely get there. You'll get a few of them, but the reality is that the people who want to use AI to produce faster are the same people who won't give you time to properly instrument your system for metrics like these. Good luck with your expectation that someone measures the impact of AI in a meaningful way.
