this post was submitted on 08 Feb 2026
219 points (97.8% liked)

top 38 comments
[–] criss_cross@lemmy.world 33 points 5 days ago

I agree. I just wish others in my company did.

It’s hard to be the one not doing it when C-level people are demanding that you do everything through Claude Code first and fall back only if you can’t get it to work.

“Don’t get me wrong, I know that a large part of this is me holding it wrong”

I don’t think it is. I think LLMs have hard limits that people refuse to accept.

[–] Artaca@lemdro.id 7 points 4 days ago (1 children)

That website is delightful.

[–] hammocker@leminal.space 1 points 1 day ago

My first thought opening it up was "dang, that's cozy."

[–] mesamunefire@piefed.social 20 points 5 days ago (1 children)

LLMs do create a lot of slop code, that’s for sure. Makes me want to get off GitHub.

[–] Solumbran@lemmy.world 8 points 5 days ago

*most of the internet

[–] portnull@lemmy.dbzer0.com 9 points 5 days ago

A lot of good stuff here. Especially realising how useful an LLM actually is for coding. It's a tool and like most tools has a purpose and a limit. I don't use a screwdriver to put in nails (well sometimes I do at a pinch, but the results suck) or cut wood in half. Spicy autocomplete is probably a good use case, but even then "use with care" should be employed.

The whole "prompt it correctly" stuff is on point. People have written books on how to correctly and effectively prompt the LLM. If I need to read a book to learn something, why not just read the book on how to do the thing? Or use the LLM to summarise the book, then at least you're going to get somewhat accurate information. We had someone create an AGENTS.md at work and I read it and it just sounds like a joke: "You are an expert in this and the human knows everything. If unsure ask the human" etc. If the main gain is that I don't need to type so much, I might as well use voice dictation.
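For readers who haven't seen one, the kind of AGENTS.md file being described tends to read something like this (a hypothetical sketch, paraphrasing the comment above, not the actual file from that workplace):

```markdown
# AGENTS.md

You are an expert senior engineer working on this codebase.
The human reviewing your work knows the system better than you do.

- Run the test suite before proposing any change.
- If you are unsure about intent, stop and ask the human.
- Do not invent APIs; check the existing code first.
```

The commenter's point stands either way: instructions like these are things a competent tool shouldn't need to be told.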

That is aside from the financial, environmental, health, and safety issues and damages that are all bundled in for free. If only people saw it for what it is, instead of glamourising it as the panacea for all their problems.

[–] erebion@news.erebion.eu 4 points 5 days ago

Don't fall for the scam that is "AI", think. Computers cannot think for you.

[–] olafurp@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

I used to be anti AI generated code but now I'm leaning into it. The thing is you need to engineer your context a lot and make sure that the AI has all the relevant information in the context and everything else is minimised.

The code it outputs is usually 7/10 which is below standard for many parts such as auth, access layer, abstractions etc. but completely adequate when creating a dialog for editing data as an admin user.

Don't get me wrong also, I spent 10 years coding and I fucking loved it and it's a damn shame what's happening to our craft. It's like being a guitar player and everyone uses music production software now to create what you did by just describing it instead of playing. That's the crux of the issue the way I see it, my most valuable skill is now deprecated and instead code review, explaining tasks to a junior, linking relevant quick start documentation, clarity of English explanations, architecture, knowledge of the code base, designing guidelines for how to work (like SKILLS.md files), security and creating dirty internal tooling to save you or your LLM a step are now in.

The way I see it is that a large portion of our job has changed for the worse, I don't get to just spend a day solving a problem and make the code flow through my fingers anymore, I make my "junior" do it, fix obvious bugs if any and spend the rest on QA.

[–] stressballs@lemmy.zip 5 points 5 days ago

They. They outsourced it. Did we ever really have a voice in this? Did we have a choice in what they do with the capital they control when they wish to reshape society around it? We need to commit to anticorporate lifestyles and genuinely reject their products and services at scale. Until that happens we're spending our resources to enrich our enemies who use it against us.

[–] humanspiral@lemmy.ca 1 points 4 days ago (1 children)

For my language, J, I can't get autocomplete.

Even though J is a functional language (on the extreme end), it also supports a Fortran/verbose-Python style, which LLMs will write. I don't have the problem of understanding the code it generates, and it provides useful boilerplate, with perhaps too many intermediate variables, but with the advantage that it tends to be more readable.

Instead of autocomplete, I get to copy and paste the generated code and rework it into shorter, performant tacit code. What is bad is that the models lose all understanding of that code transformation, and don't understand J's threading model. The changes I make mean the model loses all ability to reason about the code, and to refactor anything later. Excessive comments help, including using comments as anchors for fixing/generating code sections.
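The verbose-to-tacit rewrite being described can be illustrated with the classic J arithmetic-mean example (a sketch; the name `avg` is mine):

```j
NB. Verbose, explicit style of the kind an LLM tends to generate
avg =: 3 : '(+/ y) % # y'

NB. Equivalent tacit form after hand-refactoring: a fork.
NB. (f g h) y  evaluates as  (f y) g (h y), i.e. sum divided by count.
avg =: +/ % #

avg 1 2 3 4   NB. 2.5
```

The tacit version is the kind of transformation the comment says models can no longer follow once it's been applied.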

So, I get the warning about "code you don't understand" (though that can still happen later with code you wrote yourself), and the comment system helps. The other thing he got wrong is "prompt complexity/coaxing". It is actually never necessary to add "You are a senior software...". Doing so only changes the explanation level for any modern model, and opencode-type tools either drop the explanation section or separate it off.

LLMs still have extreme flaws, but the article didn't resonate on the big ones, for me.

[–] bitcrafter@programming.dev 5 points 4 days ago

The other thing he [emphasis mine] got wrong is “prompt complexity/coaxing”.

She, actually.