this post was submitted on 02 Feb 2026
344 points (97.0% liked)

Technology

[–] WanderingThoughts@europe.pub 185 points 1 month ago* (last edited 1 month ago) (26 children)

Only until the AI investor money dries up and vibe coding gets expensive very quickly. Kinda like how Uber isn't way cheaper than a taxi anymore.

[–] percent 6 points 1 month ago (16 children)

I wouldn't be surprised if that's only a temporary problem, if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open-source models are starting to become competitive with commercial ones. If we can keep finding ways to get more out of smaller, open-source models, then maybe we'll be able to run them on consumer or prosumer-grade hardware.

GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.

[–] WanderingThoughts@europe.pub 19 points 1 month ago (8 children)

So far, there's a serious cognitive step needed to be productive that LLMs just can't make. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their context window. Debugging anything vague doesn't work. Fact-checking isn't something they do well.
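To put rough numbers on the context-window point (all figures below are illustrative assumptions, not measurements of any real model or repo):

```python
# Back-of-envelope check: does a large codebase fit in a context window?
# Both numbers below are illustrative assumptions, not measurements.

def estimate_tokens(loc: int, tokens_per_line: float = 10.0) -> int:
    """Very rough token estimate for a codebase of `loc` lines of code."""
    return int(loc * tokens_per_line)

repo_tokens = estimate_tokens(1_000_000)  # assume a ~1M-line monorepo
context_window = 200_000                  # assume a typical large model window

# The repo is roughly 50 windows' worth of tokens:
print(repo_tokens // context_window)  # → 50
```

Under those assumptions, even a generous context window holds only a small slice of the project at once, which is why the tooling around the model matters so much.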

[–] percent 9 points 1 month ago* (last edited 1 month ago) (2 children)

They don't need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.

I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.

It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn't have configs/docs/optimizations for LLMs, or you haven't figured out a decent workflow, then they'll be underwhelming and significantly less productive.
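For the curious, the basic idea behind those workflows can be sketched in a few lines. This is a hypothetical minimal version, not how any particular tool actually works; real tools use embeddings, repo maps, AST analysis, and so on:

```python
# Minimal sketch of context selection for a large repo: rank files by keyword
# overlap with the task description, then greedily pack them under a token
# budget. Purely illustrative; the file names and contents below are made up.

def rough_tokens(text: str) -> int:
    # Common rule of thumb: roughly 4 characters per token.
    return len(text) // 4

def select_context(files: dict[str, str], task: str, budget: int) -> list[str]:
    task_words = set(task.lower().split())

    def score(item: tuple[str, str]) -> int:
        _, body = item
        return len(task_words & set(body.lower().split()))

    picked, used = [], 0
    for name, body in sorted(files.items(), key=score, reverse=True):
        cost = rough_tokens(body)
        if used + cost <= budget:
            picked.append(name)
            used += cost
    return picked

repo = {
    "auth.py": "def login(user, password): check credentials and session",
    "billing.py": "def charge(card): invoice payment gateway",
    "README.md": "project overview and setup instructions",
}
print(select_context(repo, "fix the login session bug", budget=30))
# → ['auth.py', 'billing.py']
```

The point is just that relevance ranking plus a budget gets the useful slice of a huge repo into the prompt, which is roughly what the configs/docs in a well-set-up repo are steering.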


(I know I'll get downvoted just for describing my experience and observations here, but I don't care. I miss the pre-LLM days very much, but they're gone, whether we like it or not.)

[–] WanderingThoughts@europe.pub 10 points 1 month ago (1 children)

It actually takes a bit of skill to set up a decent workflow/configuration for these things

Exactly this. You can't just replace experienced people with it, and that's basically how it's sold.

[–] percent 4 points 1 month ago

Yep, it's a tool for engineers. People who try to ship vibe-coded slop to production will often eventually need an engineer when things fall apart.

[–] RIotingPacifist@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

This sounds a lot like every framework; 20 years ago you could have written that about Rails.

Which IMO makes sense, because if the code isn't solving anything interesting, you can generate it dynamically relatively easily and get demos up and running quickly, but neither helps you solve interesting problems.

Which isn't to say it won't have a major impact on software for decades, especially low-effort apps.
