this post was submitted on 24 Mar 2026
210 points (99.5% liked)

Technology

42605 readers
508 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] p03locke@lemmy.dbzer0.com 4 points 1 week ago

LLMs for coding have improved dramatically over the past year or so, but I find the quality varies greatly depending on the model. Models like Gemini and GPT are too overconfident and don't communicate well enough; Claude knows when to stop and evaluate the situation for options. I've had mixed results with local models, but I'm still adjusting quantization settings to make them fit my VRAM.
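For the VRAM-fitting part, a rough back-of-the-envelope estimate is just parameter count times bits per weight, plus some fixed overhead. A minimal sketch (the quant names and bits-per-weight figures are approximate llama.cpp-style values, and the flat overhead term is an assumption; real usage also depends on context length and KV cache):

```python
def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Approximate VRAM in GB: weight storage plus a flat runtime/KV-cache allowance."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params x bytes per param
    return weight_gb + overhead_gb

# Approximate effective bits per weight for common llama.cpp quant levels
quants = {"Q8_0": 8.5, "Q6_K": 6.6, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

for name, bits in quants.items():
    print(f"7B model at {name}: ~{vram_gb(7, bits):.1f} GB")
```

So on an 8 GB card, a 7B model is comfortable at Q4_K_M but tight at Q8_0, which is why dropping a quant level is usually the first knob to turn.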

You still need the skills to understand programming and engineering design, and frankly you need the temperament to be meticulous with your reviews, but it's really nice having something that can code 3-8x faster than I was going before.

PrimeTime recently had a good video about a senior programmer's experience tracking down a very hard-to-find bug.