[–] kippinitreal@lemmy.world 6 points 3 weeks ago* (last edited 3 weeks ago)

This tracks with my experience: I spent far more time double-checking Copilot's output than trusting it. It also auto-completed way too much, way too often, though that could be a UI/UX issue rather than a functional one.

By far the most egregious thing, though, was that it made subtle but crucial errors that took me hours to fix, which made me lose faith in it entirely.

For example, I had a CMake project & the AI auto-completed "target_link_directories" instead of "target_link_libraries". Having looked at CMake all day & never having used the *_directories keyword before, I couldn't figure out why I was getting config errors. I wasted orders of magnitude more time finding something that trivial than I would have spent writing the "boilerplate" myself.
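A minimal sketch of the mix-up (the `app` target and `foo` library names are made up for illustration):

```cmake
cmake_minimum_required(VERSION 3.16)
project(app)

add_executable(app main.cpp)

# What the autocomplete produced: this only adds a linker *search path*,
# it links nothing, so the real failure surfaces later with no obvious cause.
# target_link_directories(app PRIVATE foo)

# What was intended: actually link against the library.
target_link_libraries(app PRIVATE foo)
```

The two calls are one word apart but do completely different things, which is exactly why it was so hard to spot.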

Looks like I am not alone:

> Furthermore, the reliability of AI suggestions was inconsistent; developers accepted less than 44 percent of the code it generated, spending significant time reviewing and correcting these outputs.

When I did find it & fix it, something interesting happened: maybe because AI is sitting too damn low in the uncanny valley, I got angry at it. If the same thing had been done by any other dev, we'd have laughed about it. Perhaps because I'd trust another dev (optimistically? Naïvely?) to improve & learn, I'd be gentler on them. A tool built on stolen knowledge by a trillion-dollar corp to create an uncaring stats machine didn't get much love from me.