this post was submitted on 15 Aug 2025
4 points (100.0% liked)

TechSploits

All things relating to breaking tech, tech breaking, OSS, or hacking together software to perform something completely out of the ordinary, on purpose or by accident.

[–] AppleStrudel@reddthat.com 2 points 20 hours ago (1 children)

I don't trust a 300+ line blob of code ... without reading it myself.

That's how they'll get you. You'll miss things; even when the AI isn't commanded to intentionally bug your code, you'll still miss them. You're only human, after all. And you didn't write what the AI generated, "someone" else did, you're basically reviewing someone else's code in practice. And unlike reviewing a colleague's work, you are also shouldering all the liability.

[–] Jayjader@jlai.lu 1 points 19 hours ago (1 children)

I'm sorry, I'm not really sure what point you're making.

That's how they'll get you. You'll miss things; even when the AI isn't commanded to intentionally bug your code, you'll still miss them. You're only human, after all.

You mean, just like all the code that was written by humans before LLMs? At least with a human author there's a train of thought, some reasoning that can be interrogated, and it's local to the person who wrote the code and to the project context, instead of some vector embedding trained on all the code that exists on the internet.

And you didn't write what the AI generated, "someone" else did, you're basically reviewing someone else's code in practice. And unlike reviewing a colleague's work, you are also shouldering all the liability.

I feel like that is my point: you're shouldering all of the liability, so why take the risk of not reading what's being committed?

[–] AppleStrudel@reddthat.com 2 points 13 hours ago (1 children)

Just a few days ago, I spent 90 minutes going through 50 lines of functional code: understanding it fully, suggesting improvements, looking through the logs to confirm my colleague didn't miss anything, doing my own testing, and so on. AI is really good at quick and dirty prototyping, but its benefits as a coding assistant that touches your code drop very significantly once you need to understand that code as well as if you'd written it yourself, and you can't put your name to anything that'll eventually see production if you don't fully understand what's going on.

As a neovim user who can hop around and do "menial tasks" with a few quick keystrokes and a macro recording, about as fast as it takes the AI to formulate a response and with far more determinism than an AI could ever offer, I've found that it hasn't saved me a whole lot of time, despite what most tech CEOs are really hoping it'll do.
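To give a made-up but typical example of what I mean by "menial tasks": say I need to turn a plain list of values into quoted, comma-separated entries. That's one recorded macro:

    qa         start recording into register a
    I"<Esc>    prepend a quote to the current line
    A",<Esc>   append a closing quote and a comma
    j          move down to the next line
    q          stop recording
    15@a       replay the macro over the next 15 lines

Instant, deterministic, and no prompt round-trip.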

All I'm saying is that AI is a very powerful and helpful tool (the perfect rubber ducky, in fact 🦆). But I haven't yet found it truly saving me any time once I review its output to my standards, and that's also the conclusion of a recent Stanford finding that was presented for GitHub Copilot: AI seems to have sped up development time by around 15-20% on average once you've factored in the revisiting and rewriting of recent code, with the caveat that a not-insignificant number of people actually end up becoming less efficient when using AI, especially for high-complexity work.

[–] Jayjader@jlai.lu 2 points 7 hours ago

Ok, thanks for clarifying. I think we're pretty much on the same page. I've not yet used it as a rubber ducky for debugging, but as a rubber ducky for feature planning, UI brainstorming, and similar "fuzzy specs" it's been great for making me realize how precise and explicit I need to be when writing down my plan/needs.