Shit code review is not code review. If you just rubber-stamp everything, or outsource it to someone who will, you aren't doing code review.
Aside from that:
Citation requested
If I had a nickel for every single time I had to explain to someone that their unit test doesn't do anything, or that they literally just copied the output and checked against it (and that they are dealing with floating-point values, so exact comparison is actually really stupid)... I'd probably go buy some Five Guys for lunch.
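To make the "copied the output" complaint concrete, here's a minimal sketch (with a made-up `average` function, not anything from the thread): the first test just compares the implementation against its own output, so it can never fail; the second asserts the intended value within a tolerance, which is what floating-point comparisons actually need.

```python
import math

def average(values):
    # Hypothetical function under test; stands in for whatever the real test targeted.
    return sum(values) / len(values)

def test_average_copied_output():
    # Brittle: the "expected" value is produced by running the function and
    # pasting its output back in, so the test can only ever agree with itself.
    expected = average([0.1, 0.2, 0.3])  # copied straight from the implementation
    assert average([0.1, 0.2, 0.3]) == expected

def test_average_tolerance():
    # Better: assert the intended value within a tolerance, since 0.1 + 0.2 + 0.3
    # does not sum to exactly 0.6 in binary floating point.
    assert math.isclose(average([0.1, 0.2, 0.3]), 0.2, rel_tol=1e-9)
```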
It's like saying that the problem is that you are using robots to assemble Cybertrucks rather than people. The problem isn't who is super-gluing sharp jagged metal together. The problem is that your product is fundamentally shite and should never have reached production in the first place. And you need to REALLY work through your design flows and so forth.
I keep seeing it over and over again. Anyone who actually has to deal with coworkers using this bullshit, and who isn't also in the cult, is going to recognize it.
Sure, there have always been better and worse developers. LLMs are making developers that used to be better, worse.
Bad developers just do whatever. It doesn't matter if they wrote the code themselves or if a tool wrote it for them. They aren't going to be more or less detail-oriented whether it is an LLM, a doxygen plugin, or their own fingers that made the code.
Which is the problem when people make claims like that. It is nonsense, and anyone who has ACTUALLY worked with early-career staff can tell you: those kids aren't writing much better code than ChatGPT, and there is a reason so many of them have embraced it.
But it also fundamentally changes the conversation. It stops being "We should heavily limit the use of generative AI in coding because it prevents people from developing the skills they need to evaluate code" and instead "We need generative AI to be better".
It was the exact same thing with "AI can't draw hands". Everyone and their mother insisted on that. Most people never thought about why basically all cartoons use four-fingered hands and so forth. So, when the "studio ghibli filter" was made? It took off because "Now AI can do hands!", and there was no thought toward the actual implications of generative AI.
Nothing outside of the first paragraph here is terribly meaningful, and the first paragraph is just trying to talk past what I said before. I'll reiterate, very clearly.
I have observed several of my coworkers who used to be really good at their jobs get worse at their jobs (and make me spend more time ensuring code quality) since they started using LLM tools. That's it. That's all I care about. Maybe they'll get better. Maybe they won't. But right now I'd strongly prefer people not use them, because people using them has made my experience worse.
I know it's not related, but I'm curious about this part.
I know it has an aluminum-based frame, which should inhibit its use for hauling heavy loads, but what else?