this post was submitted on 27 Oct 2025
444 points (99.3% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

[–] twopi@lemmy.ca 3 points 3 days ago

At first I thought it was the Beaverton. Had to check the URL.

[–] Routhinator@startrek.website 43 points 6 days ago

AI is opening so many security HOLES. It's not solving shit. AI browsers and MCP connectors are wild-west security nightmares. And that's before you even trust any code these things write.

[–] docktordreh@discuss.tchncs.de 13 points 5 days ago

One of the most idiotic takes I've read in a long time

[–] IzzyScissor@lemmy.world 30 points 6 days ago (1 children)

Schrödinger's AI: It's so smart it can build perfect security, but it's too dumb to figure out how to break it.

[–] chicken@lemmy.dbzer0.com 3 points 6 days ago (2 children)

If there are actually no bugs, can't that create a situation where it's impossible to break it? Not to say this is actually a thing AI can achieve, but it doesn't seem like bad logic.

[–] IzzyScissor@lemmy.world 8 points 6 days ago

Even if there's such a thing as a program without bugs, you'd still be overlooking one crucial detail - no matter the method, the end point of cybersecurity has to interface with humans. Humans are SO much easier to hack than computers.

Let's say you get a phone call from your boss - It's their phone number and their voice, but they sound a bit panicked. "Hey, I'm just about to head into a meeting to close a major deal, but my laptop can't access the server. I need you to set up a temporary password in the next two minutes or we risk losing this deal. No, I don't remember my backup - it's written down in my desk but the meeting is at the client's office."

You'd be surprised how many people would comply, and all of that can be done by AI right now. It's all about managing risk - there's never going to be a foolproof system.

[–] bss03 1 points 6 days ago (1 children)

Rice's Theorem prevents this... mostly.

[–] chicken@lemmy.dbzer0.com 2 points 6 days ago

Another way of working around Rice's theorem is to search for methods which catch many bugs, without being complete.

I'd guess that hypothetical AI cybersecurity verification of code would be like that: there are probably no bugs, but it's not a totally sure thing. But even if you can't have mathematical certainty that there are no bugs, that doesn't mean every, or even most, programs verified this way can actually be exploited.
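
To make that concrete, here's a toy sketch of such an incomplete checker - mine, purely illustrative, not anything from the thread. It walks a Python AST and flags every execute() call whose query isn't a plain string literal as a possible SQL injection. It never misses that particular pattern, but it happily raises false alarms on safe code, which is exactly the trade-off Rice's theorem forces on any analysis that always terminates:

    import ast

    SOURCE = '''
    def lookup(cursor, user_id):
        cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
        cursor.execute("SELECT * FROM users WHERE id = " + user_id)
    '''

    def possible_injections(source: str) -> list[int]:
        # Over-approximate: flag any execute() call whose first
        # argument is not a plain string literal constant.
        flagged = []
        for node in ast.walk(ast.parse(source)):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and node.func.attr == "execute"
                    and node.args
                    and not isinstance(node.args[0], ast.Constant)):
                flagged.append(node.lineno)
        return flagged

    print(possible_injections(SOURCE))  # -> [4], the concatenated query

Scale that idea up with much smarter heuristics and you get modern static analysers (or, hypothetically, an AI verifier): useful, but never a proof.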

[–] skuzz@discuss.tchncs.de 10 points 5 days ago

All these brainwashed AI-obsessed people should be required to watch I, Robot on loop for a month or two.

[–] biotin7@sopuli.xyz 16 points 6 days ago (1 children)

Because then Security would be non-existent.

[–] VonReposti@feddit.dk 13 points 5 days ago

The S in AI stands for security.

[–] Randelung@lemmy.world 15 points 6 days ago

ahahahaha

Oh, you're serious. Let me laugh even harder.

AHAHAHAHA

[–] Blackmist@feddit.uk 13 points 6 days ago

Ron Howard narrator: Actually, they would need more.

[–] belated_frog_pants@beehaw.org 5 points 5 days ago

Because it's doing so, so well now unattended...

[–] HulkSmashBurgers@reddthat.com 6 points 6 days ago

The look on her face in the thumbnail matches the title perfectly.

[–] MashedTech@lemmy.world 6 points 6 days ago

Who is paying her?

[–] flamekhan@lemmy.world 4 points 6 days ago

People who say these things clearly have no experience. I spent an hour today trying to get one of the better programming models to parse a response. I gave it the inputs and expected outputs and it COULD NOT derive functional code until I told it what the implementation needed to be. If it isn't a cookie-cutter problem, it just can't predict its way through it.

[–] Itdidnttrickledown@lemmy.world 5 points 6 days ago (1 children)

AI might pull her head out of her ass... eventually.

[–] Reginald_T_Biter@lemmy.world 1 points 6 days ago

At this point we need to pull their heads out of our asses

[–] fu@libranet.de 1 points 6 days ago

@cm0002 #nowplaying Absolutely Right - Five Man Electrical Band (Absolutely Right: The Best of Five Man Electrical Band)

[–] TheReturnOfPEB@reddthat.com 177 points 1 week ago (5 children)

Couldn't AI, then, also break code faster than we could fix it?

[–] ronigami@lemmy.world 2 points 5 days ago

It’s like the “bla bla bla, blablabla… therefore God exists”

Except for CEOs it's "blablablabla, therefore we can fire all our workers"

Same shit different day

[–] NuXCOM_90Percent@lemmy.zip 45 points 1 week ago* (last edited 1 week ago)

I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.

Intrusion Countermeasures Electronics. The idea that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it but it is a very system/mental intensive process and the stronger the ICE, the stronger the tools need to be.

In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor... if your models are good (big ol' "if" that).

Which then gets into the idea of Black ICE that is actively antagonistic towards those who are detected as attempting to bypass it. In the books it would fry brains. In the modern day it isn't overly dissimilar from how so many VPN controlled IPs are just outright blocked from services and there is always the risk of getting banned because your wifi coffee maker is part of a botnet.

But it is also not hard to imagine a world where a counter-DDOS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.

[–] PattyMcB@lemmy.world 19 points 1 week ago (3 children)

AI WRITES broken code. Exploiting it is even easier.

[–] 30p87@feddit.org 96 points 1 week ago* (last edited 1 week ago) (5 children)

Genius strategy:

  • Replace Juniors
  • Old nerds knowing stuff die out
  • Now nobody knows anything about programming and security
  • Everything's now a battle between LLMs
[–] jaybone@lemmy.zip 19 points 1 week ago (1 children)

I’ve already had to reverse engineer shitty old spaghetti code written by people who didn’t know what they were doing, so I could fix obscure bugs.

I can wait until I have to do the same thing for AI-generated code.

[–] bleistift2@sopuli.xyz 70 points 1 week ago (2 children)
[–] thesmokingman@programming.dev 4 points 6 days ago

The current administration believes the same stuff. She left with the admin change yet agrees with things like the current admin’s approach to AI regulation.

[–] Susaga@sh.itjust.works 22 points 1 week ago (2 children)

I wonder why they don't work there anymore...

[–] itkovian@lemmy.world 62 points 1 week ago

Execs and managers showing Dunning-Kruger in full effect.

[–] DupaCycki@lemmy.world 45 points 1 week ago (1 children)

At this point, they're just rage baiting and saying random shit to squeeze that bubble before it bursts.

[–] Hudell@lemmy.dbzer0.com 4 points 6 days ago

They are just afraid that a competitor may find some way of actually benefiting from AI before they do.

[–] HazardousBanjo@lemmy.world 34 points 1 week ago

As usual, the biggest advocates for AI are the ones who understand its limitations the least.

[–] violentfart@lemmy.world 32 points 1 week ago
[–] Mikina@programming.dev 32 points 1 week ago* (last edited 1 week ago) (2 children)

I worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god, this is so horrifying to read.

The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to get from junior to Red Team Lead, in a corporation with hundreds of employees, in a span of 4 years is already fucked up, solely because Red Teaming was starting to be a buzzword, and I had passion for the field and for Shadowrun while also being good at presentations that customers liked.

When I got into the team, the "in-house custom malware" was a web server with a script that polls it for commands to run with cmd.exe. It had a pretty involved custom obfuscation, but it took me like two engagements, and the guy responsible for it leaving, before I even found out (during my own research) that WinAPI is a thing, and that you actually should run stuff from memory and why. And I was just a junior at the time, and this "revelation" eventually got me an unofficial RT Lead position, with 2 MDs (man-days) per month for learning and internal development; the rest had to be on engagements.

And even then, we were able to do kind of OK in engagements, because the customers didn't know and also didn't care. I was always able to come up with "lessons learned", and we always found some glaring security policy issues, even with limited tools, but the thing is - they still did not care. We reported something, and two years later they still had the same brute-forceable Kerberos tickets. It already felt like the industry is just a scam done for appearances, and if it's now just AIs talking to AIs then, well, I don't think much would change.

But it sucks. I love offensive security - it was a really interesting few years of my career - but it was so sad to do if you wanted to do it well :(

load more comments (2 replies)
[–] onlinepersona@programming.dev 31 points 1 week ago (4 children)

I tried using AI in my Rust project and gave up on letting it write code. It does quite alright in Python, but Rust is still too niche for it. Imagine trying to write Zig or Haskell - it would make a terrible mess of it.

Security is an afterthought in 99.99% of code. AI barely has anything to learn from.

[–] funkless_eck@sh.itjust.works 4 points 6 days ago

Even in Python you have to keep it siloed. You have to drip-feed it pieces, because if you give it the whole script it'll eat comments or straight-up chop out pieces, so you end up with something like

    def myFunction():
        # ...start of your function here...

replacing actual code.

[–] wiegell@feddit.dk 2 points 6 days ago (1 children)

Mitchell Hashimoto writes a lot of Zig with AI (and this interview is almost a year old), see: https://www.youtube.com/watch?v=YQnz7L6x068&t=490s How long since you last tried the tools? I think there has been some pretty astounding progress during the last couple of months. Until recently I did not use it daily, but now I just can't ignore the efficiency boost it gives me. There are definitely security concerns, and at this point you should not trust code that you do not read/understand, but tbh I'm starting to believe that AI might (at least in the short term) free up resources to patch stuff and implement security features that otherwise weren't prioritised due to the focus on feature development. What it does to the IT sector in the long run - who knows...

[–] onlinepersona@programming.dev 1 points 5 days ago (1 children)

That video showed him saying that it's good for autocomplete. But speaking from experience testing it on Rust, Python, JS, HTML and CSS, it performed the worst on Rust. It wrote tests well, but sucked at features or refactoring. Whether the problem is between the chair and the screen, I don't know.

Whether AI will be able to write secure code, I dunno - I haven't tried. You could put it into the rules to consider security and add security-related tests, or add an adversarial agent that tries to find exploitable flaws in the code. That could probably do more than a developer who has no time assigned for testing, much less security.
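
For what it's worth, here's the sort of check I'd imagine such a rule or adversarial agent insisting on - a self-contained toy where every name is invented for illustration, not any real project's API. A file-serving helper refuses anything that resolves outside its base directory, and a few classic traversal payloads are thrown at it:

    from pathlib import Path

    BASE = Path("/srv/app/public")  # hypothetical web root

    def serve_file(name: str) -> Path:
        # Resolve the requested path and refuse anything escaping BASE.
        candidate = (BASE / name).resolve()
        if not candidate.is_relative_to(BASE):  # Python 3.9+
            raise PermissionError(f"refusing to serve {name!r}")
        return candidate

    # The adversarial part: traversal payloads that must all be rejected.
    for evil in ("../../etc/passwd", "/etc/passwd", "a/../../../etc/passwd"):
        try:
            serve_file(evil)
            print("MISSED:", evil)
        except PermissionError:
            print("blocked:", evil)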

What it does to the IT sector in the long run - who knows…

Agreed. Things are moving so quickly, it's impossible to predict. There are lots of people on LinkedIn screaming about the obsolescence of humans or making other bold claims, but to me they're like drunk fortune tellers: tell enough fortunes and one is bound to be right.

[–] wiegell@feddit.dk 2 points 5 days ago

My naive hope is that local models, or maybe workplace-distributed clusters, catch up and the cloud-based bubble bursts. I'm under the impression that, at the moment, a big part of whether a tool works well or not is how well all the software around the actual LLM is constructed. E.g. for discovery, being able to quickly ingest a URL and access a web index is a big strength of the cloud-based providers right now. And for coding, it has a lot to do with quickly searching and finding the relevant parts of the codebase and evaluating whether the LLM has all the required information to correctly perform a task.

[–] krooklochurm@lemmy.ca 35 points 1 week ago (3 children)

If you're using Hannah Montana Linux you can just open a terminal and type "write me ____ in the language ____" and the Hannai Montanai will produce perfectly working code every time.

[–] deadbeef79000@lemmy.nz 29 points 1 week ago (1 children)

Ha ha ha ha ha!

Oh wait, you're serious. Let me laugh even harder.

HA HA HA HA HA!

[–] Hudell@lemmy.dbzer0.com 4 points 6 days ago

But it's true. Security teams will be pointless once things become completely unsecurable.

[–] melfie@lemy.lol 20 points 1 week ago
[–] rozodru@pie.andmc.ca 20 points 1 week ago

Not with any of the current models; none of them are concerned with security or scaling.
