this post was submitted on 02 Aug 2025

Programming

Well, I hope you don't have any important, sensitive personal information in the cloud?

[–] tal@lemmy.today 16 points 20 hours ago (3 children)

These weren’t obscure, edge-case vulnerabilities, either. In fact, one of the most frequent issues was Cross-Site Scripting (CWE-80): AI tools failed to defend against it in 86% of relevant code samples.

So, I will readily believe that LLM-generated code has additional security issues. But given that the models are trained on human-written code, this does raise the obvious question of what percentage of human-written code properly defends against cross-site scripting attacks, a topic the article doesn't address.

[–] anton@lemmy.blahaj.zone 5 points 19 hours ago

If a system was made to show posts written by the author, and an LLM repurposes it to display untrusted user content, the same code becomes unsafe.
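The point above can be sketched with a minimal, hypothetical example (the function names and the rendering approach are illustrative, not from any real codebase): a template that interpolates content directly into HTML is fine while only the trusted author writes that content, but becomes an XSS hole (CWE-80) the moment untrusted users can supply it. Escaping the input before interpolation is the standard defense.

```python
# Hypothetical sketch: a blog renderer that was safe while only the
# author supplied `body` becomes an XSS vector once untrusted users can
# post. Names and structure are illustrative only.
from html import escape

def render_post_unsafe(body: str) -> str:
    # Fine when `body` is trusted, author-written HTML; unsafe for
    # untrusted input, since any markup in `body` reaches the browser.
    return f"<article>{body}</article>"

def render_post_safe(body: str) -> str:
    # html.escape turns <, >, &, and quotes into entities, so untrusted
    # input is rendered as inert text instead of executed as markup.
    return f"<article>{escape(body)}</article>"

malicious = "<script>steal(document.cookie)</script>"
print(render_post_unsafe(malicious))  # script tag survives intact
print(render_post_safe(malicious))    # rendered as harmless text
```

The code itself is identical in both cases except for the escaping step, which is exactly why repurposing changes its safety: the unsafe version was never a bug in its original, trusted context.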
