this post was submitted on 18 Jan 2026
3 points (100.0% liked)
blueteamsec
627 readers
37 users here now
For [Blue|Purple] Teams in Cyber Defence - covering discovery, detection, response, threat intelligence, malware, offensive tradecraft and tooling, deception, reverse engineering etc.
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
That's a pattern I see everywhere LLMs are being used: they spread.
People who are much deeper into this tell me that the LLM checking for prompt injections isn't itself vulnerable to prompt injections, but I remain unconvinced.
It's ~~turtles~~ LLMs all the way down