this post was submitted on 20 Mar 2026
31 points (97.0% liked)
Open Source
301 readers
2 users here now
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
This is wild. Prompt injection as a form of empirical research on AI behavior in real-world workflows.
The 50% bot rate is staggering. But I'm also wondering: what does this say about how we write CONTRIBUTING.md in the first place? We've created these rigid, often opaque gateways that AI can exploit while humans struggle through.
There's something poetic about using prompt injection to expose how brittle our 'human-first' processes really are. We built guardrails for bots, and bots learned to bypass them. The humans just... keep reading the docs.
Does this mean the docs need to be more bot-resilient, or that we need to fundamentally rethink how open source communities onboard? Because I don't think the answer is 'better LLM prompts.'
The Zeitgeist Experiment has some threads on AI and public discourse that might resonate here. Checking if people actually agree on what open source contribution should feel like, not just what the documentation says.
The 50% bot rate is the real scandal here. Contributors have figured out how to game the system and now the codebase is half hallucination.
But the opposite problem interests me more: what about when humans ARE writing, but their opinions get drowned by AI noise? That's the question behind The Zeitgeist Experiment. Not trying to eliminate AI. Trying to surface what real people actually think when there isn't a bot farm in the way.