This is an automated archive made by the Lemmit Bot.
The original was posted on /r/selfhosted by /u/Consistent_Equal5327 on 2025-06-15 17:06:35+00:00.
Hey everyone,
Like many of you, I love self-hosting to keep control over my data. I started using LLM APIs for a few projects, but I was really uncomfortable with the idea of sending potentially sensitive user data (or my own secrets) to a third-party service.
I wanted a kill switch: something I could run on my own server to inspect and sanitize data before it ever leaves my network.
So I built Trylon Gateway. It's a lightweight, open-source firewall specifically for LLMs. You run it yourself, and it acts as a proxy between your application and the actual AI provider (like OpenAI).
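If your app already talks to OpenAI over plain HTTP, pointing it at the gateway is basically a base-URL change. A minimal sketch below — the `localhost:8000` endpoint and path are assumptions for illustration, not the project's documented address; check the README for the real port and route:

```python
import json
import urllib.request

# Hypothetical gateway address -- the actual port/path depend on your
# docker-compose setup; check the Trylon Gateway README.
GATEWAY_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat request aimed at the local gateway instead of
    api.openai.com; the gateway inspects it, then forwards upstream."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_request("Hello", api_key="sk-...")
print(req.full_url)  # traffic hits your own server before any provider
```

The point is that nothing in your application logic changes — only where the request goes first.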
The whole thing is packaged up in Docker and runs with a simple docker-compose up. The models it uses for checks (~1.5GB) are stored in a persistent volume, so they only need to be downloaded once.
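A compose file for this kind of setup might look roughly like the following — the image name, port, and volume path here are illustrative guesses, so use the compose file shipped with the repo rather than this sketch:

```yaml
# Hypothetical docker-compose.yml -- image, port, and paths are
# illustrative; the project ships its own compose file.
services:
  gateway:
    image: trylon/gateway:latest   # assumed image name
    ports:
      - "8000:8000"
    volumes:
      - models:/app/models         # ~1.5GB of check models, downloaded once

volumes:
  models:                          # persistent, so restarts don't re-download
```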
You can configure everything in a policies.yaml file to block profanity, specific keywords, PII, etc. You own the rules, you own the logs, you own the whole stack.
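To give a feel for what a policy file like this could contain, here is a hypothetical fragment — the key names and structure below are made up for illustration and are not the project's actual schema:

```yaml
# Hypothetical policies.yaml -- field names are illustrative only.
policies:
  - name: block-profanity
    type: profanity
    action: block
  - name: no-leaked-secrets
    type: keyword
    patterns:
      - "sk-"          # OpenAI-style API key prefix
      - "AKIA"         # AWS access key prefix
    action: block
  - name: redact-pii
    type: pii
    entities: [email, phone, ssn]
    action: redact
```

Since the file lives on your own server, changing the rules is a local edit and a restart — no third party involved.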