Don’t get me wrong—I absolutely love self-hosting. If something can be self-hosted and makes sense, I’ll run it on my home server without hesitation.
But when it comes to LLMs, I just don’t get it.
Why would anyone self-host models like Llama or Qwen (usually through tools like Ollama) when OpenAI, Google, and Anthropic offer models that are far more capable?
I get the usual arguments: privacy, customization, control over your data—all valid points. But let’s be real:
- Running a local model requires serious GPU and RAM resources just to get inferior results compared to cloud-based options (see the rough estimate after this list).
- Unless you have major infrastructure, you’re nowhere near the model sizes these big companies can run.
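To put rough numbers on that first point, here's a back-of-envelope sketch of how much memory a model's weights alone need. The model sizes and bit widths are illustrative assumptions, not tied to any specific release, and real usage adds KV-cache and runtime overhead on top:

```python
# Back-of-envelope estimate of memory needed just to hold an LLM's
# weights locally. Assumption: weights dominate memory use; KV cache,
# activations, and runtime overhead add more on top in practice.
def weight_memory_gib(params_billion: float, bits_per_weight: int = 16) -> float:
    """Approximate GiB required to store the model weights."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total / 2**30

# Illustrative parameter counts (hypothetical, for scale only).
for label, size_b in [("7B", 7), ("70B", 70), ("405B", 405)]:
    print(f"{label}: ~{weight_memory_gib(size_b):.0f} GiB at fp16, "
          f"~{weight_memory_gib(size_b, 4):.0f} GiB at 4-bit")
```

Even aggressively quantized, a 70B model (~33 GiB at 4-bit by this estimate) won't fit on a typical consumer GPU, which is exactly the gap I'm pointing at.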
So what’s the use case? When is self-hosting actually better than just using an existing provider?
Am I missing something big here?
I want to be convinced. Change my mind.