Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub post here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
You can probably run a 7b LLM comfortably in system RAM, maybe one of the smaller 13b ones.
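The "fits in system RAM" claim checks out with some back-of-the-envelope math. A rough sketch, assuming 4-bit quantization and a small allowance for context overhead (the overhead figure is a ballpark assumption, not a measured value):

```python
# Rough memory estimate for a quantized model:
# params * bits-per-weight / 8 bytes, plus overhead for context/KV cache.
def model_ram_gb(n_params_billion, bits_per_weight=4, overhead_gb=1.0):
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

print(round(model_ram_gb(7), 1))   # ~4.3 GB for a 4-bit 7B model
print(round(model_ram_gb(13), 1))  # ~7.1 GB for a 4-bit 13B model
```

So a 7B model at 4-bit fits comfortably in 8 GB of RAM, and a 13B model is workable with 16 GB.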
Software to use
Models
In general, you want small GGML models. https://huggingface.co/TheBloke has a lot of them. There are some SuperHOT versions of models, but I'd avoid them for now. They're trained to handle bigger context sizes, but that seems to have made them dumber too. There's a lot of new work coming out on longer context lengths, so you should probably revisit that when you need it.
Each has different strengths: Orca is supposed to be better at reasoning, Airoboros is good at longer, more story-like answers, Vicuna is a very good all-rounder, and WizardLM is also a notably good all-rounder.
For training, there are some tricks like QLoRA, but results aren't impressive from what I've read. Also, training LLMs can make it pretty difficult to get the results you want. You should probably start with just running them and get comfortable with that, maybe try few-shot prompts (prompts with a few examples of writing styles), and then go from there.
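A few-shot prompt is just your examples concatenated ahead of the real query, so the model picks up the pattern and continues it. A minimal sketch (the example pairs and the `Input:`/`Output:` format are invented for illustration, not tied to any particular model):

```python
# Build a few-shot prompt: worked examples first, then the real query,
# ending right where the model should continue.
examples = [
    ("Summarize: The server crashed after the disk filled up.",
     "Disk full caused a server crash."),
    ("Summarize: Backups now run nightly and are verified weekly.",
     "Nightly backups, weekly verification."),
]

def few_shot_prompt(examples, query):
    parts = [f"Input: {q}\nOutput: {a}" for q, a in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

print(few_shot_prompt(examples, "Summarize: The VPN drops every hour."))
```

You'd pass the resulting string as the prompt to whatever runner you're using; the model then tends to answer in the same style as the examples.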
Thank you. I did have llama.cpp in mind but didn't know where or how to start! Do these models have a limit on how much information they can ingest, and how much can they improve relative to the information fed to them?
Another thing: llama.cpp supports offloading layers to the GPU; you could try the OpenCL backend for that on non-NVIDIA GPUs. But llama.cpp can also run CPU-only at usable speed. On my system, it does about 150 ms per token on a 13B model.
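For a sense of what 150 ms per token feels like in practice, the arithmetic is simple (the 200-token response length is just an assumed example):

```python
# Convert per-token latency into throughput and total generation time.
ms_per_token = 150
tokens_per_second = 1000 / ms_per_token
response_tokens = 200  # assume a response of a few paragraphs

print(round(tokens_per_second, 1))                   # ~6.7 tokens/s
print(round(response_tokens * ms_per_token / 1000))  # ~30 s for 200 tokens
```

Roughly 6-7 tokens per second, so a paragraph-length answer streams in over half a minute or so: slow, but usable for chat.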
koboldcpp is probably the most straightforward to get running, since you don't have to compile anything, it has a simple UI for setting launch parameters, and it also has a web UI to chat with the bot. And since it uses llama.cpp, it supports everything llama.cpp does, including OpenCL (CLBlast in the launcher).
Thanks, I'll take a look