Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
I may have been doing something wrong, but in my experience llama.cpp with OpenCL offloading isn't much faster than CPU-only: CPU usage stays the same, with the addition of my GPU making typewriter noises.
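For anyone who wants to compare, a minimal sketch of what layer offloading looks like through the llama-cpp-python bindings (this assumes a build compiled with CLBlast/OpenCL support; the model path is a placeholder):

```python
# Sketch: offload some transformer layers to the GPU via llama-cpp-python.
# Assumes the package was installed with OpenCL support, e.g.:
#   CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=32,  # layers offloaded to the GPU; set 0 for a CPU-only baseline
    n_ctx=2048,
)

out = llm("Q: What is self-hosting? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Running the same prompt with `n_gpu_layers=0` versus a high value is the easiest way to see whether offloading is actually buying you anything.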
I have written this gist to run fastchat-t5-3b-v1.0 using Intel's IPEX, and it runs quite well. I have an A770 16GB, but it seems to use under 8GB when running in `bfloat16`. It could easily be modified to run something else, though. Or, if you want a GUI (or a nice CLI), I've added support for Intel XPUs in FastChat.
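The core of the approach is only a few lines. A rough sketch, assuming the transformers package plus an XPU-enabled build of intel_extension_for_pytorch (the prompt format here is only an approximation of FastChat's T5 template):

```python
# Rough sketch: run lmsys/fastchat-t5-3b-v1.0 on an Intel Arc GPU in bfloat16.
# Assumes an XPU-enabled intel_extension_for_pytorch install;
# importing it registers the "xpu" device with PyTorch.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

model.eval()
model = model.to("xpu")
model = ipex.optimize(model, dtype=torch.bfloat16)  # IPEX inference optimizations

# Prompt format is an approximation of FastChat's conversation template.
prompt = "### Human: What is self-hosting?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In FastChat itself the same thing is exposed through the CLI's `--device xpu` flag, so you don't have to write any of this by hand.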
Thanks, I'll take a look! A GUI is certainly very helpful :)