Happy to answer questions about the setup.
Tell me about the hardware, please and thank you.
I do something similar with the base-model M4 Mac mini. It's my inference box right now: it handles Immich ML and PhotoPrism AI, and runs Ollama behind a small web app I call to summarize things. Its summaries are shit. The bigger the model, the more it hallucinates, so I settle for 1B and 4th-grade responses.
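For anyone curious, a summarizer call like that can be tiny. Here's a minimal sketch against Ollama's local HTTP API (default port 11434); the model tag and prompt wording are just placeholders, not what the commenter actually runs:

```python
# Minimal summarizer sketch against a local Ollama instance.
# Assumes Ollama is running on its default port and the model
# tag below has already been pulled (it's only an example).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize(text: str, model: str = "llama3.2:1b") -> str:
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "prompt": f"Summarize the following in three sentences:\n\n{text}",
        "stream": False,  # return one JSON object instead of a stream
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(summarize("Paste the article text here."))
```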
Piggybacking too as I am considering the same. Please OP and thank you.
And what model class are you using? Lightweight (~2B), reasonable (~10B), or 32B and above?
Do they load fast?
I had a look at NetworkChuck's setup and don't think I can afford an overpowered rig in this economy. Depending on the rig, I may have to wait >20s for a prompt answer.
Thank you again!
I was playing with Ministral 3B on a 3060. It loads pretty quickly, and it starts responding nearly instantly once the model is loaded, but for long responses (~5 paragraphs) it may take 15-20 seconds for the whole thing.
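If you want actual numbers instead of a stopwatch, Ollama's non-streaming response includes timing fields you can read directly. A rough sketch, assuming the documented `load_duration` / `total_duration` / `eval_count` / `eval_duration` fields (all in nanoseconds) and an example model tag:

```python
# Rough benchmark sketch: report load time, total time, and tokens/sec
# from the timing fields Ollama returns on /api/generate.
import requests

def benchmark(prompt: str, model: str = "llama3.2:1b") -> None:
    r = requests.post("http://localhost:11434/api/generate", json={
        "model": model, "prompt": prompt, "stream": False,
    }, timeout=300)
    r.raise_for_status()
    d = r.json()
    ns = 1e9  # durations are reported in nanoseconds
    print(f"load:  {d.get('load_duration', 0) / ns:.1f}s")
    print(f"total: {d['total_duration'] / ns:.1f}s")
    if d.get("eval_duration"):
        print(f"speed: {d['eval_count'] / (d['eval_duration'] / ns):.1f} tok/s")

benchmark("Write five paragraphs about self-hosting.")
```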
Cries in 1070
I run LLMs on a 780M; you'll be fine. I get pretty close to 10 tokens a second even for larger 20B+ models.
I'd still give it a shot. A quick check of benchmarks suggests it's not that much slower. I don't know if that extends to ML computation though.
I really like n8n. It appeals to my visual sense, which makes up for my lack of deep programming experience. I don't really use the AI side of it, not because I have some agenda against AI, but because my equipment isn't good enough to run AI efficiently. I use it for a lot of automation around the lab.
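As one example of that kind of lab automation: if a workflow starts with a Webhook trigger node, any script can kick it off with a plain POST. A sketch, assuming a self-hosted n8n on its default port 5678; the webhook path and payload here are made up, use whatever your Webhook node actually shows:

```python
# Hedged sketch: trigger an n8n workflow via its Webhook node.
# The path "lab-automation" is hypothetical; copy the real URL
# from the Webhook node in your own workflow.
import requests

N8N_WEBHOOK = "http://localhost:5678/webhook/lab-automation"

def trigger(payload: dict) -> None:
    r = requests.post(N8N_WEBHOOK, json=payload, timeout=30)
    r.raise_for_status()
    print("workflow started:", r.status_code)

trigger({"event": "backup-finished", "host": "nas-01"})
```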
Has anyone tried ActivePieces? How does it compare?
Briefly. I didn't like it as much as I like n8n, though perhaps it just wasn't suited to my use case. I hear a lot of good things about ActivePieces, though. Give it a spin and see if it gels with your flow. From what I understand, both can accomplish about the same things; I think ActivePieces is geared more toward cloud deployments, whereas n8n keeps things local.
Ollama has a long history of exploits. Please don't feed it anything that comes from outside.
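At minimum it's worth confirming the API isn't listening on anything but loopback. A quick-and-dirty sketch (a heuristic, not a security audit), assuming Ollama's default port 11434:

```python
# Rough check: try to reach Ollama's default port on every
# non-loopback address this host resolves to. If anything prints
# EXPOSED, the API is reachable from the network.
import socket

def reachable(host: str, port: int = 11434) -> bool:
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

hostname = socket.gethostname()
for addr in {info[4][0] for info in socket.getaddrinfo(hostname, None)}:
    if not addr.startswith("127.") and addr != "::1":
        print(f"{addr}: {'EXPOSED' if reachable(addr) else 'closed'}")
```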