Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
I'd avoid super short USB drives if you can, as they tend to just be SD cards in disguise.
If possible, for DB stuff I would recommend using actual drives, as lots of reads and writes will very quickly wear out most removable storage devices.
Depending on what you're going to do with the cluster, 4GB of RAM per node feels rather limiting.
Anyways, as far as storage goes, I'm using 4 Compute Blades, each loaded with the 8GB RAM version of the CM4 and a 500GB Samsung PM9A1, running Talos to save a bit on that precious RAM.
Got Talos up and running with some help from Onedr0p's cluster template, which saved me a lot of time on the learning curve.
I ran this setup for years! One controller, three worker nodes, all Pi 4Bs. Someone mentioned that 4GB would be rather limiting; it certainly can be, but I never hit the RAM ceiling. For me, disk writes and CPU were the noticeable constraints. In my first iteration I was using the fastest SanDisk Extreme Pro whatever SD cards were at the time. My second iteration was running all hosts on USB-SATA enclosures, and this was a huge improvement. I really can't recommend that route enough. If you can commit to a little cable management and maybe figure out something clean for stacking or standing the enclosures, it doesn't have to look terrible.
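If you want to see the SD-card-vs-enclosure gap for yourself, fio is the proper benchmarking tool, but a rough Python sketch of fsynced small writes (the kind databases do constantly) gets the point across. The target path here is just a placeholder for a file on whichever device you're testing:

```python
import os
import time

# Quick-and-dirty fsynced-write check -- fio is the real tool, this just
# shows the order-of-magnitude gap between an SD card and a USB-SATA SSD.
# TARGET is a placeholder: point it at a file on the device under test.
TARGET = "/mnt/test/bench.tmp"
BLOCK = b"\0" * 4096   # 4 KiB blocks, roughly what database commits look like
COUNT = 1000

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.monotonic()
for _ in range(COUNT):
    os.write(fd, BLOCK)
    os.fsync(fd)       # force each block to the device before the next write
elapsed = time.monotonic() - start
os.close(fd)
os.remove(TARGET)

print(f"{COUNT} fsynced 4 KiB writes in {elapsed:.2f}s "
      f"(~{COUNT / elapsed:.0f} synced writes/s)")
```

Run it once against the SD card and once against the enclosure; the synced-writes-per-second number is the one that tracks how a database will feel.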
Regarding matching hardware, for a year I ran an old i5 Lenovo ThinkPad as just another worker node. It was fine, and it was a pretty useful experience in running a cluster with mixed architectures. The only hiccups were those that come with a headless laptop setup. Sometimes rebooting could be dicey, stuff like that.
The only databases I was running at the time were SQLite (for the various *arrs). These would corrupt every few months, but they weren't running on the SD cards; they were on SSDs mounted over NFS. So yeah, don't go running SQLite over NFS.
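If you want to check whether one of your own SQLite files has actually gone bad, the built-in integrity check is enough for a quick look. Here's a minimal Python sketch using the bundled sqlite3 module; the path is just a placeholder for wherever your *arr keeps its database:

```python
import sqlite3

# Placeholder path -- point this at whichever *arr database you want to check
# (ideally with the app stopped so nothing is writing to it).
DB_PATH = "/path/to/sonarr.db"

con = sqlite3.connect(DB_PATH)
try:
    # Returns a single ('ok',) row if the file is intact, otherwise a list
    # of corruption details.
    print(con.execute("PRAGMA integrity_check;").fetchall())

    # Shows the current journal mode. WAL helps with concurrent access on a
    # local disk, but it relies on shared-memory locking that NFS doesn't
    # provide properly, so it's not a fix for network mounts.
    print(con.execute("PRAGMA journal_mode;").fetchone())
finally:
    con.close()
```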
edit: I imagine the warning about not running databases on the cards is about prematurely wearing out the cards. Seems like there are a few Pi-oriented projects that lean on SQLite, though, so I'm not sure.
edit: Also just remembered that I experimented with running one node on one of those SSDs that are pressed into the form factor of a USB stick. Again, SanDisk Extreme Pro line, 128GB. I ended that after getting total freezes every few weeks. I can't say whether it was a faulty device or some incompatibility at play. I never did proper benchmarks of this against the UGREEN SATA USB enclosures, but it certainly did not feel any faster than the enclosures.
I've got a Latitude XT2. Some of the plastic bits aged badly and developed a sort of weird film, including the charger brick. If you've had the same problem: I scrubbed the hell out of those specific parts with Goo Gone and a microfiber cloth, and it's smooth and shiny again.