this post was submitted on 01 Nov 2025

Self-hosting

Hosting your own services. Preferably at home and on low-power or shared hardware.

What's going on on your servers? Smooth operations or putting out fires?

I got some tinkering time recently and migrated most of my Docker services to Komodo/Forgejo. I've already merged some Renovate PRs to update my containers, which feels really smooth.

I have to restructure some of the remaining services before migrating them, and after that I want to automate config backups for my OPNsense and TrueNAS machines.

[–] imetators@lemmy.dbzer0.com 2 points 1 day ago

My milestones this week.

Proxy Pi: It may be that I'm dumb, but I couldn't set up WireGuard on my Pi 3B+ and Beelink S12 Pro. So I opted for Tailscale instead and set up a Dante SOCKS5 proxy to use with my qBit. It works, and torrents won't use the S12's local IP when the Pi 3B+ is offline. Exactly what I want! I'll install the Pi 3B+ in my homeland, where torrenting isn't punished yet, and sail the high seas.
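
For reference, the relevant part of my danted.conf ended up looking roughly like this (a sketch; the interface name and ranges are assumptions about a typical Tailscale setup):

```
# listen for SOCKS clients, egress through the tunnel
logoutput: syslog
internal: 0.0.0.0 port = 1080
external: tailscale0                    # torrents exit via the tunnel, not the LAN
clientmethod: none
socksmethod: none
client pass {
    from: 100.64.0.0/10 to: 0.0.0.0/0   # Tailscale's CGNAT range
}
socks pass {
    from: 100.64.0.0/10 to: 0.0.0.0/0
}
```

The external: tailscale0 line is the part doing the work: with qBit pointed only at the proxy, nothing falls back to the local IP when the tunnel is down.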

The Arrs: I have finally managed to set up Radarr, Sonarr, and Prowlarr to work with Jellyseerr. I'll add Bazarr to the stack later. It took me a while to figure out how to set priorities for language preferences, but it seems to work now. Next step: try to somehow get into German private trackers. Germany is where I live now, so I need media with German audio to speed up learning the language.

Immich: Somehow I had an issue with Immich where my context search would fail. It took me half a day to find out that the problem was the name of the machine-learning container. I swear I hadn't changed anything, but somehow Immich had the ML URL set to http://immich-machine-learning:3003/ and it worked fine up until the move to 2.0 stable. Apparently my container is called immich_machine_learning, and that was the issue. Changing the ML URL in settings to http://immich_machine_learning:3003 fixed it. Why it changed is a mystery.
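
For anyone hitting the same thing: the ML URL has to match the container's actual name on the Docker network. A hypothetical compose excerpt:

```yaml
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    # container_name overrides the default name; whichever name the
    # container actually has is what the ML URL must resolve to
    container_name: immich_machine_learning
```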

I am pretty happy with my setup so far.

[–] keepthepace@slrpnk.net 8 points 2 days ago

Since I discovered that the Freebox (the set-top box of a common ISP in France, with about 8 million installed) is welcoming to users running their own Docker images, I wonder if I shouldn't write a guide on how to install alternatives to Google services on it, and have a huge local impact...

[–] standarduser@lemmy.dbzer0.com 4 points 1 day ago (4 children)

I just purchased my first domain, so now I get to learn how to use it with a reverse proxy and set up the routing. DNS is already giving me issues on my home network, so this'll be fun.

[–] imetators@lemmy.dbzer0.com 3 points 1 day ago

Small pro tip: if you're going to use Nginx Proxy Manager, it has a built-in SSL cert generator that also auto-renews certs before they expire. Many people say that for security reasons you should open only port 443 and close port 80, but certs won't renew if port 80 is closed, since Let's Encrypt's HTTP-01 challenge comes in over it. Not sure whether other reverse proxy managers work the same way, but NPM 100% does.
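
If you're wiring it up in compose, the usual port mapping looks something like this (a sketch; these are the defaults as far as I know):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"    # keep this open: Let's Encrypt HTTP-01 renewals arrive here
      - "443:443"  # the actual proxied HTTPS traffic
      - "81:81"    # admin UI, best kept LAN-only
```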

[–] SolarpunkSoul@slrpnk.net 3 points 1 day ago

Good luck! It took me a good while to figure out how that all works

[–] solbear@slrpnk.net 2 points 1 day ago (1 children)

Nice! I use Nginx Proxy Manager and found the process surprisingly simple, but if your public IP changes often, it could be annoying unless you have some way to automate updates of your DNS records. Let us know if you run into any issues!
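
If your DNS host has an API, automating the updates is a short cron script. A rough sketch against Cloudflare's API (the zone/record IDs, token, and hostname are placeholders):

```sh
#!/bin/sh
# update an A record whenever the public IP changes
IP=$(curl -fsS https://ifconfig.me)
curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"
```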

[–] standarduser@lemmy.dbzer0.com 1 points 12 hours ago

Thank you! I appreciate the offer. I think I'm headed in the right direction with it so far; I'm just running into a problem with getting my site to phone back to my device. It's odd that it isn't getting through, even with ports forwarded and reserved.

[–] lefaucet@slrpnk.net 2 points 1 day ago (1 children)

Is there something in particular tripping you up?

[–] standarduser@lemmy.dbzer0.com 1 points 12 hours ago

I do seem to be running into one issue so far. I've got my * A record in the live DNS pointing to my public IP, and I do have ports open and reserved, but I can't seem to get it to work properly.
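
In case anyone can spot what I'm missing, this is roughly what I've been checking with (example domain, obviously):

```sh
dig +short test.example.com A        # does the wildcard resolve from outside?
curl -vk https://test.example.com/   # and does anything answer on 443?
```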

[–] F04118F@feddit.nl 2 points 2 days ago (1 children)

I'm hosting FoundryVTT on a k8s cluster. I'm using Authelia + lldap so that only authenticated users are passed on to that behemoth of a NodeJS app, which is undoubtedly full of vulnerabilities.

I have Authelia set up to enforce 2FA for any request from outside my users' home networks. Or so I thought, but one of my players kept getting asked for 2FA.

Turns out I forgot about IPv6. He connects over IPv6 by default.
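
The fix amounted to listing an IPv6 prefix next to the IPv4 range in the access control rule. Roughly this (addresses are placeholders):

```yaml
access_control:
  default_policy: deny
  rules:
    - domain: foundry.example.com
      policy: one_factor
      networks:
        - 192.168.1.0/24       # the IPv4 home network I had remembered
        - 2001:db8:1234::/48   # the IPv6 prefix I had forgotten
    - domain: foundry.example.com
      policy: two_factor       # everyone else gets the 2FA prompt
```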

[–] SolarpunkSoul@slrpnk.net 2 points 1 day ago (1 children)

I'm also running foundryvtt and have the basics of network security down but not much more. Is there anything I should be particularly wary of if I'm hosting it via a cloudflare tunnel for my group?

[–] F04118F@feddit.nl 1 points 1 day ago* (last edited 1 day ago)

You probably have your network locked down much better than me. That should work too.

For me, it was easier to set up Authelia to limit access. I don't trust the "authorization" portal in Foundry, so I set up a real authentication proxy in front of it.
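
In nginx terms the idea is roughly this (a sketch, not my exact k8s config; upstream names are placeholders):

```nginx
# every request must pass Authelia's verify endpoint before reaching Foundry
location /authelia {
    internal;
    proxy_pass http://authelia:9091/api/verify;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
}
location / {
    auth_request /authelia;
    proxy_pass http://foundryvtt:30000;   # Foundry's default port
}
```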

As a dev, I've seen the developer culture and norms of different language ecosystems, and NodeJS stands out to me for pulling in (other NodeJS) dependencies for even the smallest things.

Left-pad is the best illustration of this dependency culture. This also means vulnerabilities spread across the entire npm landscape instantly, since everything depends on almost everything else.

[–] Gobbel2000@programming.dev 5 points 2 days ago (1 children)

Did an oopsie. I never realized that after upgrading the OS, my certbot renewal service for the HTTPS certificate always failed, so now I had an expired certificate. At least it was an easy fix: reinstalling certbot.
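
For anyone wanting to avoid the same surprise, two quick checks after an OS upgrade (assuming the systemd timer setup):

```sh
systemctl list-timers | grep certbot   # is the renewal timer still scheduled?
certbot renew --dry-run                # does a renewal actually go through?
```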

[–] tofu@lemmy.nocturnal.garden 1 points 2 days ago (1 children)

This is the second time someone has mentioned issues with certbot after an update, interesting. I'm glad I have my certificates monitored so I get alerted when the remaining validity drops below 7 days.

[–] lefaucet@slrpnk.net 2 points 1 day ago (1 children)

Good idea, I should do this too

[–] tofu@lemmy.nocturnal.garden 1 points 1 day ago

Blackbox Exporter is nice if you have Prometheus running already; not sure what else people are using.
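
With Blackbox Exporter probing your HTTPS endpoints, the alert is a one-liner on its TLS expiry metric. Roughly (rule and label names are just examples):

```yaml
groups:
  - name: certificates
    rules:
      - alert: CertExpiringSoon
        expr: probe_ssl_earliest_cert_expiry - time() < 7 * 86400   # less than 7 days left
        for: 1h
        labels:
          severity: warning
```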

[–] alzymologist@sopuli.xyz 10 points 3 days ago

Certbot complained about an invalid file structure... after 18 successful renewals over the years. Seriously, those guys should put a bit more care into updating stuff. Of course, the fix was trivial.

[–] dotslashme 8 points 3 days ago

Sunday is upgrade day, meaning I update the OS on my servers and then update all my Helm charts.
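
In practice the pass looks something like this (assuming Debian-ish hosts; the release and chart names are just examples):

```sh
sudo apt update && sudo apt full-upgrade -y   # OS packages first
helm repo update                              # refresh the chart indexes
helm upgrade monitoring prometheus-community/kube-prometheus-stack -n monitoring
```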

[–] ccryx@discuss.tchncs.de 6 points 3 days ago

I've finally pinned down my backup automation:

  • All my services are in podman containers/pods managed by systemd.
  • All services are PartOf= a custom containers.target.
  • All data is stored on btrfs subvolumes.
  • I created a systemd service that Conflicts=containers.target and creates read-only snapshots of the relevant subvolumes.
  • That service Wants=borgmatic.service, which creates a borg backup of the snapshots on a removable drive. It also starts containers.target on success or failure, since the containers no longer need to be stopped at that point.
  • After the borg backup is done, the repository gets rclone-synced to S3-compatible storage.
  • This happens daily, though I might put the S3 sync on a different schedule, depending on how much bandwidth subsequent syncs consume.

What I'm not super happy about is starting containers.target via the unit's OnSuccess= mechanism, but I couldn't find an elegant way to stop the target while the snapshots were being created and then restart it through the other dependency mechanisms.
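
For the curious, the snapshot unit boils down to something like this (paths simplified; make-snapshots.sh is a stand-in for the actual snapshot commands):

```ini
[Unit]
Description=Read-only btrfs snapshots of service data
Conflicts=containers.target      # starting this stops all the containers
Wants=borgmatic.service          # kicks off the borg backup next
OnSuccess=containers.target      # bring the containers back up either way
OnFailure=containers.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/make-snapshots.sh
```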

I also realize it's a bit fragile, since subsequent backup steps run even if previous steps fail. But in the worst case that should just lead to either no data being written (if the mount is missing) or the same data being backed up twice (not a problem thanks to deduplication).

[–] Pencilnoob@lemmy.world 6 points 3 days ago

I've been hosting Home Assistant for about nine months now, very smoothly. So far the only outage was when I moved the server into a different room last week because it was cluttering up my office. Feels good to have it tucked away in a closet. I've got my thermostat and all my upstairs lights running through it. I keep trying to get these moisture sensors working for my plants, but they just keep losing signal or something; rtl_433 stops seeing them while detecting every other damn device on the whole block. I've got 500+ entities that aren't my three sensors lol. Once I figure that out, my Plant Cards dashboard will work again, which is super cool.

Other than that, I've been hosting Jellyfin for home media and digging it. I've had much less success with the 'arr suite: it works and stays up, but it really struggles to automatically find the weird-ass old niche media I'm looking for. I generally have to handle that part manually, but at least it serves as a wishlist of what I'm after, so I can use it to keep track.

All in all, these two servers (Home Assistant OS on an old laptop; Jellyfin and the arrs on my former Ubuntu desktop) have been great and just work, really without any issues.

Oh, and last week when I moved them into the closet they got assigned new IP addresses, so I figured out how to lock those in so my clients and bookmarks still work.

[–] haulyard@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

I run most of my containers on a Synology 1621+: Immich, Paperless, a Grafana-based monitoring stack, etc. I upgraded the memory from 8GB to 32GB and it's a night-and-day difference, enough that I'm probably not going to move forward with adding NVMe storage for a cache. Wish I'd done it sooner.

I added donetick to the collection recently, and the kids have really latched onto the points system; it's got them more engaged with helping around the house. Note that if you self-host it, the password reset function doesn't work: you have to update the hash in the database directly. Not a big deal, but it really shouldn't require that level of effort.

I use Scrypted to pipe UniFi PoE cameras into HomeKit. Not really a fire, but I'm having issues with notifications taking longer to arrive than they used to.

And lastly, a very, very different turn of events for me. I'm not a developer, but I recently ran an experiment to see what AI could do to help me create a web app for scanning all the physical books we have: a catalog of sorts, so we can look up what we own by genre, bookshelf location, etc. 100% vibe coding. It took a few hours of back and forth, but we have something working. Not sure I'll ever let the code see the light of day outside my network, but it did help me learn a tiny bit about coding.

[–] sunoc@sh.itjust.works 4 points 3 days ago (2 children)

Sunday

What's good, Kiribati? 🇰🇮

I made some effort recently to try to set up a K3s cluster with Flux, but my bunch of RPis just isn't powerful enough for that (I never managed to get Longhorn deployed). I'm moving to a more reasonable Docker Swarm setup with Ansible.
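
The Swarm side is refreshingly small in Ansible; a sketch using the community.docker collection:

```yaml
- name: Initialise the swarm on the first manager
  community.docker.docker_swarm:
    state: present
    advertise_addr: "{{ ansible_default_ipv4.address }}"
```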

[–] g5pw@feddit.it 2 points 1 day ago

By the way, take a look at dockform for Docker deploys; it's something I'm keeping an eye on since I use sops to encrypt secrets.

[–] tofu@lemmy.nocturnal.garden 5 points 3 days ago (1 children)

Uh, yeah, that's right, I'm in Kiribati and absolutely didn't confuse the weekdays 🫣

Bit disappointing it doesn't seem to run on your Raspis; I thought that's basically what k3s is for? Is Longhorn the problem, or which component exactly? I figure that if you're going to switch to Swarm, the payload isn't the issue.

[–] sunoc@sh.itjust.works 3 points 3 days ago

I noticed because I live very far east, and in general I'm off by one day in the other direction x)

I had two problems with my GitOps setup:

  • Longhorn would fail to deploy; container creation would time out and crash on repeat for hours, for unknown reasons.
  • I never managed to get a working ingress, neither with the built-in Traefik nor by adding it afterwards.

Probably a skill issue in both cases, but the trial and error was slow and annoying, so I figured I would just upgrade my Ansible setup instead.

I bought a second USB SSD which has now become the second backup SSD. I ended up skipping my switch to Podman because I got invested in writing another script.

I'm not interested in having my backup drives automatically decrypt and mount at startup, but that's all the guides I could find covered. I still want to type my password manually, and I wanted an easier way to handle that.

I ended up writing this script, which turned the 4 lines of code I was using before into a 400+ line single-file script.

Once I pair it with my rsync script, I'll be able to remotely, automatically, and interactively decrypt, mount, update my backup, unmount, and re-encrypt my USB SSD. The script also has checks to make sure the mount directory is ready for use, and it won't send anything with rsync if the encrypted SSD isn't mounted. I just finished writing the script, and now I have to integrate it into my systems.
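
The core of the combined flow is still essentially those original four lines, roughly (device and paths assumed):

```sh
cryptsetup open /dev/sdb1 backup     # prompts for the passphrase interactively
mount /dev/mapper/backup /mnt/backup
mountpoint -q /mnt/backup && rsync -a --delete /srv/data/ /mnt/backup/
umount /mnt/backup && cryptsetup close backup
```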

I was originally going to add the second backup to my local-only network Pi server, but I think I'll add it to my web-facing Pi server instead so I can access it remotely. I'd feel a lot more comfortable knowing the data on there isn't easily accessible, since it doesn't auto-mount.

Other than that, things are boring and boring is good.