Self-Hosted Alternatives to Popular Services

218 readers
2 users here now

A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
26
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/jsiwks on 2025-07-31 18:21:59+00:00.


TL;DR: Pangolin Clients (nicknamed "Olm") are a CLI-based peer-to-peer or relay VPN solution built on a hub-and-spoke model, using Newt as the hub for secure connectivity without opening ports.

We developed Pangolin clients. They’re a simple way to use Newt as a VPN jump point into your networks. We decided to release a basic version to the community to see if it’s something others find useful. If it is, we’ll continue to refine and expand it! If not, that’s fine too. Our focus remains on making Pangolin the best self-hosted remote access tool available.

So, what are Pangolin Clients? They’re a lightweight VPN solution built on a hub-and-spoke model. Unlike mesh-based systems like Tailscale or NetBird, your Newt site acts as the hub, and the clients are the spokes. Just as Newt provides browser-based connectivity without opening ports, this provides VPN capabilities without opening ports. Right now, the clients are minimal and CLI-only, for macOS, Windows, and Linux. They’re not yet tied to users; instead, you define a client much like you define a site in Pangolin, with secret credentials.

You can grant a client access to one or more sites (enabled with the --accept-clients flag in Newt) and control which resources it can address, or allow it to access everything on the network. Data relays through Gerbil on your VPS, but with --holepunch the clients can attempt NAT hole-punching for direct connections.

Why should I use this instead of Tailscale? 

You probably shouldn't! If Tailscale works for you, then use it! It has a much nicer client and is probably just better. But if all you're doing is using it to manage your servers, maybe give clients a try!

This feature is still in its early stages, but it opens up some interesting use cases: connecting multiple networks (e.g., home, office, or cloud VPCs), using Newt as a jump box for SSH remote management or other remote access, or creating a lightweight VPN alternative for secure connectivity. We’re excited to see how the community uses it and will continue to build on this foundation if it proves valuable. Let us know your thoughts!

You can try clients right now by updating to 1.8.0! Make sure to follow the update guide because you have to update all of the components.

https://preview.redd.it/neek6li649gf1.png?width=1200&format=png&auto=webp&s=e92ede421fb8bd61b7508612df75f2b584f93614

27
 
 

The original was posted on /r/selfhosted by /u/Creisel on 2025-07-31 16:35:23+00:00.


https://blog.ui.com/article/introducing-unifi-os-server

28
 
 

The original was posted on /r/selfhosted by /u/taylorwilsdon on 2025-07-30 19:43:06+00:00.


For reasons I'll never understand, GitHub's API provides only 14 days of traffic data. Even with a paid plan, I can still only get that rolling two-week window, so I spun up a CLI tool to automatically fetch the stats and store them in a SQLite DB. One thing led to another, and before I knew it there was a UI and eventually a web service.

Curious to hear what folks think! Repo here - it has a nice little guide for super simple Google App Engine deployment if you want a free place to run it persistently, or you can host it locally as well. It creates shields.io badges with your clone stats and even has a little analytics dashboard! I'd love any feedback folks have; it's MIT-licensed, so feel free to use, abuse, and steal as you please.

29
 
 

The original was posted on /r/selfhosted by /u/bambibol on 2025-07-31 09:15:13+00:00.


Hey everyone.

After using my NAS as storage for many years, running Plex and (painstakingly, in hindsight) adding media by hand, I finally dove into the deep end of selfhosting earlier this year and I'm LOVING it. I started with the r/MediaStack stuff that seemed interesting to me, then started looking at all sorts of apps that could be relevant, from Firefly III to Home Assistant. Still the tip of the iceberg, I'm guessing.

Anyway, my question is the following: how do you all keep track of the setups you're running? I don't mean whether they're running properly (tools like Uptime Kuma or Portainer cover that), but more in the sense of: what did I do when installing this? How did I set this one up?

For example, when one of my mediastack containers needs a restart I need to do a restart of the whole stack in order to get the -arrs running through Gluetun; and when an auto-import on Firefly III didn't work I can do XYZ to do a manual one. Small things or quirks you gotta remember that might be unique for your personal setup even.

Most of these are currently fresh in my head, but the more stuff I install, the more I've got to remember; and at some point I might be busy with other things and not have time to tend to my homelab as much as I do now.

So, how do you all keep track of this info about your own homelab?

And what are the things that I definitely gotta document? At the moment it's a messy text file with entries like "run Kometa for movies with command: docker exec -it kometa python3 kometa.py --config /config/config.yml --library 'Movies'" but in all honesty, looking at that now, I'm already wondering: wait, wouldn't I have to cd into a specific folder to run this? 😅 So yeah...

Is there a nice tool for this, or does anyone have tips/tricks for me?
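For what it's worth, here's one low-tech shape this kind of documentation can take - a hypothetical per-service runbook entry built from the examples in this very post (not a recommendation of any specific tool):

```markdown
## Kometa (media metadata)
- Runs *inside* the `kometa` container, so no host-side `cd` is needed;
  `/config/config.yml` is a path inside the container, not on the host.
- Manual run for the Movies library:
  `docker exec -it kometa python3 kometa.py --config /config/config.yml --library "Movies"`
- Quirk: restarting a single mediastack container means restarting the
  whole stack so the *arrs reconnect through Gluetun.
```

One such file per service, kept next to the compose file it describes, answers both "how did I set this up?" and "what's the quirk?" in one place.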

30
 
 

The original was posted on /r/selfhosted by /u/BattermanZ on 2025-07-31 08:19:49+00:00.


Hello dear selfhosters,

I recently started my Proxmox journey and it's been a blast so far. I didn't know I would enjoy it that much. But this also means I am new to VMs and LXCs.

For the past couple of weeks, I have been exploring and brainstorming about what I would need and came up with the following plan. And I would need your help to tell me if it makes sense or if some things are missing or unnecessary/redundant.

For info, the Proxmox cluster is running on a Dell laptop 11th gen intel (i5-1145G7) with 16GB of RAM (soon to be upgraded to 64GB).

The plan:

  • LXC: Adguard home (24/7)
  • LXC: Nginx Proxy Manager (24/7)
  • VM: Windows 11 Pro, for when I need a windows machine (on demand)
  • VM: Minecraft server via PufferPanel on Debian 12 (on demand)
  • VM: Docker server Ubuntu server 24.04 running 50+ containers (24/7)
  • VM: Ollama server Debian 12 (24/7)
  • VM: Linux Mint Cinnamon as a remote computer (on demand)
  • a dedicated VM for serving static pages?

So what do you think?

Thanks!

31
 
 

The original was posted on /r/selfhosted by /u/my_name_is_ross on 2025-07-30 16:48:03+00:00.


I’m lucky that I’m not on a cgnat, and I have a static ip.

My lab is a three-server Proxmox cluster, and I’m using a UniFi fibre router.

I’ve used Cloudflare Tunnels to expose the few public services I was running, but I’ve since switched to Pangolin on a VPS - which got me thinking: why don’t I just run it locally?

I understand I’m exposing my public ip (unless I proxy it via cloudflare) but is that really a concern?

I have set pangolin up with a bouncer for traefik and I could easily setup one for UniFi too.

So, should I host pangolin locally and not bother with the newt part or am I missing some other benefit of hosting it on a VPS?

32
 
 

The original was posted on /r/selfhosted by /u/su_ble on 2025-07-30 17:36:08+00:00.


Hi, fellow selfhosters,

just released the next step on the way to crafting a nice tool: you can now block IPs directly from the report and keep them in your own blocklist until you "release" them, so you have nice control over your blocklist. It works with UFW right now; no other firewall is supported yet, but I'm working on it.

You can also get the number of times an IP was reported on AbuseIPDB if you have an API key (optional feature).

If still interested, have a look at

https://github.com/SubleXBle/Fail2Ban-Report

33
 
 

The original was posted on /r/selfhosted by /u/hhftechtips on 2025-07-30 08:01:27+00:00.


Many of us here rely on Traefik for our setups. It's a powerful and flexible reverse proxy that has simplified how we manage and expose our services. Whether you are a seasoned homelabber or just starting, you have likely appreciated its dynamic configuration and seamless integration with containerized environments.

However, as our setups grow, so does the volume of traffic and the complexity of our logs. While Traefik's built-in dashboard provides an excellent overview of your routers and services, it doesn't offer a real-time, granular view of the access logs themselves. For many of us, this means resorting to docker logs -f traefik and trying to decipher a stream of text, which can be less than ideal when you're trying to troubleshoot an issue or get a quick pulse on what's happening.

This is where a dedicated, lightweight log dashboard can make a world of difference. Today, I want to introduce a tool that I believe can benefit many of us: the Traefik Log Dashboard.

What is the Traefik Log Dashboard?

The Traefik Log Dashboard is a simple yet effective tool that provides a clean, web-based interface for your Traefik access logs. It's designed to do one thing and do it well: give you a real-time, easy-to-read view of your traffic. It consists of a backend that tails your Traefik access log file and a frontend that displays the data in a user-friendly format.

Here's what it offers:

  • Real-time Log Streaming: See requests as they happen, without needing to refresh or tail logs in your terminal.
  • Clear and Organized Interface: The dashboard presents logs in a structured table, making it easy to see key information like status codes, request methods, paths, and response times.
  • Geographical Information: It can display the country of origin for each request, which can be useful for identifying traffic patterns or potential security concerns.
  • Filtering and Searching: You can filter logs by status code, method, or search for specific requests, which is incredibly helpful for debugging.
  • Minimal Resource Footprint: It's a lightweight application that won't bog down your server.

Why is this particularly useful for Pangolin users?

For those of you who have adopted the Pangolin stack, you're already leveraging a setup that combines Traefik with WireGuard tunnels. Pangolin is a fantastic self-hosted alternative to services like Cloudflare Tunnels.

Given that Pangolin uses Traefik as its reverse proxy, reading raw logs was a mess. While Pangolin provides excellent authentication and tunneling capabilities, a dedicated log dashboard gives you insight into the traffic passing through your tunnels. It can help you:

  • Monitor the health of your services: Quickly see if any of your applications are throwing a high number of 5xx errors.
  • Identify unusual traffic patterns: A sudden spike in 404 errors or requests from a specific region can be an early indicator of a problem or a security probe.
  • Debug access issues: If a user is reporting problems accessing a service, you can easily filter for their IP address and see the full request/response cycle.
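For comparison, the raw-log version of that first check - the thing the dashboard replaces - looks roughly like this (the log lines below are synthetic and heavily abbreviated; real Traefik JSON entries carry many more fields):

```shell
# Two synthetic Traefik-style JSON access-log entries (abbreviated).
cat > /tmp/access.log <<'EOF'
{"RequestMethod":"GET","RequestPath":"/api","DownstreamStatus":502}
{"RequestMethod":"GET","RequestPath":"/","DownstreamStatus":200}
EOF

# Count 5xx responses -- the same health signal the dashboard charts.
grep -c '"DownstreamStatus":5' /tmp/access.log
```

Workable in a pinch, but exactly the kind of thing a real-time dashboard saves you from doing by hand.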

How to get started

Integrating the Traefik Log Dashboard into your setup is straightforward, especially if you're already using Docker Compose. Here’s a general overview of the steps involved:

1. Enable JSON Logging in Traefik:

The dashboard's backend requires Traefik's access logs to be in JSON format. This is a simple change to your traefik.yml or your static configuration:

accessLog:
  filePath: "/var/log/traefik/access.log"
  format: json

This tells Traefik to write its access logs to a specific file in a structured format that the dashboard can easily parse.

2. Add the Dashboard Services to your docker-compose.yml:

Next, you'll add two new services to your existing docker-compose.yml file: one for the backend and one for the frontend. Here’s a snippet of what that might look like:

  backend:
    image: ghcr.io/hhftechnology/traefik-log-dashboard-backend:1.0.2
    container_name: log-dashboard-backend
    restart: unless-stopped
    volumes:
      - ./config/traefik/logs:/logs:ro
    environment:
      - NODE_ENV=production
      - TRAEFIK_LOG_FILE=/logs/access.log

  frontend:
    image: ghcr.io/hhftechnology/traefik-log-dashboard-frontend:1.0.2
    container_name: log-dashboard-frontend
    restart: unless-stopped
    ports:
      - "3000:80"
    depends_on:
      - backend

A few things to note here:

  • The backend service mounts the directory where your Traefik access logs are stored. It's mounted as read-only (:ro) because the backend only needs to read the logs.
  • The TRAEFIK_LOG_FILE environment variable tells the backend where to find the log file inside the container.
  • The frontend service exposes the dashboard on port 3000 of your host machine.

Once you've added these services, a simple docker compose up -d will bring the dashboard online.

Github-Repo

Roadmap: tie routes to resources in Pangolin for better insight. (done in v1.0.2)

A note on security:

As with any tool that provides insight into your infrastructure, it's good practice to secure access to the dashboard. You can easily do this by putting it behind your Traefik instance and adding an authentication middleware, such as Authelia, TinyAuth, or even just basic auth. This is standard practice and a great way to ensure that only you can see your traffic logs. The Middleware Manager makes this easy.

In conclusion

For both general Traefik users and those who have embraced the Pangolin stack, the Traefik Log Dashboard is a valuable addition to your observability toolkit. It provides a simple, clean, and effective way to visualize your access logs in real-time, making it easier to monitor your services, troubleshoot issues, and gain a better understanding of your traffic.

If you've been looking for a more user-friendly way to keep an eye on your Traefik logs, I highly recommend giving this a try. It's a small change to your setup that can provide a big improvement in your day-to-day operations.

34
 
 

The original was posted on /r/selfhosted by /u/Deep-Dragonfly-3342 on 2025-07-30 01:41:58+00:00.


I have a small web server that I would like to host for free (because I won't be making any money off of it; it's a coding project for the resume). I tried hosting on Oracle Cloud, but although they claim you can get 4 OCPUs on ARM and 24 GB of RAM, every attempt to provision a machine was met with "no available resources".

This is why I must switch to other services that might offer fewer resources but more reliability. I was wondering whether AWS or any other "always free" hosting service might be better at actually provisioning a machine and having it work reliably.

The thing with AWS, though, is that I will probably need a dedicated DB service, because the allotted memory and storage are probably not enough for my Spring Boot server. So if anyone has experience with always-free instances of AWS's DB services as well, be sure to let me know!

35
 
 

The original was posted on /r/selfhosted by /u/FckngModest on 2025-07-29 19:37:35+00:00.


Context

  • I have a few Google calendars: work, private, and family.
  • Due to the security policy on my work Google account, I can see only "busy" timeslots when I subscribe to it via my personal Google account.
  • If I go on vacation and set "Out of office" in my work calendar, it screws up my personal calendar, since it shows just plain "busy" for a day/week/etc. I have to turn off my work account during a vacation and remember to turn it on again afterwards.
  • Sometimes I have duplicate events across multiple accounts.

Question

Are there any existing solutions to generate a calendar (even read-only is fine) that I can connect to my Google Calendar - one that not only merges all events from all the accounts but also lets me set rules for merging and discarding events?

For example, I could have just discarded all "Out of office" events since this is only information for my colleagues. I don't need to see it in my Unified calendar.
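Under the hood, "discarding" an event in an ICS feed just means dropping the whole VEVENT block that matches a rule. A minimal sketch of the idea on a synthetic two-event feed (illustration of the concept only, not a tool suggestion):

```shell
# Synthetic two-event ICS feed.
cat > /tmp/merged.ics <<'EOF'
BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Out of office
END:VEVENT
BEGIN:VEVENT
SUMMARY:Team sync
END:VEVENT
END:VCALENDAR
EOF

# Buffer each VEVENT block and only emit it if no discard rule matched.
awk '/^BEGIN:VEVENT/ { buf = ""; drop = 0; inev = 1 }
     inev { buf = buf $0 ORS
            if ($0 ~ /^SUMMARY:Out of office/) drop = 1
            if ($0 ~ /^END:VEVENT/) { if (!drop) printf "%s", buf; inev = 0 }
            next }
     { print }' /tmp/merged.ics
```

A real solution would layer per-source merge rules and an HTTP endpoint on top of this filtering step.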

Self-hosted, of course. :)

Clarification

  • I don't want to replace Google Calendar. I want a service that gives me a link like https://mydomain.org/calendar/<random ID>/ics which I can then import into my Google account
  • Just a mobile app and a separate WebUI aren't enough, because they won't let me see my events on my Galaxy Watch, for example, and would ignore all the other integrations I use via my Google account.

P.S.: Please avoid work-life balance advice. I have my own reason to keep these accounts joined, and I have my own agreements with my manager. Don't worry, I don't work overtime. ;)

P.P.S.: Please don't suggest workarounds. I already live with a compromise, and I'm now seeking a better and more flexible solution. :)

36
 
 

The original was posted on /r/selfhosted by /u/pipipipopopo on 2025-07-29 16:58:50+00:00.


I’m happy to announce a new version of Dockpeek 🔗 https://github.com/dockpeek/dockpeek

Since my last post here, I’ve added some new features and improvements thanks to your suggestions and ideas:

Major new additions:

  • Socket proxy support – connect securely to remote Docker hosts via socket-proxy
  • Multi Docker Hosts Support – view port mappings from multiple Docker servers in one dashboard
  • Image Update Checking – automatically detects when a newer image is available and flags it with an update indicator

What is Dockpeek?

Dockpeek is a lightweight, self-hosted Docker dashboard that allows you to view and access exposed container ports through a clean, click-to-access interface. It supports both local Docker sockets and remote hosts via socket-proxy, making it easy to keep an eye on multiple Docker environments from a single place.

It also includes image update checking, so you can quickly see if newer versions of your container images are available.

repo: https://github.com/dockpeek/dockpeek

37
 
 

The original was posted on /r/selfhosted by /u/remvze on 2025-07-29 20:22:00+00:00.


TL;DR: Live Version // Repo

Hello everyone! The creator of Moodist here.

Before getting to Haus, I want to thank you all for your tremendous support for Moodist and my other projects, it means the world to me!

Moodist started as a very simple ambient sound player. Over time, it has grown into a suite of productivity tools: from a notepad to a to-do list to a Pomodoro timer. But it has always maintained its focus on ambience, even keeping the other tools tucked away in the menu.

I've always wondered what it would look and feel like if the focus wasn't on ambient sounds. What if it were built from the ground up as an online workstation? That’s why I built Haus. It includes all the tools Moodist has, along with an ambient sound player featuring all of Moodist’s sounds.

You can open and close tools/apps, and move and resize them. Everything is stored locally in your browser; nothing ever leaves it. It's designed with simplicity and privacy in mind, much like Moodist.

You can try the live version at haus.mvze.net or check out the source code on the GitHub repo.

You can also self-host it if you’d like.

Please let me know what you think! If you enjoyed it, I'd greatly appreciate your support, especially if you could give it a star on GitHub.

38
 
 

The original was posted on /r/selfhosted by /u/Gqsmoothster on 2025-07-28 19:32:40+00:00.


Basically, my honey-do list around the homestead is too large to manage with my usual task manager. So I'd like to also put "job postings" up for my kids to do as well. I'd like to be able to post a small chore into a pool, let them assign themselves to it, and then get a reward later. I have used a million tools like Trello, OmniFocus, etc.... but I don't want to get bogged down by logins... this will be local only. It has to be lightweight and fast enough to use as I'm walking to get the mail and notice some weeds need to be pulled around the rose bushes. Or the chicken feed is getting low and needs someone to run out and refill it. Being able to snap a pic would be ideal as well.

Obviously not a comprehensive list of requirements here... I'm just thinking out loud and wondering if someone has a system in place already.

39
 
 

The original was posted on /r/selfhosted by /u/anonymous-69 on 2025-07-29 10:42:10+00:00.


Both the UK and Australia are imposing age restrictions on websites like Google. Will this affect SearXNG in any way?

40
 
 

The original was posted on /r/selfhosted by /u/WorldTraveller101 on 2025-07-29 19:06:12+00:00.


Hey self-hosters and book lovers! 👋

Since the last update, BookLore, the self-hosted library manager for PDFs, EPUBs, CBZs, and metadata nerds, has gained major new powers across organization, automation, and usability.

New Highlights:

  • 🔮 Magic Shelves: Create dynamic shelves using smart, rule-based filters that auto-update as your library changes. 📘 Learn more
  • 📥 Bookdrop: Drop files into a folder, and BookLore handles import, metadata, and notifications automatically. 📘 Guide
  • 🧠 Metadata Review: Review, edit, and approve metadata updates before applying, no more blind overwrites.
  • 📱 Mobile UI Improvements: Refined layouts for phones and tablets for smoother navigation and better accessibility.
  • 🗂️ Smarter File Handling: Move files using metadata-based patterns, with rebuilt file monitoring for accurate detection.
  • 📚 New Documentation Site: BookLore now has an official docs site for setup, features, and guides. 👉 Visit Docs
  • 💖 Now BookLore is on Open Collective: Early funds will go toward a Kobo device for sync support, server costs, and hosting the official website.

Got feedback, questions, or feature ideas?

Jump into the Discord or leave a comment, this community drives BookLore forward.

Happy reading & self-hosting! 📖

Screenshots: https://imgur.com/a/qsY86q2

41
 
 

The original was posted on /r/selfhosted by /u/VizeKarma on 2025-07-29 13:55:33+00:00.


Repo: https://github.com/LukeGus/Termix

Install Guide: https://docs.termix.site/docs

Hello! Today, I am pleased to announce the release of version 1.0 of Termix, which combines several of my tools into one. Termix is a clientless web-based server management platform with SSH terminal, tunneling, and file editing capabilities.

Features:

  • SSH Terminal Access - Full-featured terminal with split-screen support (up to 4 panels) and tab system
  • SSH Tunnel Management - Create and manage SSH tunnels with automatic reconnection and health monitoring
  • Remote Config Editor - Edit files directly on remote servers with syntax highlighting and file management
  • SSH Host Manager - Save, organize, and manage your SSH connections with tags and folders
  • User Authentication - Secure user management with admin controls
  • Modern UI - Clean interface built with React, Tailwind CSS, and the amazing Shadcn

Thanks for checking it out, and stay tuned for more updates!

42
 
 

The original was posted on /r/selfhosted by /u/sharipova on 2025-07-29 12:23:42+00:00.


Hey everyone!

Founder of anytype here - i want to share that we delivered on our long-time promise of an API.

TLDR what’s new: 

  • local API (desktop for now) to connect to external services and build your own workflows
  • MCP server that lets you connect it to LLMs
  • Also shipped a Raycast extension as an example
  • Additionally, we improved export/import to markdown - it now supports types and properties, so you can be assured your data is yours forever.

Video:

https://www.youtube.com/watch?v=_IpW-iPtbXw&t=1s

About anytype: a wiki tool to collaborate on docs, databases, and files - all local and private. Everything stays on your device: end-to-end encrypted, synced peer-to-peer, with support for group collaboration. It’s also possible to self-host for those who can set it up properly.

Try it: https://download.anytype.io/

More: https://zhanna.any.org/anytype-api-and-mcp (published with anytype)

Just as a reminder how anytype works: 

  • Local-first: all data is stored and encrypted on-device 

  • CRDT-based sync: collaboration with eventual consistency 

  • Accounts & auth via user-owned keys (device-only) 

  • Open source core (part MIT licensed, part source-available): github.com/anyproto

it's also possible to self-host anytype, and we have 800+ self-hosted networks, but it's for experienced self-hosters.

Features:

  • Docs, notes, tasks, tables, media – linked and structured 

  • Real-time collaboration (across users & devices)

  • Web publishing (from desktop)

  • Native iOS and android apps (desktop has full experience)

We open the API as the first step to enable anyone to build on top. If you have questions, feedback, ideas, I am all ears.

43
 
 

The original was posted on /r/selfhosted by /u/Big_Stingman on 2025-07-29 08:14:04+00:00.


I have been a paying customer of UptimeRobot for years. I've been paying $8 a month for about 30-35 monitors, and it has worked great for monitoring all my home lab services. I also use some other features like notifications and status pages. I got an email yesterday that my legacy plan is being "upgraded" (rather, a forced migration) and I would need to pay for their new "Team" plan at $34 to keep the same level of service. That's a 325% price increase - 4.25x the price.

They do have a "Solo" plan at $19, but that is actually less capable than my current $8 legacy plan. So I would be paying 137.5% more for worse service.

Now I have no problem paying for a service that is providing value, but these price increases are a bit ridiculous. This is for a homelab, not a company.

Anyway, I am looking at alternatives and here's what I came up with so far. If anyone has additional ideas please share!

Uptime Kuma

  • My main question is how and where to deploy this?
  • Another issue is I want to deploy version 2 (even though it's beta) because it has quite a few more features that I want. Version 1 hasn't been updated in 6 months, so I'd rather not deploy it now and then have to migrate later.
  • Right now my plan is to deploy on a digital ocean droplet for $4 (or maybe $6 depending on memory usage). This would require me to also deploy something like Caddy/Traefik/Nginx + certbot.
  • This seems like the cheapest option that allows me to deploy version 2 beta of Uptime Kuma
  • Other deployment options like pikapods don't currently support version 2.
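For reference, the droplet deployment described in the bullets can be as small as a single compose file. A hedged sketch - the image tag is a placeholder (check the Uptime Kuma releases for the current 2.x beta tag), and you'd still put Caddy/Traefik/Nginx in front of it for TLS:

```yaml
# Illustrative only; <2.x-beta-tag> is a placeholder for the current beta.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:<2.x-beta-tag>
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"        # Uptime Kuma's web UI
    volumes:
      - ./uptime-kuma-data:/app/data
```

The bind-mounted data directory is what makes later migrations (droplet to droplet, or to a managed host) a simple copy.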

It's unfortunate I have to leave UptimeRobot, but I'm not going to pay $34 for the same service I've been getting for $8. I probably would have been ok paying even $10-12, but this really just left a bad taste in my mouth. What do you guys think?

If anyone has an easier way to deploy Uptime Kuma without having to manage the underlying infrastructure, I'd be very interested in that. I want to deploy the beta though, which seems to not be available for managed services from what I can tell. Also, if there is a comparable service to Uptime Robot that doesn't charge $34, I'd also be interested in that. Thanks all!

44
 
 

The original was posted on /r/selfhosted by /u/makeshift_gray on 2025-07-28 13:41:02+00:00.


Recently I set up Karakeep to monitor an RSS feed of my reddit saved items, so whenever I save something in reddit, it's imported automatically to Karakeep.

This is a great solution I'd like to implement elsewhere, starting with FreshRSS. I'd like to be able to star something in FreshRSS and have it imported automatically to Karakeep.

My question is, can FreshRSS itself generate an RSS feed of its favorites? Or is there another approach to achieve the same thing?

45
 
 

The original was posted on /r/selfhosted by /u/bates121 on 2025-07-28 18:27:45+00:00.


I have been very jealous of all the posts of people getting free stuff, and it finally happened to me. My father-in-law is a general contractor. He is working for a company that is moving their corporate offices, and they have a ton of stuff they are just getting rid of. Yesterday he asked if I wanted anything. I said if they have any towers/servers or hard drives I will gladly take them. He just dropped off six 12 TB Seagate IronWolf drives, a box of 10 PCoIP devices (thin clients), a couple of UPSes, an Apple keyboard and mouse, and some random sticks of RAM. Now to check them and see if any are useful. https://imgur.com/a/ALUS2Rr

46
 
 

The original was posted on /r/selfhosted by /u/Aretebeliever on 2025-07-28 00:55:46+00:00.


I know it didn't work great for a long time, but I have a decent library of books/audiobooks now and was just curious if anyone has found an alternative to Readarr yet?

47
 
 

The original was posted on /r/selfhosted by /u/IT-BAER on 2025-07-28 08:12:08+00:00.


Hey r/grafana & r/selfhosted !

Since my last post about the Unbound DNS dashboard a while ago, I've been busy expanding the collection with some pretty cool additions. Thought you'd appreciate the updates!

🆕 What's New:

Glancy Dashboard

This one's my personal "Glance" replacement. It's a comprehensive "at-a-glance" or "Home" Dashboard that aggregates content from:

  • Reddit Posts from specified Subreddits
  • Twitch Channels incl. Thumbnail Preview and Top Games
  • YouTube Feeds from selected Channels
  • GitHub Releases from chosen Repositories
  • Custom Bookmarks with Icons
  • Calendar
  • Custom Search Engine

Everything's configurable within the Dashboard at the bottom!

https://preview.redd.it/t5it1yddokff1.jpg?width=1920&format=pjpg&auto=webp&s=c28b48282f77cb813d579c27045ca2d490ae03aa

Glancy-Navbar

A sleek sticky navigation panel that makes dashboard switching buttery smooth. Once you try it, you can't go back to the default Grafana navigation.

https://i.redd.it/ml2ow3ijokff1.gif

Enhanced Unbound DNS Dashboard:

https://preview.redd.it/sqmc9aepokff1.jpg?width=936&format=pjpg&auto=webp&s=c499126beddaeeb01cee15168f9415adef9bafff

GitHub: https://github.com/IT-BAER/grafana

What's Next:

This repo is constantly growing with my ideas and personal usage dashboards and panels.

Would love to hear your thoughts or see your own dashboard creations!

Feedback always welcome! ☕

Drop a ⭐ on the repo if you find it useful!

48
 
 

The original was posted on /r/selfhosted by /u/see_sharp_zeik on 2025-07-28 19:45:04+00:00.


Hey everyone! First time poster in this sub so please go easy on me!

I have been self-hosting services for a very, very long time... my first "self-hosted" application was SharePoint 2010. I have slowly been extracting myself from Microsoft stuff and have embraced FOSS. To get some of my services out of my network, I started searching around and discovered NGINX Proxy Manager, and it has been great so far.

Recently, while reading up on reverse proxies, I discovered Traefik and saw that you could just add labels to your docker containers to configure the reverse proxy, and I was floored. It's so easy to set up and add containers to the config, and I don't have to go through all my nginx entries and try to remember which ones are still active.
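For anyone who hasn't seen it, the label-based setup looks roughly like this (a minimal sketch; the hostname, service name, and certresolver are placeholders, not from the original post):

```yaml
# docker-compose.yml sketch; whoami.example.com and "letsencrypt" are placeholders
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```

Traefik watches the Docker socket, so the router appears and disappears with the container itself.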

I still had to use NPM to expose services externally, since my Traefik instance is on my docker server and serves those containers internally; any external requests come in to the NPM server and are forwarded to the right internal URL.

Well, as I was perusing the Traefik docs I discovered that you can also use an http api endpoint to get routing config data from and I can neither confirm nor deny that something happened in my pants when I discovered that.

Over the last couple of days I searched for solutions that implemented this and met my needs, and I couldn't find any... so I made one. A small service that reads Traefik labels, plus its own configuration through labels, and makes it all available at a Traefik-friendly JSON endpoint.
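For reference, pointing Traefik at an HTTP provider is only a couple of lines in the static configuration (a sketch; the endpoint URL here is a placeholder for wherever such a service is hosted):

```yaml
# traefik.yml (static configuration); the endpoint URL is a placeholder
providers:
  http:
    endpoint: "http://label-reader:8080/api/traefik"
    pollInterval: "15s"
```

Traefik polls that URL on the given interval and applies whatever dynamic configuration the JSON describes.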

If it might meet your needs then feel free to check it out; I have published it under the Apache 2.0 license.

https://github.com/zeiktuvai/ICOM.Docker.Utils

 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Same_Detective_7433 on 2025-07-28 00:27:47+00:00.


Too many people still seem to think it is hard to get incoming IPv4 through Starlink. And while yes, it is a pain, with almost ANY VPS ($5/month or cheaper) you can get it: complete, invisible, working with DNS and all that magic.

I will post the directions here, including config examples, so it will seem long, BUT IT IS EASY. The configs are just normal wg0.conf files you probably already have, but with forwarding rules in there. You can apply these in many different ways, but this is how I like to do it, and it works, and it is secure. (Well, as secure as sharing your crap on the internet is on any given day!)

Only three parts: wg0.conf, firewall setup, and maybe telling your home network to let the packets go somewhere, but probably not even that.

I will assume you know how to set up WireGuard; this is not to teach you that. There are many guides, or ask questions here if you need, and hopefully someone else or I will answer.

You need WireGuard on both ends: installed on the server, and SOMEWHERE in your network (a router, a machine, your choice). I will cover the VPS config to bypass CGNAT here; the internals of your network are the same, but depend on your device.

You will set the endpoint in your home network's WireGuard config to the OPEN PORT you have on your VPS, and have your network connect to it. It is exactly like any other WireGuard setup, but you make sure to specify the endpoint of your VPS on the home WireGuard, NOT the other way around. That is the CGNAT traversal magic right there, that's it. Port forwarding just makes it useful. So your home network connects out, but that establishes a tunnel that works both directions, bypassing the CGNAT.
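For completeness, the home side looks something like this (a sketch; keys and IPs are placeholders to match the VPS example below, and the Endpoint and PersistentKeepalive lines are the important bits):

```ini
# wg0.conf on the home machine (sketch; keys/IPs are placeholders)
[Interface]
PrivateKey = <Yeah, get your own>
Address = 192.168.15.11

[Peer]
PublicKey = <VPS public key>
PresharedKey = <Yeah, get your own>
# The VPS public IP and its open WireGuard port: this is what beats CGNAT
Endpoint = 200.1.1.1:51820
AllowedIPs = 192.168.15.0/24
# Keepalive so the outbound tunnel stays open through the CGNAT
PersistentKeepalive = 25
```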

Firewall rules - YOU NEED to open any ports on the VPS that you want forwarded, otherwise, it cannot receive them to forward them - obvious, right? Also the wireguard port needs to be opened. I will give examples below in the Firewall Section.

You need to enable packet forwarding on the linux VPS, which is done INSIDE the config example below.

You need to choose ports to forward, and where you forward them to, which is also INSIDE the config example below, for 80, 443, etc.


Here is the config example. It is ONLY a normal wg0.conf with forwarding rules added, explained below; nothing special, and it is less complex than it looks. Just read it.

wg0.conf on VPS

# local settings for the public server
[Interface]
PrivateKey = <Yeah, get your own>
Address = 192.168.15.10
ListenPort = 51820

# packet forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1

# port forwarding
###################
#HomeServer - Note Ethernet IP based incoming routing(Can use a whole adapter)
###################
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 443 -j DNAT --to-destination 192.168.10.20:443
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 443 -j DNAT --to-destination 192.168.10.20:443
#
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 192.168.10.20:80
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 192.168.10.20:80
#
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 10022 -j DNAT --to-destination 192.168.10.20:22
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 10022 -j DNAT --to-destination 192.168.10.20:22
#
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 10023 -j DNAT --to-destination 192.168.10.30:22
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 10023 -j DNAT --to-destination 192.168.10.30:22
#
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 10024 -j DNAT --to-destination 192.168.10.1:22
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 10024 -j DNAT --to-destination 192.168.10.1:22
#
PreUp = iptables -t nat -A PREROUTING -d 200.1.1.1 -p tcp --dport 5443 -j DNAT --to-destination 192.168.10.1:443
PostDown = iptables -t nat -D PREROUTING -d 200.1.1.1 -p tcp --dport 5443 -j DNAT --to-destination 192.168.10.1:443

# packet masquerading
PreUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

# remote settings for the private server
[Peer]
PublicKey = <Yeah, get your own>
PresharedKey = <Yeah, get your own>
AllowedIPs = 192.168.10.0/24, 192.168.15.0/24

You need to change the IP (in this example 200.1.1.1) to your VPS IP; you can even use more than one if you have more than one.

I explain below what the port forwarding commands do. This config ALSO allows Linux to forward and masquerade packets, which is needed for your home network to respond properly.

The port forwards are as follows...

443 IN --> 192.168.10.20:443

80 IN --> 192.168.10.20:80

10022 IN --> 192.168.10.20:22

10023 IN --> 192.168.10.30:22

10024 IN --> 192.168.10.1:22

5443 IN --> 192.168.10.1:443

The line

PreUp = sysctl -w net.ipv4.ip_forward=1

simply allows the Linux kernel to forward packets to your network at home.
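If you would rather not rely on PreUp, you can also make forwarding persistent with a sysctl drop-in (hedged; the filename is arbitrary):

```
# /etc/sysctl.d/99-forward.conf (filename is arbitrary)
net.ipv4.ip_forward = 1
# apply with: sudo sysctl --system
```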

You STILL NEED to allow forwarding in UFW or whatever firewall you have. This is a different thing. See Firewall below.


FIREWALL

Second, you need to set up your firewall to accept these packets; in this example, 22, 80, 443, 10022, 10023, 10024, and 5443.

You would use(these are from memory, so may need tweaking)

sudo ufw allow 22

sudo ufw allow 80

sudo ufw allow 443

sudo ufw allow 10022

sudo ufw allow 10023

sudo ufw allow 10024

sudo ufw allow 5443

sudo ufw route allow to 192.168.10.0/24

sudo ufw route allow to 192.168.15.0/24

To get the final firewall setting (for my example setup) of....

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
51820                      ALLOW IN    Anywhere
80                         ALLOW IN    Anywhere
443                        ALLOW IN    Anywhere
10022                      ALLOW IN    Anywhere
10023                      ALLOW IN    Anywhere
10024                      ALLOW IN    Anywhere
51821                      ALLOW IN    Anywhere
192.168.10.0/24            ALLOW FWD   Anywhere
192.168.15.0/24           ALLOW FWD   Anywhere

FINALLY - Whatever machine you used in your network to connect to the VPS and make the tunnel NEEDS to be able to see the machines you want to access. This depends on the machine, and the rules set up on it. Routers often have firewalls that need a RULE letting the packets from the tunnel into the LAN, although if you set up WireGuard on an OpenWrt router, it is (probably) in the lan firewall zone, so it should just work. Ironically this sometimes makes it harder, and a rule is needed to access the actual router itself. Other machines will vary, but should probably work by default. (Maybe)


TESTING

Testing access is as simple as pinging or running curl on the VPS to see that it is talking to your home network. If you can ping, and especially curl, your own network like this

curl 192.168.15.1
curl https://192.168.15.1/

or whatever your addresses are from the VPS, it IS WORKING, and any other problems are in your firewall or your port forwards.


This has been long and rambling, but it absolutely bypasses CGNAT on Starlink. I am currently bypassing three separate ones like this, and I log in with my domain, like router.mydomain.com, IPv4 only, with almost no added lag, and reliable as heck.

Careful: DO NOT forward port 22 from the VPS if you use it to configure your VPS, as then you will not be able to log in to your VPS, because it is forwarded to your home network. It is obvious if you think about it.

Good luck, hope this helps someone.

 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/SubnetLiz on 2025-07-28 15:26:48+00:00.


I feel like I know the “big names” (Nextcloud, Vaultwarden, Jellyfin, etc.), but I keep stumbling across smaller, less talked about tools that end up being game changers.

Curious what gems the rest of you are running that don't get as much love as the big projects. (Or more love for big projects; I don't discriminate if it works 😅) Bonus points if it's lightweight, Docker-friendly, and not just another media app.

What’s on your can’t live without it list that most people maybe haven’t tried?
