Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

1
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/LeIdrimi on 2025-08-03 05:25:57+00:00.


Sunday. Garbage phone tests & maybe a working case design. App Store assets.

For those who have no idea what I'm talking about: I'm trying to build an open-source Sonos alternative, mainly software (based on Snapcast), currently focusing on hardware (based on a Pi). I'm summarizing it here: r/beatnikAudio

What i did this week:

A. Had to produce a lot of images for the App Store & Play Store. (Ridiculous.)

B. Sent iOS app to review

C. Sent Android app to review

D. First version of website almost ready

E. Started adding shell scripts to beatnik pi repo (setup script)

F. Finally the case seems to work out. (Had to construct heavy support for those 4 USB & LAN ports.)

Apps are going to be tested in production (a so-called pro gamer move), if the reviewers let them pass. Let's hope for next week. (Posted a video yesterday of Android garbage phone tests here: https://www.reddit.com/r/beatnikAudio/s/Sa5XkoSlUk)

Hardware: I had to limit the scope for now. I'm not allowed to play with rotary encoders and servos anymore; I want a working case fast. But I still see knobs and physical buttons as a core feature, as they explain the product. (Find some impressions here: https://www.reddit.com/r/beatnikAudio/s/2yM9ODiD4U)

Shell scripts, for those who would like to test, are on a feature branch: https://github.com/byrdsandbytes/beatnik-pi/blob/feature/shell-script/install.sh

Rather boring but relevant, privacy policy. https://github.com/byrdsandbytes/beatnik-controller/blob/master/PRIVACY_POLICY.md (policy is simple: we do not collect, store, or share any of your personal information. All data required for the app to function is stored locally on your device.)

I guess in two weeks (mid-August) the project will be visible (website & app stores). Probably should/will take a week off after that.

Thanks for the continuing support. 🎈

2
 
 

The original was posted on /r/selfhosted by /u/diobrandiohaxxerxd on 2025-08-03 01:23:12+00:00.


I'm planning on hosting a Bedrock Minecraft server from a registered domain that points to the server running from my computer. But while doing this I realized one thing: anyone can boot you offline if they have your public IP. I don't really know how to stop people from doing this; I'm not comfortable trying VPN routing, and that seems like the only way. Can anyone share some insight?

3
 
 

The original was posted on /r/selfhosted by /u/ElevenNotes on 2025-08-03 01:47:37+00:00.


DISCLAIMER FOR REDDIT USERS ⚠️

  • You'll find the source code for the image on my github repo: 11notes/nginx or at the end of this post
  • You can debug distroless containers. Check my RTFM/distroless for an example on how easily this can be done
  • If you prefer the original image or any other image provider, that is fine, it is your choice and as long as you are happy, I am happy
  • No, I don't plan to make a PR to the original image, because that PR would be huge and require a lot of effort, and I have other stuff to attend to than fixing everyone's Docker images
  • No AI was used to write this post or the code for my images! The README.md is generated by my own github action based on the project.md template; there is no LLM involved, even if you hate emojis
  • If you are offended that I use the default image to compare nginx to mine, rest assured that alpine-slim is still 3.22x larger than my current image 😉. The reason to compare it to the default is simple: Most people will run the default image.

INTRODUCTION 📢

nginx (engine x) is an HTTP web server, reverse proxy, content cache, load balancer, TCP/UDP proxy server, and mail proxy server.

SYNOPSIS 📖

What can I do with this? This image will serve as a base for nginx-related images that need a high-performance webserver. The default tag of this image is stripped of most functions that can be handled by a reverse proxy in front of nginx; it does, however, add important webserver functions like Brotli compression. The default tag is not meant to run as a reverse proxy; use the full image for that. The default tag does not support HTTPS, for instance!

UNIQUE VALUE PROPOSITION 💶

Why should I run this image and not the other image(s) that already exist? Good question! Because ...

  • ... this image runs rootless as 1000:1000
  • ... this image has no shell since it is distroless
  • ... this image is auto updated to the latest version via CI/CD
  • ... this image has a health check
  • ... this image runs read-only
  • ... this image is automatically scanned for CVEs before and after publishing
  • ... this image is created via a secure and pinned CI/CD process
  • ... this image verifies external payloads if possible
  • ... this image is very small

If you value security, simplicity and optimizations to the extreme, then this image might be for you.

COMPARISON 🏁

Below you'll find a comparison between this image and the most used or original one.

| image | 11notes/nginx:1.28.0 | nginx:1.28.0 |
| --- | --- | --- |
| image size on disk | 3.69MB | 192MB |
| process UID/GID | 1000/1000 | 0/0 |
| distroless? | ✅ | ❌ |
| rootless? | ✅ | ❌ |

COMPOSE ✂️

name: "nginx"
services:
  nginx:
    image: "11notes/nginx:1.28.0"
    read_only: true
    environment:
      TZ: "Europe/Zurich"
    ports:
      - "3000:3000/tcp"
    networks:
      frontend:
    volumes:
      - "etc:/nginx/etc"
      - "var:/nginx/var"
    tmpfs:
      - "/nginx/cache:uid=1000,gid=1000"
      - "/nginx/run:uid=1000,gid=1000"
    restart: "always"

volumes:
  etc:
  var:

networks:
  frontend:

SOURCE 💾

4
 
 

The original was posted on /r/selfhosted by /u/Volcaus on 2025-08-02 20:06:31+00:00.


Hey all, Retrom has had some significant updates recently, and I wanted to share them here! As always, if you are interested in Retrom head to the GitHub for download links and documentation. Please join the Discord as well, if you would like to be a part of the community and/or have questions or troubleshooting needs!

Relevant links:

GitHub

Wiki / Docs

Discord

Screenshots are available on the Github repo

What Is Retrom?

For those who are unaware, Retrom is best described as a unified game library front-end with a focus on emulation. The big difference between Retrom and other game/emulation front-ends is that it comes with a centralized server that owns all library files and associated metadata (covers, screenshots, text descriptions, links, etc.).

The Retrom server can optionally be run locally alongside the client under the hood for simple use-cases (referred to as Standalone Mode). The server can also be run as a remote, dedicated Retrom server instance. Either server solution allows for any number of Retrom desktop clients to connect and access the same library with essentially zero config/onboarding required for new clients. There is also a Retrom web client exposed by the service that allows for most of the Retrom desktop client's functionality within the browser of any device with access (including mobile devices).

Core Features

  • Centralized Retrom server that owns and distributes your game library, game metadata and configurations to any number of Retrom clients
    • Self-host a dedicated server, or utilize the built-in server locally via Standalone Mode in one of your desktop clients. Other secondary desktop clients can still connect to the Standalone Mode server!
  • Desktop clients natively available for Windows, MacOS, and Linux
  • Web client for access to your Retrom library from any device w/ a web browser
    • Currently only supported via the Retrom server docker image; however, native binary distributions of the service for Windows, MacOS and Linux are coming soon and will also provide the web client
  • Download/Search/Manage metadata for your library -- all stored on the server and immediately, identically available to all clients
  • Fullscreen Mode for navigating the client with a controller, great for TV and couch gaming setups
  • Newly Available: Emulate any browser-compatible (WASM) systems directly from the web client or desktop client via EmulatorJS with nearly zero-configuration needed
  • Add/manage local standalone emulators to launch your respective library games with via the desktop client
  • Newly Available: Cloud save management, synchronization and distribution for supported systems for both save files and save states
    • Currently limited to the Retrom-managed, built-in emulators, but support for standalone emulators is planned
  • Manage native Windows, MacOS and Linux games
    • Automated installation and launching of non-portable (installation required) games is still a work-in-progress
  • Third-party library integrations: view, install and launch games from your other third-party libraries
    • Steam
    • GoG (planned)

What's New

These two big new features in Retrom, previously only available to beta users, are now generally available:

  1. In-browser emulation via EmulatorJS
    1. Supported emulator cores are built-in and ready out-of-the-box for all Retrom clients, web and desktop
  2. Cloud save file and save state management
    1. Sync saves for built-in emulators to your Retrom server and resume your game from another device/client at any time
5
 
 

The original was posted on /r/selfhosted by /u/Ok-Warthog2065 on 2025-08-02 22:59:33+00:00.


Nothing seems to be anywhere near as efficient on battery life, and things like Traccar seem picky to set up, fighting the phone's permissions forever (I have a Samsung), and basically bad to use. Is there something out there that has slipped past me, or am I using Google Maps for the foreseeable future?

6
 
 

The original was posted on /r/selfhosted by /u/Haliphone on 2025-08-02 12:58:10+00:00.


Hello there,

After a year and a half of putting it off I'd like to take my pictures out of Google and I think immich is my choice.

That's all grand, but is there any way I can easily grab the metadata from Google Photos so everything will be easier to sort, or am I destined to hand-edit everything?

If you've made the move before - any tips, tricks or gotchas that will make my life easier are most welcome.

Thanks in advance!
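One common route, for what it's worth: Google Takeout exports typically place a JSON sidecar next to each photo, with the capture time in a photoTakenTime field. A minimal sketch of reading it (the field names are assumed from Takeout's usual format, not stated in this post):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def taken_time(sidecar: Path) -> datetime:
    # Takeout sidecars carry the original capture time as a Unix timestamp string
    meta = json.loads(sidecar.read_text())
    return datetime.fromtimestamp(int(meta["photoTakenTime"]["timestamp"]), tz=timezone.utc)

# Fabricated sidecar for demonstration only
sample = Path("IMG_0001.jpg.json")
sample.write_text(json.dumps({"photoTakenTime": {"timestamp": "1690000000"}}))
print(taken_time(sample).isoformat())
```

A script along these lines can write the recovered timestamp back into EXIF (e.g. via exiftool) before importing into Immich, so sorting by date survives the move.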

7
 
 

The original was posted on /r/selfhosted by /u/xudexi on 2025-08-02 13:58:15+00:00.


Hey everyone, I wanted to share RetroAssembly, the project I've been working on.

What is RetroAssembly?

It's a web-based personal game cabinet that lets you organize and play classic console games directly in your browser. Upload your ROMs once, play anywhere on any device with a web browser.

Key Features:

  • Supports NES, SNES, Genesis, GameBoy, Arcade, and more
  • Auto-detects and displays beautiful box art for your games
  • Save and sync your progress, resume anytime
  • Navigate your library with keyboard or gamepad (spatial navigation)
  • Built-in retro-style shaders
  • On-screen virtual controller for mobile play

Getting Started

Docker deployment is dead simple:

docker run -d --name retroassembly -p 8000:8000 -v /path/to/your/data:/app/data arianrhodsandlot/retroassembly

There's also a hosted version at retroassembly.com if you just want to try it out, but having your own instance means complete control over your retro gaming collection.

Links

Anyone been looking for a good self-hosted retro gaming solution? Would love to hear your thoughts!

8
 
 

The original was posted on /r/selfhosted by /u/odaman8213 on 2025-08-02 13:49:06+00:00.


Now that Mattermost and RocketChat are transitioning into government-first, paid, source-available packages, with more and more features being put behind a paywall, the question comes up:

Aside from RocketChat and Mattermost, what are the best self-hosted, open-source (like really open source, not open source as a marketing ploy) chat and collaboration tools?

I know Matrix is a big one, but it seems like that can get hard to use for non-technical users - are there any others? Or is Matrix the only fully open source alternative?

9
 
 

The original was posted on /r/selfhosted by /u/the_prez3 on 2025-08-02 11:35:45+00:00.


What is your favorite self hosted budgeting/money management software?

10
 
 

The original was posted on /r/selfhosted by /u/JDMhammer on 2025-08-02 12:19:03+00:00.


Maintainer of Speedtest Tracker here...

Like it says on the tin I'm starting to think about what the next iteration of Speedtest Tracker looks like. If you have any ideas feel free to drop them in the GitHub discussion linked below, I'm pretty bad at checking Reddit comments 🤷‍♂️.

https://github.com/alexjustesen/speedtest-tracker/discussions/2304

11
 
 

The original was posted on /r/selfhosted by /u/MrRunningMan on 2025-08-02 08:07:33+00:00.


Sup, self hosting is great, and I'm looking for more to host at home, but how many have apps created for them?

We use our phones so much, and apps to go with the self-hosted applications make things easier.

What do you use that has an app?

12
 
 

The original was posted on /r/selfhosted by /u/kieran_who on 2025-08-01 23:12:48+00:00.


Hey r/selfhosted,

About a month ago I shared SapienAI here. SapienAI is a self-hosted academic chatbot and research workspace plus editor. The feedback I received was great, and the two most desired features were support for local LLMs and LaTeX. Both of which have been introduced in the latest release.

More about SapienAI for those not familiar:

SapienAI provides an AI chatbot that lets you switch between models from OpenAI, Google, Anthropic and now models running locally with Ollama.

SapienAI also provides a research workspace where you can upload documents to have AI analyse and summarise them. All uploaded documents are also semantically searchable.

Within research spaces, there is an editor that lets you write with as much or as little AI support as you like, with first-class support for Markdown, Typst, and now LaTeX, meaning you can write in these formats, see live previews of the documents and download the final outputs.

I've always wanted to make this app run entirely locally. I don't collect any telemetry or anything like that, and now with Ollama support, you can run it without having to use any external APIs at all.

I'd love to hear feedback on bugs as well as next features. What I have planned next is migrating to a relational DB (currently using Weaviate as the standalone DB; it has worked surprisingly well, but the lack of atomicity and isolation has become a bit unwieldy, as potential conflicts have required implementing my own locking). The code will also be published once I've given it the GitHub glow-up and settled on a licensing approach.

Check it out here: https://github.com/Academic-ID/sapienAI

For anyone already using SapienAI, the new release notes are here, which detail some important changes for upgrading: https://github.com/Academic-ID/sapienAI/releases/tag/v0.3.0

Cheers!

13
 
 

The original was posted on /r/selfhosted by /u/master_overthinker on 2025-08-01 20:15:27+00:00.


I had gotten Pi-hole to work at home, but it always started disconnecting after a while.

I had gotten reverse proxy to work one time by accident, for like a day, and then it didn't work again.

This week, I finally pulled the trigger and got a VPS online. I used Jim's Garage's Ultimate Torrent VPS setup: https://github.com/JamesTurland/JimsGarage/blob/main/UltimateVPS/docker-compose-VPS.yaml , had to change some settings but got it up and running pretty easily. Now my home network uses Pi-hole on the VPS through WireGuard, and the apps on the server all get FQDNs, reverse-proxied and only reachable through WireGuard. I'm happy.

(If you want the video it's here: https://www.youtube.com/watch?v=GPouykKLqbE)

Next step: I wonder if this Traefik reverse proxy can also point FQDNs to my home-hosted apps so I can access them just like the ones hosted on the VPS. Or am I not thinking about this right? Should I install the same Traefik container at home instead? I'm not sure of the best way to do that.
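For what it's worth, Traefik can route to any backend URL it can reach, so the VPS instance can front home-hosted apps as long as they are reachable over the WireGuard tunnel. A rough sketch using the file provider (the hostname, port, and 10.x peer address below are hypothetical):

```yaml
# Hypothetical Traefik dynamic config (file provider) on the VPS:
# route a public FQDN to a service reachable over the WireGuard tunnel.
http:
  routers:
    home-app:
      rule: "Host(`app.example.com`)"
      service: home-app
  services:
    home-app:
      loadBalancer:
        servers:
          - url: "http://10.0.0.2:8080"  # home server's WireGuard peer address
```

With this shape, a second Traefik at home isn't strictly required; the home app only needs to be reachable on its WireGuard address.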

14
 
 

The original was posted on /r/selfhosted by /u/Flaminel on 2025-08-01 17:32:14+00:00.


Hey everyone and happy weekend yet again!

Back at it again with some updates for Cleanuparr, which has now reached v2.1.0.

Recap - What is Cleanuparr?

(just gonna copy-paste this from last time really)

If you're running Sonarr/Radarr/Lidarr/Readarr/Whisparr with a torrent client, you've probably dealt with the pain of downloads that just... sit there. Stalled torrents, failed imports, stuff that downloads but never gets picked up by the arrs, maybe downloads with no hardlinks and more recently, malware downloads.

Cleanuparr basically acts like a smart janitor for your setup. It watches your download queue and automatically removes the trash that's not working, then tells your arrs to search for replacements. Set it up once and forget about it.

Works with:

  • Arrs: Sonarr, Radarr, Lidarr, Readarr, Whisparr
  • Download clients: qBittorrent, Deluge, Transmission, µTorrent

While failed imports can also be handled for Usenet users (failed import detection does not need a download client to be configured), Cleanuparr is mostly aimed towards Torrent users for now (Usenet support is being considered).

A full list of features is available here.

Changes since v2.0.0:

  • Added an option to remove known malware, with detection based on this list. If you encounter malware torrents that are not being caught by the current patterns, please bring them to my attention so we can work together to improve the detection and keep everyone's setups safer!
  • Added blocklists to Cloudflare Pages to provide faster updates (as low as 5 min between blocklist reloading). New blocklist urls and docs are available here.
  • Added health check endpoint to use for Docker & Kubernetes.
  • Added Readarr support.
  • Added Whisparr support.
  • Added µTorrent support.
  • Added Progressive Web App support (can be installed on phones as PWA).
  • Improved download removal to be separate from replacement search to ensure malware is deleted as fast as possible.
  • Small bug fixes and improvements.
  • And more small stuff (all changes available here).

Want to try it?

Grab it from: https://github.com/Cleanuparr/Cleanuparr

Docs are available at: https://cleanuparr.github.io/Cleanuparr

There's already a fair share of feature requests in the pipeline, but I'm always looking to improve Cleanuparr, so don't hesitate to let me know how! I'll get to all of them, slowly but surely.

15
 
 

The original was posted on /r/selfhosted by /u/geekyvibes on 2025-08-01 17:47:45+00:00.


TL;DR: what do you all use to keep Docker stacks updated?

I self-host a bunch of stuff. Been doing it on and off just shy of 25ish years... re: updates, started with shell scripts. These days it's all Ansible and Pushover for notifications and alerts. All straightforward stuff.

Buuuut (in his best Professor Farnsworth voice), welcome to the world of tomorrow... Containers, specifically Docker stacks... How do you keep on top of that?

For example, I use "what's up docker" to get weekly alerts about updates. Ansible play to stop the stack, pull, build... Prune. This mostly works with Docker as standalone server thingy on Synology and minis (in LXC), so it's not a swarm. To update, I keep an inventory of paths to compose files in Ansible host vars.
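A sketch of that stop/pull/prune play (assuming the community.docker collection; `docker_hosts` and `stack_path` are placeholder names, not from the post):

```yaml
# Rough sketch of the described update workflow as Ansible tasks;
# module names assume the community.docker collection is installed.
- name: Update a Docker compose stack
  hosts: docker_hosts
  tasks:
    - name: Pull the latest images for the stack
      community.docker.docker_compose_v2_pull:
        project_src: "{{ stack_path }}"

    - name: Recreate the stack with the new images
      community.docker.docker_compose_v2:
        project_src: "{{ stack_path }}"
        state: present

    - name: Prune dangling images afterwards
      community.docker.docker_prune:
        images: true
```

Keeping `stack_path` in host vars (as described above) lets one play cover every compose file in the inventory.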

Exceptions, e.g. Authentik - I still get alerts, but they release new compose files and I need to manage them manually, because I have custom bits in the compose file itself (so replacing the file is not an option).

At this stage, workflow is: Get notification. Manually run a play. Done. (Could auto run, but I want to be around in case things go wrong).

Caveat for more info...

  • I've given up on Portainer. It's fantastic when I want to test something quickly, but for me personally it's a lot easier to just have subdirs with compose files and bind dirs when required.
  • I do use Dockge for quick lookups.
  • Docker servers are standalone: one on the NAS (Synology, whatever it uses) and one in an LXC container.

I'd like to hear some ideas about keeping on top of Docker image/compose updates. Maybe something you do that is more efficient, faster, better management, more automation? I don't know, but I feel like I could get it a little more automated and would love to know what everyone is doing about this.

16
 
 

The original was posted on /r/selfhosted by /u/hedonihilistic on 2025-08-01 09:21:48+00:00.


Hey r/selfhosted,

I wanted to share a project I've been working on called MAESTRO. It's an AI-powered research platform that you can run entirely on your own hardware.

The idea was to create a tool that could manage the entire research process. Based on your questions, it can go look for relevant documents from your collection or the internet, make notes, and then create a research report based on that. All of the notes and the final research report are available for your perusal. It's designed for anyone who needs to synthesize information from dense documents, like academic papers, technical manuals, or legal texts.

A big focus for me was making sure it could be fully self-hosted. It's built to work with local LLMs through any OpenAI-compatible API. For web searches, it now also supports SearXNG, so you can keep your queries private and your entire workflow off the cloud. It may still be a little buggy, so I'd appreciate any feedback.
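Since "OpenAI-compatible" servers (Ollama, llama.cpp, vLLM, etc.) share the same chat-completions request shape, pointing a tool like this at local hardware is essentially a base-URL swap. A minimal sketch of that request shape (the URL and model name are made-up examples, not MAESTRO's actual code):

```python
import json

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    # Request shape shared by OpenAI-compatible /v1/chat/completions servers
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# e.g. an Ollama instance serving its OpenAI-compatible endpoint locally
req = chat_request("http://localhost:11434", "llama3", "Summarize this paper.")
print(json.dumps(req, indent=2))
```

Because only the base URL changes, the same client code works against a cloud provider or a self-hosted model.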

It's a multi-user system with a chat-based interface where you can interact with the AI, your documents, and the web. The whole thing runs in Docker, with a FastAPI backend and a React frontend.

You can find it on GitHub: LINK

I'd love to hear what you think and get your feedback.

17
 
 

The original was posted on /r/selfhosted by /u/brufdev on 2025-08-01 18:44:09+00:00.


Many Notes is a Markdown note-taking web application designed for simplicity! It uses a database to power its features, but your files are also saved in the filesystem, giving you full control over your vault structure and ensuring easy access and portability.

Hi guys!

I'm back with a new version of Many Notes (v0.11), including a few new features.

Local authentication can now be disabled so that a single OAuth provider is used for authentication. For those who prefer this method, you can now configure Many Notes to automatically redirect you to your provider's login and logout pages.

When local authentication is disabled, your name and email are automatically updated with the data from your OAuth provider on every login.

This version introduces the concept of user roles, and the first registered user is now an admin. This opens the door for future features, but for now there's a new main menu option to control a few app settings from the frontend.

There's now an automatic update check that monitors GitHub releases and notifies you when a new version is available. You can disable this from the frontend.

Enabling or disabling registration was moved from the compose.yaml file to the frontend, so now there's no need to restart the container to change this.

As always, I try my best to keep Many Notes simple to run and easy to use. I also focus on providing non-disruptive updates, but that doesn't eliminate the need for backups, so be sure to back up your data, especially before updates. You can find the full changelog for this update here: https://github.com/brufdev/many-notes/releases/tag/v0.11.0

Here are a few things to keep in mind:

  • Many Notes is under ongoing development.
  • This app is currently in beta, so please be aware that you may encounter some issues.
  • If you find bugs or need assistance, please open an issue on GitHub.
  • For suggestions, please use GitHub discussions.
  • If you like the application, consider giving a star on GitHub.
  • If you'd like to support my work, check the sponsor links on GitHub.

https://github.com/brufdev/many-notes

18
 
 

The original was posted on /r/selfhosted by /u/shol-ly on 2025-08-01 12:04:34+00:00.


Happy Friday, r/selfhosted! Linked below is the latest edition of Self-Host Weekly, a weekly newsletter recap of the latest activity in self-hosted software and content (shared directly with this subreddit the first Friday of each month).

This week's features include:

  • Proton's new open-source authentication app
  • Software updates and launches (a ton of great updates this week!)
  • A spotlight on Tracktor -- a vehicle maintenance application (u/bare_coin)
  • Other guides, videos, and content from the community

Thanks, and as usual, feel free to reach out with feedback!


Self-Host Weekly (1 August 2025)

19
 
 

The original was posted on /r/selfhosted by /u/robert_teonite on 2025-07-31 20:31:06+00:00.


I have a failover setup for my main internet provider using an LTE modem. I've also found it very useful to have a mobile number available for sending SMS alerts when server errors occur, such as:

  • no internet connection — sending status updates about the failover
  • sending messages about modem restarts
  • and so on...

I've searched in many places for such a tool but couldn't find one, so I wrote a simple tool to handle SMS messages using ModemManager.

Here it is: https://github.com/teon/sms
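For reference, ModemManager's CLI can create and send an SMS in two steps; a hedged sketch (the modem index, phone number, and output parsing are placeholders and may differ by ModemManager version):

```shell
# Hypothetical two-step SMS send via ModemManager's mmcli.
# Modem index (-m 0) and phone number are placeholders.
SMS_PATH=$(mmcli -m 0 --messaging-create-sms="number='+41790000000',text='failover: LTE active'" \
  | grep -o '/org/freedesktop/ModemManager1/SMS/[0-9]*')
mmcli -s "$SMS_PATH" --send
```

A wrapper around these two calls is roughly what such an alerting tool automates.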

If anyone needs the LTE/ModemManager failover solution, drop a comment.

20
 
 

The original was posted on /r/selfhosted by /u/WasIstHierLos_ on 2025-08-01 11:05:00+00:00.


Your dream all-in-one, digital library management solution

MAJOR UPDATE! 🚨

TLDR: CWA now has full KoSync support, supports Calibre Plugins, is integrated with Hardcover for Progress syncing & Metadata Fetching, Split-Libraries are now supported, now ships with the latest Calibre releases while maintaining compatibility for devices running older kernels, major improvements to the metadata fetching process and much, much more!

Link to GitHub Project Page

"Calibre-Web Automated is extremely lucky and privileged to have such a large and vibrant community of people who support, enjoy and contribute to the project. The bulk of the new features and bugfixes this update brings were created by the best and brightest of our community and I want to celebrate that and their work here in the hope that our community only continues to grow!" - CrocodileStick

Release V3.1.0 Changelog

Major Changes 🚀

NEW: Split Library Support 💞

  • As promised, all CWA features are now fully compatible with Calibre-Web's Split Library Functionality
  • This enables users to store their Calibre Library in a separate location from their metadata.db file
  • To configure this, in the Admin Panel, navigate to Edit Calibre Database Configuration -> Separate Book Files from Library
    • The use of Network Shares (especially NFS) with this functionality is discouraged as they sometimes don't play well with CW & CWA's SQLite3 heavy stack. Many users use network shares without issues but there aren't enough resources to support those who can't get it working on their own

NEW: Hardcover API Integration 💜📖

  • Hardcover is now officially not only available as a Metadata Provider, but using Hardcover's API, Kobo Shelves & Read Progress can now also be synced to a user's Hardcover account!
  • The current workflow is to scrape a book by title; you can then use the resulting hardcover-id identifier to search for editions of that book by searching "hardcover-id:". Edition results are filtered to exclude audiobook editions and sorted by ebook, then physical book.
  • If a shelf in CWA is selected for Kobo sync, when a book with id and edition identifiers is added to the shelf, it will also be added to Hardcover's Want to Read list. As the book is read on the Kobo device, progress is synced to Hardcover when pushed to CWA.
  • To use Hardcover as a Metadata Provider, simply provide a Hardcover API Token in your docker-compose under the HARDCOVER_TOKEN Environment Variable
    • To enable Kobo sync, a Hardcover API Token must be provided for each user in each user's respective Profile Page
  • Thanks to demitrix! <3

NEW: Greatly Improved Metadata Selection UI 🎨

  • Demitrix was really on a roll the last couple of months and also contributed some really cool functionality to the Metadata Selection UI

Link to comparison image (reddit is only allowing one picture per post :/)

  • Much more Elegant & Readable UI, both on Mobile & on Desktop
    • Improved CSS for the Fetch Metadata interface—making it easier and clearer for you to review and select metadata sources.
  • Individually Selectable Elements
    • Say goodbye to having all of your book's metadata overwritten simply because you wanted a better-looking cover!
    • As of V3.1.0, all metadata elements can be individually updated from multiple sources instead of the only option being to take everything from a single source!
  • Visual Quality Comparison Between the Cover Your Book Already Has and Those Available from Metadata Providers
    • Looking for a specific cover but not sure if the image file is low quality or not? As of V3.1.0, the resolution of cover images is now displayed on the bottom right corner of the preview, the background of which is colour-coded to indicate whether the available cover is of greater, lower or equal quality to the one already attached to the ebook!
  • Thanks to demitrix for their contributions to this! <3

NEW: KOReader Sync Functionality! 📚

  • CWA now includes built-in KOReader syncing functionality, providing a modern alternative to traditional KOReader sync servers!
  • Universal KOReader Syncer: Works across all KOReader-compatible devices, storing sync data in a readable format for future CWA features
  • Modern Authentication: Uses RFC 7617 compliant header-based authentication instead of legacy MD5 hashing for enhanced security
  • CWA Integration: Leverages your existing CWA user accounts and permissions - no additional server setup required
  • Easy Installation: Plugin and setup instructions are available directly from your CWA instance at /kosync
  • Provided by sirwolfgang! <3
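For context, RFC 7617 "Basic" authentication is just a base64-encoded user-id:password pair carried in a header; the snippet below is an illustrative sketch of that scheme, not CWA's actual code:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # RFC 7617: token = base64("user-id:password"),
    # sent as "Authorization: Basic <token>"
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Illustrative credentials only
print(basic_auth_header("reader", "secret"))
```

Decoding the token is trivial, so this scheme relies on TLS for confidentiality; the improvement over legacy MD5-hashed passwords is about using a standard, well-supported mechanism.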

NEW: Support for the Latest Versions of Calibre, even on devices with older Kernels! 🆕🎉

  • The ABI tag is removed from the extracted libQt6* files to allow them to be used with older kernels
  • Adds binutils to the calibre-included Dockerfile and strips the libQt6*.so files of the ABI tag so that they work with older kernels (harmless for newer kernels); these libraries appear to still contain fallbacks for any missing syscalls that Calibre might use
  • Adds .gitattributes to enforce LF checkout on .sh files (useful for those who build on Windows)
  • Thanks to these changes, CWA now has much greater compatibility with a much wider range of devices & is able to keep up to date with the latest Calibre Releases! 🎉
  • Provided by FennyFatal <3

NEW: Calibre Plugin Support (WIP) 🔌

  • Users can now install Calibre plugins such as DeDRM
  • The feature is still a work in progress but users with existing Calibre instances can simply bind their existing Calibre plugins folder to /config/.config/calibre/plugins in their docker-compose file
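
For example, the bind might look something like this in a docker-compose file (the host path is a placeholder; point it at your existing Calibre plugins folder):

```yaml
services:
  calibre-web-automated:
    # ... image, ports, other volumes, etc. ...
    volumes:
      # Host path is a placeholder - use your actual Calibre plugins folder
      - /path/to/your/calibre/plugins:/config/.config/calibre/plugins
```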

NEW: Bulk Add Books to Shelves 📚📚📚

Contributed by netvyper, you can now select multiple books from the book list page and add them to a shelf in one go!

  • New "Add to Shelf" button in bulk actions on the book list.
  • Modal dialog lets you pick your shelf.
  • Backend checks for permissions, duplicates, and provides clear success/error feedback.

NEW: Better Docs Cometh - The Birth of the CWA Wiki 📜

  • While CWA's documentation is sufficient for many, it could do a much better job of helping users find the answers and information they need as quickly as possible
  • Therefore, we have started work on the CWA Wiki to strive towards this goal!
  • While still very much a work in progress, submissions for pages, edits etc. are open to the community, so if you stumble across something that seems wrong, missing or outdated, please jump in and change it if you can, or let us know if you're not sure :)

Minor Changes ✨

  • The Ingest Automerge Parameter is now configurable in the CWA Settings Panel (thanks to have-a-boy! PR #417)

    • Users now have the option of selecting their preferred automerge behaviour from the 3 available options:
    • new_record (Default) - Create a duplicate record, keeping both copies
    • overwrite - Overwrite library copy with newly imported file
    • ignore - Discard duplicate import, keep library copy
    • The next update will do a lot more to try and squash dupe issues once and for all but for now this solution should help a lot of people configure CWA to do what they need
  • Links to IBDb entries for books are now added to ebook identifiers when enabled, thanks to chad3814! PR #422

  • The Magic-Link login page can now be used with a QR code, thanks to coissac! PR #408

  • Tweaked refresh-library notification messages to be more visually appealing

  • List of Metadata Providers on Fetch Metadata screen is now alphabetized

  • Improvements to the CWA Ingest Processor:

    • The functions responsible for deleting empty directories during the ingest process are now narrower in scope, so that files awaiting ingest in parent folders are ingested more reliably (thanks to demitrix)!
  • User Profile Pictures can now be changed from the admin panel (thanks to angelicadvocate)!

  • Cover images are now lazy loaded to improve responsiveness & performance on instances with many, many books

  • CSS for Dark Mode users vastly improved across the board!

    • The book cover display on the homepage is now center...

Content cut off. Read original on https://old.reddit.com/r/selfhosted/comments/1metix3/calibreweb_automated_v310_released_the_community/

21
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Trysupe on 2025-07-31 08:39:03+00:00.


Disclaimer: This post will only be of interest to those using the self-hosted recipe manager Mealie and the Bring shopping list app.

Mealie supports adding the ingredients of a recipe to a Bring shopping list. However, this requires the Mealie instance to be publicly accessible on the internet. As I only want my self-hosted services to be accessible via a VPN and on my local LAN, this was not an option for me.

So I built Mealie-Bring-API, a small local web server that acts as a bridge between Mealie and Bring. If this sounds interesting, take a look at the README in the GitHub repository.
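
The bridge idea can be sketched in a few lines. This is an illustrative stand-in, not the actual Mealie-Bring-API code, and the payload shape is assumed: a tiny local HTTP server that accepts a recipe's ingredients and would forward them to Bring.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_ingredients(body: bytes) -> list[str]:
    """Extract ingredient names from an assumed Mealie-style JSON payload."""
    data = json.loads(body)
    return [item["food"] for item in data.get("ingredients", [])]

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        items = parse_ingredients(self.rfile.read(length))
        # A real bridge would push `items` to the Bring API here.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"added": items}).encode())

# To run the bridge locally (blocks forever):
# HTTPServer(("127.0.0.1", 8080), BridgeHandler).serve_forever()
```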

22
 
 

The original was posted on /r/selfhosted by /u/teqqyde on 2025-07-31 10:15:50+00:00.


Hello,

Just curious: is anyone running their own PeerTube instance? If so, some questions:

  • How big is your instance and how many views do you have per day?
  • What does your hardware look like under your current load?
  • What are your lessons learned?

I have a YouTube channel that has been inactive for some months, but I'd like to restart it and also upload my videos to my own PeerTube instance. There is very little information in blogs out there about running instances.

The official documentation is quite good regarding the requirements, but I'd like to hear some opinions from users.

Thanks.

23
 
 

The original was posted on /r/selfhosted by /u/Ill-Engineering7895 on 2025-08-01 01:42:18+00:00.


Hello everyone,

Thought I'd share a tool I've been working on that lets you stream content from Usenet and build an infinite Plex library.

It's essentially a WebDAV server that can mount and stream content from NZB files. It also exposes a SABnzbd-compatible API so it can integrate with Radarr and Sonarr.

I built it because my tiny VPS was easily running out of storage, but now my library takes no storage at all. Hope you like it!

Key Features

  • 📁 WebDAV Server - Provides a WebDAV server for seamless integration.
  • ☁️ Mount NZB Documents - Mount and browse NZB documents as a virtual file system without downloading.
  • 📽️ Full Streaming and Seeking Abilities - Jump ahead to any point in your video streams.
  • 🗃️ Automatic Unrar - View, stream, and seek content within RAR archives
  • 🧩 SABnzbd-Compatible API - Integrate with Sonarr/Radarr and other tools using a compatible API.
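
The SABnzbd API follows a simple `api?mode=...` URL scheme, which is why a compatible server can slot in behind Sonarr/Radarr. A rough illustration of the kind of URL those tools construct (base URL and key are placeholders):

```python
from urllib.parse import urlencode

def sab_api_url(base: str, api_key: str, mode: str = "queue") -> str:
    """Build a SABnzbd-style API URL of the kind Sonarr/Radarr call."""
    query = urlencode({"mode": mode, "output": "json", "apikey": api_key})
    return f"{base}/api?{query}"

print(sab_api_url("http://localhost:3000", "abc123"))
# http://localhost:3000/api?mode=queue&output=json&apikey=abc123
```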

Here's the Github link:

Fully open source, of course

https://github.com/nzbdav-dev/nzbdav

There may still be some rough edges, but I'd say it's in a usable state. The biggest features left to implement are:

  • Better real-time UI for the Queue and History
  • Automated repairs for when articles become unavailable long after import from Radarr/Sonarr
24
 
 

The original was posted on /r/selfhosted by /u/decduck on 2025-07-31 23:19:58+00:00.


G'day r/selfhosted

I'm one of the core maintainers of Drop, the self-hosted Steam platform. It's our aim to replicate all the features of Steam for a self-hosted and FOSS application.

We just released v0.3.0, which brings a bunch of new improvements. But since most of you will hear about Drop for the first time, here's what it can do:

  • Host your own game library and share it with multiple people (through SSO if you want!). Each user has their own collections of games they can pick from your libraries.
  • Mix and match your libraries through our 'library sources'. We support both our fancy format (with versioning) and a flat structure (without versioning). You can have more than one, and they all merge.
  • Import metadata about your game library through multiple providers (currently GiantBomb, IGDB, and PCGamingWiki).
  • Native Windows, macOS, and Linux desktop clients (both x64 and aarch64)
  • Docker images for both x86 and aarch64

To give it a whirl, check out our docs: https://docs.droposs.org/docs/guides/quickstart

Our other links:

Reddit isn't letting me upload images for some reason, but screenshots are available on our website: https://droposs.org/

25
 
 

The original was posted on /r/selfhosted by /u/jsiwks on 2025-07-31 18:21:59+00:00.


TL;DR: Pangolin Clients (nicknamed "Olm") are a CLI-based peer-to-peer or relay VPN solution built on a hub-and-spoke model, using Newt as the hub for secure connectivity without opening ports.

We developed Pangolin clients. They’re a simple way to use Newt as a VPN jump point into your networks. We decided to release a basic version to the community to see if it’s something others find useful. If it is, we’ll continue to refine and expand it! If not, that’s fine too. Our focus remains on making Pangolin the best self-hosted remote access tool available.

So, what are Pangolin Clients? They're a lightweight VPN solution built on a hub-and-spoke model. Unlike mesh-based systems like Tailscale or NetBird, your Newt site acts as the hub, and the clients are the spokes. Just like how Newt provides browser-based connectivity without opening ports, this provides VPN capabilities without opening ports. Right now, the clients are minimal and CLI-only, for macOS, Windows, and Linux. They're not yet tied to users; instead, you define a client much like you define a site in Pangolin, with secret credentials.

You can grant a client access to one or more sites (enabled with the --accept-clients flag in Newt) and control which resources it can address, or allow it to access everything on the network. Data relays through Gerbil on your VPS, but with --holepunch you can have clients attempt NAT hole-punching for direct connections.
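
The access model described above can be pictured as a small grant table; this is a hypothetical sketch of the idea, not Pangolin's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class ClientGrant:
    """Hypothetical: a client's access to one site, either to an
    explicit resource list or (if empty) to the whole network."""
    site: str
    resources: set[str] = field(default_factory=set)  # empty = full access

def can_reach(grants: list[ClientGrant], site: str, resource: str) -> bool:
    for grant in grants:
        if grant.site != site:
            continue
        return not grant.resources or resource in grant.resources
    return False

grants = [ClientGrant("home", {"192.168.1.10:22"}), ClientGrant("office")]
print(can_reach(grants, "home", "192.168.1.10:22"))  # True
print(can_reach(grants, "home", "192.168.1.20:80"))  # False
print(can_reach(grants, "office", "10.0.0.5:443"))   # True
```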

Why should I use this instead of Tailscale? 

You probably shouldn't! If Tailscale works for you then use it! It has a much nicer client and is probably just better. If all you are doing is using it to manage your server - maybe give clients a try!

This feature is still in its early stages, but it opens up some interesting use cases: connecting multiple networks (e.g., home, office, or cloud VPCs), using Newt as a jump box for SSH remote management or other remote access, or creating a lightweight VPN alternative for secure connectivity. We’re excited to see how the community uses it and will continue to build on this foundation if it proves valuable. Let us know your thoughts!

You can try clients right now by updating to 1.8.0! Make sure to follow the update guide because you have to update all of the components.

https://preview.redd.it/neek6li649gf1.png?width=1200&format=png&auto=webp&s=e92ede421fb8bd61b7508612df75f2b584f93614
