Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
351
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/OwlCaribou on 2025-07-09 19:42:23+00:00.


Hi r/selfhosted — I’ve built a simple Python program ( https://github.com/OwlCaribou/swurApp ) to make sure episodes aren't grabbed until they've aired. This will help prevent things like malicious or fake files being downloaded before the episode is actually out.

It works by connecting to your Sonarr instance’s API and unmonitoring episodes that haven’t aired yet. Then, when the episodes air, swurApp will monitor them again and they should be picked up by Sonarr the next time it grabs episodes.
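The loop described above can be sketched in a few lines of Python. This is not swurApp's actual code; the Sonarr v3 endpoint paths and field names (`/api/v3/episode`, `airDateUtc`, the `episode/monitor` endpoint) are assumptions based on Sonarr's public API, so verify them against your instance:

```python
from datetime import datetime, timezone

SONARR_URL = "http://localhost:8989"  # assumption: your Sonarr instance
API_KEY = "your-sonarr-api-key"       # assumption: from Settings -> General in Sonarr


def split_by_air_date(episodes, now):
    """Partition episodes into (aired, unaired) by their airDateUtc field."""
    aired, unaired = [], []
    for ep in episodes:
        air = ep.get("airDateUtc")
        if air and datetime.fromisoformat(air.replace("Z", "+00:00")) <= now:
            aired.append(ep)
        else:
            unaired.append(ep)  # no air date yet counts as unaired
    return aired, unaired


def sync_monitoring(series_id):
    """Unmonitor unaired episodes; re-monitor any that have since aired."""
    import requests  # third-party: pip install requests
    headers = {"X-Api-Key": API_KEY}
    eps = requests.get(f"{SONARR_URL}/api/v3/episode",
                       params={"seriesId": series_id}, headers=headers).json()
    aired, unaired = split_by_air_date(eps, datetime.now(timezone.utc))
    for group, monitored in ((aired, True), (unaired, False)):
        ids = [e["id"] for e in group]
        if ids:
            requests.put(f"{SONARR_URL}/api/v3/episode/monitor",
                         json={"episodeIds": ids, "monitored": monitored},
                         headers=headers)
```

Run something like this on a schedule (cron or a systemd timer) so newly aired episodes get re-monitored before Sonarr's next RSS sync.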

There’s a little bit of setup (you have to get Sonarr’s API key, and you have to tag the shows you don't want to track), but I’ve tried my best to detail the steps in the README file. Python is not my native language (I’m a Java dev by trade), so suggestions, feedback, and code contributions are welcome.

I know this issue has been plaguing some Sonarr users for a while, so I hope this makes a dent in solving the “why do I have Alien Romulus instead of xyz” problem.

(The stupid acronym stands for “Sonarr Wait Until Release” App[lication].)

Edit: This is a workaround for: https://github.com/Sonarr/Sonarr/issues/969 You CAN make Sonarr wait before grabbing a file, but it does not check if that file is actually within a valid timespan. It only checks for the age of the file itself. So last week someone seeded Alien Romulus as a bunch of TV series, and since it was seeded for several hours, Sonarr instances grabbed the file, even though the episodes hadn't aired.

Check out this thread for an example of why this issue isn't solved with the existing Sonarr settings: https://www.reddit.com/r/sonarr/comments/1lqxfuj/sonarr_grabbing_episodes_before_air_date/

352
 
 

The original was posted on /r/selfhosted by /u/tissla-xyz on 2025-07-09 16:43:27+00:00.


Hey guys!

I've posted here before so I'm sorry if this is considered spam.

Opforjellyfin, or One Pace for Jellyfin, is a small CLI program meant for downloading One Pace episodes and placing them in a folder together with proper metadata.

It combines acquiring the episodes and sorting them into their proper arcs in one neat little package, tailored for Jellyfin use.

I've made some significant improvements to the program over the last few weeks, and I believe it's ready for its first 'official' release!

Hence, there are now single-file binaries for Linux, macOS, and Windows. No need to build from source!

I'm pretty happy with where the program is right now, but I will of course still accept any criticism or feature requests!

I will also happily accept contributions to the metadata repo, be it episode .nfo files or suggestions for backdrop images!

See you on the Grand Line!

353
 
 

The original was posted on /r/selfhosted by /u/SuedeBandit on 2025-07-09 14:09:20+00:00.


I need to archive 10PB of scientific data. Aerospace stuff. Anyone here have any thoughts on managing this kind of scale? Notes below:

  • Format is just generic blob or file
  • Ideally not tape or disc drives
  • Archive/Cold tier, but will get accessed occasionally
  • Need a way to back up or RAID it

So far I'm coming back with a $150k budget requirement to purchase a boatload of 20TB storage drives, and that's before backup/RAID. Cloud cost is something like $15k/mo, so it's commensurate. Seems to me there's got to be a better way to do this.
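The budget figure checks out as back-of-the-envelope math; here's the arithmetic, with the per-drive price and parity overhead as assumptions:

```python
import math

CAPACITY_TB = 10_000          # 10 PB of data to archive
DRIVE_TB = 20
PRICE_PER_DRIVE = 300         # assumption: rough street price of a 20 TB drive
PARITY_OVERHEAD = 1.2         # assumption: ~20% extra capacity for RAIDZ2-style parity

raw_drives = math.ceil(CAPACITY_TB / DRIVE_TB)          # drives for raw capacity
total_drives = math.ceil(raw_drives * PARITY_OVERHEAD)  # including parity
raw_cost = raw_drives * PRICE_PER_DRIVE

print(raw_drives, total_drives, raw_cost)  # 500 600 150000
```

So the $150k is just 500 bare drives; parity, chassis, and power push it well past that, which is why the cloud quote ends up commensurate.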

Any crazy ideas?

** Edit **

Appreciate all the responses already. Just to clarify, there will be professional advisors involved and I'm not betting the farm on a Reddit thread. I'm just curious if anyone here has crazy ideas the pros might not have top of mind, or if nothing else, maybe someone has a cool anecdote to share that makes for a neat thread.

354
 
 

The original was posted on /r/selfhosted by /u/steveiliop56 on 2025-07-09 16:15:05+00:00.


Hello everyone,

I just released Tinyauth v3.5.0 which finally includes LDAP support. This means that you can now use something like LLDAP (just discovered it and it is AMAZING) to centralize your user management instead of having to rely on environment variables or a users file. It may not seem like a significant update but I am letting you know about it because I have gotten a lot of requests for this specific feature in my previous posts and in GitHub issues.

You may or may not know what Tinyauth is, but if you don't: it's a lightweight authentication middleware (in the vein of Authelia/Authentik/Keycloak) that lets you easily log in to your apps using simple username/password authentication, OAuth with Google, GitHub or any OAuth provider, TOTP, and now... LDAP. It requires minimal configuration and can be deployed in less than 5 minutes. It supports all popular proxies like Traefik, Nginx and Caddy.
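For a sense of what the 5-minute setup looks like with Traefik, here's a compose-file sketch. The forwardAuth label syntax is standard Traefik; the Tinyauth image name, port, env vars, and auth endpoint path below are assumptions, so check the Tinyauth docs for the real values:

```yaml
services:
  tinyauth:
    image: ghcr.io/steveiliop56/tinyauth:v3   # assumption: actual image tag may differ
    environment:
      - SECRET=some-random-32-char-string     # assumption: see Tinyauth docs for real vars

  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      # Standard Traefik forwardAuth middleware; the address path is an assumption
      - traefik.http.middlewares.tinyauth.forwardauth.address=http://tinyauth:3000/api/auth/traefik
      - traefik.http.routers.whoami.middlewares=tinyauth
```

Every request to `whoami` then gets bounced through Tinyauth first, which is all a forward-auth middleware is.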

Check out the new release over on GitHub.

Have fun!

Edit(s): Fix some typos

355
 
 

The original was posted on /r/selfhosted by /u/aRedditor800 on 2025-07-09 14:34:17+00:00.

356
 
 

The original was posted on /r/selfhosted by /u/riottto on 2025-07-09 13:20:29+00:00.


Apparently the cat I'm catsitting has taken to sleeping on the old desktop that serves as my TrueNAS server and accidentally turning it off, thus interrupting my movie night. She has been forgiven, though, on account of her cuteness. I did not prepare for this when building my home server over the last few weeks.

357
 
 

The original was posted on /r/selfhosted by /u/germandz on 2025-07-09 11:20:31+00:00.

358
 
 

The original was posted on /r/selfhosted by /u/Dim_Kat on 2025-07-09 10:34:33+00:00.

359
 
 

The original was posted on /r/selfhosted by /u/p211 on 2025-07-09 09:30:46+00:00.


Hey everyone,

I’d like to share a tool I developed for my personal use because I couldn’t find any open-source solution that lets me centrally archive and back up my IMAP mailboxes and, importantly, search across all of them at once.

What does Mail-Archiver do?

It automatically archives incoming and outgoing emails from multiple IMAP accounts into a local PostgreSQL database. This allows me to:

  • Store emails and attachments
  • Search across all archived mailboxes with filters like date range, sender, recipient, and more
  • Export individual emails (EML) or export in bulk
  • Restore selected emails or entire mailboxes back to a target mailbox if needed

This helps me keep my inboxes clean while having full offline access to all my emails without relying on any provider. There’s also a handy dashboard with statistics and storage monitoring.
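The fetch-and-index loop such a tool runs can be sketched with the stdlib alone. This is illustrative only, not Mail-Archiver's actual code or schema:

```python
import email
import imaplib
from email.header import decode_header, make_header
from email.utils import parsedate_to_datetime


def index_fields(raw_bytes):
    """Extract the searchable fields an archiver stores per message.

    (Illustrative field set; the real project's schema may differ.)
    """
    msg = email.message_from_bytes(raw_bytes)
    return {
        "subject": str(make_header(decode_header(msg.get("Subject", "")))),
        "sender": msg.get("From", ""),
        "recipient": msg.get("To", ""),
        "date": parsedate_to_datetime(msg["Date"]) if msg.get("Date") else None,
        "attachments": [p.get_filename() for p in msg.walk() if p.get_filename()],
    }


def archive_mailbox(host, user, password, store):
    """Fetch every INBOX message and hand its indexed fields to a storage callback."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        _, (ids,) = imap.search(None, "ALL")
        for msg_id in ids.split():
            _, data = imap.fetch(msg_id, "(RFC822)")
            store(index_fields(data[0][1]))
```

The `store` callback would be the INSERT into PostgreSQL; keeping it a callback keeps the IMAP side testable.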

Screenshots: Dashboard, Archive, Details

Why am I sharing this?

I found there’s a real lack of solid, turnkey self-hosted solutions for centralized mail archiving with search capabilities. So if you’re juggling multiple IMAP accounts and looking for a way to back up and search your emails in one place, this might be useful to you.

📦 GitHub repo: https://github.com/s1t5/mail-archiver

Contributions, feedback, or feature requests are very welcome!

360
 
 

The original was posted on /r/selfhosted by /u/charlino5 on 2025-07-09 03:37:46+00:00.


I'm curious what TLDs people decide on for their domains, and why. So many choices at varying costs.

EDIT: I’m leaning toward .me. Some decent 1st year promos but the renewal seems a little high. The cheapest renewal I’ve found so far is 17-18.

EDIT 2: I chose this subreddit over r/Domains because I wanted perspective from self hosters.

361
 
 

The original was posted on /r/selfhosted by /u/JcorpTech on 2025-07-09 03:14:19+00:00.

362
 
 

The original was posted on /r/selfhosted by /u/FerretLess6797 on 2025-07-09 03:10:17+00:00.

363
 
 

The original was posted on /r/selfhosted by /u/CrispyBegs on 2025-07-08 23:21:48+00:00.


If you don't know it, OliveTin is a UI for executing shell commands with button presses, and (although I'm still learning it) it's really great.

https://preview.redd.it/a6wvm8s1fqbf1.png?width=2354&format=png&auto=webp&s=34b783a99e5813a343163d1685f70f094b627766

E.g. I have two Pi-hole instances, and disabling ad blocking on both from time to time was a bit of a faff. But as you can see from my screenshot, I now have two buttons that disable Pi-hole (for 5 / 10 / 15 mins) or enable it again with a click. That's great and much more convenient, but you still have to load up the OliveTin UI and click the buttons, and I was wondering if I could do it more easily from my phone.

Enter MacroDroid (an Android device automation app). I was messing around with it and only just realised you can create quick tiles, and you can use OliveTin's API to trigger actions from a third-party service like MacroDroid. You create the macro that executes an action in OliveTin, then trigger it with a quick tile (or voice command, NFC tag, shortcut, geofence, or whatever other trigger you want to use). So as you can see here, I can now disable two Pi-hole instances for 5 mins with a quick press on my phone's quick tiles. Or restart my Calibre container (which I have to do now and again because we live in hell).

https://preview.redd.it/olpkvwfyfqbf1.jpg?width=921&format=pjpg&auto=webp&s=a5cdfeb2e3f015a813f8649674e9726f01aedd54

This is fantastic, but I had a search and no one ever seems to have mentioned it? Is it something really obvious that everyone's already doing, and so mundane it's not even worth mentioning? Why have a web UI and button presses to execute commands when you could restart your Jellyfin container by tapping your phone on an NFC tag stuck to the fridge, or whatever?

If I am late to this, I feel really dumb tbh. You could have told me earlier.

364
 
 

The original was posted on /r/selfhosted by /u/Eravex on 2025-07-08 17:47:05+00:00.


https://preview.redd.it/w5lz0rhksobf1.png?width=273&format=png&auto=webp&s=934c2e7e71318527ca78e3a0f25411656eaf6013

One cert manager to rule them all, one CA to find them, one browser to bring them all, and in encryption bind them.

So after a month of tapping away at the keys, I’m finally ready to show the world SphereSSL (again).

Last month I released the console test build for anyone who would find it useful while I build the main version.

The console app was not met with the warm welcome a free tool should have received. Undiscouraged, I'm here to announce SphereSSL v1.0, packed with all the features you expect from an ACME client plus a responsive, simple-to-use UI, with no limits or paywalls. Just certs now, certs tomorrow, and auto-renewed certs in 60 days.

This isn’t some VC-funded SaaS trap. It’s a 100% free, open-source (BSL 1.1 for now) SSL certificate manager and automation platform that I built for actual humans—whether you’re running a home lab, a small business, or just sick of paying for something that should’ve been easy and free in the first place.

What it does

  • Automates SSL certificate creation and renewal with Let’s Encrypt and other ACME providers (supporting 14 DNS APIs out of the box).
  • Works locally or for public domains—DNS-01, HTTP-01, manual, even self-signed.
  • Handles multi-domain SAN certs, including assigning different DNS providers for each domain if you want.
  • Cross-platform: Native Windows tray app now, Linux tray version in the works (the backend runs anywhere ASP.NET Core does).
  • Convert and export certs: PEM, PFX, CRT, KEY, whatever. Drag-and-drop, convert, export—done.
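For the curious, the DNS-01 challenge that tools like this automate reduces to publishing one TXT record, whose value is fixed by RFC 8555 as the base64url-encoded SHA-256 of the key authorization:

```python
import base64
import hashlib


def dns01_txt_value(token, account_thumbprint):
    """RFC 8555 DNS-01: TXT value = b64url( SHA-256( token "." account-thumbprint ) )."""
    key_auth = f"{token}.{account_thumbprint}".encode()
    digest = hashlib.sha256(key_auth).digest()
    # base64url without padding, as the spec requires
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()


# The record is published at _acme-challenge.<domain>, e.g.:
# _acme-challenge.example.com. 300 IN TXT "<value>"
```

Everything else (the 14 DNS provider APIs) is just plumbing to get that record created and deleted automatically.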

Why?

Because every “free” or “simple” SSL tool I tried either:

  • Spammed you with ads, upcharges, or required a million steps,
  • Broke on anything except the exact scenario they were built for,
  • Or just assumed you’d be fine running random scripts as root.

I wanted something I could actually trust to automate certs for all my random servers and dev projects—without vendor lock-in, paywalls, or giving my DNS keys to a third party.

What’s different?

  • You control your keys and DNS. The app runs on your machine, and you can add your own API credentials.
  • Modern, functional UI. (Not a terminal app, not another inscrutable config file—just a web dashboard and a tray icon.)
  • Not a half-baked script: Full renewal automation, error handling, status dashboard, API key management, cert status tracking, and detailed logs.
  • Source code is public. All of it: https://github.com/SphereNetwork/SphereSSL

Dashboard:

SphereSSL Dashboard. Create certs, View Certs

Verify Challenge:

Live updates on the whole verification process.

Manage:

Manage Certs, Toggle Auto Renew, Renew now, or Revoke a cert.

Release: SphereSSL v1.0

License

  • Open source (Business Source License 1.1). Non-commercial use is free, forever. If you want to use it commercially, you can ask.

Features / Roadmap

  • 14 DNS providers and counting (Cloudflare, Namecheap, GoDaddy, etc.)
  • Multi-user support, roles, and API key management
  • Local and remote install (use it just for your own stuff, or let your team manage all the certs in one place)
  • Coming soon: Linux tray app, native installers, more CA support, multi-provider order support, webhooks, and direct IIS integration

Who am I?

Just a solo dev who got tired of SSL being a pain in the ass or locked behind paywalls. I built this for my own projects, and I’m sharing it in case it saves you some time or headaches too.

It’s meant to be easy enough for anyone to use—even if you’re inexperienced—but without losing the features and flexibility power users expect.

Feedback, issues, PRs, and honest opinions all welcome. If you find a bug, call it out. If you think it’s missing something, let me know. I want this to be the last SSL manager I ever need to build.

WIKI: SphereSSL Wiki

Screenshots: Image Gallery

Not sponsored, no affiliate links, no “pro” version—just the actual project. Enjoy, and don’t let DNS drive you insane.

365
 
 

The original was posted on /r/selfhosted by /u/Recent-Success-1520 on 2025-07-08 17:34:36+00:00.


Hey all,

Long-time self-hoster here. I have been using a Synology NAS backed up to Google Drive for my storage needs.

I am setting up a 5-node K8s cluster with the intention of using Ceph. I have worked with Ceph at my job, so I know my way around it.

Do you trust your hardware with your data or you all backup to cloud as well? What do you use for 3-2-1 backup?

Hoping to understand the trend here

366
 
 

The original was posted on /r/selfhosted by /u/gregsadetsky on 2025-07-08 16:38:21+00:00.


Hey r/selfhosted,

My friend Antoine and I have spent the last 1.5 years building Disco, an open-source PaaS to scratch our own itch. We love the existing tools, but kept hitting specific walls.

tldr; We built an open-source, MIT-licensed PaaS that:

  • Lets you scale beyond a single server.
  • Uses API keys for team access, not SSH keys.
  • Has a simple CLI and web UI without overwhelming configuration.
  • Includes built-in database management (disco postgres create).
  • Is funded by optional managed services, so that the code can remain free and open.

The Backstory

For context, I was paying hundreds per month on Heroku and Render for hobby projects, while Antoine's client (Idealist.org) was getting hit with expensive staging environment bills. We looked for self-hosted alternatives, but found:

  • Dokku: Great, but locked us to single servers and required managing SSH access for teams.
  • Coolify: Powerful, but we found the sheer number of configuration options overwhelming.
  • Kamal: Brilliant for deployment, but we wanted integrated database management and other platform features built-in.

What is Disco?

Disco was built to fill that gap. It's designed to be a simple, scalable, and developer-friendly platform.

  • Scale Beyond One Server: Easily add and manage multiple servers in a cluster.
  • Simple & Secure Team Management: Give a teammate an API key to deploy. Revoke it just as easily. No more passing around SSH keys to production.
  • Fast Deploys: Thanks to Docker's layer caching, deploys are usually under 30 seconds.
  • "Just Works" Databases: When you need a quick database for a project, disco postgres create sets one up for you instantly.

We've been running a Raspberry Pi 5 (8GB RAM) at the Recurse Center and it's hosting 50+ web apps without breaking a sweat. Idealist.org moved their staging environments to their own infrastructure using Disco and saw their costs drop significantly.

Getting Started

Getting started is minimal. A typical FastAPI project needs a simple Dockerfile and an 8-line disco.json file. We have a tutorial for deploying that stack on any VPS (Digital Ocean, EC2, etc.).
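For scale of "minimal", a FastAPI Dockerfile of the kind described is a handful of lines. (The disco.json schema is Disco's own, so see their tutorial for it; the module path below is an assumption.)

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# assumption: the FastAPI app object lives in main.py as `app`
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```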

Our Philosophy & Business Model

The project is MIT licensed because we want this to be a dependable self-hosting option with no lock-in.

To keep the project alive without burning out, we also offer managed services for teams who want to migrate off Heroku (to AWS, for example) without managing infrastructure themselves. Revenue from paid services goes directly into improving the open-source version for everyone.

If you've felt stuck between expensive PaaS bills and infrastructure complexity, we'd love for you to check it out and hear your thoughts. Happy to answer any questions!

Cheers

Links:

367
 
 

The original was posted on /r/selfhosted by /u/BrokeMonke2077 on 2025-07-08 14:26:08+00:00.

368
 
 

The original was posted on /r/selfhosted by /u/BaseMac on 2025-07-08 14:05:40+00:00.


I've been totally fixated on the continuity problem with AI since I started working with it last year. Like everyone, I wanted to open each new conversation with a shared understanding of everything I'd ever discussed with Claude or Chat. I was constantly asking them to summarize conversations so I could paste them into the next chat. It was a pain in the ass, and each new conversation felt like a bad copy of the original. It wasn't just the content of the conversations that felt lost, it was the texture of it, the way we talked to one another.

Claude (my favorite LLM by a mile) doesn't have "memory" in the way that ChatGPT does, but it hardly matters, because for anything more than remembering a few facts about you, Chat's memory basically sucks. What it remembers feels arbitrary. And even when you say, "Hey, remember this," it remembers it the way IT wants to, in memories you can delete (by scrolling through a buried setting) but can't edit.

My friend Paul was having the same frustration at the same time. We were talking about it every time we hung out, and eventually he started building a solution for us to use. Once he had a working prototype, we met with amazing results right away.

What started as a personal tool has grown into this free, open source project called Basic Memory that actually works.

If you follow AI at all, you've heard a lot about Model Context Protocol (MCP) servers. Basic Memory is a set of tools used via MCP. In a nutshell, users connect it to their Claude Desktop (or Claude Code) and whatever notes app they like that handles Markdown. We use Obsidian. Basic Memory takes detailed notes on your AI interactions that you two can reference in the future. Imagine a stenographer sitting in on all your chats, writing notes about everything that's said and saving them locally on your computer. Everything stays on your machine in standard Markdown files; your AI conversation data never touches the cloud.

But what's really cool is that it's a two-way street. You can edit the notes, Claude can edit the notes, he can create new ones, and you can too. All of them become part of your shared memory and what he draws on for every future conversation. Then whenever you want to revisit an old conversation or project, Claude reads your shared notes, almost all of which he wrote himself in language both of you can understand.

It's completely self-contained. No external dependencies for data storage, no API keys for the memory system itself, no cloud services required. Just local files you control.

The difference is night and day. Instead of starting from scratch every time, Claude picks up exactly where we left off, even weeks or months later. Research projects actually build on themselves now instead of resetting with every conversation.

I made a (super basic, kind of awful) video showing how it works in practice. I'd love it if you check it out. We have a growing Discord community with a collection of avid users who have built wild new workflows around Basic Memory. It's been pretty cool seeing how people use it in ways that are way more sophisticated than anything we originally imagined. If you're working with AI regularly, it really does unlock so much more.

It's worth checking out if the context loss problem drives you as crazy as it drove us. I think you'll find it really helps.

Links:

  • GitHub repo (AGPL, completely free)
  • Installation guide
  • Discord

369
 
 

The original was posted on /r/selfhosted by /u/hjball on 2025-07-08 12:02:06+00:00.


It’s been a little over a month since I first posted here, and the response has been amazing. I’ve had a ton of great feedback and some very fair criticism too. Since then, we’ve shipped:

  • 🐳 Docker Compose support – spin it up easily with a vastly improved self-hosting guide
  • 🌍 Multi-language/localisation support – now available in 6 languages
  • 📝 Rich text editor - add formatting to your card descriptions
  • 📱 Mobile-first UI – much better experience on small screens
  • 🧩 Board templates – with presets available (custom templates coming soon)
  • 🔄 Simplified Trello integration – import boards with just a few clicks
  • 🔐 More login options – 15+ OAuth providers + email/password
  • 📬 Native SMTP support - BYO mail server
  • 🐞 Plus a load of bug fixes and polish

On the cloud side, we’ve seen 30,000+ cards created and hundreds of Trello boards imported already.

What’s next?

  • Card checklists (most requested feature!)
  • 🎨 White labeling support
  • ⚙️ More configuration options and settings
  • 💅 UI/UX enhancements and lots more bug fixes and polish

Big thanks to everyone who’s contributed code, reported bugs, suggested features, or helped spread the word - you’re helping make Kan better for everyone!

🌐 Website -> https://kan.bn/

📜 Changelog -> https://github.com/kanbn/kan/blob/main/CHANGELOG.md

🛣️ Roadmap -> https://kan.bn/roadmap

370
 
 

The original was posted on /r/selfhosted by /u/CheeseOnFries on 2025-07-07 19:06:47+00:00.


Several months ago I released Sprout Track, a self-hosted baby activity tracker I built as an alternative to the subscription-based apps on the market. The response from the community has been fantastic, with many of you deploying it for your own families.

Sprout Track is still pretty new, and the main requested features have been multi-family support and pre-built Docker images. Many of you wanted to host the app for friends and family members while keeping each family's data completely separate, and others wanted an easier deployment option. Version 0.92.0 delivers both - a proper multi-tenant architecture where each family operates in their own isolated environment with unique URLs and independent user management, plus official Docker images so you don't have to build from source.

What's new in v0.92.0:

  • Multi-family support - Each family gets their own /family/[slug] URL with completely isolated data
  • Enhanced authentication - JWT-based auth with family context
  • Backup/restore improvements - Import existing data during setup, automatic migration for older backups
  • Better real-time updates - Optimized API calls for a more responsive experience
  • Quality of life fixes - Pump log defaults, proper time handling, solid foods no longer affect feed timers

There are a lot more details in the (changelog)

I've also set up a demo website. Here is the link and the login information:

Family Manager Access:

Family Access:

  • URL: https://demo.sprout-track.com/ (select a family from the family selector)
  • Available login IDs: 010203 (1-3 IDs are randomly generated)
  • PIN: 111222

The demo is populated with semi-realistic 🙃 test data spanning multiple families and several days of baby tracking activities.

GitHub: https://github.com/Oak-and-Sprout/sprout-track

Try it with Docker: docker pull sprouttrack/sprout-track:0.92.0

I appreciate all the feedback from the community - you're helping make this better for everyone.

371
 
 

The original was posted on /r/selfhosted by /u/DGReddAuthor on 2025-07-08 09:24:05+00:00.


I'm well-versed in SPF, DMARC, etc. But at the end of the day, I can't do anything about OVH getting IP ranges blocked.

So, I figure I'll throw all my email at either Google or Microsoft. I'm convinced they're the only two players and block out any competitors by ensuring it's virtually impossible to stay deliverable to their IPs if you're not Google or Microsoft.

Or maybe it takes more effort than I'm willing to put in.

Can anyone point me at the process for migrating to either of these, and maybe a suggestion on which is better (if one stands out)?

I will only use them for email. I'll host my own DNS records and point them at MS/Google. Previously I used imap2imap to migrate historical email; is it possible to use that here?
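On the DNS side, the cutover is mostly an MX swap. Illustratively, in BIND zone syntax (verify current values against each provider's docs; the Microsoft host is tenant-specific):

```
example.com.  3600  IN  MX  1  smtp.google.com.
; Microsoft 365 uses a tenant-specific target instead, of the form:
; example.com.  3600  IN  MX  0  <tenant>.mail.protection.outlook.com.
```

SPF/DKIM/DMARC records then need updating to authorize the new sender, which you already know your way around.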

372
 
 

The original was posted on /r/selfhosted by /u/david007co on 2025-07-07 22:43:20+00:00.


Just wanted to shout out a project I came across (and tested) recently which is Yamtrack.

It’s a self-hosted media tracker that lets you manage your watchlist, track history, and organize shows/movies with ease. It supports manual tracking, and also has impressive integrations with services like Plex, Emby, Trakt, Simkl, MyAnimeList, AniList, Kitsu, etc..

I imported my entire library from Trakt and Simkl without any issues; everything matched perfectly. The UI is clean and fast, and everything just works: easy Docker deployment, user accounts, etc.

I've tried most of the other self-hosted trackers like Movary, MediaTracker, Watcharr, etc. Yamtrack hits the sweet spot between simplicity and powerful features. It’s clear a lot of thought went into its design.

With Trakt getting more restrictive for free users lately, this is honestly the best alternative I’ve seen — and it gives you full control over your data.

Definitely feels like one of those hidden gems in the self-hosted space that deserves way more attention.

It would be great if other apps (like Cinexplore for movie discovery) started using Yamtrack as their backend.

Anyways, I feel people should give it a try:

GitHub: https://github.com/FuzzyGrim/Yamtrack

Would be great to see more people use it and maybe even contribute, as the dev has been active and responsive! Thanks u/haumeaparty

373
 
 

The original was posted on /r/selfhosted by /u/ralsina on 2025-07-07 13:37:49+00:00.


I am publishing the first preview version of KV, a written-from-scratch remote KVM solution.

You can use it to remotely control a server (or any computer): it gives you a webpage where you can see the video output and emulate a keyboard and mouse. You can upload a disk image to the KVM and present it as a USB mass storage device to the server. I suppose you could even install the operating system on the server this way.

It supports cheap and popular USB video capture devices. You need an SBC with an OTG port.

https://preview.redd.it/9130k4c8ggbf1.png?width=1543&format=png&auto=webp&s=bb98c2d1e419ff6c0053152175d71d2fcff40f67

There is a small video demonstrating the app here: https://youtu.be/_NCVytMPW18

And of course it's MIT-licensed and you can get the code at https://github.com/ralsina/kv

374
 
 

The original was posted on /r/selfhosted by /u/cogwheel0 on 2025-07-07 13:35:29+00:00.

375
 
 

The original was posted on /r/selfhosted by /u/FIFATyoma on 2025-07-07 11:52:26+00:00.


Hey r/selfhosted! 👋

I love ThePosterDB, but unfortunately they're still ages away from being an alternative image provider for Jellyfin, so I built an "alternative" for that.

What it does

Jellyfin Poster Manager automatically searches ThePosterDB for high-quality movie and TV series posters and uploads them directly to your Jellyfin server. No more manual searching, downloading, and uploading!
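The upload half of that can be sketched against Jellyfin's API. As I understand it, Jellyfin accepts a base64-encoded image body on the item images endpoint with an API-key header; treat the endpoint details below as assumptions to verify against the Jellyfin API docs (and this is not the project's actual code):

```python
import base64
import urllib.request

JELLYFIN_URL = "http://localhost:8096"  # assumption: your server address
API_KEY = "your-jellyfin-api-key"       # assumption: an API key from the admin dashboard


def build_poster_request(item_id, image_bytes, content_type="image/jpeg"):
    """Build the POST that sets an item's primary image (body is base64-encoded)."""
    return urllib.request.Request(
        f"{JELLYFIN_URL}/Items/{item_id}/Images/Primary",
        data=base64.b64encode(image_bytes),
        method="POST",
        headers={"Content-Type": content_type, "X-Emby-Token": API_KEY},
    )


def upload_poster(item_id, image_bytes):
    with urllib.request.urlopen(build_poster_request(item_id, image_bytes)) as r:
        return r.status
```

The tool's real work is the other half: matching your library items to the right ThePosterDB entry before a request like this ever fires.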

✨ Key Features

  • 🚀 Batch Operations: Process your entire library or filter by movies/TV series
  • 🎯 Smart Filtering: Only update items without posters, or replace everything
  • 🔍 Manual Selection: Browse multiple poster options when you want control
  • ⚡ One-Click Setup: Simple configuration

🖼️ Screenshots

https://preview.redd.it/m828i5lcxfbf1.png?width=1426&format=png&auto=webp&s=39f756e70bde380b999bdb259acca0d93d67961a

https://preview.redd.it/5vvdge6txfbf1.png?width=1391&format=png&auto=webp&s=bcae602c3dc9b9ec8a01a6d17d63d9bbb3f64720

The interface shows your library with missing posters highlighted, and you can either:

  • Auto-process items in bulk (recommended for large libraries)
  • Manually select from multiple poster options for specific items

GitHub: https://github.com/TheCommishDeuce/TPDB_JellyfinPosterManager
