Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
2151
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/performation on 2025-01-18 09:08:40+00:00.


Hello everyone,

After using my homelab for about half a year behind a VPN, I decided to expose some services directly. I have read a good amount on the topic and want to double check that I have not missed any major points.

I know there will be a lot of comments saying I should not do this at all if I have to ask, or to just use a VPN or Cloudflare Tunnel, but I do not want to do that. I am just looking for some friendly advice on best practices.

So the plan is: opening and forwarding port 443 on my router to my VM. The VM runs on Proxmox in an isolated VLAN. It is a very minimal install which, apart from Docker, git and NFS, runs only the bare minimum. The firewall is handled by Proxmox; it is set to allow only port 443, plus SSH from internal IPs on my admin VLAN.

The VM has Docker running in rootless mode with a total of 4 services I want to expose, plus Traefik and Authentik. Traefik drops all traffic not pointing to the correct sub-domains. I have set the usual HTTP headers, rate limiting, geo blocking etc. Authentik accepts logins only via password and 2FA. I have also set up CrowdSec and fail2ban on both my router and the VM, plus WatchYourLAN. SSH login is key-only, but shouldn't be possible from an external IP anyway.
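For anyone after a concrete starting point, the middleware chain described above might look roughly like this in Traefik's dynamic file configuration (a sketch only: the middleware names, limits, and the auth.example.com host are illustrative, and a label-based setup would look different):

```yaml
# Hypothetical Traefik v2 dynamic config: security headers + rate limiting.
http:
  middlewares:
    secure-headers:
      headers:
        stsSeconds: 31536000        # HSTS for one year
        stsIncludeSubdomains: true
        contentTypeNosniff: true
        browserXssFilter: true
        frameDeny: true
    rate-limit:
      rateLimit:
        average: 50                 # sustained requests per second
        burst: 100
  routers:
    authentik:
      rule: "Host(`auth.example.com`)"
      entryPoints: ["websecure"]
      middlewares: ["secure-headers", "rate-limit"]
      service: authentik
      tls: {}
```

Requests to any host not matched by a router rule get a 404 from Traefik, which is the "drops all traffic not pointing to the correct sub-domains" behaviour described above.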

Updates to proxmox, the VM and the docker containers will be done manually a few times a week for now. Last thing I am currently working on is loki + grafana for access logs so I can monitor things myself.

There are automatic backups of all data and configs onsite and offsite, so in case of disaster I am going to wipe the VM and restore a backup.

So what did I miss? TIA to anyone.

2152
 
 

The original was posted on /r/selfhosted by /u/lokwaniyash on 2025-01-18 03:23:55+00:00.


Hey everyone! Since I set Sonarr and Radarr up to manage my media, it has been a breeze, but there have been a few problems. One of them is that my indexers keep showing as failed. After looking into this, it turns out that if an indexer doesn't return results, it is assumed to be "not working" and marked as disabled, at which point you see an error that the indexer is disabled (sometimes for over 6 hours). Since Sonarr and Radarr don't run health checks on disabled indexers very frequently, it's possible that not all indexers are being used, and the results may not be as good.

I figured a solution would be to make an API call every few minutes to check the health of those indexers, which worked pretty well, but even after that, indexers would sometimes still be unavailable. My requirement was that, even if an indexer is marked as not working, Sonarr and Radarr should be forced to use it. So I opened the radarr.db / sonarr.db database file (I'm on Windows, so I found it in C:/ProgramData/AppName - replace AppName with sonarr or radarr) to see if I could find any more details. I found a table called "IndexerStatus" which has an escalation level: if an indexer fails continuously, it will be disabled for even longer. So I made a trigger which checks for any updates on that table and makes sure the escalation level stays at 0 and the DisabledTill column stays NULL. Here's the SQL I wrote (don't mind me, first trigger):

CREATE TRIGGER prevent_change_to_column
AFTER UPDATE ON IndexerStatus
FOR EACH ROW
BEGIN
    -- Ensure the escalation level always stays at 0 and the indexer is never disabled
    UPDATE IndexerStatus
    SET EscalationLevel = 0, DisabledTill = NULL
    WHERE rowid = OLD.rowid;
END;

After adding this to the db and saving it, I haven't seen any failures so far, and I've noticed Sonarr and Radarr using all indexers. I'll keep you guys updated if this causes any problems.
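If you want to double check the trigger is holding the line, a quick query against the same database (column names as in the trigger above; run it on a copy, or while the app is stopped) should come back empty:

```sql
-- Any row returned here means an indexer slipped back into a disabled state.
SELECT rowid, EscalationLevel, DisabledTill
FROM IndexerStatus
WHERE EscalationLevel > 0
   OR DisabledTill IS NOT NULL;
```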

2153
 
 

The original was posted on /r/selfhosted by /u/IAmOmnificent on 2025-01-17 22:23:03+00:00.


Hey guys, so I have a VM set up in Proxmox to handle all my media needs. It runs the following: Jellyfin, Radarr, Sonarr, Bazarr, Lidarr, Prowlarr, Transmission and Jellyseerr. All the Docker images are from LinuxServer except Jellyseerr.

The resources I allocated to the VM are: 4vcpu, 12GB RAM, Intel Arc A310

On idle I am getting about 2.8GB RAM usage for all those services. However, when I start streaming on Jellyfin (~2 streams, both transcoding), the RAM usage spikes to almost the maximum (for some media, a single stream is enough), causing the VM to be unresponsive at times. The media does play, but trying to load another instance of Jellyfin in another browser, for example, will just load continuously.

Stopping the media streams and leaving it for a bit (~3-5 mins) brings everything back to normal.

I have no idea what is going on and would love to hear if anyone else has had this issue. My previous media server ran on an old laptop with a 6th-gen Intel CPU, so the best I could do was get one transcoding stream up, and even that would stress the iGPU, so I never hit this issue there. However, given the A310 can handle a good chunk of streams, this was unexpected.
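One thing worth trying (an assumption on my part, not something the post confirms) is capping the Jellyfin container's memory so a runaway transcode can't starve the whole VM. With a non-swarm compose file that's just:

```yaml
# Hypothetical compose fragment: the 8g limit is illustrative, tune it to your VM.
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    mem_limit: 8g
    memswap_limit: 8g
```

If RAM usage then flat-lines at the cap during transcodes instead of taking the VM down, that points at transcode buffering rather than a leak.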

Any insights or tips would be great. Cheers!

2154
 
 

The original was posted on /r/selfhosted by /u/Wild_Magician_4508 on 2025-01-17 17:18:58+00:00.


I'm always in search of some good blogs about self hosting. A lot of the ones I find in searches are old or no longer maintained with fresh content. It seems people get real excited about selfhosting, write a bunch of killer tutorials, and then I guess the excitement wears off, and they no longer keep the blog fresh.

I know about noted.lol, selfh.st, and selfhosted.show. DigitalOcean and Linode have some pretty good articles, and there are some decent ones on Medium. Howtoforge is good, and www.linuxbabe.com would be another good resource.

I'm looking for inspiration and education.

2155
 
 

The original was posted on /r/selfhosted by /u/I-am-an-adult- on 2025-01-17 19:08:35+00:00.


Hello! I have a question about understanding how things work.

I generally know how arr applications function. For example, I use qBittorrent, radarr, prowlarr, jellyseer, and jellyfin.

With this setup, I can select movies through jellyseer, which are then automatically sent from radarr to prowlarr for download via qBittorrent.

Now onto whisparr. Whisparr is similar to radarr, but for adult content. Is there a "jellyseer" equivalent for whisparr? Or how do I tell whisparr what I want to watch? Do I manually enter video titles? And where do I get those names from? Thank you:-)

2156
 
 

The original was posted on /r/selfhosted by /u/ponzi_gg on 2025-01-17 18:08:15+00:00.

2157
 
 

The original was posted on /r/selfhosted by /u/shol-ly on 2025-01-17 12:53:27+00:00.


Happy Friday, r/selfhosted! Linked below is the latest edition of This Week in Self-Hosted, a weekly newsletter recap of the latest activity in self-hosted software and content.

This week's features include:

  • Self-hosted social media platforms gaining traction
  • Software updates and launches
  • A spotlight on Coolify - a self-hosted alternative to Heroku and Netlify
  • A ton of great guides from the community (including this subreddit!)

In this week's podcast episode, I'm joined by guest co-host Elliot Courant - the developer of the recently-launched budgeting app Monetr.

Thanks, and as usual, feel free to reach out with feedback!


Newsletter | Watch on YouTube | Listen via Podcast

2158
 
 

The original was posted on /r/selfhosted by /u/Friendly_Barracuda38 on 2025-01-17 11:22:43+00:00.


Hello guys, I have a question which I have not seen asked a lot (maybe I am wrong)

Which terminal do you guys use for your self-hosting needs such as SSH or just general server commands?

Please also include which operating system you are on.

2159
 
 

The original was posted on /r/selfhosted by /u/OriginalPlayerHater on 2025-01-17 06:03:35+00:00.


Just sharing my experience tonight in case someone else runs into it.

The problem:

I moved computers and my Immich installation was on an external drive, but I lost the original folder where the PostgreSQL data was configured. No database meant my library was showing empty.

The solution:

Luckily, the Immich main folder has a backup folder with the database SQL files. This can be used to restore the database.

Steps to fix:

  1. Dropped the existing database (immich) in PostgreSQL:

     docker exec -it immich_postgres psql -U postgres -c "DROP DATABASE IF EXISTS immich;"

  2. Restored from the backup using the SQL dump file. Assuming your backup file is located at E:/immich/backups/immich-db-backup-1736676000009.sql:

     cat E:/immich/backups/immich-db-backup-1736676000009.sql | docker exec -i immich_postgres psql -U postgres -d immich

  3. Ran into an issue with password authentication: Immich couldn't connect to PostgreSQL. To fix this, I reset the password for the postgres user:

     docker exec -it immich_postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"

  4. Updated the .env file with the correct password:

     DB_PASSWORD=postgres

  5. Restarted the containers to apply the changes:

     docker-compose down
     docker-compose up -d

Result:

Everything's back up and running smoothly! No data lost, library restored, and Immich is working perfectly again.

2160
 
 

The original was posted on /r/selfhosted by /u/feror_YT on 2025-01-16 22:32:33+00:00.


Hey guys 👋

I am currently hosting a GitLab instance but I find it to be a bit slow… I found out about Gitea a couple of days ago and it looks pretty damn fast.

The main point that I’m trying to make is that I don’t understand why Gitea would have such a small market share compared to GitLab even though it looks so adequate.

So I was wondering if any of you have tried both and can give me your impressions?

For context, I don’t expect to have many users (less than 10 most likely), and I would like to be able to integrate some CI/CD stuff with it for my projects. I don’t really need most of the project management stuff as I use external tools anyway.

Cheers, Feror.

2161
 
 

The original was posted on /r/selfhosted by /u/GreatRoxy on 2025-01-16 21:19:27+00:00.


Hello,

I just wanted to let you know about something serious I came across. While using Zipline, I found a big security issue with the OAuth2 setup (specifically with Google), and it’s super important to update right away to keep your accounts safe.

Vulnerability Details:

  • Affected Versions: Anything past v3.6.0, up to and including v3.7.10.
  • Impact: An issue in the OAuth2 fallback logic allowed account hijacking. If two Google accounts share the same username prefix (e.g., username@gmail.com and username@domain.com), they could end up pointing to the same account in Zipline. This means someone could easily access another user’s data.
  • Affected Features:
    • Users who enabled the following settings are especially vulnerable:
        FEATURES_OAUTH_LOGIN_ONLY=true
        OAUTH_BYPASS_LOCAL_LOGIN=true

These settings, which should increase security by disabling password logins, unfortunately weakened security in this case due to the OAuth fallback logic issue.
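To make the collision concrete: assuming (my reading of the report, not confirmed code) the fallback keyed accounts on the email's local-part, two unrelated Google accounts reduce to the same Zipline username:

```shell
# Illustration only: keying on the part before '@' collapses distinct
# identities into one account name. Both lines print account 'username'.
for email in username@gmail.com username@domain.com; do
  echo "$email -> account '${email%%@*}'"
done
```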

What You Should Do:

  • Update Immediately: Upgrade to the latest version of Zipline (v3.7.11 or higher) to ensure your accounts are secure.
  • If You’re Not Using OAuth2: You’re safe, but still consider updating for other improvements.

My Experience:

I discovered this issue and reported it to the Zipline team via their GitHub repository. I’m happy to say that the developer quickly acknowledged the problem and implemented a fix in record time. The latest release (v3.7.11) resolves the issue, so it’s critical for users to update immediately.

It’s quite surprising that such a critical issue existed. The fallback logic essentially bypassed a key security mechanism, leaving users' data at risk.

For those interested, you can view the updated code that addresses this issue here: GitHub Commit Fix

2162
 
 

The original was posted on /r/selfhosted by /u/invaluabledata on 2025-01-16 20:52:14+00:00.


I have invested much time in researching what software to use for backing up my invaluable data -- eponymous pun intended. My two final contenders are Duplicacy and Borg. They both seem to have long-term histories and thus are surely stable and reliable. They also have the same deduplication efficiency strategy. If you have an opinion on this or use some other software, would you please share your wisdom? Thank you!

2163
 
 

The original was posted on /r/selfhosted by /u/Fiji236 on 2025-01-16 20:51:22+00:00.


Ben Busby's post on Github

Really depressing reminder that companies like Google can just brick entire projects with a simple policy change.

*Edit: I can't express my disappointment enough. It has been integral to my work this past year, where relevance was far more important than popularity and SEO.

2164
 
 

The original was posted on /r/selfhosted by /u/hernil on 2025-01-16 19:29:12+00:00.


Inspired by other discussions here and elsewhere and examples like this I decided to make a plan for disaster recovery.

I wanted to cover scenarios like completely losing all my physical on-site tech - including phone, wallet etc. I also wanted it to be the place for my wife or other close family to start in case of something happening to me.

The plan is found here and I also wrote a blog post explaining a bit more of what is happening.

The tl;dr is: GPG private keys on physical YubiKeys to unlock publicly hosted data blobs.

Hopefully someone finds this useful, and feedback is very welcome!

2165
 
 

The original was posted on /r/selfhosted by /u/ohv_ on 2025-01-16 18:10:48+00:00.


Noticed my Whoogle not working.

2166
 
 

The original was posted on /r/selfhosted by /u/nate4t on 2025-01-16 17:37:48+00:00.


Hey, I'm a Dev Advocate with CopilotKit, a self hostable, open-source framework for building in-app AI assistants and full stack agent applications.

We are excited about a recent collaboration with LangChain to build an Agentic Generative UI frontend for a LangGraph backend. We recently launched CoAgents, a frontend framework that allows developers to integrate LangGraph agents into full-stack apps easily.

We have released v0.3, which introduces some major developer quality-of-life improvements to CoAgents, incorporating the great feedback we got from the community on the v0.1 and v0.2 releases.

We anticipate v0.3 to evolve into the 1.0 release in the near future.

Here’s what CoAgents v0.3 brings to the table:

  1. Simpler message syncing:
    1. LangGraph agent messages and CopilotKit messages are always automatically kept 100% in sync
  2. All LangGraph agent tool calls are emitted by default:
    1. no need to explicitly emit tool calls in the agent code. If the frontend does not handle the calls, there will simply be no effect.
  3. Support for “catch-all” tool calls rendering:
    1. you can provide a default “catch all” generative UI render function for all agent tool calls

We're fully open-source (MIT), check out our GitHub: 

2167
 
 

The original was posted on /r/selfhosted by /u/Kryptonh on 2025-01-16 16:31:03+00:00.


We are excited to announce our first release of the year. Why not start with languages 😀?

For the uninitiated, Docmost is an open-source collaborative wiki and documentation software. It is an open-source alternative to Confluence and Notion.

In Docmost v0.7, we have introduced internationalization 🌏. This has been in the works for the past few months.

With this release, adding support for new languages becomes much easier. Let us know which languages you would like to see supported next!

Highlights of this release

  • Language translations
    • German
    • French
    • Portuguese (BR)
    • Chinese
  • Support for pasting markdown
  • Google Sheets embed
  • Multiple improvements to the editor

Full release notes: 

Website: https://docmost.com/

Docs: 

Github: 

2168
 
 

The original was posted on /r/selfhosted by /u/the-head78 on 2025-01-16 15:12:18+00:00.


FYI - as some of you might also be using rsync for backups or similar jobs, you should check that you are running version 3.4.0 at minimum. If you have an older version, you should update.

All prior versions have multiple vulnerabilities that could compromise your system. Beware - there are also other backup / sync applications utilizing rsync, so just check your version.
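A quick way to check what you are running (a sketch; it assumes GNU coreutils' sort -V for the version comparison):

```shell
# Compare the installed rsync version against the first patched release (3.4.0).
version_ge() {
  # True if $1 >= $2 under version-number ordering.
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

ver=$(rsync --version 2>/dev/null | head -n1 | awk '{print $3}')
if [ -n "$ver" ] && version_ge "$ver" 3.4.0; then
  echo "rsync $ver looks patched (>= 3.4.0)"
else
  echo "rsync ${ver:-not found} - update to 3.4.0 or later"
fi
```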

2169
 
 

The original was posted on /r/selfhosted by /u/esiy0676 on 2025-01-15 19:58:53+00:00.


Every now and then there's a post about hosting or not hosting email per se, for sending out or delivering. This is NOT such a post.

I am wondering what people use for storing emails, whether they got pulled or delivered or otherwise reached their system.

Suppose you have downloaded entire mailbox content off a service like Gmail, it comes as mbox. You can make it a Maildir. You can e.g. put Dovecot over it and have it available via IMAP to whichever clients, but it also makes it horrible to search within or organise.

You could perhaps forward it to something like Matrix (or Mattermost, etc.) via a bridge and get some of the database benefits, but then it's not actionable as email, and what about exporting back to e.g. that mbox format if needed one day?

So, how do you store your mailboxes, long-term?

2170
 
 

The original was posted on /r/selfhosted by /u/Grouchy-Journalist99 on 2025-01-16 00:15:04+00:00.

2171
 
 

The original was posted on /r/selfhosted by /u/Flaminel on 2025-01-15 21:58:02+00:00.


Hi everyone, I hope your week is going well!

✨ I'm excited to announce that cleanuperr v1.4.0 is now out, which includes the much requested support for Lidarr.

cleanuperr is a tool for automating the cleanup of unwanted files and downloads for Sonarr, Radarr, and now Lidarr.

  • Weird file extensions? Cleaned! 📄🧹
  • Failed imports? Cleaned! 🚫🧹
  • Stalled downloads? Cleaned! 🕒🧹
  • Ignore private torrents? Not cleaned! 🔒

Supported download clients:

  • none
  • qBittorrent
  • Deluge
  • Transmission

What changed since v1.3.0:

  • Created an official Unraid template. 🗄️
  • Added Lidarr support. 🎵
  • Changed the way blocklists work (breaking change), due to new Lidarr support. ⚠️
  • Added option to not use a download client. This is useful if you want to use cleanuperr to remove failed imports, even if you're using Usenet.
  • Added the option to ignore private torrents when looking for failed imports, stalled downloads or weird extensions. 🔒
  • Added the option to ignore failed imports based on message patterns. 🔒
  • Some other small things. 🤏

👉 Check out the project here: flmorg/cleanuperr

💬 Got feedback or questions? Join our Discord server, create a GitHub issue or let me know in the comments!

💬 Are the docs unclear? Let me know how I can improve them!

🔜 What's next?

  • Readarr support?
  • Persistent strikes?
  • API to check the number of strikes of a download?

You tell me what's next! 🔜 What would you like cleanuperr to do for you in the future? I would love to hear your thoughts! 🤩

2172
 
 

The original was posted on /r/selfhosted by /u/MohamedBassem on 2025-01-15 22:29:08+00:00.


This post could have been about how hoarder reached 10k stars on Github, or about how we spent a day in the front page of hackernews. But unfortunately, it's about neither of those. Today, I received a cease and desist from someone holding the "Hordr" trademark claiming that "Hoarder" infringes their trademark. Quoting the content of the letter:

In these circumstances, our client is concerned, and justifiably so, that your use of a near identical name in connection with software having very similar (if not identical) functionality gives the impression that your software originates from, is somehow sponsored by, or is otherwise affiliated with our client.

They're asking to cease and desist from using the "Hoarder" name, remove all content of websites/app store/github/etc that uses the name "Hoarder" and the cherry on top, "Immediately transfer the hoarder.app domain to our client" or let it expire without renewing it (in Feb 2027). They're expecting a response by the 24th of Jan, or they're threatening to sue.

For context, I've started developing Hoarder in Feb 2024, and released it here on reddit on March 2024. I've never heard about "Hordr" before today, so I did some research (some screenshots along the way):

  1. They have a trademark for "Hordr" registered in Jan 2023.
  2. They registered the domain hordr dot app in 2021.
  3. Searching google for their domain shows nothing but their website, their parent company and an old apk (from Jun 2024). So they have basically zero external references.
  4. They've had their 2.0 release on the app store on the 3rd of Jan 2025 (2 weeks ago), with "AI powered bookmarking". The release before that is from Feb 2023, and says nothing about the content of the app back then.
  5. Their apps are so new that they are not even indexed on the play store. Google says they have "1+" downloads.
  6. I found an apk on one of the apk hosting sites from Jun 2024, which shows some screenshots of how the app looked back then.
  7. The Wayback Machine for hordr dot info shows a reference from 2023 to some app in the app/play store. The app itself (in the app/play store) is unfortunately not indexed.

So TL;DR, they seem legitimate and not outright trademark trolls. Their earliest app screenshots from June 2024 suggest their current functionality came after Hoarder’s public release. Despite their claims, I find it hard to see how Hoarder could cause confusion among their customers, given they appear to have almost none. If anything, it feels like they’ve borrowed from Hoarder to increase the similarity before sending the cease and desist.

Hoarder is a side project of mine that I've poured in so much time and energy over the last year. I don't have the mental capacity to deal with this. I'm posting here out of frustration, and I kinda know the most likely outcome. Has anyone dealt with anything similar before?

2173
 
 

The original was posted on /r/selfhosted by /u/drivingLights on 2025-01-15 20:26:06+00:00.


TLDR

Developing an open source, self-hostable period tracker with e2e encrypted device syncing and cycle sharing. Any suggestions or input will be a huge help!

Why?

Currently most period trackers out there are entirely proprietary. While many promise that they encrypt your data or won't share it with law enforcement, we all know that those promises are often empty. I won't get political, but we can agree that privacy, especially biological privacy, is sacred.

My solution, both server and client, will be open source, transparent and verifiably end-to-end encrypted. There are already open source trackers out there (such as Drip) but these also have their own issues:

  1. Many are not very feature-rich, are not as easy to use, or are unattractive.

  2. None that I have seen support device syncing or cycle sharing with friends and partners.

1.0 features

Features that I want stable and ready for the 1.0 release:

  • Basic tracking with both pre-baked symptom logging as well as custom symptoms and notes

  • Cycle predictions

  • Cycle sharing – Allow friends, family or partners to be able to view each-others cycles (similar to Stardust)

  • End-to-end encrypted. The entire app and server are being built from the ground up with encryption and secure sharing in mind.

  • The client will be local first, with connecting to a server simply providing additional features.

Development

The server is being written in Java with a PostgreSQL database. The client is being developed in Dart and Flutter, with SQLite used for local data. I'm not very experienced with UI or app development, so I am learning Dart/Flutter as I go, but I intend for everything to be polished and best practice.

This is in very early development aiming for a beta client and server to be out by the end of the year.

Disclosure

Yes I’m a cis man. Most of my inspiration so far has come from my female peers. I know statistically this community is majority male as well but any input on often missing features or something you would like to see in the final product please let me know. Any notes or comments can help, especially where I could potentially have blind spots.

2174
 
 

The original was posted on /r/selfhosted by /u/aimo_dg18 on 2025-01-15 18:47:36+00:00.


Hey!

I just want to announce that I decided to publish kitshn (mobile client for Tandoor recipes) to the Apple App Store. After some back and forth with the Support Team, I was now able to publish it. 🥳 Please feel free to report any issues or ideas if you'd like :)

kitshn on GitHub

2175
 
 

The original was posted on /r/selfhosted by /u/vinioyama on 2025-01-15 15:26:10+00:00.


Hi! I'm launching my Self-Hosted App for an All-In-One solution to manage projects, track time and focus. My goal is to make it simple yet effective.

I’ve always believed it’s not the tool that makes a project successful but I’ve always wanted one that aligned with my vision: a mix of Trello, ClickUp, Toggl and Focus-Oriented Tools.

So I'm really happy to have started this project!

I hope you like it. Any ideas or feedback are welcome.
