Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
1201
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/IsaacTM on 2025-04-09 13:35:20+00:00.


I have around 20 Docker containers and simply want to set up internal DNS for them so I don't have to remember ports. What's the easiest, safest way to go about it? If you can suggest a solution that runs in its own Docker container and has ELI5-type documentation, that'd be great.
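For illustration, one common pattern (every hostname, image, and port below is a placeholder, not a recommendation from the thread): run a local DNS server that rewrites `*.home.lan` to the Docker host, plus a reverse proxy that maps each name to a container port. A minimal sketch:

```yaml
# docker-compose.yml (sketch; names and images are placeholders)
services:
  adguard:
    image: adguard/adguardhome
    ports:
      - "53:53/udp"   # serve DNS to the LAN
  caddy:
    image: caddy:2
    ports:
      - "80:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile

# Caddyfile (separate file): one line per service, e.g.
#   jellyfin.home.lan { reverse_proxy jellyfin:8096 }
#   sonarr.home.lan   { reverse_proxy sonarr:8989 }
# Then add a DNS rewrite in AdGuard Home: *.home.lan -> <docker-host-IP>
```

With that in place, `http://jellyfin.home.lan` works from any device pointed at the DNS server, and no ports need remembering.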

Thanks in advance for any help you can provide.

1202
 
 
The original was posted on /r/selfhosted by /u/zuus on 2025-04-09 12:44:04+00:00.

1203
 
 
The original was posted on /r/selfhosted by /u/Another__one on 2025-04-09 12:03:27+00:00.


Hello everybody. I recently showed my project here: Anagnorisis, a system that aims to provide a completely local alternative to cloud-based recommendation services such as Spotify or YouTube. If you haven't heard about it yet, you can watch these videos to get the general gist of it:

Anagnorisis: Music Module Preview (v0.1.6)

Anagnorisis: Images Module Preview (v0.1.0)

Or visit the GitHub page:

Last time I showed the project here, despite the generally positive feedback, several people struggled to recreate the local environment needed to run the project. To make setup easier, I've now provided a Docker container alongside the project for simple setup and use. I hope this helps. Feel free to ask any questions and share your feedback here.

1204
 
 
The original was posted on /r/selfhosted by /u/Unified-Field on 2025-04-09 08:14:01+00:00.

1205
 
 
The original was posted on /r/selfhosted by /u/speculatrix on 2025-04-09 08:52:55+00:00.


Building the New Internet, together — our Series C and what's next

Tailscale has raised $160 million USD ($230 million CAD) in our Series C, led by Accel with participation from CRV, Insight Partners, Heavybit, and Uncork Capital. Existing angel investor George Kurtz, CEO of CrowdStrike, is also included in this round, as is Anthony Casalena, CEO of Squarespace, who joins as a new investor for the Series C.

There’s a lot packed into that sentence. But the real question is — why should you care?

$160 Million Series C

When we started Tailscale in 2019, we weren't even sure we wanted to be a venture-backed company. We just wanted to fix networking. Or, more specifically, make networking disappear — reduce the number of times anyone had to think about NAT traversal or VPN configurations ever again.

That might sound simple, but it wasn’t. Here we are, six years later, and millions of people rely on Tailscale every day, connecting their homelabs, their apps, their companies, their AI workloads. Some use it because they love networking and want better tools. Many use it because they have better things to do – they don’t want to think about networking at all.

Either way, the outcome is the same: things connect, securely and privately, without the traditional headaches.

Identity-First, Decentralized, Empowered

Even though we already had a long runway, we raised this Series C because we realized the world had started raining opportunities. We want to go faster where it matters:

  • Removing friction
  • Scaling the network without scaling complexity
  • Making identity, not IP addresses, the core of secure connectivity

The Internet wasn’t built with identity in mind. It was built for location — packets sent between machines, not people. Everything that came after — VPNs, firewalls, Zero Trust — is an attempt to patch over that original gap.

We think there’s a better way forward. We're calling it identity-first networking.

When you connect to something with Tailscale, you’re not just an IP connecting to a server at some IP. You’re connecting to your app, your teammate, your service — wherever it happens to be running right now. That’s how it should work.

Product Innovation, Expansion, Team Growth

Why now? Why raise this much?

The last year made the need for this even more obvious. The AI industry, in particular, is struggling to rapidly mature its underlying infrastructure. Connecting GPUs across clouds, securing workloads across continents, migrating between cloud providers — it’s messy, it’s hard, and it breaks all the time.

A surprising number of leading AI companies — Perplexity, Mistral, Cohere, Groq, Hugging Face — are now building on Tailscale to solve exactly this.

It’s not just AI. Companies like Instacart, SAP, Telus, Motorola, and Duolingo and thousands of others use Tailscale to make their hybrid, remote, and cloud networks sane again.

This new funding helps us support all of that, faster. We're going to grow our engineering and product teams to unlock more markets faster. We're also investing further in our "free support for free customers" promise and our "backward compatibility forever" platform. Business is booming, and taking investment now lets us stay focused on making the network just work, whether you’re a startup, a Fortune 500, or a person running a Minecraft server.

Accel, CRV, Heavybit, Insight Partners, Uncork

Who's behind this round?

We’re lucky to have Accel’s Amit Kumar — who led our Series A — leading this round too, now from their growth fund. And we’re excited to welcome Anthony Casalena of Squarespace, alongside returning investors CRV, Heavybit, Insight, and Uncork, and George Kurtz, CEO of CrowdStrike.

The mix here matters. These are people who understand that the network is the right place for the security and identity layer. The boundary is shifting from the datacenter to the device — and from the device to the person holding it, or the container running on it.

Connected Nodes

Thanks for being here

We wouldn’t be at this point without the thousands of businesses — and the millions of people — who've bet on us so far. You believed networking could be better, even when you didn’t want to have to think about it.

That’s fine. We think about it so you don’t have to.

Thanks for being part of this. More soon.

— Avery


sorry for the page mangling

1206
 
 
The original was posted on /r/selfhosted by /u/dirky_uk on 2025-04-08 20:58:11+00:00.

1207
 
 
The original was posted on /r/selfhosted by /u/zykooo on 2025-04-09 06:36:39+00:00.


I want to share my excitement about my latest self-hosting achievements with you.

Over the past few months, I’ve learned a lot about self-hosting. I figured out how to configure Frigate with my PoE cams, set up Ollama and Open WebUI, Jellyfin, Audiobookshelf, and more.

I managed to set up AdGuard Home with some DNS rewrites, bought a domain, configured NGINX Proxy Manager, and set up 20+ proxy hosts with SSL certificates. I even figured out how to auto-renew the certs using my domain provider’s API.

That part was tricky, but I learned a ton in the process.

Then I decided it was time to set up a VPN… oh boy.

It took me hours to realize my ISP (Starlink) uses CGNAT, so all the DDNS setup I had done was completely useless… :D

Well, not entirely — I learned a lot again.

After some research and with the help of my AI companion ChatGPT, I came up with a plan: I set up a Raspberry Pi with WireGuard as a relay and connected it to a WireGuard instance on a small VPS.
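For anyone sketching the same CGNAT workaround, the Pi-side config takes roughly this shape (keys, addresses, and hostnames are placeholders, not the poster's actual setup); the keepalive is the piece that makes it survive CGNAT:

```ini
# /etc/wireguard/wg0.conf on the Raspberry Pi relay (sketch; values are placeholders)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <pi-private-key>

[Peer]  # the VPS with the public IP
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.0/24
# Re-send keepalives so the CGNAT mapping never expires:
PersistentKeepalive = 25
```

Clients then connect to the VPS, which forwards traffic over the always-up tunnel to the Pi and the home LAN behind it.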

I actually got them talking to each other — and when I connected my first client, I finally understood why some people love Dark Souls. I felt like I had beaten the hardest boss.

Then I even installed WGDashboard, and it blew my mind.

Somewhere along the way I managed to completely lock myself (and all my devices) out due to some stupid mistakes… but hey — Dark Souls, right?

Self-hosting is awesome. I hate it. But it’s awesome.

1208
 
 
The original was posted on /r/selfhosted by /u/interestingsouper on 2025-04-09 03:28:38+00:00.

1209
 
 
The original was posted on /r/selfhosted by /u/kmisterk on 2024-04-19 17:45:57+00:00.


Good Morning, /r/selfhosted!

Quick update, as I've been wanting to make this announcement since April 2nd and have just been busy with day-to-day stuff.

Rules Changes

First off, I wanted to announce some changes to the rules that will be implemented immediately.

Please reference the rules for actual changes made, but the gist is that we are no longer being as strict on what is allowed to be posted here.

Specifically, we're allowing topics that are not about explicitly self-hosted software, such as tools and software that help the self-hosted process.

Dashboard posts continue to be restricted to Wednesdays.

AMA Announcement

~~The CEO~~ a representative of Pomerium (u/Pomerium_CMo, with the blessing and intended participation from their CEO, /u/PeopleCallMeBob) reached out to do an AMA for a tool they're working with. The AMA is scheduled for May 29th, 2024! So stay tuned for that. We're looking forward to seeing what they have to offer.

Quick and easy one today, as I do not have a lot more to add.

As always,

Happy (self)hosting!

1210
 
 
The original was posted on /r/selfhosted by /u/kmisterk on 2019-05-25 01:29:15+00:00.


Welcome to /r/selfhosted!

We thank you for taking the time to check out the subreddit here!

Self-Hosting

The concept in which you host your own applications, data, and more. Taking away the "unknown" factor in how your data is managed and stored, self-hosting lets those with the willingness to learn, and the mind to do so, take control of their data without losing the functionality of services they otherwise use frequently.

Some Examples

For instance, if you use Dropbox but are not fond of having your most sensitive data stored in a data-storage container you do not directly control, you may consider Nextcloud.

Or let's say you're used to hosting a blog on the Blogger platform, but would rather have the customization and flexibility of controlling your own updates? Why not give WordPress a go?

The possibilities are endless and it all starts here with a server.

Subreddit Wiki

The wiki has taken varying forms over time. While there is currently no officially hosted wiki, we do have a GitHub repository. There is also at least one unofficial mirror that showcases the live version of that repo, listed on the index of the reddit-based wiki.

Since You're Here...

While you're here, take a moment to get acquainted with our few but important rules

When posting, please apply an appropriate flair to your post. If an appropriate flair is not found, please let us know! If it suits the sub and doesn't fit in another category, we will get it added! Message the Mods to get that started.

If you're brand new to the sub, we highly recommend taking a moment to browse a couple of our awesome self-hosted and system admin tools lists.

Awesome Self-Hosted App List

Awesome Sys-Admin App List

Awesome Docker App List

In any case, there's lots to take in and lots to learn. Don't be disappointed if you don't catch on to any given aspect of self-hosting right away. We're available to help!

As always, happy (self)hosting!

1211
 
 
The original was posted on /r/selfhosted by /u/User9705 on 2025-04-08 16:51:23+00:00.


Hey r/selfhosted community!

I wanted to share a tool I created that has completely changed how I manage my Sonarr library, and might solve some frustrations you've experienced too.

GITHUB:

The Problem Huntarr Solves

Have you ever:

  • Added a bunch of shows only to find Sonarr leaving many episodes "missing"?
  • Upgraded your quality standards and now have hundreds of episodes below cutoff?
  • Wanted a way to gradually improve your library without babysitting Sonarr?
  • Hit indexer rate limits when manually triggering too many searches?

Sonarr is excellent at managing your library, but it lacks a built-in way to continuously hunt for missing episodes or quality upgrades without manual intervention. That's where Huntarr comes in.

What Huntarr-Sonarr Does

Huntarr is a companion app that works alongside Sonarr to:

  1. Find Missing Episodes: Automatically identifies and searches for episodes marked as "missing"
  2. Upgrade Quality: Hunts for better versions of episodes below your quality cutoff
  3. Respect Rate Limits: Uses configurable delays between searches to prevent overloading indexers
  4. Distribute Searches: Randomly selects different shows and episodes each cycle to ensure everything gets attention

Web Interface with Real-Time Logs

Huntarr includes a clean web interface that lets you monitor activity and adjust settings on the fly. All options can be configured directly from the browser, no restart required.

Key Features

  • 🔄 Continuous Operation: Runs indefinitely until manually stopped
  • 🎯 Dual Targeting: Processes both missing episodes and quality upgrades
  • 🎲 Random Selection: Distributes searches across your library (or sequential if preferred)
  • ⏱️ Throttled Searches: Configurable delays to respect indexer limits
  • 🌐 Web UI: Real-time log viewer with day/night mode and settings management
  • 💾 Persistent Storage: All settings and state are saved and persist across container restarts
  • 🔮 Future Episode Skipping: Skip searching for episodes that haven't aired yet
  • 💿 Reduced Disk Activity: Optional setting to skip series refresh operations

How It Works Behind the Scenes

Huntarr runs in cycles:

  1. Find Missing: Identifies shows with missing episodes and triggers searches for a configurable number
  2. Upgrade Quality: Finds episodes below cutoff and searches for better versions
  3. Track Progress: Remembers which shows/episodes it has processed to avoid repetition
  4. Reset & Repeat: After a configurable period, it resets its tracking and starts fresh

The "set and forget" design means you can leave it running in the background, and it will steadily improve your library over time without manual intervention.

Related Tools

I've also created Huntarr editions for other *arr apps:

Links & Resources

Happy to answer any questions in the comments!

1212
 
 
The original was posted on /r/selfhosted by /u/Available-Advice-294 on 2025-04-08 19:56:36+00:00.

1213
 
 
The original was posted on /r/selfhosted by /u/nvprt on 2025-04-08 21:54:58+00:00.

1214
 
 
The original was posted on /r/selfhosted by /u/Glum-Position-8155 on 2025-04-08 16:17:28+00:00.


I wanted to share my side project, Deceptifeed, available here:

It's essentially multiple low-interaction honeypot servers with an integrated threat feed. The honeypots (fake/deceptive servers) are internet-facing, while the threat feed is kept private for internal security tools. If an IP address from the internet interacts with one of your honeypots, it's added to the threat feed.

The threat feed is served over HTTP with a simple API for retrieving the data. Honeypot logs are written in JSON format if needed. There's also a simple web interface for viewing both the threat feed data and the honeypot logs.

The purpose of the threat feed is to build an automated defense system: you configure your firewalls to ingest the threat feed and automatically block the listed IP addresses. Outside of the big enterprise firewalls (Cisco, Palo Alto, Fortinet), support for ingesting threat feeds may be missing. I was able to get pfSense to auto-block using the threat feed, but it only supports refreshing once every 24 hours.
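If your firewall can't ingest a feed natively, a small script can bridge the gap. The sketch below assumes the feed is plain text with one IP per line (an assumption about the feed format, so check the Deceptifeed docs) and renders a pf-style table a firewall could load:

```python
import ipaddress

def parse_feed(text):
    """Parse a plain-text threat feed (one IP per line) into validated addresses."""
    ips = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        try:
            ips.append(str(ipaddress.ip_address(line)))
        except ValueError:
            continue  # skip anything that isn't a bare IP address
    return ips

def to_pf_table(ips, name="deceptifeed"):
    """Render a pf(4)-style persistent table block from the feed."""
    return f"table <{name}> persist {{ {', '.join(ips)} }}"

feed = "203.0.113.7\n# comment\nnot-an-ip\n198.51.100.23\n"
print(to_pf_table(parse_feed(feed)))
```

A cron job fetching the feed URL and regenerating the table would give you a refresh interval of your choosing, rather than pfSense's fixed 24 hours.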

I know this community has a lot of home-labbers. If your servers don't use your own public IPs, this project probably isn't for you. But if any of this sounds interesting, check it out. Thanks!

1215
 
 
The original was posted on /r/selfhosted by /u/yoracale on 2025-04-08 17:25:01+00:00.


Hey guys! A few days ago, Meta released Llama 4 in two versions: Scout (109B parameters) and Maverick (402B parameters).

  • Both models are giants. So we at Unsloth shrank the 115GB Scout model to 33.8GB (about 70% smaller) by selectively quantizing layers for the best performance, so you can now run it locally!
  • Thankfully, both models are much smaller than DeepSeek-V3 or R1 (720GB disk space), with Scout at 115GB & Maverick at 420GB - so inference should be much faster. And Scout can actually run well on devices without a GPU.
  • For now, we've only uploaded the smaller Scout model, but Maverick is in the works (will update this post once it's done). For best results, use our 2.44-bit (IQ2_XXS) or 2.71-bit (Q2_K_XL) quants. All Llama-4-Scout Dynamic GGUF uploads are at:
  • Minimum requirements: a CPU with 20GB of RAM and 35GB of disk space (to download the model weights) for Llama-4-Scout 1.78-bit. 32GB of unified RAM (Apple) will get ~3 tokens/s. 20GB of RAM without a GPU will yield ~1 token/s. Technically the model can run with any amount of RAM, but it'll be slow.
  • This time, our GGUF models are quantized using imatrix, which has improved accuracy over standard quantization. We utilized DeepSeek R1, V3 and other LLMs to create large calibration datasets by hand.
  • We tested the full 16bit Llama-4-Scout on tasks like the Heptagon test - it failed, so the quantized versions will too. But for non-coding tasks like writing and summarizing, it's solid.
  • Similar to DeepSeek, we studied Llama 4's architecture, then selectively quantized layers to 1.78-bit, 4-bit, etc., which vastly outperforms naive quantization with minimal compute. You can read our full guide on how to run it locally, with more examples, here:
  • E.g. if you have an RTX 3090 (24GB VRAM), running Llama-4-Scout will give you at least 20 tokens/second. Optimal requirements for Scout: RAM + VRAM totaling 60GB+ (this will be pretty fast). 60GB of RAM with no VRAM will give you ~5 tokens/s.
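As a rough sketch of what running a Scout GGUF looks like with llama.cpp (the filename and flag values here are hypothetical; follow the Unsloth guide for the exact command):

```sh
# Filename below is hypothetical. Lower -ngl if VRAM is tight,
# or use -ngl 0 on CPU-only boxes.
./llama-cli -m Llama-4-Scout-UD-IQ1_S.gguf -ngl 99 -c 8192 \
  -p "Summarize this article: ..."
```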

Happy running and let me know if you have any questions! :)

1216
 
 
The original was posted on /r/selfhosted by /u/anultravioletaurora on 2025-04-08 16:48:45+00:00.


Hey friends! Violet here again 😊

So admittedly the last post was a bit of a misfire - the TestFlight link was unavailable from the start, and intermittent after that. Not to mention an Android version had yet to be released 😮‍💨

Hence the .5 - I’m here today to address both of those! 🤘

ICYMI - our TestFlight is alive and amplified! ✈️ We’ve fixed the link availability issues, and you can join via this link 😊

Thanks to work done by some other talented developers, I’m also ecstatic to share that Jellify is available for Android! 🤖 It’ll have to be sideloaded for now, but I can now look into getting it published via storefronts. Google Play and F-Droid are what we’ll be targeting 🏬

Android and iOS app files can be found under each release of Jellify 🪼

Finally, I would just like to say I’m incredibly blessed to be part of such a cool community. Y’all have been so incredibly supportive of this project, and I can’t thank y’all enough for the warm reception 💜 If you’ve found bugs or have a feature you’d like to see, you can open an issue on the GitHub page 👍

By the numbers, our Discord server is at 60+ members, we’re sitting at nearly 400 ⭐️ s on GitHub, and we’re at 5 different contributors. I’ve also received 4 sponsorships and a Patreon member. This is all more than I ever thought would happen, and I’m so grateful for the support! If you’re interested in supporting the project, you can do so here 🙏

If this project excites you, come join us! 🤩 We’d love to have more developers and designers coming along with us on this journey 🪼 You can reach out to us on Discord 👋

TL;DR: TestFlight is live, Android versions are available, and the project is lowkey kinda popping off 🤘

Happy listening!

Vi 💜

1217
 
 
The original was posted on /r/selfhosted by /u/rafaelleru on 2025-04-08 14:50:36+00:00.

1218
 
 
The original was posted on /r/selfhosted by /u/VivaPitagoras on 2025-04-08 14:49:07+00:00.


I work with a lot of docs (Word, LibreOffice Writer, ...). Once I finish with them, I export them as PDFs and put them in specific folders for other people to check.

I would like to know if there is some kind of CI/CD (git-like) setup, but for docs, that will create the PDFs and move them automatically once I am finished.
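In the absence of a dedicated tool, a small script gets surprisingly close. The sketch below is only an illustration (the folder names are hypothetical, and it assumes LibreOffice's `soffice` is on the PATH for the actual conversion): each finished draft is mapped to a mirrored PDF path and converted headlessly.

```python
import pathlib
import subprocess

SRC = pathlib.Path("drafts")      # hypothetical "work in progress" folder
DEST = pathlib.Path("published")  # hypothetical folder others check

def pdf_path(doc: pathlib.Path) -> pathlib.Path:
    """Map drafts/team/report.docx -> published/team/report.pdf."""
    return DEST / doc.relative_to(SRC).with_suffix(".pdf")

def convert(doc: pathlib.Path) -> None:
    """Export one document to PDF in its mirrored output folder."""
    out = pdf_path(doc)
    out.parent.mkdir(parents=True, exist_ok=True)
    # LibreOffice converts headlessly; soffice must be on the PATH.
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", str(out.parent), str(doc)],
        check=True,
    )

print(pdf_path(pathlib.Path("drafts/reports/q1.docx")))
```

Triggered from a git post-commit hook or a folder watcher, this gives the "export and move automatically" behavior.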

Thanks in advance.

1219
 
 
The original was posted on /r/selfhosted by /u/Daniel31X13 on 2025-04-08 13:11:04+00:00.


Hello everybody, Daniel here!

Today, we're excited to announce the release of Linkwarden 2.10! 🥳 This update brings significant improvements and new features to enhance your experience.

For those who are new to Linkwarden, it's basically a tool for preserving and organizing webpages, articles, and documents in one place. You can also share your resources with others, create public collections, and collaborate with your team. Linkwarden is available as a Cloud subscription or you can self-host it on your own server.

This release brings a range of updates to make your bookmarking and archiving experience even smoother. Let’s take a look:

What’s new:

⚡️ Text Highlighting

You can now highlight text in your saved articles while in the readable view! Whether you’re studying, researching, or just storing interesting articles, you’ll be able to quickly locate the key ideas and insights you saved.

🔍 Search Is Now Much More Capable

Our search engine got a big boost! Not only is it faster, but you can now use advanced search operators like title:, url:, tag:, before:, after: to really narrow down your results. To see all the available operators, check out the advanced search page in the documentation.

For example, to find links tagged “ai tools” before 2020 that aren’t in the “unorganized” collection, you can use the following search query:

tag:"ai tools" before:2020-01-01 !collection:unorganized

This feature makes it easier than ever to locate the links you need, especially if you have a large number of saved links.

🏷️ Tag-Based Preservation

You can now decide how different tags affect the preservation of links. For example, you can set up a tag to automatically preserve links when they are saved, or you can choose to skip preservation for certain tags. This gives you more control over how your links are archived and preserved.

👾 Use External Providers for AI Tagging

Previously, Linkwarden offered automated tagging through a local LLM (via Ollama). Now, you can also choose OpenAI, Anthropic, or other external AI providers. This is especially useful if you’re running Linkwarden on lower-end servers to offload the AI tasks to a remote service.

🚀 Enhanced AI Tagging

We’ve improved the AI tagging feature to make it even more effective. You can now tag existing links using AI, not just new ones. On top of that, you can also auto-categorize links to existing tags based on the content of each link.

⚙️ Worker Management (Admin Only)

For admins, Linkwarden 2.10 makes it easier to manage the archiving process. Clear old preservations or re-archive any failed ones whenever you need to, helping you keep your setup tidy and up to date.

✅ And more...

There are also a bunch of smaller improvements and fixes in this release to keep everything running smoothly.

Full Changelog:

Want to skip the technical setup?

If you’d rather skip server setup and maintenance, our Cloud Plan takes care of everything for you. It’s a great way to access all of Linkwarden’s features—plus future updates—without the technical overhead.

We hope you enjoy these new enhancements, and as always, we'd like to express our sincere thanks to all of our supporters and contributors. Your feedback and contributions have been invaluable in shaping Linkwarden into what it is today. 🚀

Also a special shout-out to Isaac, who's been a key contributor across multiple releases. He's currently open to work, so if you're looking for someone who’s sharp, collaborative, and genuinely passionate about open source, definitely consider reaching out to him!

1220
 
 
The original was posted on /r/selfhosted by /u/Mean_Preparation_364 on 2025-04-08 12:01:30+00:00.


Hi community :)

I love creating pictures and videos for socials using things like ChatGPT and Midjourney, and converting them to video on Replicate and Fal.

But I realized it's super time consuming 😅

So I created AgentHeroes, a repository to train models, generate pictures and video, and schedule them on social media.

Not sure if it's something anybody needs, so I'm happy for any feedback.

Of course a star would be awesome too 💕

Here is what you can do:

  • Connect different services like Fal, Replicate, ChatGPT, Runway, etc.
  • Train images based on models you upload or using models that create characters.
  • Generate images from all the models or use the trained model.
  • Generate video from the generated image
  • Schedule it on social media (currently I added only X, but it's modular)
  • Build agents that can be used with an API or scheduler (soon MCP):
    • Check reddit posts
    • Generate a character based on that post
    • Make it a video
    • Schedule it on social media

Everything is fully open-source AGPL-3 :)

Some notes:

The backend is fully custom (no AI was used), but the frontend is fully vibe-coded haha. It took me two weeks to develop instead of a few months.

There is a fully working Docker setup so you can easily deploy the project.

Future features:

  • Connect ComfyUI workflow
  • Use local LLMs
  • Add MCPs
  • Add more models
  • Add more social networks to schedule to

And of course, let me know what else is missing :)

1221
 
 
The original was posted on /r/selfhosted by /u/steveiliop56 on 2025-04-08 06:57:25+00:00.


Hello everyone,

Tinyauth just reached 1000 stars! This is an amazing achievement I never thought I would reach. Thank you everyone for mentioning and supporting tinyauth. I am planning to release soon with some new cool features.

What is tinyauth?

For anyone wondering, tinyauth is a simple and lightweight alternative to apps like Authentik and Authelia. I was frustrated with the complexity of those apps, so I created my own, which is completely stateless, requires only one container (the app itself), and can be configured entirely with environment variables. It also supports all the features you would expect, like access controls and two-factor authentication, and of course Google, GitHub, Tailscale, or any other OAuth provider you would like to use to effortlessly add an extra layer of security to your apps. Tinyauth also supports all of your favorite proxies, like Traefik, Nginx, and Caddy, with minimal configuration.
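For a sense of what minimal configuration looks like with Traefik, the usual forward-auth pattern applies. Everything below is a hedged sketch (the image path, port, env vars, and auth endpoint are assumptions; check the tinyauth docs for the exact names):

```yaml
services:
  tinyauth:
    image: ghcr.io/steveiliop56/tinyauth:latest   # image path is an assumption
    environment:
      - SECRET=<random-32-char-string>
      - USERS=admin:$$2a$$10$$<bcrypt-hash>       # $ doubled for compose
      - APP_URL=https://auth.example.com
    labels:
      # Declare a Traefik forward-auth middleware backed by tinyauth
      # (endpoint path is an assumption):
      - traefik.http.middlewares.tinyauth.forwardauth.address=http://tinyauth:3000/api/auth/traefik

  whoami:
    image: traefik/whoami
    labels:
      # Protect any router by attaching the middleware:
      - traefik.http.routers.whoami.middlewares=tinyauth
```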

Check it out

Tinyauth is fully open source and available under the GPLv3 license on GitHub. There is also a website available here.

Again thank you everyone for your support!

1222
 
 
The original was posted on /r/selfhosted by /u/TheNick0fTime on 2025-04-08 04:09:47+00:00.


The majority of solutions I've seen for managing Docker container updates are either fully automated (Watchtower with latest tags for automatic version updates) or fully manual (something like WUD or Diun sending notifications so you can update by hand). The former leaves too much room for things to go wrong (breaking changes, bad updates, etc.), and the latter is a bit too inconvenient for me to reliably stay on top of.

After some research and trial and error, I built a pipeline for managing my updates that I'm satisfied with. The setup is quite involved at first, but the end result achieves the following:

  • Docker compose files are safely stored and versioned in Gitea.
  • Updates are automatically searched for every night using Renovate.
  • Email notifications are sent for any found updates.
  • Applying updates is as easy as clicking a button.
  • Docker containers are automatically redeployed once an update has been applied via Komodo.

Figuring this all out was not the easiest thing I have done, so I decided to write a guide about how to do it all, start to finish. Enjoy!
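As a sketch of the Renovate piece of that pipeline (not the author's exact config; the preset and schedule here are assumptions), a `renovate.json` in the Gitea repo along these lines makes Renovate scan compose files nightly:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"],
  "schedule": ["after 10pm and before 5am"]
}
```

Renovate then opens a PR per image bump, the email notification comes from Gitea, and merging the PR is the "button click" that triggers Komodo to redeploy.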

1223
 
 
The original was posted on /r/selfhosted by /u/ValerioLyndon on 2025-04-08 03:45:59+00:00.


I've been looking around and I can't seem to find a good option for self-hosting a tag-based image software. Specifically, I am trying to replace Hydrus Network because sharing my Hydrus collection across devices is basically impossible and it's extremely sluggish. There are loads of camera/photo applications, but not really any booru-style ones...

So far I have found szurubooru, shimmie2, and Danbooru. Danbooru is out due to its license, and while I haven't looked into it closely, it seems like overkill for a single user. szurubooru is more promising and seems solidly built, but is again more focused on being an online service than a personal one. Primarily, it does not appear to have any filesystem-based import feature; I only see the web upload, which is a no-go, as I need to convert a Hydrus database and a terabyte of files to whatever new system I use. shimmie2 appears to have the same lack of integration with local files.

If I were to distill what I'm looking for, it would be multi-media browsing software with high-quality import options from my local filesystem and support for arbitrary tags, tag namespaces, and tag implications (parents/siblings). Does that exist?

1224
 
 
The original was posted on /r/selfhosted by /u/ExceptionOccurred on 2025-04-08 03:02:10+00:00.


Hey everyone! Great news! I've added many charting features you requested to SparkyBudget!

You'll find them under the 'Historical Trend' sheet. Here's a quick rundown:

  • Salary Trend: See how your income is changing over time.
  • Income vs. Budget vs. Expense: Visualize how well you're sticking to your budget each month.
  • Expense Trend: Helps you visualize your spending habits over time and identify areas where you might be able to cut back.
  • Top Categories by Month: Quickly see where your money is going each month.

I'll be adding more visualizations in the coming days. I want to make sure I'm focusing on the most helpful features for you.

I'm currently considering these next steps:

  • Email Alerts: Get notified when you're over budget, receive weekly expense summaries, and more.
  • Goal Setting & Saving Targets: Set financial goals and track your progress.
  • Multi-Currency Support: Track budgets and expenses in different currencies.
  • AI-Powered Chat: Chat with your budget & expenses to get personalized insights.
  • Partner Collaboration: Shared and private accounts for couples to budget together.

So, I'd love to hear from you: Which of these features would be most helpful for you right now, and what other key challenges do you face in budgeting that you'd like to see solved with data visualization?

You can check out the app and even contribute here:

1225
 
 
The original was posted on /r/selfhosted by /u/lanedirt_tech on 2025-04-07 14:38:21+00:00.

view more: ‹ prev next ›