Self-Hosted Alternatives to Popular Services

1951
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Developer_Akash on 2025-02-05 12:18:59+00:00.


Hey r/selfhosted!

After a short break, I'm back with another blog post and this time I'm sharing my experience with setting up Authelia for SSO authentication in my homelab.

Authelia is a powerful authentication and authorization server that provides secure Single Sign-On (SSO) for all your self-hosted services. Perfect for adding an extra layer of security to your homelab.

Why I wanted to add SSO to my homelab

No specific reason other than wanting to try it out and see how it works, to be honest. Most of the services in my homelab are not exposed to the internet directly and are only accessible via Tailscale, but I still wanted to explore this option.

Why I chose Authelia over other solutions like Keycloak or Authentik

I read up on the features and the overall sentiment around setting up SSO, and these three platforms were mostly the ones in the spotlight. I picked Authelia to get started first (plus it's easier to set up, since most of the configuration is simple YAML files that I can put into my existing Ansible setup and version control).
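To give a sense of what those "simple YAML files" look like, here is a minimal sketch of an Authelia configuration (key names vary between Authelia versions, the domain and paths are placeholders, and required secrets and encryption keys are omitted):

```yaml
# Minimal Authelia configuration sketch (placeholders only; key names
# differ between versions, and secrets/encryption keys are omitted).
authentication_backend:
  file:
    path: /config/users_database.yml

access_control:
  default_policy: deny
  rules:
    - domain: "*.example.com"   # protect every subdomain
      policy: one_factor

session:
  domain: example.com           # cookie domain shared by the SSO session

storage:
  local:
    path: /config/db.sqlite3

notifier:
  filesystem:
    filename: /config/notification.txt
```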

Overall, I'm happy with the setup so far and soon plan to explore other platforms and compare the features.

Do you have any experience with SSO or have any suggestions for me? I'd love to hear from you. Also mention your favorite SSO solution that you've used and why you chose it.


Authelia — Self-hosted Single Sign-On (SSO) for your homelab services

1952
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/MicahDowling on 2025-02-05 14:42:51+00:00.


Hi all, I’m one of the creators of ChartDB.

A few months ago, I introduced ChartDB to this community and received an amazing response - tons of positive feedback and feature requests. Thank you for the incredible support!

Recap: For those new to ChartDB, it simplifies database design and visualization, similar to tools like DBeaver, dbdiagram, and DrawSQL, but is completely open-source and self-hosted.

Key features

  • Instant Schema Import - Import your database schema with just one query (see the illustrative query after this list).
  • AI-Powered DDL Export - Generate scripts for easy database migration.
  • Broad Database Support - Works with PostgreSQL, MySQL, SQLite, MSSQL, ClickHouse, and more.
  • Customizable ER Diagrams - Visualize your database structure as needed.
  • Open-Source & Self-Hostable - Free, flexible, and transparent.
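As an illustration of the "one query" idea (a generic example, not ChartDB's actual import query), a schema can be pulled out of PostgreSQL with a single information_schema query:

```sql
-- Generic schema listing from PostgreSQL's information_schema;
-- ChartDB ships its own per-database queries, this only shows the idea.
SELECT table_name,
       column_name,
       data_type,
       is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```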

What’s New in v1.7.0 (2025-02-03)

🚀 New Features

  • CockroachDB Support - Now fully supports CockroachDB.
  • ClickHouse Enhancements - Improved ClickHouse integration.
  • DBML Editor - Added a built-in DBML editor in the side panel.
  • Import DBML - Now you can import DBML files directly into ChartDB (see the small DBML example after this list).
  • Drag & Drop Table Ordering - Easily reorder tables in the side panel.
  • Mini Map Toggle - Added a toggle option for mini-map visibility.
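For anyone new to DBML, the kind of file the new editor and importer work with looks roughly like this (table and column names are made up for illustration):

```dbml
// Hypothetical two-table schema in DBML
Table users {
  id int [pk, increment]
  email varchar [unique, not null]
}

Table posts {
  id int [pk, increment]
  user_id int [not null]
  title varchar
}

// one-to-many: each post belongs to one user
Ref: posts.user_id > users.id
```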

🛠 Bug Fixes & Improvements

  • Docker Build - OPENAI_API_KEY is now optional when using Docker.
  • Canvas Editing - You can now edit table names directly on the canvas.
  • Dark Mode Fixes - Improved UI for the empty state in dark mode.
  • Power User Shortcuts - Added new keyboard shortcuts and key bindings.
  • Performance Boost - Optimized bundle size for faster loading.

What’s Next?

  • AI Table Relationship Finder - AI-powered tool to detect table relationships.
  • CLI/API Diagram Updates - Option to update diagrams via CLI, API, or a JSON input file.
  • Git Integration for Versioning - Manage and track diagram changes with Git version control.
  • More database support & DBML improvements.
  • Enhanced collaboration & sharing features.
  • Additional performance optimizations.

We’re building ChartDB hand-in-hand with this community and contributors. Your feedback drives our progress, and we’d love to hear more!

Thank you to everybody who contributed! ❤️

1953
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/stealthanthrax on 2025-02-05 14:40:32+00:00.


Hey Everyone 👋

A month ago, I launched Amurex here, and the response was insane. Thank you all for the support, feedback, and, of course, the critiques.

Back then, authentication was still a mess if you wanted to self-host, and, well... we kinda deserved the roasting. But today, I come bearing good news: Amurex is now 100% self-hostable, including authentication AND the full web platform.

What does that mean? You can now run your AI meeting copilot entirely on your own infra, access it through the Chrome extension, and have it handle transcripts, summaries, and all the other AI magic without relying on our servers.

Would love to hear your thoughts: does it work for your setup? Any pain points? What would make self-hosting smoother? Give it a spin and let me know!

GitHub Repo -

Website Link -

1954
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Pretty-Ad4969 on 2025-02-05 09:31:03+00:00.


Hi all

Having found this page, I can only assume there are others like me who are fed up with Google and others.

I'm a bit late to the party having read many of these posts but I'm now at a stage where I want to stop paying Google and Apple for cloud storage.

I love Google Photos because my iPhone syncs my photos to it, but since they've taken away my unlimited storage I want to get away. My requirements: a photos app where my partner and I can share photos; a way to sync photos from my iPhone so that if I ever lose my phone, everything is backed up automatically; and cloud storage for my software, design files, videos, etc.

I hear Nextcloud is great for storage and Immich for my Google photos alternative. Is this still the case? Is there anything else that is on offer?

I've read that people have used a Raspberry Pi to power their server. Is this actually possible? I don't want to spend too much at the moment; I only need about 6-10 TB of storage for the above plus my MacBook backups, etc. (would I still use Time Machine for this?).
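For reference, the storage side of this is often just a small docker-compose file. A bare-bones Nextcloud sketch (official images, with placeholder passwords and paths) looks roughly like the following; Immich has its own, larger compose file in its documentation:

```yaml
# Minimal Nextcloud + MariaDB sketch; volumes, passwords, and ports are
# placeholders - not a production-ready setup.
services:
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - ./db:/var/lib/mysql

  nextcloud:
    image: nextcloud:apache
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - ./nextcloud:/var/www/html
    depends_on:
      - db
```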

Any positive advice is really appreciated.

Thanks

1955
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/agersant on 2025-02-05 09:15:15+00:00.

1956
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Brancliff on 2025-02-05 09:07:25+00:00.

1957
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/drjay3108 on 2025-02-05 08:43:47+00:00.


For a long time I was just lurking and occasionally commenting in this sub, and now I've finally made my first post here.

Someone asked how to get their saved Reddit posts into Hoarder. I asked myself the same thing a few months ago and wrote two scripts for it.

So here are the links for that:

RedditSavedPostExtractor

HoarderConverter

Edit: Links added, because the formatting didn't work on mobile.
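As a rough idea of what the extraction half of a workflow like this can look like (not the author's scripts; credentials and field choices below are placeholders), here is a Python sketch using PRAW:

```python
# Illustrative sketch: dump your saved Reddit posts to JSON with PRAW.
# Credentials are placeholders; create an app at reddit.com/prefs/apps.
import json
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="saved-post-export/0.1",
)

saved = []
for item in reddit.user.me().saved(limit=None):
    # Saved items can be submissions or comments; keep what both share.
    saved.append({
        "id": item.id,
        "permalink": f"https://reddit.com{item.permalink}",
        "url": getattr(item, "url", None),    # comments have no url attribute
        "title": getattr(item, "title", None),
    })

with open("saved_posts.json", "w") as fh:
    json.dump(saved, fh, indent=2)

print(f"Exported {len(saved)} saved items")
```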

1958
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Montaro666 on 2025-02-05 03:38:10+00:00.


BEFORE you get your knickers in a twist. I KNOW there’s ansible for this, I know it’s a great tool for managing infrastructure. This is not trying to be that. It runs a command against a list of ssh servers and returns the result. That’s it. It’s free, if you have a use for it, take it, it’s yours. If you don’t, then don’t use it 🙂

I got sick of having to log into each of my servers to do things like updates. I searched around and found a few SSH tools that weren't too bad, but they didn't quite work the way I wanted them to, so I made my own, if anyone is interested. I've only tested it on macOS and Linux, so I'd like to hear whether it works OK on Windows (I'm worried about os.path.expanduser on Windows).
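For readers curious what "run a command against a list of SSH servers and return the result" boils down to, here is a minimal sketch using Paramiko (not the author's tool; hosts, user, and key path are placeholders):

```python
# Minimal "run one command on many hosts over SSH" sketch using Paramiko.
# Hosts, user, and key path are placeholders.
import os
import paramiko

HOSTS = ["server1.lan", "server2.lan", "server3.lan"]
USER = "admin"
KEY_PATH = os.path.expanduser("~/.ssh/id_ed25519")
COMMAND = "uname -a"

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=USER, key_filename=KEY_PATH, timeout=10)
        _stdin, stdout, stderr = client.exec_command(COMMAND)
        print(f"--- {host} ---")
        print(stdout.read().decode().strip())
        err = stderr.read().decode().strip()
        if err:
            print(f"[stderr] {err}")
    except Exception as exc:  # keep going even if one host is unreachable
        print(f"--- {host} ---\nfailed: {exc}")
    finally:
        client.close()
```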

1959
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/goudarziha on 2025-02-05 00:37:19+00:00.


I originally came across this at r/DataHoarder and wanted to share it here.

Archive Team is a collective of volunteer digital archivists led by Jason Scott (u/textfiles), who holds the job title of Free Range Archivist and Software Curator at the Internet Archive.

Archive Team has a special relationship with the Internet Archive and is able to upload captures of web pages to the Wayback Machine.

Currently, Archive Team is running a US Government project focused on webpages belonging to the U.S. federal government.

I created a repo for the docker-compose examples I came across.
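For context, running an Archive Team Warrior with docker-compose usually looks something like the sketch below; the image path and variable names are assumptions and should be checked against the Archive Team wiki or the linked repo:

```yaml
# Hedged sketch of an Archive Team Warrior via docker-compose.
# Verify the image path and variable names before using.
services:
  warrior:
    image: atdr.meo.ws/archiveteam/warrior-dockerfile:latest  # assumed path
    restart: unless-stopped
    ports:
      - "8001:8001"                   # warrior web UI
    environment:
      DOWNLOADER: "your-nickname"     # shows up on the leaderboard
      SELECTED_PROJECT: "auto"        # or a specific project name
      CONCURRENT_ITEMS: "2"
```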

For technical support, go to the #warrior channel on Hackint's IRC network. Archive Team also has a subreddit at r/Archiveteam.

To ask questions about the US Government project, go to #UncleSamsArchive on Hackint's IRC network.

Please note that using IRC reveals your IP address to everyone else on the IRC server.

You can somewhat (but not fully) mitigate this by getting a cloak on the Hackint network by following the instructions here:

To use IRC, you can use the web chat here:

You can also download one of these IRC clients:

1960
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Hot_rooster5486 on 2025-02-04 18:39:23+00:00.


Hey, I have a kinda stupid question: what am I potentially missing out on by not running the server 24/7? I only turn it on when I need to access my network storage or an app. Power costs in Europe are quite high, so I want to save money. I've heard that hard drives that aren't spinning tend to fail more quickly. What are your thoughts on this? (Also, I don't have any smart devices in my house, so I don't need Home Assistant. Should I just use a local computer and external drives?)

1961
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Pretty_Platypus1524 on 2025-02-04 22:09:39+00:00.

1962
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/PracticalFig5702 on 2025-02-04 14:30:37+00:00.


Hey Selfhosters,

I just wrote a small beginner's guide for setting up Authelia with Traefik.

Traefik + Authelia

Link-List

| Service | Link |
| --- | --- |
| Owners Website | |
| Github | |
| Docker Hub | |
| AeonEros Beginnersguide Authelia | |
| AeonEros Beginnersguide Traefik | |

I hope you guys enjoy my work!

I'm here to help with any questions, and I'm open to recommendations / changes.

The Traefik guide is not 100% finished yet, so if you need anything or have questions, just write a comment.

I just added OpenID Connect! That's why I'm posting it as an update here :)
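For readers who want the gist before reading the full guide, wiring Authelia into Traefik mostly comes down to a forwardAuth middleware along these lines (a sketch using Docker labels; hostnames are placeholders, and the verify endpoint path differs between Authelia versions):

```yaml
# Sketch: declare Authelia as a Traefik forwardAuth middleware via Docker
# labels, then attach it to a protected service. Hostnames are placeholders.
services:
  authelia:
    image: authelia/authelia:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.authelia.rule=Host(`auth.example.com`)"
      - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com"
      - "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
      - "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email"

  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.middlewares=authelia@docker"
```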

Screenshots

Authelia Website

Authelia as an Authentication Middleware

Want to Support me? - Buy me a Coffee

1963
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/sphiinx on 2025-02-04 00:17:03+00:00.


Don’t get me wrong—I absolutely love self-hosting. If something can be self-hosted and makes sense, I’ll run it on my home server without hesitation.

But when it comes to LLMs, I just don’t get it.

Why would anyone self-host models like Ollama, Qwen, or others when OpenAI, Google, and Anthropic offer models that are exponentially more powerful?

I get the usual arguments: privacy, customization, control over your data—all valid points. But let’s be real:

  • Running a local model requires serious GPU and RAM resources just to get inferior results compared to cloud-based options.
  • Unless you have major infrastructure, you’re nowhere near the model sizes these big companies can run.

So what’s the use case? When is self-hosting actually better than just using an existing provider?

Am I missing something big here?

I want to be convinced. Change my mind.

1964
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/thehelpfulidiot on 2025-02-04 18:02:47+00:00.


After several updates and valuable feedback, I'm excited to announce that Ghostboard is now mostly finished! Ghostboard is a lightweight, self-hosted solution for real-time synchronized text sharing, perfect for quickly sharing text across multiple devices via a web interface or command line. Here's a summary of its main features:


Key Features

Server

  • Real-time synchronized text field across all connected clients.
  • Dynamically supports multiple boards based on unique URLs (e.g., /test, /notes).
  • Dark Mode toggle and improved styling with responsive transitions.
  • New corner tabs that expand to display project info and a GitHub link.
  • Robust WebSocket connection handling with error messages and reconnect options.
  • Markdown Support: Switch between plain text and markdown modes seamlessly.
  • Simplified Docker Setup: Includes an NGINX proxy in the Docker image, allowing all traffic over port 80 for easy deployment.

Client

  • Command-line tool to retrieve or update text on any board.
  • Supports both WebSocket IP and domain name connections.
  • Docker image available for quick client deployment.

Dockerized Deployment

  • Both the server and client are available as prebuilt Docker images, making setup quick and painless.

Recent Updates

  • v3.3.0: Visual improvements (better scrolling behavior, consistent drag-and-drop outlines).
  • v3.2.0: Docker setup simplified with bundled NGINX—only port 80 needs to be opened.
  • v3.1.0: Full markdown editing support, inspired by a Reddit suggestion!
  • v3.0.0: Nonintrusive corner tabs for project info and GitHub link, plus enhanced dark/light mode toggle.

Thanks to everyone who gave feedback and feature suggestions! Ghostboard has come a long way, and I'm proud of what it’s become. You can check out the project here: GitHub - jon6fingrs/ghostboard.

Feel free to try it out, report any issues, or suggest improvements! 😊

1965
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/bombaglad on 2025-02-04 14:33:43+00:00.

1966
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/brufdev on 2025-02-04 13:01:46+00:00.


Many Notes is a markdown note-taking app designed for simplicity!

For this version, my main focus was on creating a test suite, although I ended up adding new features and fixing some bugs as well. With over 200 assertions between architecture, feature, and unit tests, Many Notes currently has 100% type coverage and over 98% code coverage. This allowed me to catch and fix a few bugs, resolve performance issues, and improve the overall codebase.

Here’s what changed:

  • Many Notes now supports authentication via Authelia and Zitadel.
  • Fixed Keycloak OAuth provider.
  • Read OAuth configurations from cache.
  • Fixed vault export when zip file could not be created.
  • Fixed Markdown editor issue when parsing selections in the last line.
  • Fixed rendering issue of file menu when in preview mode.
  • Fixed issue when opening a file after closing a file in preview mode.
  • The tree view panel now opens by default, except on mobile devices.
  • Vaults are now sorted by their last opening date.
  • Improved the code quality with architecture, feature and unit tests.
  • GitHub now runs the test suite automatically to ensure high quality.
  • Improved features, installation, customization and upgrading sections.

Read the upgrading guide if you are upgrading from any version below 0.4. Read the installation and customization section to install.

Here are a few things to keep in mind:

  • This app is currently in beta, so please be aware that you may encounter some issues.
  • If you need assistance, please open an issue on GitHub.

Tell me what you think and if you like it, consider leaving a star on GitHub.

1967
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/No_Paramedic_4881 on 2025-02-04 11:58:48+00:00.


Hey r/selfhosted! Remember the M1 Mac Mini side project post from a couple months ago? It got hammered by traffic and somehow survived. I’ve since made a bunch of improvements—like actually adding monitoring and caching—so here’s a quick rundown of what went right, what almost went disastrously wrong, and how I'm still self-hosting it all without breaking the bank. I’ll do my best to respond in an AMA style to any questions you may have (but responses might be a bit delayed).

Here's the prior r/selfhosted post for reference:

What I Learned the Hard Way

The “Lucky” Performance

During the initial wave of traffic, the server stayed up mostly because the app was still small and required minimal CPU cycles. In hindsight, there was no caching in place, it was only running on a single CPU core, and I got by on pure luck. Once I realized how close it came to failing under a heavier load, I focused on performance fixes and 3rd party API protection measures.

Avoiding Surprise API Bills

The number of new visitors nearly pushed me past the free tier limits of some third-party services I was using. I was very close to blowing through the free tier on the Google Maps API, so I added authentication gates around costly APIs and made those calls optional. Turns out free tiers can get expensive fast when an app unexpectedly goes viral. Until I was able to add authentication, I was really worried about scenarios like some random TikTok influencer sharing the app and me getting served a multi-thousand-dollar API bill from Google 😅.

Flying Blind With No Monitoring

My "monitoring" at that time was tailing nginx logs. I had no real-time view of how the server was handling traffic. No basic analytics, very thin logging—just crossing my fingers and hoping it wouldn’t die. When I previously shared about he app here, I had literally just finished the proof-of-concept and didnt expect much traffic to hit it for months. I've since changed that with a self-hosted monitoring stack that shows me resource usage, logs, and traffic patterns all in one place.

Environment Overhaul

I rebuilt a ton of things about the application to better scale. If you're curious, here's a high level overview of how everything works, complete with schematics and plenty of GIFs:

MacOS to Linux

The M1 Mac Mini is now running Linux natively, which freed up more system resources (nearly 2x'd the available RAM) and removed the overhead of macOS abstractions. Docker containers build and run faster. It's still the same hardware, but it feels like a new machine and has a lot more headroom to play around with. The freed-up resources let me stand up a more complete monitoring stack and deploy more instances of the app on the M1 to fully leverage all CPU cores.

Zero Trust Tunnels & Better Security

I had been exposing the server using Cloudflare dynamic DNS and a basic reverse proxy. It worked, but it also made me a target for port scanners and malicious visitors outside of Cloudflare's protections. Now the server is exposed via a zero trust tunnel, plus I set up the free-tier Cloudflare WAF (web application firewall), which cut junk traffic by around 95%.
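For anyone unfamiliar with the pattern, exposing a service through a Cloudflare Tunnel instead of open ports roughly amounts to running the cloudflared connector next to the app, for example (the tunnel token comes from the Cloudflare Zero Trust dashboard; service names are placeholders):

```yaml
# Sketch: publish an internal service via a Cloudflare Tunnel.
# The tunnel token is created in the Cloudflare Zero Trust dashboard;
# no inbound ports need to be opened on the server.
services:
  app:
    image: nginx:alpine          # stand-in for the actual application
    expose:
      - "80"

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: "paste-your-tunnel-token-here"
    depends_on:
      - app
```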

Performance Benchmarks

Then

Before all these optimizations, I had no idea what the server could handle. My best guess was around 400 QPS based on some very basic load testing, but I’m not sure how close I got to that during the actual viral spike due to the lack of monitoring infrastructure.

Now

After switching to Linux, improving caching, and scaling out frontends/backends, I can comfortably reach >1700 QPS in K6 load tests. That’s a huge jump, especially on a single M1 box. Caching, container optimizations, horizontal scaling to leverage all available CPU cores, and a leaner environment all helped.

Pitfalls & Challenges

Lack of Observability

Without metrics, logs, or alerts, I kept hoping the server wouldn’t explode. Now I have Grafana for dashboards, Prometheus for metrics, Loki for logs, and a bunch of alerts that help me stay on top of traffic spikes and suspicious activity.
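A minimal version of that kind of stack, sketched as docker-compose (official images; config files and data volumes are placeholders, and a log shipper such as Promtail is still needed to feed Loki):

```yaml
# Sketch of a small Grafana + Prometheus + Loki stack.
# Config file paths and ports are placeholders.
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: change-me
    depends_on:
      - prometheus
      - loki
```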

DNS + Cloudflare

Dynamic DNS was convenient to set up but quickly became a pain when random bots discovered my IP. Closing that hole with a zero trust tunnel and WAF rules drastically cut malicious scans.

Future Plans

Side Project, Not a Full Company

I’ve realized the business model here isn’t very strong—this started out as a side project for fun and I don't anticipate that changing. TL;DR is the critical mass of localized users needed to try and sell anything to a business would be pretty hard to achieve, especially for a hyper niche app, without significant marketing and a lot of luck. I'll have a write up about this on some future post, but also that topic isn't all that related to what r/selfhosted is for, so I'll refrain from going into those weeds here. I’m keeping it online because it’s extremely cheap to run given it's self-hosted and I enjoy tinkering.

Slowly Building New Features

Major changes to the app are on hold while I focus on other projects. But I do plan to keep refining performance and documentation as a fun learning exercise.

AMA


I’m happy to answer anything about self-hosting on Apple Silicon, performance optimizations, monitoring stacks, or other related selfhosted topics. My replies might take a day or so, but I’ll do my best to be thorough, helpful, and answer all questions that I am able to. Thanks again for all the interest in my goofy selfhosted side project, and all the help/advice that was given during the last reddit-post experiment. Fire away with any questions, and I’ll get back to you as soon as I can!

1968
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/arthicho on 2025-02-04 11:38:35+00:00.


Good day! I wanted to introduce Meelo. It's an alternative to Plex/Jellyfin tailored for music collectors. It currently supports:

  • Having multiple versions of an album
  • Song duplicates
  • Song versions (original, remix, instrumental)
  • Album and song typing (studio, remixes, live, etc.)
  • Get an album's B-Sides and an artist's rare songs
  • Feature/Duet detection
  • Metadata parsed from file path and/or embedded metadata
  • Get extra metadata from external providers (Lyrics, ratings, description, etc.)

As of today, there is no mobile app. Only a web client is available. The next features on the roadmap are: gapless playback, labels, scrobbling and synced lyrics.

It's free and open-source! Check it out on GitHub: github.com/Arthi-chaud/Meelo

I am also looking for other feature ideas. What other features would make Meelo great for music collectors? I've been thinking of adding support for extra media like digital booklets.

1969
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/386U0Kh24i1cx89qpFB1 on 2025-02-04 03:17:25+00:00.


I'm not yet past hosting a few things like Pi-hole, Plex, and some other basic services. So many guides just give you a docker-compose file to customize for your own environment and instruct you to pull the latest image from wherever. But how do I trust that the software I'm running is not malicious or won't turn malicious? Obviously big-name stuff like Pi-hole, Plex, Nginx, etc. is pretty easy to trust. But for less popular software, how do I trust that someone isn't going to send a malicious update? How careful do I need to be? There are so many sources and forks of things, and sometimes it's hard to know whether the source you are using is official or a fork. It's easy to spend lots of time troubleshooting port issues and forget to look at the image source and vet it. It's also easy to imagine someone justifying using a fork of something that is tweaked to fit their needs instead of tinkering with the source they can't get to work for whatever reason.

I think I'm comfortable enough creating a unique user with limited access and using that UID and GID to limit permissions, and being careful about only mounting necessary volumes, etc. But even those volumes might have lots of data I care about in some way, shape, or form. I'm just not an expert here and, like many newbies, run software on my NAS, whose data would be pretty difficult to lose. Yes, yes, backups, blah blah. Maybe beyond, say, an encryption attack, someone is worried about their private data being harvested quietly? No shortage of bad things that can happen...

In theory a rogue image shouldn't have access to much if I'm careful, but I'm curious whether there's anything I should watch for. Most of the guides barely touch on security. Both Docker and Linux are known for contributing to a secure ecosystem; I just worry that that security assumes people who know what they are doing, not your average schmo editing a copy-pasted compose script.
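A few compose options go a long way toward containing a rogue image; the sketch below shows the general shape (the digest, UID/GID, and paths are placeholders, and not every app tolerates read_only or dropped capabilities):

```yaml
# Sketch: a deliberately locked-down service definition.
# The digest, user, and paths are placeholders; adjust per app.
services:
  someapp:
    # Pin to a digest instead of a mutable tag so the image can't change
    # underneath you without an explicit update.
    image: someorg/someapp@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
    user: "1001:1001"          # dedicated unprivileged UID:GID
    read_only: true            # no writes outside declared volumes
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./appdata:/data        # mount only what the app actually needs
    networks:
      - isolated

networks:
  isolated:
    internal: true             # no outbound internet unless you need it
```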

1970
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Dramatic_Ad5442 on 2025-02-03 14:22:43+00:00.


Hello all Noah here, welcome to the February Update.

For those of you who are new, welcome! Receipt Wrangler is a self-hosted, AI-powered app that makes managing receipts easy. It can scan your receipts from desktop uploads, mobile app scans, or email, or you can enter them manually. Users can itemize, categorize, and split them amongst users in the app. Check out for more information.

This month, I am happy to release version 6.0 of Receipt Wrangler. Let's get into what 6.0 brings.

Development Highlights:

Asynq Implementation: Asynq is a task queue library for Go. It has been implemented in Receipt Wrangler and greatly improves receipt processing. This is most noticeable when quick-scanning receipts: instead of waiting for all of the receipts to finish processing, they are now queued and processed when a worker is available to pick them up.

Additionally, when things go wrong during processing, they will be retried up to 3 times before failing. Once failed, they can be manually re-run via the new activity widget, which I'll talk about in the next section.

Email receipt processing behaves the same way. All of this brings greater reliability and robustness to receipt processing, allowing users to upload things quickly and worry about them later.

Activity Widget: The activity widget is a new dashboard widget that allows users to see activity within the group. The types of events currently displayed are: Quick Scans, Email Uploads, Receipt Upload, and Receipt Updated. If an activity failed and the user has editor permissions in the group, it may be re-run.

Breaking Changes: Receipt Wrangler now requires Redis to run. A migration guide is in the v6.0 documentation.

If you are using the mobile app, please update to the latest version for support for v6.0 of the server.

Coming up in February: After completing all of the development for this month, I realized that I need more time to deliver major features like those on the roadmap, so that they are stable, complete, and polished.

With that being said, I will be adding another month onto all roadmap items. This is so that I can work on the major features, as well as fix bugs, implement minor unrelated changes and simply test things more so I can catch bugs before they are deployed.

The good news is that I expect each feature to be usable according to the original timeline, as there will be at least two releases for each feature: one for the initial implementation and another for polish and enhancements.

That said, this month there will be:

V6.0 Polish: Some small polish items, like adding filtering to activities.

Tech Updates: Both the API and Desktop apps need to upgrade major versions of important packages such as auth packages on backend, and Angular on the frontend as well as upgrading the design to Material 3. These are more technical tasks, but ensure that the technology used within the project stays up to date.

Misc: The remainder of the time will be spent on somewhat miscellaneous enhancements, bug fixes and some of the groundwork for next month's custom fields implementation.

Additional Notes:

Arm Builds: At the moment, Receipt Wrangler's ARM builds are failing on deployment. I have some ideas on how to fix these issues, but v6.0 is currently not out for ARM devices (Raspberry Pis, Apple Silicon Macs, etc.). This should be fixed sometime soon.

First Contributor!: In January we had our first contributor to Receipt Wrangler. Big thanks and a shoutout to SeppNel on GitHub for the contribution!

Pikapods: Drop an upvote on to get Receipt Wrangler on PikaPods; we'd love to see it as a one-click install!

Thanks for reading!

Noah

1971
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/NickLinneyDev on 2025-02-04 00:45:08+00:00.


HP is no longer honoring warranty on their Solid State Drives.

I know consumer grade drives aren't recommended for any server equipment, but for those of us on a budget, we have to make choices. I bought a handful of HP drives on sale, and now have had this experience.

An hour and half of transfers to tell me they have outsourced their drive support because "The drives had too many issues."

The phone numbers they provide for Multipoint just ring forever.

I intend to email them just so I can follow up with a BBB complaint if they don't correct the issue. It just burns me that they are flagrantly in breach of their contract.

I know HP isn't like an A-list company to a lot of techies these days, but I'm still surprised at this level of brazen enshittification.

1972
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/allaboutduncanp on 2025-02-03 22:32:05+00:00.


Hey all, I've posted once before about this little utility I built, but I've added a few new features as well as updated the UI pretty significantly, so I'm calling this the v1.0 release of Comic Library Utilities.

This is a Docker deployable, self-hosted set of tools to let you remotely administer, edit and update your self-hosted comic (CBZ / CBR) libraries.

Here's the link to the full repo, feature list and deploy instructions.

Major additions from the previous announcement are listed below.

Enhancements

  • Missing Issue Check: Run against a directory and generate a list of missing issues. Works on the assumption that all issues are (#01, 01, 001) through (#20, 20, 020). Missing issues are saved to a unique ID text file with the link being served to you when the process is complete.
  • Configure Key Words to ignore during "Missing Issue Check"
  • Folder Monitor Renaming: Monitor a "downloads" folder for new files, then rename and move them to a "processed" folder. Renames everything using a {Series Name} {Issue Number} ({Year}) pattern (see the sketch after this list).
  • Updated Selection UI: Card based UI allows you to run multiple functions without re-selecting via the menu. This also prevents accidentally running the previous function on a newly entered directory / file path.
  • Updated File / Directory UI: Single File or Directory options are enabled / shown based on the whether a file or directory is entered for processing.
  • Enhanced Logging and Status Messages: Added better logging and status message formatting.
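To make the renaming pattern concrete, here is a hedged Python sketch of the kind of logic such a folder monitor might apply (not the project's actual code; the filename regex is an illustrative assumption):

```python
# Illustrative sketch of {Series Name} {Issue Number} ({Year}) renaming.
# Not the utility's real implementation; the regex is an assumption about
# how messy download names might look.
import re
import shutil
from pathlib import Path

DOWNLOADS = Path("downloads")
PROCESSED = Path("processed")

# e.g. "Saga.012.(2013).(digital).cbz" -> series="Saga", issue="012", year="2013"
PATTERN = re.compile(
    r"^(?P<series>.+?)[ ._-]+#?(?P<issue>\d{1,3}).*?\(?(?P<year>(19|20)\d{2})\)?",
    re.IGNORECASE,
)

PROCESSED.mkdir(exist_ok=True)
for f in DOWNLOADS.glob("*.cb[zr]"):
    m = PATTERN.match(f.stem)
    if not m:
        print(f"skipping (no match): {f.name}")
        continue
    series = m.group("series").replace(".", " ").replace("_", " ").strip().title()
    new_name = f"{series} {m.group('issue')} ({m.group('year')}){f.suffix}"
    shutil.move(str(f), PROCESSED / new_name)
    print(f"{f.name} -> {new_name}")
```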

Notes

Missing Issue Check is not a "smart" feature: it simply assumes each folder should have files starting at (#01, 01, 001), treats the "last" file as the last alpha-numeric file in the folder, and generates a list of anything missing from that ordering. It can be run on an entire publisher folder.

Docker Deploy

Docker images are updated for image: allaboutduncan/comic-utils-web:latest

  • Re-pull and update to deploy.
  • If you wish to use the new features, ensure you have the additional volume and environment variables added to your docker-compose file.
1973
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/InsideYork on 2025-02-03 15:04:48+00:00.


I have a bunch of spare Android phones; what could I use them for? I need to figure out power management too. I have a few ideas:

Can I use the sensors, for example with Home Assistant? I was also thinking of using one as a video camera and/or audio transcriber. I don't know what it takes to run Whisper, but it would be cool even if it just streamed the audio to my server.

Syncing a reader application to keep my place in books.

I remember someone ran a server off these too, but mine don't have that much RAM.

Anyone have good uses they've found for their phones?

1974
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/modelop on 2025-02-03 13:34:23+00:00.

1975
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Kazumadesu76 on 2025-02-03 11:28:55+00:00.


I've gotten a ton of unwanted traffic to my Jellyfin site and have had some brute-force attempts, and I need to come up with some Cloudflare rules to make them stop.

I currently have 3 rules:

  1. Allow my IP address
  2. Block all countries except my own
  3. Block all types of verified bot categories and HTTP versions 1, 1.1, 1.2, 2.

That last one seems to mess with my Jellyfin configuration a bit, because I can’t get Jellyseerr to submit requests to Prowlarr. It also prevents the Jellyfin app from working on my tv.

I’d like to see what rules you guys use so that I can improve my own and stop getting so many attack attempts.
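As one illustrative starting point (not a drop-in fix for this setup), a Cloudflare custom rule expression in the spirit of rules 1 and 2, blocking foreign traffic that isn't a verified bot while exempting your own IP, could look like this (country code and IP are placeholders; check the field names against Cloudflare's rules-language docs):

```
(ip.geoip.country ne "US") and not cf.client.bot and (ip.src ne 203.0.113.10)
```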
