Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
1976
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/modelop on 2025-02-03 13:34:23+00:00.

1977
 
 

The original was posted on /r/selfhosted by /u/Kazumadesu76 on 2025-02-03 11:28:55+00:00.


I’ve gotten a ton of unwanted traffic to my Jellyfin website and have had some brute force attacks, and I need to come up with some cloudflare rules to make them stop.

I currently have 3 rules:

  1. Allow my IP address
  2. Block all countries except my own
  3. Block all types of verified bot categories and HTTP versions 1, 1.1, 1.2, 2.

That last one seems to mess with my Jellyfin configuration a bit, because I can’t get Jellyseerr to submit requests to Prowlarr. It also prevents the Jellyfin app from working on my tv.

I’d like to see what rules you guys use so that I can improve my own and stop getting so many attack attempts.
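For comparison, rules (1) and (2) usually end up as short expressions in Cloudflare's rule editor; a hedged sketch (field names from Cloudflare's ruleset language; the IP and country code are placeholders you'd substitute):

```
Rule 1 (action: Skip):  ip.src eq 203.0.113.10
Rule 2 (action: Block): ip.geoip.country ne "US"
```

Rule 3 is likely what breaks the TV app: blocking whole HTTP versions also blocks legitimate clients that negotiate them, so scoping rules by path or user agent is usually safer than blocking protocol versions outright.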

1978
 
 

The original was posted on /r/selfhosted by /u/stringlesskite on 2025-02-03 10:37:44+00:00.

1979
 
 

The original was posted on /r/selfhosted by /u/No-Raccoon-1993 on 2025-02-03 15:03:59+00:00.


I stumbled onto this subreddit looking for tips on running a basic Plex server, and holy shit, you people are insane. Instead of finding normal humans, I find complete psychos debating ZFS configurations like they're discussing fine wine. "Ah yes, this RAIDZ2 has subtle notes of data integrity." You are all a bunch of sick vitamin D deficient freaks.

I actually work with and manage multiple Kubernetes clusters, mission-critical infrastructure that actually matters. I spend my entire day working with containerised applications, and what do I find when I load up Reddit? Ansible-playbook-writing maniacs trying to automate their light switches. You are all a bunch of sick freaks who probably dream in YAML and wake up in cold sweats wondering if you forgot to enable that cron job.

The worst part is how you enable each other. "Hey guys, just finished my basic home automation setup", and then you post a system diagram that looks like the blueprint for a nuclear reactor. Fourteen Docker containers just to manage a suite of 'internet of things' connected shitware. You celebrate each other's descent into madness with vomit-inducing comments like "Nice setup! Have you considered adding Prometheus monitoring?" You are all a bunch of sick freaks, you make me ill.

And the money you guys must spaff away... you've somehow convinced yourselves that spending thousands on enterprise server equipment from 2012 is justified because it was originally 10x the cost. And then you refer to it as "your little setup". "Oh this? Just my dual mirrored RAID 10 arrays with triple-redundant UPS and backup diesel generator that kicks in if the power flickers for more than 3 milliseconds. You know, for my Linux ISO collection." Meanwhile your electricity meter spins so fast it could probably generate its own electricity. You are all a bunch of sick freaks, and you need help.

I take solace in imagining what your home lives are like. I laugh as I imagine your families having to sit through dinner listening to you explain why running Pi-hole with Unbound is superior to forwarding to Cloudflare. I bet your kids start crying when you mention DNS-over-HTTPS. Your wife just stares at you now, especially since you've replaced all your family photos with Grafana dashboards.

I imagine you boiling over when the woman you made vows to asks "why can't we just go back to using iCloud" when your precious self-hosted photo library goes down during your third Photoprism upgrade this week. They completely ignore your 'impressive' (97% lol) uptime statistics and offsite backups. You are all a bunch of sick freaks, and your loved ones are losing hope.

No, you don't need Kubernetes or 10-gig network switches or a 7U rack. You don't need any of these increasingly abstract layers of complexity that exist only to solve the problems created by your previous solutions. Your simple file server didn't need containers, those containers didn't need orchestration, that orchestration didn't need a service mesh. Yet here you are, staring at 10,000 lines of YAML, wondering if maybe just one more Helm chart would finally make it all perfect. But I know you'll keep adding more, because you're all just a bunch of sick freaks.

1980
 
 

The original was posted on /r/selfhosted by /u/makhno on 2025-02-03 00:21:37+00:00.


Looking for the most basic self hosted inventory / stock management tool for a small business.

Snipe-IT looks way too complex, and looks more for asset management.

Odoo also looks way, way too complex.

I basically just need "Here are the items I have in stock, at these prices, with these characteristics" and that's it.

1981
 
 

The original was posted on /r/selfhosted by /u/kameleon25 on 2025-02-03 01:41:56+00:00.


For the past few years I have had a single VM running Docker and was using that to run my ARR stack (Radarr, Sonarr, Tdarr, SABnzbd, Ombi, Tautulli, and Plex, each as its own Docker container but on the same host, so communication was easy). It ran fine, but I lost that VM. So I am rethinking everything. I have Proxmox, so I can use LXC containers, but I've read some people have issues with their permissions. I use Synology for my storage and could run Docker straight on there. How do you run your ARR stack?
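For reference, a setup like the one described is often captured in a single Compose file, which keeps the containers on one network so they can reach each other by service name; a minimal sketch with two of the services mentioned (image names are the linuxserver.io builds; the paths are placeholders):

```
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - ./config/sonarr:/config
      - /mnt/media:/media   # placeholder media path
    ports:
      - "8989:8989"
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    volumes:
      - ./config/radarr:/config
      - /mnt/media:/media
    ports:
      - "7878:7878"
```

With Compose, Sonarr can reach Radarr simply as http://radarr:7878, which preserves the easy inter-container communication of the old single-VM setup.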

1982
 
 

The original was posted on /r/selfhosted by /u/sandropuppo on 2025-02-02 22:53:36+00:00.


We just open-sourced Lume, https://github.com/trycua/lume - a tool we built after hitting walls with existing virtualization options on Apple Silicon. No GUI, no complex stacks - just a single binary that lets you spin up macOS or Linux VMs via CLI or API.

What Lume brings to the table:

  • Run native macOS VMs in 1 command using Apple's Virtualization.framework: lume run macos-sequoia-vanilla:latest
  • Prebuilt images on ghcr.io/trycua (macOS, Ubuntu on ARM, BSD)
  • API server to manage VMs programmatically (POST /lume/vms)
  • A python SDK on github.com/trycua/pylume

Run prebuilt macOS images in just 1 step

lume run macos-sequoia-vanilla:latest 

Install from Homebrew

brew tap trycua/lume
brew install lume

You can also download the lume.pkg.tar.gz archive from the latest release and install the package manually.

Local API Server:

lume exposes a local HTTP API server that listens on http://localhost:3000/lume, enabling automated management of VMs.

lume serve 

For detailed API documentation, please refer to API Reference.

HN devs - would love raw feedback on the CLI and whether this solves your VM on Apple Silicon pain points. What would make you replace Lima, UTM or Tart with this?

Repo: github.com/trycua/lume

Python SDK: github.com/trycua/pylume

1983
 
 

The original was posted on /r/selfhosted by /u/PutridLikeness on 2025-02-02 19:04:03+00:00.


I've been diving into the world of self-hosted identity providers, specifically authentik, aiming to streamline authentication across my various services using OpenID Connect (OIDC). While the promise of a unified SSO experience is enticing, the journey has been anything but smooth.

Challenges I've Encountered:

  1. Complex Configuration: Setting up authentik with OIDC involves navigating a labyrinth of settings. Defining providers, configuring applications, and setting up flows and stages can be overwhelming. Despite following the official documentation, I often find myself second-guessing if I've missed a crucial step.
  2. Sparse Documentation: The lack of clear, comprehensive documentation has been a huge pain point. I often feel like I’m piecing things together from incomplete sources, which leads to more confusion. Troubleshooting feels like a crapshoot, with a lot of reliance on Google and ChatGPT for any potential solutions.
  3. Debugging Difficulties: When things go wrong, pinpointing the exact issue is a nightmare. Is it a misconfiguration in authentik? An incompatibility with the service? Network issues? The lack of clear error messages doesn't help either.
  4. Maintenance Overhead: Managing and updating authentik alongside other services adds another layer of complexity. Ensuring that all components remain compatible after updates is a constant concern.
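For orientation, what the providers/applications/flows ultimately have to produce is a standard OIDC authorization-code exchange. Here is a minimal sketch of the authorization URL a client builds from its registered settings (the endpoint and client values are hypothetical placeholders, not authentik specifics):

```python
from urllib.parse import urlencode

# Placeholder values: in a real setup these come from the provider you
# configure in your IdP (authentik, Keycloak, Authelia, ...).
AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"

params = {
    "response_type": "code",  # authorization-code flow
    "client_id": "my-app",
    "redirect_uri": "https://app.example.com/oauth/callback",
    "scope": "openid profile email",
    "state": "random-csrf-token",  # must be generated per request
}

# The browser is redirected here; the IdP redirects back with ?code=...
auth_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(auth_url)
```

When debugging, comparing the redirect_uri and scope in this URL against what is registered in the provider is often the fastest way to spot a misconfiguration, since mismatches there are the most common OIDC failure.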

Seeking Advice:

  • Success Stories: Has anyone successfully integrated authentik with a suite of self-hosted services using OIDC? I'd love to hear about your setup and any pitfalls you avoided.
  • Alternative Solutions: Are there other self-hosted identity providers that might offer a more straightforward integration process? I've read about Keycloak and Authelia, but I'm unsure if they'd present the same challenges.
  • Best Practices: Any general advice on managing authentication across multiple self-hosted services? Tips on configuration, maintenance, or troubleshooting would be greatly appreciated.

At this point, I'm feeling a bit disheartened. The vision of a seamless SSO experience is what keeps me going, but the path to get there is fraught with obstacles. Any guidance or shared experiences would be invaluable.

Thanks in advance!

1984
 
 

The original was posted on /r/selfhosted by /u/PaNeK4547 on 2025-02-02 20:14:07+00:00.


Hello, I recently started homelabbing, which has been refreshing and a bit addictive to say the least.

I was interested in messing with Whoogle, but an update on their GitHub says the project is in jeopardy of being broken and discontinued due to JavaScript search issues.

I have been trying to de-Google / get away from corporations for my tech and daily needs.

The second popular one I was looking at was SearXNG, but are there any other projects I should consider?

And are there any drawbacks to hosting locally?

1985
 
 

The original was posted on /r/selfhosted by /u/2nistechworld on 2025-02-02 21:43:32+00:00.

1986
 
 

The original was posted on /r/selfhosted by /u/PracticalFig5702 on 2025-02-02 12:08:54+00:00.


Hey Selfhosters,

I just wrote a small beginners guide for the Beszel monitoring tool.

Link-List

| Service | Link |
| --- | --- |
| Owners Website | |
| Github | |
| Docker Hub | |
| AeonEros Beginnersguide | |

I hope you guys enjoy my work!

I'm here to help with any questions and I am open to recommendations / changes.

Screenshots

Beszel Dashboard

Beszel Statistics

Want to Support me? - Buy me a Coffee

1987
 
 

The original was posted on /r/selfhosted by /u/SaltyWheel7112 on 2025-02-02 10:57:31+00:00.


I want to make a 24/7 music radio station using the music I have on my server.

I want to embed the stream on my website as well.

1988
 
 

The original was posted on /r/selfhosted by /u/StudentWithNoMaster on 2025-02-02 08:13:41+00:00.


Story time: So last night, I realized that my Nextcloud was unable to connect to the internet for 'app updates'. I was surprised because my internet was working.

My setup is basically a Pi-hole as DNS resolver on a Pi Zero 2W and a Proxmox server with LXCs and Docker containers. I use custom DNS entries for local access with Traefik and Pi-hole.

So I started testing: Proxmox was reaching the internet, but that ONE LXC was not. So I rebooted the LXC and then the system. Now even Proxmox was not connecting to the internet. Internally everything was working; it was just a DNS issue. So I changed the DNS to Cloudflare for Proxmox and it worked. Then I moved on to test Pi-hole: it was fine. Then tried to ping the Pi from Proxmox, and it just won't! Then tried to ping Proxmox from the Pi, and it worked! And funnily, now Proxmox has internet, but not the LXC. Then I pinged the LXC from the Pi, and now the LXC has internet and everything is fine. Just to be sure, I rebooted the entire Proxmox once again. Now the entire Proxmox won't work.

So after a lot of back and forth, I rebooted my ROUTER. And now everybody is happy. All issues solved... It took four hours to realize that it was a 2-minute issue.

1989
 
 

The original was posted on /r/selfhosted by /u/Ieremies on 2025-02-01 22:39:53+00:00.


TL;DR: I am a programming teacher for High School. I need a solution to host coding activities in python (jupyter notebooks). It needs to save their work on the server (so they can change computers) and allow me to see the work of all students. I have tried Coder and JupyterHub, and the latter has served me well last year, but I am looking to upgrade.

The not-so-long story

I am a high school coding teacher (at a private school in Brazil; classes start at the beginning of the year), and I teach introduction to programming and computer science. At my school, the laptops teachers use during the morning to help with classes are the same ones my students use to do some coding in my class. Because of that, they change computers (sometimes some classrooms are in use, so we have to grab a laptop from another one), and I cannot guarantee that the laptop they used today will be available next week.

Also, I have only one hour with each class each week, so I tend to assign a good amount of activities for them to do at home. Most of my students don't know much about how to use a computer. To my surprise, it seems this new generation of parents does not own PCs at home, and those that do only use them for work, so the kids can't do my homework on those. Almost 40% of my students do their activities on their smartphones.

For the first two years, I used Repl.it's Classroom feature, which allowed me to assign activities, test, and collect them. It was (almost) perfect... but then, they dropped this feature xD. Last year, I searched for some options, and tried Coder, but couldn't find a way to assign activities (remember, they don't know git to "pull" any new activity) nor to read their work.

I ultimately landed on JupyterHub, using The Littlest JupyterHub. It worked well enough: each student is a user, I can move activities into their home folders with a simple cp and read them when I need to grade. There are two main pet peeves:

  1. The UI: although it is the classic JupyterLab UI, when we move to working on a video-game project (in the second semester), we have to work locally and use VSCode. Personally, I don't use VSCode, but I have to admit it is (and probably will be for years to come) the most popular coding interface, so it would be nice to familiarize them with it. That was the biggest advantage of Coder.
  2. The lack of "nice-to-have" quality-of-life features: I couldn't manage to get completion and code formatting working. Those are things that help them write more effortlessly, so it is a bummer not to have them.
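The "move activities with a simple cp" step above can be sketched as a loop over user home directories. Real TLJH homes look like /home/jupyter-<username>; the sketch below fakes two of them under a temp directory so it runs anywhere (all paths here are illustrative, not TLJH specifics):

```shell
# Fake a TLJH-style layout under a temp dir for the demo
base=$(mktemp -d)
mkdir -p "$base/jupyter-alice" "$base/jupyter-bob"
echo '{"cells": []}' > "$base/activity01.ipynb"

# Distribute the activity notebook to every student's home folder
for home in "$base"/jupyter-*; do
    cp "$base/activity01.ipynb" "$home/"
done

ls "$base/jupyter-alice"
```

On a real server the glob would be /home/jupyter-*, and you would likely also chown the copy to the student so they can edit it.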

Some ideas I had

If I could see and change the work of each user in Coder, it would be the perfect platform. If any of you know how to do so, please enlighten me.

Perhaps a server with each student as a user. They could connect using the Remote features of VSCode, which would probably require them to install only VSCode and not Python or Jupyter, but I am not sure how they would access it from their phones.

Yeah... so, any ideas?

1990
 
 

The original was posted on /r/selfhosted by /u/Puzzled_Estimate_596 on 2025-02-01 07:49:29+00:00.


I am seeing a trend where Western governments are becoming more dictatorial and control-freakish. We will see a lot of bans on many services in the coming years.

AI and messaging will face bans first. Then it will proceed to wikis and knowledge videos. Don't expect a ban on entertainment videos and shorts, because governments want to keep their citizens busy doing stupid things.

In such a case, will there be raids on homes where suspected hosting takes place?

1991
 
 

The original was posted on /r/selfhosted by /u/agent_kater on 2025-02-01 20:30:47+00:00.


I hope this isn't off-topic here but I'm active in this community anyway and people here usually know about this stuff, so I wanted to give it a shot.

I'm looking for a small Linux distro without desktop environment for VMs, not containers. I just tried the "minimal" Debian ISO and selected nothing but the SSH server and it still used more than 2 GB! What I'm looking for should be more in the < 100 MB range. It should still have the ability to install common tools like curl, ifconfig, python, this kind of thing, from a package.

Alpine almost fits the bill, but the musl thing frequently causes issues when building for example Node.js libraries that use C code.

1992
 
 

The original was posted on /r/selfhosted by /u/thehelpfulidiot on 2025-02-01 18:38:05+00:00.


Hey everyone! I’m thrilled to announce the latest release of Ghostboard, version 3.1.0! 🚀

This update was inspired by an awesome suggestion from u/jack3308 here on Reddit:

Well, yes, yes you can! Ghostboard now has full markdown support, and it turned out even better than I expected! 😎 Here's what’s new:

✨ What's New in 3.1.0?

  • Markdown Support (WYSIWYG): You can switch between a markdown view and a regular text editor at any time.
  • Seamless Sync: Whether you're typing in plain text or markdown, syncing works flawlessly across multiple devices and boards.
  • No Feature Sacrifice: Everything you loved about Ghostboard before is still here!

🌟 Recap of Core Features:

  • Real-time text synchronization between multiple computers and boards.
  • New markdown support for better formatting and note-taking.
  • Flexible UI with light and dark modes.

Check out the latest release on GitHub: Ghostboard v3.1.0

As you can probably guess by the number of posts I have made over the past week, this project is under active development, so please let me know about any issues and I will try and address them ASAP!

1993
 
 

The original was posted on /r/selfhosted by /u/enchant97 on 2025-02-01 09:30:23+00:00.


Now Self Hosted is a monthly-ish article where I take a look at and review a selection of apps that can be self-hosted. This issue covers Wallos, tududi and Beszel.

Come over and read it here: enchantedcode.co.uk/blog/now-self-hosted-8.

If you have any suggestions for guides related to self-hosting, please send me your ideas!

Also check out the new subreddit r/enchantedcode to see related posts in the future.

1994
 
 

The original was posted on /r/selfhosted by /u/FeineSahne6Zylinder on 2025-02-01 14:13:07+00:00.


I went down the rabbithole of selfhosting around 20 months ago. I'm running around 20 services and my experience has been good. The usual stuff, Pihole, Arr, Uptime Kuma, Postgres, Nginx, etc and a bunch of my own things.

I recently gave Paperless-NGX a shot and I like it. I'm now feeling the urge to scan & index all my old documents and get rid of the annoying physical paper. But that raises the question, how much do I trust my setup? Am I comfortable digitizing bank and tax statements? Or should I keep it at the level of "invoices from years ago"?

I think I'm doing all the right things: RAID1, regular scrubbing, backups to NAS, backups to S3, quarterly backups to an external HDD, encryption of everything, daily monitoring with Grafana, daily or weekly container and OS upgrades, the ARR suite on a different physical machine in a different VLAN, no pirated software, nothing exposed to the internet (apart from my Tailscale DERP server that's running in the cloud) and so on. I have a CS degree, and a large part of my day job is convincing F500 companies why it's cool to put their crown-jewel data into my employer's SaaS in the cloud, so I think I know a thing or two about the space. But still, somehow I'm terrified that at some point I'll make a mistake and my data will be either gone or hacked and exposed. I somehow still feel more at peace when my data is in OneDrive and my photos in iCloud and not Immich lol.

Curious how people in this sub feel about homelab safety and durability and where you draw the line around what data goes into your selfhosted stack.

1995
 
 

The original was posted on /r/selfhosted by /u/Pretty_Platypus1524 on 2025-02-01 03:35:11+00:00.

1996
 
 

The original was posted on /r/selfhosted by /u/tgp1994 on 2025-01-31 16:30:42+00:00.


Duplicati has been receiving a lot of dev attention for some time now. I know people have a bad taste from older versions of Duplicati corrupting data, so hopefully this is a sign that things are improving.

1997
 
 

The original was posted on /r/selfhosted by /u/deano_southafrican on 2025-01-31 10:28:37+00:00.


Hey y'all! So everyone has been super into dumb software - you might recognise DumbDrop, DumbPad, DumbBudget, amongst others, brought to us by the crew over at DumbWare.io! I love the simplicity of these applications and how easy they are to set up, expose through a reverse proxy of your choice, and just generally integrate into the lives of techies and non-techies alike. Those of us with SOs we'd like to see use our self-hosted services a little more can appreciate that when software is nuanced or complex in any way, there is just no hope of getting family members to use it.

So I put the question to you all: how can these tools be used? Have you come up with creative ways to use any of them? The images are relatively small and easily configured, so spinning up multiple instances for different use cases is easy!

Let's share some ideas!

DumbDrop:

  • (From the creator) Can be mapped to your Paperless consume directory so family members can easily upload documents that will be consumed by Paperless and handled accordingly.

  • Similar to the above, have an easy upload front-end mapped to store files to be consumed by receipt-wrangler or similar, easy for SO or anyone to upload receipts.

  • Photo uploader for guests at a social event. Map the data volume to a place you'd like to store photos, spin up your instance for the event and use QR codes for your guests to access the UI & upload their photos/videos.
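The Paperless mapping in the first bullet is just a shared volume between the two containers; a hedged Compose sketch (the DumbDrop image name, port and internal upload path are assumptions; check the DumbWare.io docs for the real values):

```
services:
  dumbdrop:
    image: dumbwareio/dumbdrop:latest   # assumed image name
    ports:
      - "3000:3000"
    volumes:
      # DumbDrop writes uploads into the directory Paperless-ngx
      # watches as its consume folder
      - /srv/paperless/consume:/app/uploads
```

The receipt-wrangler and event-photo ideas work the same way: only the host side of the volume mapping changes per instance.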

DumbBudget:

  • Have a single savings account where you can transfer money but keep track of multiple savings goals. You'll need a separate instance for each savings goal, but you could transfer $500 into the account and "allocate" $200 towards "Vacation Fund" and $300 to "HomeLab upgrades", thus using one savings account for multiple savings goals. ** It might be nice if the developer could add an env var for the budget name which would display in the UI, helping to keep track of multiple instances...

  • Keep track of kids "pocket money" or allowances. Have an instance for each child and add their allowances each month and keep track of their "expenses" or spending. Great for them to see their account and have access to that data long-term in order to learn about their savings and spending habits. ** Again, might be nice to add custom names for each instance which show on the UI.

1998
 
 

The original was posted on /r/selfhosted by /u/yoracale on 2025-01-31 18:03:16+00:00.


Hey guys! We previously wrote that you can run R1 locally but many of you were asking how. Our guide was a bit technical, so we at Unsloth collabed with Open WebUI (a lovely chat UI interface) to create this beginner-friendly, step-by-step guide for running the full DeepSeek-R1 Dynamic 1.58-bit model locally.

This guide is summarized so I highly recommend you read the full guide (with pics) here:

  • You don't need a GPU to run this model but it will make it faster especially when you have at least 24GB of VRAM.
  • Try to have a sum of RAM + VRAM = 80GB+ to get decent tokens/s

To Run DeepSeek-R1:

1. Install Llama.cpp

  • Download prebuilt binaries or build from source following this guide.

2. Download the Model (1.58-bit, 131GB) from Unsloth

  • Get the model from Hugging Face.
  • Use Python to download it programmatically:
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    local_dir="DeepSeek-R1-GGUF",
    allow_patterns=["*UD-IQ1_S*"],
)

  • Once the download completes, you’ll find the model files in a directory structure like this:
DeepSeek-R1-GGUF/
├── DeepSeek-R1-UD-IQ1_S/
│   ├── DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
│   ├── DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf
│   └── DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf

  • Ensure you know the path where the files are stored.

3. Install and Run Open WebUI

  • This is what Open WebUI looks like running R1

  • If you don’t already have it installed, no worries! It’s a simple setup. Just follow the Open WebUI docs here:

  • Once installed, start the application - we’ll connect it in a later step to interact with the DeepSeek-R1 model.

4. Start the Model Server with Llama.cpp

Now that the model is downloaded, the next step is to run it using Llama.cpp’s server mode.

🛠️Before You Begin:

  1. Locate the llama-server binary. If you built Llama.cpp from source, the llama-server executable is located in llama.cpp/build/bin. Navigate to this directory using:

cd [path-to-llama-cpp]/llama.cpp/build/bin

Replace [path-to-llama-cpp] with your actual Llama.cpp directory, for example:

cd ~/Documents/workspace/llama.cpp/build/bin

  2. Point to your model folder. Use the full path to the downloaded GGUF files. When starting the server, specify the first part of the split GGUF files (e.g., DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf).

🚀Start the Server

Run the following command:

./llama-server \
    --model /[your-directory]/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --port 10000 \
    --ctx-size 1024 \
    --n-gpu-layers 40

Example (If Your Model is in /Users/tim/Documents/workspace):

./llama-server \
    --model /Users/tim/Documents/workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --port 10000 \
    --ctx-size 1024 \
    --n-gpu-layers 40

✅ Once running, the server will be available at:

http://127.0.0.1:10000/

🖥️ Llama.cpp Server Running

After running the command, you should see a message confirming the server is active and listening on port 10000.

Step 5: Connect Llama.cpp to Open WebUI

  1. Open Admin Settings in Open WebUI.
  2. Go to Connections > OpenAI Connections.
  3. Add the following details:
  4. URL → the llama-server address from the previous step (http://127.0.0.1:10000); Key → none

Adding Connection in Open WebUI

If you have any questions please let us know and also - any suggestions are also welcome! Happy running folks! :)

1999
 
 

The original was posted on /r/selfhosted by /u/Zestyclose_Car1088 on 2025-01-31 14:25:17+00:00.


Mine is Pi-hole at 7 months...

2000
 
 

The original was posted on /r/selfhosted by /u/Freika on 2025-01-31 16:27:48+00:00.


Hello there, good people of r/selfhosted!

First month of 2025 is behind us, and I'm happy to share the changes that happened in Dawarich, your favorite self-hosted location history visualizer, during January.

First, the big important thing: the maintainers of Photon, our reverse-geocoding provider of choice, reached out to us, Dawarich users, and kindly asked us to self-host our own Photon instances, as Dawarich became too popular for a free Photon instance to handle and created a significant load. Fortunately, I already have instructions on how to spin up your own Photon instance on your server (warning: it takes ~120 GB for the whole planet), and for those who don't want to bother with self-hosting a reverse-geocoding instance, there is a tier on Patreon that offers access to a private Photon instance hosted by yours truly.

Second and related: Dawarich now supports Geoapify as a reverse-geocoding provider. It is also aimed at reducing the load on the public Photon instance.

Moving on!

Some breaking changes were introduced this month, please make sure you have read the release notes before updating.

The fancy routes were introduced in mid-January! Love this feature. Just have a look at the screenshot: it colors your route based on the speed of each segment. You can enable it in the map settings (top left corner of the map).

Look at how awesome they are!

One big improvement I'm especially proud of is switching point and polyline rendering on the map to canvas. This single change made working with maps with tens of thousands of points so much smoother than before, I still can't believe it. My personal record was 117k points on the map without lagging! Oh my. That number of points still loads pretty slowly, though, but I'm aiming to fix that in February.

As many of you requested, you can now drag and drop a point on the map if your client app glitched and recorded it 100 meters away from your actual route. Just enable the Points layer on the map and drag your point to the right place. Neat.

Among other things, I had a chance to work on the importing process. My own Records.json file, provided by Google Takeout, weighs ~178 MB and consists of ~670k points; previously, importing it into Dawarich took ~2 hours. After an update this month it takes ~5 minutes, which I find pretty impressive. Importing-process updates for all other file formats supported by Dawarich (GPX, GeoJSON, two more file formats from Google, and Owntracks' .rec) are on their way, hopefully in February.

There is also a change in the development process, requested by members of the community. Previously, the docker image freikin/dawarich:latest was created on each and every release; now, prereleases will be built as freikin/dawarich:rc from the dev branch (where rc stands for release candidate), and after a day or a few, the dev branch will be merged to master and a stable release will be built as freikin/dawarich:latest. This change will allow those who are willing to stay on the bleeding edge to test the most recent changes, and the rest of you, well, will get a more stable version of Dawarich just a bit later than them.

Oh, and Dawarich was featured in a real German magazine, can you believe that?

Even in its printed version! I instantly ordered this issue on Amazon.

And they even had a podcast issue on it on Youtube. I'm positively flattered.


Did I miss something? Hopefully not.

I'm starting a new job next Monday, which means there will probably be fewer Dawarich updates than in the previous 3 months, but bear with me — the best stuff is coming! Still got plenty of ideas and fixes to implement.

Thanks for your interest!

P.S. Oh, and while you're here, would you mind answering a few questions about Dawarich? That would be just great. Thank you! Here's the form! or if you really don't want to use Google ;)
