Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

founded 2 years ago
1876
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/wkup-wolf on 2025-02-11 16:08:35+00:00.


I have some technical skills and I think I can do it. However, I want to know the security implications. Some people strongly advised me against it and said I should just use Bitwarden.

So I want to ask if someone here with a cybersecurity background (or any idea about it) can share their opinion.

1877
 
 
The original was posted on /r/selfhosted by /u/TechNomadMK on 2025-02-11 15:17:43+00:00.


Hey everyone,

I'm currently looking for must-have self-hosted apps that make your daily life easier. I love diving into new projects and constantly improving my homelab.

I've only been into self-hosting for about 14 days, so I'm still struggling with some things, but I'm eager to learn and improve.

Here are the services I’m currently running:

  • AdGuard Home
  • Nginx Proxy Manager
  • Stirling PDF
  • Portier (2x)
  • Smokeping
  • Uptime Kuma
  • Watchtower
  • Paperless-ngx

Which self-hosted apps do you consider essential? What makes your life easier or is just plain fun?

Looking forward to your recommendations and insights!

1878
 
 
The original was posted on /r/selfhosted by /u/K0ka on 2025-02-11 12:27:57+00:00.


Hi r/selfhosted,

My goal is to make the installation of self-hosted apps easier, ideally one click. This is what I built my pet project for -

It can install/uninstall such packages as Jellyfin, Immich, Plex and some others in one click.

Many packages ship as a single Docker container but require other things for a full setup: a domain, databases, volumes, SSL certificates. Each of these must be configured to fit your existing infrastructure, and the docs and hints provided are far from unified. Some packages require Docker, some Docker Compose; some install their own DB, some ask for credentials to an existing one; some require an SSL certificate, some want to install Traefik, which requests the cert on its own.

Long story short, the idea of unifying package requirements crossed my mind. I tried to reach that goal using modern IaC tools such as Terraform, Ansible, and Puppet, but they are tailored to describing exactly what needs to be done, or the precise state to be reached. They have no way to say that I just need a container, regardless of whether that container runs on a local Docker daemon, a Kubernetes cluster, AWS ECS, or anywhere else.

This is why I created my own app, which is more of a proof of concept right now. It uses an abstract description format so that packages can be installed on any system. The format is based on the notion of "contracts" that the infrastructure has to fulfill to make the package work. A contract can be fulfilled in any way: for example, the HttpEndpoint contract can be satisfied either by exposing a port to the outside network or via a reverse-proxy setup. I have implemented only Traefik as a reverse proxy, but other services such as Caddy or nginx could also be supported in the future.
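To make the contract idea concrete, here is a rough sketch of one contract with two interchangeable fulfillment strategies. All names here (`HttpEndpoint`, the fulfill functions, the container and domain) are hypothetical illustrations, not the project's actual format:

```python
# Illustrative sketch only: one contract, two ways to fulfill it.
from dataclasses import dataclass

@dataclass
class HttpEndpoint:
    """Contract: the package must be reachable over HTTP."""
    container: str
    port: int

def fulfill_by_port(c: HttpEndpoint) -> str:
    # Strategy 1: expose the container port directly to the network.
    return f"docker run -p {c.port}:{c.port} {c.container}"

def fulfill_by_proxy(c: HttpEndpoint, domain: str) -> str:
    # Strategy 2: route through a reverse proxy under a domain.
    return f"traefik router: Host(`{domain}`) -> {c.container}:{c.port}"

contract = HttpEndpoint(container="jellyfin", port=8096)
print(fulfill_by_port(contract))
print(fulfill_by_proxy(contract, "media.example.lan"))
```

The point is that the package only declares the contract; the installer picks whichever strategy matches the existing infrastructure.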

You can check the package format at   

My next plans:

  • Increase the number of packages (there are only 20 right now)
  • Implement more features for current packages; for example, integrate the arr stack components with each other and with a torrent client
  • Add more contract types, e.g. MySQL and PostgreSQL databases
  • Add more ways to fulfill contracts, e.g. Caddy or nginx instead of Traefik, Podman or Kubernetes instead of Docker
  • Write tests and documentation
  • Try to auto-detect the running infrastructure and configure packages accordingly
  • And many more

The question that bothers me is whether anyone besides me needs this. I do like the idea, but I wouldn't want to build it solely for myself. Has anyone already done (or is doing) something similar?

Please let me know what you think about it.

1879
 
 
The original was posted on /r/selfhosted by /u/Litlyx on 2025-02-11 09:44:28+00:00.


Hi folks at r/selfhosted,

I wanted to introduce you to our self-hosted analytics tool called Litlyx. I've already made some posts here, but I would love the support of this amazing community of builders to share this with people who might be interested.

We didn’t invent anything new... this isn’t some groundbreaking discovery... but we realized that "modern" analytics solutions are bad. Really bad.

No good UI/UX. They claim to be open-source but impose too many limitations. They say they replace Google Analytics but still import its tracking script... (Yes, we allow users to log in with Google and email, but only because Google has 10B+ accounts.)

So the idea is: we want to bring some fresh air and genuinely try to replace Google Analytics (even if it’s an impossible task). We want to be a modern alternative to Plausible, Matomo, and Umami, which are older solutions that often complicate things for developers.

I’d love for you to check out our repository: Litlyx on Github and share your feedback.

Thanks,

Antonio, CEO at Litlyx

1880
 
 
The original was posted on /r/selfhosted by /u/GeekIsTheNewSexy on 2025-02-11 08:42:41+00:00.


Hey r/selfhosted and fellow Redditors! 👋

I’m excited to introduce Reddit-Fetch, a Python-based tool I built to fetch, organize, and back up saved posts and comments from Reddit. If you’ve ever wanted a structured way to store and analyze your saved content, this is for you!

🔹 Key features:

  • Fetch & backup: automatically downloads saved posts and comments.
  • Delta fetching: only retrieves newly saved posts, avoiding duplicates.
  • Token refreshing: handles Reddit API authentication seamlessly.
  • Headless mode support: works on Raspberry Pi, servers, and cloud environments.
  • Automated execution: can be scheduled via cron jobs or task schedulers.
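The delta-fetching idea boils down to persisting which post IDs were already seen and skipping them on the next run. A minimal sketch, assuming a simple JSON state file (the filename and placeholder IDs are made up; Reddit-Fetch's actual implementation may differ):

```python
# Sketch of delta fetching: remember seen IDs between runs, keep only new ones.
import json
from pathlib import Path

STATE = Path("seen_ids.json")  # hypothetical persisted state file

def load_seen() -> set:
    return set(json.loads(STATE.read_text())) if STATE.exists() else set()

def delta(fetched_ids, seen):
    """Keep only IDs not saved on a previous run (order preserved)."""
    return [i for i in fetched_ids if i not in seen]

def save_seen(seen, new_ids):
    seen |= set(new_ids)
    STATE.write_text(json.dumps(sorted(seen)))

seen = load_seen()
new = delta(["t3_c", "t3_b", "t3_a"], seen)  # placeholder post fullnames
save_seen(seen, new)
print(new)
```

Run it twice and the second pass returns nothing new, which is exactly the duplicate-avoidance behavior described above.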

🔧 Setup is simple, and all you need is a Reddit API key! Full installation and usage instructions are available in the GitHub repo: 🔗 GitHub Link:

Would love to hear your thoughts, feedback, and suggestions! Let me know how you'd like to see this tool evolve. 🚀🔥

1881
 
 
The original was posted on /r/selfhosted by /u/No-Pudding7536 on 2025-02-10 19:04:46+00:00.


Lately, I've been working with multiple databases, and honestly, I’m too lazy to manually write backup scripts and set up cron jobs for each one. So I built Velld, a simple, self-hosted database backup management tool to automate the process:

  • Currently supports PostgreSQL, MySQL, and MongoDB

  • Automated backups with cron-like scheduling

  • Notifications (in case a backup fails)

Screenshots in the original post: dashboard, connection, history.
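At its core, a tool like this maps each database type to its standard dump utility. A minimal sketch (database names and output paths are placeholders; scheduling and credentials are omitted for brevity):

```python
# Sketch: map each supported database type to its standard dump command.
# Names and paths are illustrative placeholders, not Velld's internals.
def dump_command(db_type: str, name: str, out: str) -> list:
    commands = {
        "postgresql": ["pg_dump", "--format=custom", f"--file={out}", name],
        "mysql": ["mysqldump", f"--result-file={out}", name],
        "mongodb": ["mongodump", "--db", name, f"--archive={out}"],
    }
    try:
        return commands[db_type]
    except KeyError:
        raise ValueError(f"unsupported database type: {db_type}")

print(dump_command("postgresql", "appdb", "/backups/appdb.dump"))
```

The command list can then be handed to a scheduler and a `subprocess.run` call, with a notification hook on a non-zero exit code.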

It’s still in early development, but it works! If you're tired of dealing with backup scripts manually, check it out and let me know what you think :)

Would love any feedback or contributions!

1882
 
 
The original was posted on /r/selfhosted by /u/-ManWhat on 2025-02-11 04:59:06+00:00.

1883
 
 
The original was posted on /r/selfhosted by /u/PixelHir on 2025-02-11 00:08:03+00:00.


I'm writing this guide/testimony because I deleted my Twitter account back in November. Sadly, some content is still only available through it and often requires an account to browse properly. There is an alternative, though, called Nitter, which proxies the requests and displays tweets in a proper, clean, non-bloated form. That, however, would require me to replace the domain in the URL each time I opened a Twitter link. So I made a little workaround for my infra and devices that redirects all twitter dot com or x dot com links to a Nitter instance, and I'd like to share my experience, idea, and guide here.

This assumes a few things:

  • You have your own DNS server. I use AdGuard Home for all my devices (default DNS over Tailscale + custom profiles for iOS/Mac that enforce DNS over HTTPS and work outside of my Tailnet). As long as it can rewrite DNS records, it's fine.
  • You have your own trusted CA, or the ability to make and trust a self-signed certificate, as we need to sign an HTTPS certificate for the Twitter domains without owning them. In my case I have step-ca for that, with certificates trusted on my devices (device profiles on Apple, manual install on Windows), but anything should do.
  • You have a web server. Any will do; I'll show how I achieved this with Traefik.
  • This will obviously break the Twitter mobile app and anything else relying on its main domains. You won't really be able to access normal Twitter, so account management and such is out of the question without switching the DNS rewrite off.
  • I know you can achieve a similar effect with browser extensions/apps; my point was network-wide redirection, every time, everywhere, without the need for extras.

With that out of the way I'll describe my steps

  1. Generate your own HTTPS certificate for the domains x dot com and twitter dot com, or set up your web server software to use your CA's ACME endpoint. The latter is preferable, as it lets your web server auto-renew the certificate.
  2. Choose your instance! There are a number of public Nitter instances to choose from here. You can also host it yourself, although that's a bit more complicated. For most of the time I used xcancel.com, but I recently switched to twiiit.com, which redirects you to any available non-rate-limited instance.
  3. Make a new site configuration. The idea is to accept all connections to Twitter/X and send an HTTP redirect to Nitter. The redirect can be permanent or temporary; the former will just make your browser cache the redirection. Here's my config in Traefik. If you're using a different web server, it's not hard to write your own. I guess ChatGPT is also a thing today.
  4. After making sure your web server loads the configuration properly, it's time to set your DNS rewrites: point twitter dot com and x dot com to your web server's IP.
  5. Time to test! On a properly configured device, try navigating to any tweet link. If you've done everything right, it should redirect you to the same tweet on your chosen Nitter instance.
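The rewrite itself is just a host swap that preserves the path and query string. A minimal sketch of that logic (the Nitter host is a placeholder; pick a real instance yourself):

```python
# Sketch: rewrite twitter.com / x.com URLs to a Nitter instance.
# nitter.example.net is a placeholder, not a real instance.
from urllib.parse import urlsplit, urlunsplit

NITTER_HOST = "nitter.example.net"
TWITTER_HOSTS = {"twitter.com", "www.twitter.com", "x.com", "www.x.com"}

def to_nitter(url: str) -> str:
    parts = urlsplit(url)
    if parts.hostname in TWITTER_HOSTS:
        # Keep path and query, swap the scheme/host pair.
        return urlunsplit(("https", NITTER_HOST, parts.path, parts.query, ""))
    return url  # non-Twitter links pass through untouched

print(to_nitter("https://x.com/user/status/123"))
```

This is the same transformation the web server performs in step 3, just expressed as code rather than a redirect rule.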

I'm looking forward to hearing what you all think about it, whether you'd improve something, or any other feedback you have :) Personally, this has worked flawlessly for me so far, and I've been able to access all post links properly without needing an account anymore.

1884
 
 
The original was posted on /r/selfhosted by /u/HTTP_404_NotFound on 2025-02-10 16:57:33+00:00.


Just a simple guide on how to migrate from VirtualBox to Proxmox.


The disk on my gaming PC was filling up this weekend, and I realized I had a few hundred gigs of Virtualbox images sitting on it.

So- decided to migrate them over to proxmox to free up the space, and I documented the process.


The TLDR;

  1. Convert the .vdi to .img using VBoxManage.
  2. Move the image to a Proxmox-accessible location.
  3. Convert the .img to .qcow2 using qemu-img (steps 2 and 3 are interchangeable).
  4. Create a blank Proxmox VM without disks.
  5. Import the disk.
  6. Set the boot order and attach the disk.
  7. Done.

Also, in hindsight: you could just copy the .vdi directly to Proxmox and use qemu-img to go straight from .vdi to .qcow2.
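The steps above can be sketched as a command sequence. The VM ID, storage name, and file paths below are placeholder assumptions; adjust them to your setup before running anything:

```python
# Sketch of the VirtualBox -> Proxmox migration commands.
# vmid, storage, and filenames are placeholders; DRY_RUN only prints.
import subprocess

vmid, storage = "100", "local-lvm"
steps = [
    # 1. VBoxManage converts the .vdi to a raw image
    ["VBoxManage", "clonemedium", "disk", "vm.vdi", "vm.img", "--format", "RAW"],
    # 3. qemu-img converts raw to qcow2
    ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", "vm.img", "vm.qcow2"],
    # 5. import the disk into the blank VM
    ["qm", "importdisk", vmid, "vm.qcow2", storage],
    # 6. attach the imported disk and set the boot order
    ["qm", "set", vmid, "--scsi0", f"{storage}:vm-{vmid}-disk-0",
     "--boot", "order=scsi0"],
]

DRY_RUN = True
for cmd in steps:
    print(" ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)
```

The hindsight shortcut replaces the first two entries with a single `qemu-img convert -f vdi -O qcow2 vm.vdi vm.qcow2`.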

1885
 
 
The original was posted on /r/selfhosted by /u/GiveMeARedditUsernam on 2025-02-11 02:42:07+00:00.

1886
 
 
The original was posted on /r/selfhosted by /u/johnny5w on 2025-02-11 00:16:43+00:00.


Upvote RSS is a self-hosted project I've been working on that generates RSS feeds from social aggregation websites like Reddit, Lemmy, and Hacker News. You can subscribe to subreddits, Lemmy communities, and Hacker News while filtering to only the top posts. It will embed Reddit post media (videos, images, galleries), and you can optionally include parsed article content, AI-generated summaries, top comments, and more. Here are some of the features:

  • Supports subreddits, Hacker News, Lemmy communities, and more to come
  • Configurable filtering to dial in the right number of posts per day in your feed reader
  • Embedded post media: videos, galleries, images
  • Parsers to extract clean content and add featured images
  • AI article summaries
  • Estimated reading time, score, and permalinks to the original post
  • Top comments
  • NSFW filtering/blurring (Reddit only)
  • Custom Reddit domain
  • Light/dark mode for feed previews

Here's the GitHub link if you'd like to give it a spin:

And the preview website (not all options are available here):

1887
 
 
The original was posted on /r/selfhosted by /u/yousboot on 2025-02-10 19:22:51+00:00.


Notion upset me with the number of features and apps they're trying to cram into one small space, plus they're training models on users' private data.

So I decided to create my own, Memory: self-hosted, fast, and secure. Doing one thing and doing it well.

It's pure JavaScript, Python with Flask, and SQLite. Please let me know if you have any more ideas or remarks. Enjoy!

1888
 
 
The original was posted on /r/selfhosted by /u/kerkerby on 2025-02-10 18:41:48+00:00.


I'm still figuring out how to deploy this in Coolify, though.

1889
 
 
The original was posted on /r/selfhosted by /u/alexeir on 2025-02-10 17:38:23+00:00.


Hello!

My company has open-sourced machine translation models for 12 rare languages under the MIT license.

You can use them freely with the OpenNMT translation framework. Each model is about 110 MB and performs very well (about 40,000 characters/s on an Nvidia RTX 3090). Check the manual on GitHub for how to set them up.

  • You can test translation quality here:

  • Download the models here:

1890
 
 
The original was posted on /r/selfhosted by /u/slowmotionrunner on 2025-02-10 16:52:28+00:00.


SMB (and Samba, which I use interchangeably) can be a fickle mistress. Virtually everyone with a home NAS ends up using Samba at some point, and tuning it for the best performance can be something of a dark art. This is the story of how I found my performance problems were coming from the last place I would have thought to look. TLDR at the end.

Here is the context for our story:

  • 2 Windows PCs, one is my primary desktop and the other is headless
  • 1 PiKVM connected to the headless Windows PC
  • 1 new DIY NAS using Samba (technically Proxmox with Samba in an LXC)
  • 1 Gbit ethernet across all devices
  • Tailscale

The initial excitement of setting up my new DIY NAS with its four 20 TB drives soon became an exercise in frustration as I tried to figure out what could be causing transfers to run so slowly. I had previously been getting transfer speeds from the desktop Windows machine to the headless Windows machine of ~100 MB/s, which is fairly close to the theoretical maximum once you convert Mbps to MB/s and allow for overhead. With the new NAS having the same or better hardware than the headless Windows machine, I expected the same or better performance, but was dismayed to see only 20-30 MB/s on average.

I'll try to consolidate the numerous dead-ends I went down that took me the better part of my weekend:

  1. Was it the hardware? No, local testing on the NAS showed it working just fine.
  2. Was it the choice of Proxmox/LXC? No, tried different distros, containers, and every combination in-between.
  3. Was it slow for just my Desktop machine? No, because copying from headless Windows to NAS was slow just like Desktop Windows to NAS was; both Windows machines behaved the same.
  4. Was it the Samba configuration? No, I tried endless variations on smb.conf for buffering, socket options, caching, etc.
  5. Was it ports or firewalls? No, no, no...
  6. etc.

I spent most of my time on #4 because I naturally assumed I must have configured the share incorrectly, but the thing that really sent me down the wrong road was #3. When I tested from either Windows machine to the new NAS, both had slow transfer speeds, so I incorrectly concluded the problem was with the target NAS rather than the source Windows machines; that is where I erred. As unlikely as it was, both Windows machines had the same problem.

It was while I was running tests on the connection from Windows to NAS that I got this output in Powershell:

PS> Test-NetConnection -ComputerName 192.168.6.10 -TraceRoute

ComputerName : 192.168.6.10
RemoteAddress : 192.168.6.10
InterfaceAlias : Tailscale
SourceAddress : 100.122.134.77
PingSucceeded : True
PingReplyDetails (RTT) : 22 ms
TraceRoute : 100.117.103.126
 192.168.6.10

I'm embarrassed to say that even when I first saw this output, seeing "Tailscale" gave me pause, but it still took me another day to understand what I was seeing here.

I love Tailscale and have it installed on all of these devices -- except for the new NAS while I'm getting it stood-up. Like a lot of Tailscale users, one of the devices in my LAN is also configured with subnet routing enabled. In this case, the PiKVM has subnet routing enabled and that makes things convenient when not all my devices have Tailscale installed or support Tailscale, but I can still access them remotely like they are on the local network.

Based on my understanding of Tailscale, even though I have subnet routing enabled, I expected items on the same LAN to go over their LAN addresses when using their LAN addresses. Were that true, my Windows Desktop at 192.168.4.235 would go directly to the NAS at 192.168.6.10, but as you can see the connection is taking a detour through Tailscale using the Tailnet IP of the Windows machine 100.122.134.77, to hit the Tailnet IP of the PiKVM subnet router 100.117.103.126, before reaching its destination. In other words, what should have been:

  • 192.168.4.235 -> 192.168.6.10 was actually using,
  • (192.168.4.235) 100.122.134.77 -> 100.117.103.126 -> 192.168.6.10

To test the theory, I temporarily disabled Tailscale on the Windows desktop and, success! I was getting 110 MB/s, better even than I was hoping for over my gigabit connection! And why was the headless Windows machine also having problems? The same reason: both my Windows machines were routing LAN requests through Tailscale. Running Test-NetConnection again with Tailscale disabled produced this direct connection:

Test-NetConnection -ComputerName 192.168.6.10 -TraceRoute

ComputerName : 192.168.6.10
RemoteAddress : 192.168.6.10
InterfaceAlias : Ethernet 3
SourceAddress : 192.168.4.235
PingSucceeded : True
PingReplyDetails (RTT) : 0 ms
TraceRoute : 192.168.6.10

Now, it is entirely possible I have done something wrong with my Tailscale setup, but I don't think so. I have everything installed pretty much vanilla with default settings. Again, this is not the way I was told Tailscale was supposed to work when all the devices are on the same LAN and subnet routing is enabled, but I could have misunderstood.

So how do we fix this?

  • Some of my research suggests that you can pin the SMB connections from Windows to a specific interface adapter using a "constraint" (New-SmbMultichannelConstraint ?) so I could probably do that and pin it to my physical ethernet adapter, but I now considered this a network/Tailscale problem and didn't want to solve it for just SMB.
  • We could monkey with the route tables and/or interface metrics in Windows (Set-NetIPInterface?) to prioritize the physical ethernet adapter first and the virtual Tailscale adapter second to always resolve LAN addresses on the physical adapter, but I don't know how that would affect Tailscale and/or subnet routing.
  • Or, we could not accept Tailscale subnet routing on machines that don't need it.

I went with the last option. When setting up Tailscale on Linux, you have to explicitly accept subnet routes using tailscale up --accept-routes, but on Windows it is the default. That was another thing I was not aware of and had I known, I would have never enabled it. This Windows machine is in my LAN, I don't need Tailscale to worry about subnet routing for me when I'm already in the LAN subnet. In Windows this can be disabled by right-clicking the Tailscale tray icon and disabling Preferences -> Use Tailscale subnets. And that is the simple solution that took me all weekend to figure out: disable subnet routing on the machines that don't need it.

TL;DR: Ensure your SMB connections are going over the traceroute you expect. Tailscale subnet routing is enabled by default in Windows. When you are already in the same LAN exposed by your subnet router, my recommendation would be to not rely on Tailscale to intelligently figure that out and simply disable subnet routing when not needed.

1891
 
 
The original was posted on /r/selfhosted by /u/Mindless-View-3071 on 2025-02-10 10:14:01+00:00.


Hi r/selfhosted,

I have another project idea. However, before I start I want to make sure there is interest in the community and a similar project does not exist yet.

I was thinking about a “compose” website that contains the compose files and basic information for the projects listed in the awesome-selfhosted list. Users could search for projects, browse by categories, etc. In my opinion, when you find a new project you want to try out, it is a bit cumbersome to track down the corresponding compose file to get started.

Let me know if there is any interest in such a project. Also I have no idea how I would name the project, so give me your best suggestions :). Thanks!

1892
 
 
The original was posted on /r/selfhosted by /u/JustNathan1_0 on 2025-02-09 22:00:25+00:00.


Just under one year ago, I made a post asking "Do you run Plex, Emby, or Jellyfin" link and it gained a lot of traction. So I decided to post a similar one a year later (with the addition of Kodi, because why not). This time I'm adding a poll; I'm curious whether, as time goes on, people are switching toward one service or another.

As stated in the original post, I have tried emby, plex, and jellyfin. My full opinion today is that Plex is the best for me using my own server. BUT, I think emby is MUCH better for giving out to people. I find when giving the server to people they tend to like for me to "set it up" for them and not have to create a full account and all. Jellyfin would be my third option as of right now as it feels like a less-refined emby in my experience. (Although emby please please please add a watch-together feature PLEASE!)

Let everyone know in the comments what you use and why!

View Poll

1893
 
 
The original was posted on /r/selfhosted by /u/shingi345 on 2025-02-10 04:08:12+00:00.


My best friend and I are both public school music teachers, and we keep a highly organized Google Drive of repertoire & method books in PDF. We want to get away from Google. We both run Linux and wonder how we may go about this? We are in different states. Some have suggested FTP. We’re young & competent, but we aren’t IT specialists. Any suggestions or guidance would be really helpful, thank you!

1894
 
 
The original was posted on /r/selfhosted by /u/MonsterovichIsBack on 2025-02-09 21:25:38+00:00.

1895
 
 
The original was posted on /r/selfhosted by /u/nashosted on 2025-02-09 22:11:27+00:00.

1896
 
 
The original was posted on /r/selfhosted by /u/thetallcanadian on 2025-02-09 19:53:06+00:00.


Hey r/selfhosted,

Long time lurker here and decided I wanted to try and make something for the community! I'm developing méli, a native iOS client for managing recipes on Mealie. This will be completely free and open-source once it is released, but wanted to get some input now from seasoned Mealie users!

What recipe-related features do you prioritize? What would you find most useful right away in méli? I'm primarily focused on recipe management for now. If there's strong interest, I'm open to exploring additional features like shopping lists, meal planning, or household management in the future.

Let me know your thoughts!

Note: méli is a side project and not yet available. Hopefully soon though 🤞

1897
 
 
The original was posted on /r/selfhosted by /u/m16hty on 2025-02-09 21:29:35+00:00.


Yes, you all heard me, it's all you guys here making selfhosting too much fun and interesting!

Now I joined the club :D

It's not much but it's honest work.

At the moment it's only a Raspberry Pi 5 (8 GB) with a 512 GB NVMe system drive and 1 TB of storage on USB.

I've already ordered 2x Pi Zero W: one to make an old printer smart, and the other to run Pi-hole on a separate system.

I also got a Pi 5 (4 GB) to make a NAS :)

See ya soon with networking questions :D

1898
 
 
The original was posted on /r/selfhosted by /u/fugixi on 2025-02-09 19:12:49+00:00.


I have been trying to find a self-hosted ebook reader, without any luck.

Finding a reader for simply reading EPUBs on one device is easy, but the reason I am looking for a self-hosted alternative is to be able to sync my reading progress between devices, on my own server.

Wishlist for solution

  • Being able to read all regular ebook formats, especially EPUB and PDF.
  • Cross-platform = both iOS and Android. Web would also be nice, being able to read on a laptop.
  • OPDS or Calibre support so books can be easily downloaded from calibre-web.
  • Sync reading progress between devices.
  • Self-hosted = not reliant on cloud accounts.

Are there any solutions out there that fit all of the above features?

If not all features can be matched, what are your best alternatives and why?

1899
 
 
The original was posted on /r/selfhosted by /u/studioleaks on 2025-02-09 17:33:27+00:00.


  • Suwayomi as the server/downloader
  • Komga as the media management
  • Komf as the metadata fetcher
  • Komikku as the client (the only Tachiyomi fork that reliably handled two-way sync for me)

Currently I have a script (written with ChatGPT) that runs weekly: it fetches the most popular mangas from the top 5 sources, finds the ones common to at least 4 of them, adds those to my Suwayomi library, and auto-downloads everything. Komga scans it, Komf fetches the metadata, and the viewing happens in Komikku.
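The "common among at least 4 of the 5 sources" step is just a membership count across the source lists. A minimal sketch (the titles and lists below are made-up placeholders, not real source data):

```python
# Sketch: keep titles that appear in at least `threshold` of the source lists.
from collections import Counter

def common_titles(sources, threshold=4):
    # set(titles) dedupes within a single source before counting.
    counts = Counter(t for titles in sources for t in set(titles))
    return sorted(t for t, n in counts.items() if n >= threshold)

sources = [  # placeholder popularity lists from 5 hypothetical sources
    ["One Piece", "Berserk", "Vinland Saga"],
    ["One Piece", "Berserk", "Vagabond"],
    ["One Piece", "Berserk", "Vinland Saga"],
    ["One Piece", "Berserk"],
    ["One Piece", "Monster"],
]
print(common_titles(sources))
```

Whatever survives the threshold is what gets pushed into the Suwayomi library for download.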

The best setup possible currently imo

1900
 
 
The original was posted on /r/selfhosted by /u/Important_Pin_2095 on 2025-02-09 01:48:04+00:00.


Hey everyone! 👋

I’m currently exploring the possibility of completely replacing Microsoft 365 with open-source alternatives. The goal is to get similar functionality (email, files, office, video calls, device management, automation) without subscriptions and closed ecosystems.

📌 What I’m trying to replace:

  • Azure AD / Entra ID → FreeIPA + Samba AD + Keycloak
  • Exchange, Outlook → Zimbra Community Edition
  • OneDrive, SharePoint → Nextcloud + Collabora Online
  • Teams, Zoom → Jitsi Meet + Nextcloud Talk
  • Intune, TeamViewer → MeshCentral
  • Azure Monitor → Zabbix
  • Power Automate → n8n
  • Defender XDR → Wazuh
  • Microsoft Entra MFA → Authelia

🔹 Benefits of This Approach

  • Full control over data (self-hosted)
  • No subscriptions or user limitations
  • Highly customizable
  • Zero-trust security (SSO, 2FA, XDR)

🔻 Challenges

  • Requires setup on a VPS or local servers
  • Maintenance and updates rely on the IT team
  • Some features may differ from Microsoft 365

💬 Questions for the Community:

  1. Is this realistically feasible for an organization with 50-100 users?
  2. What has been your experience with similar solutions?
  3. What potential pitfalls should I be aware of?
  4. Are there better open-source alternatives I should consider?

I’d love to hear your thoughts and advice!
