Selfhosted
submitted on 24 Sep 2025

Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

(page 3) 50 comments
[–] bizarroland@lemmy.world 2 points 2 months ago (1 children)

I'm running a TrueNAS server on bare metal with a handful of hard drives. I've virtualized it in the past, but meh. I'm also using TrueNAS's internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.

[–] kiol@lemmy.world 2 points 2 months ago (1 children)

So TrueNAS itself is running your containers?

[–] bizarroland@lemmy.world 2 points 2 months ago

Yeah, the more recent versions basically have a form of Docker as part of their setup.

I believe it's now running on Debian instead of FreeBSD, which probably simplified the container setup.

[–] tychosmoose@lemmy.world 2 points 2 months ago

I'm doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them, and a little discomfort with Docker's central daemon model and with running containers built by people I don't know.

The migration path I'm working on for myself is moving to Podman quadlets for rootless operation, more isolation between containers, and the benefits of management and updates via systemd. So far my testing for that migration has been slow due to other projects. I'll probably get it rolling on Debian 13 soon.
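For anyone curious, a rootless quadlet is just a small unit file. Here's a minimal sketch, assuming Podman 4.4+; the FreshRSS image, port, and volume name are placeholders, not my actual config:

```
# quadlet sketch: user-level .container files live under ~/.config/containers/systemd
mkdir -p ~/.config/containers/systemd

cat > ~/.config/containers/systemd/freshrss.container <<'EOF'
[Unit]
Description=FreshRSS via rootless Podman quadlet

[Container]
Image=docker.io/freshrss/freshrss:latest
PublishPort=8080:80
Volume=freshrss-data:/var/www/FreshRSS/data

[Install]
WantedBy=default.target
EOF

# quadlet generates a user service named after the .container file
systemctl --user daemon-reload
systemctl --user start freshrss.service
```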

[–] 9tr6gyp3@lemmy.world 2 points 2 months ago (1 children)

I thought about running something like Proxmox, but everything is too pooled or too specialized, or Proxmox doesn't provide the packages I want to use.

Just went with Arch as the host OS, and I firejail or LXC any processes I want contained.
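Roughly like this; these are hypothetical examples rather than my exact commands:

```
# confine a single binary with firejail: private home dir, no root, dropped capabilities
firejail --private=/srv/navidrome --noroot --caps.drop=all /usr/bin/navidrome

# or an unprivileged LXC container for anything bigger
lxc-create -n media -t download -- -d archlinux -r current -a amd64
lxc-start -n media
```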

[–] towerful@programming.dev 2 points 2 months ago (2 children)

I've never installed a package on Proxmox.
I've BARELY interacted with the CLI on Proxmox (I have a script that creates a nice Debian VM template, and I occasionally have to force-kill a VM).

What would you install on Proxmox?!

[–] 51dusty@lemmy.world 2 points 2 months ago (1 children)

My two bare-metal servers are the file server and the music server. I have other services on a Pi cluster.

The file server, because I can't think of why I would need a container.

The music software is proprietary and needs extra workarounds to work properly, or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.

If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I buy a new one or fix/rebuild the broken one.

IMO the only reliable way to run containers is a cluster, because if you're running several containers on one device and it fails, you've lost several services.

[–] kiol@lemmy.world 1 points 2 months ago (1 children)

Cool, care to share more specifics on your Pi cluster?

[–] 51dusty@lemmy.world 1 points 2 months ago

I followed one of the many guides for installing Proxmox on Raspberry Pis: a 3-node cluster of 4 GB Pi 4s.

I use the cluster for lighter services like Trilium, FreshRSS, secondary DNS, a jumpbox... and something else I forget. I'm going to try Immich and see how it performs.

My recent go-to for cheap ($200-300) servers is Debian + old Intel MacBook Pros. I have two Minecraft Bedrock servers on MBPs... one an i5, the other an i7.

I also use a Lenovo laptop to host some industrial control software for work.

[–] Jerry@feddit.online 2 points 2 months ago (1 children)

Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes. I can't do that with Docker containers. For PeerTube and Mobilizon, I use Docker containers.

[–] kiol@lemmy.world 2 points 2 months ago (1 children)

Why couldn't you have that Mastodon setup in containers? Sounds doable, afaik.

[–] kossa@feddit.org 2 points 2 months ago

Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

[–] OnfireNFS@lemmy.world 2 points 2 months ago

This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

It kinda stuck with me, and since then I've reimaged some of my bare-metal servers with exactly that. It just makes backups and restores/snapshots so much easier. It's also really convenient to have a web interface to manage the machine.

Probably doesn't work for everyone but it works for me
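For what it's worth, the snapshot/backup workflow is most of the win for me. Something along these lines; the VM ID and storage name are just examples, and the same things are available from the web UI:

```
# snapshot before a risky change, roll back if it goes sideways
qm snapshot 100 pre-upgrade --description "before dist-upgrade"
qm rollback 100 pre-upgrade

# full backup of the VM to a storage target
vzdump 100 --storage local --mode snapshot --compress zstd
```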

[–] Routhinator@startrek.website 2 points 2 months ago

I'm running Kube on bare metal.

[–] frezik@lemmy.blahaj.zone 2 points 2 months ago (1 children)

My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

OPNsense is its own box because I prefer to separate it for security reasons.

Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.

[–] HiTekRedNek@lemmy.world 1 points 2 months ago

My reasons for keeping OPNsense on bare metal mirror yours. But additionally, I don't want my network to take a crap just because my Proxmox box goes down.

I'm constantly tweaking that machine...

[–] TheMightyCat@ani.social 2 points 2 months ago (1 children)

I'm self-hosting Forgejo and I don't really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit does containerization give?

[–] turmoil@feddit.org 1 points 2 months ago

If you never go beyond a single deployment, it's fine.

If, however, you later want to add more services to your host, say an alerting or status system, it's a lot easier to declare everything in a single place and then attach a reverse proxy to route traffic to the multiple services on one host.
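As a rough illustration of what declaring everything in one place can look like, here's a compose sketch with placeholder images and names (pin your own versions; the proxy still needs its own config):

```
# docker-compose.yml sketch: Forgejo plus a status page behind one reverse proxy
cat > docker-compose.yml <<'EOF'
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9
    volumes:
      - ./forgejo:/data
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./kuma:/app/data
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
EOF

docker compose up -d   # one command brings the whole stack up, or recreates it elsewhere
```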

[–] eleitl@lemmy.zip 2 points 2 months ago

Obviously, you host your own hypervisor on your own or rented bare metal.

[–] pineapplelover@lemmy.dbzer0.com 2 points 2 months ago

All I have is Minecraft and a Discord bot, so I don't think that justifies VMs.

[–] bhamlin@lemmy.world 2 points 2 months ago

It depends on the service and the desired level of its stack.

I generally run services directly on things like a Raspberry Pi because VMs and containers add complexity that isn't really warranted for the task.

At work, I run services in Docker in VMs because the benefits far outweigh the complexity.

[–] otacon239@lemmy.world 1 points 2 months ago (1 children)

After many failures, I eventually landed on OMV + Docker. It has a plugin that puts Docker management into a web UI, and for the few simple services I need, it's very straightforward to maintain. I don't cloud-host because I want complete control of my data, and I keep an automatic incremental backup alongside a physically disconnected one that I manually update.

[–] kiol@lemmy.world 2 points 2 months ago (1 children)

Cool, how are you managing your disks? Are you overall happy with OMV?

[–] otacon239@lemmy.world 1 points 2 months ago (1 children)

Very happy with OMV. It's not crazy customizable, so if you have something specialized you might run into quirks trying to stick to the web UI, but it's just Debian under the hood, so it's pretty manageable. 4x1 TB drives in RAID 5 for media/critical data, an OS drive, and a service-data drive (databases, etc.). Then an external 4 TB for the incremental backup and another external 4 TB for the disconnected backup.
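For anyone curious, the web UI handles the array for you, but roughly the mdadm equivalent of that layout looks like this (device names are made up):

```
# 4 x 1 TB drives in RAID 5 (one drive's worth of capacity goes to parity)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
```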

[–] kiol@lemmy.world 2 points 2 months ago (1 children)

Awesome, thanks. Upgrade process has been seamless?

[–] otacon239@lemmy.world 2 points 2 months ago

Haven’t had to do a full OS upgrade yet, but standard packages can be updated and installed right in the web UI as well.

[–] StrawberryPigtails@lemmy.sdf.org 1 points 2 months ago

Depends on the application. My NAS is bare metal. That box does exactly one thing and one thing only, and it's something that is trivial to set up and maintain.

Nextcloud is running in Docker (the AIO image) on bare metal (Proxmox OS) to balance performance with ease of maintenance. Backups go to the NAS.

Everything else is running in a VM, which makes backups and restores simpler for me.
