This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ArgyllAtheist on 2025-05-17 22:51:31+00:00.


My home server is an Ubuntu 24.04 box with a bunch of docker containers (23 of them, the usual suspects - frigate, home assistant, calibre, homepage...).

I keep all of my docker compose files in the /opt/ folder, and have a separate ZFS pool /media-pool/ for data.

I use

/opt/frigate

/opt/calibre-web

/opt/plexamp

and so on - in each folder is a docker compose YAML that has a ./config:/config mapped volume and network config.
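
For reference, each folder's compose file looks roughly like this (a sketch from memory, not the exact file - the image, ports, and /media-pool path are illustrative):

```yaml
# /opt/calibre-web/docker-compose.yml (rough shape, not the exact file)
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    container_name: calibre-web
    volumes:
      - ./config:/config            # per-app config lives next to the compose file
      - /media-pool/books:/books    # bulk data stays on the ZFS pool
    ports:
      - "8083:8083"
    restart: unless-stopped
```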

I have been doing large-scale data moves, shunting a few TB of files around, and got careless.

I typed everyone's favourite DMF command: `rm -r * /mnt/thefolderiactuallymeanttodelete`. Doh!
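
(For anyone skimming past the command: the sting is that the stray * expands in whatever directory the shell is currently sitting in, so rm takes out that directory's contents as well as the intended target. Roughly - paths illustrative:)

```sh
# what I meant to run
rm -r /mnt/thefolderiactuallymeanttodelete

# what I actually ran - the shell expands * to every entry in the current
# directory first, so this is effectively
# "rm -r <everything here> /mnt/thefolderiactuallymeanttodelete"
rm -r * /mnt/thefolderiactuallymeanttodelete
```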

After the usual "hmm, that delete took a little long to run", I realised what I had done. I know the files are gone, and my backups have been failing for lack of space (hence the data copies). I will take my punishment from the God of fat fingers and no backups...

*but* - all of my containers are still running.

The ones which have SQLite DBs in the config folder are toast, obviously, but all of the general config stuff is there. One of the healthy containers is Portainer (I use it to view/access logs and consoles easily, not to create things).
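
One thing I am wondering about, since nothing has been stopped yet: whether the deleted SQLite files can still be pulled back out of /proc while the container processes hold them open. Something like this is what I mean (rough sketch; the fd number in the comment is hypothetical):

```sh
# Map each running container to its host PID, then look for file handles
# that point at deleted files - Linux keeps those readable until the
# process closes them or exits.
for pid in $(docker inspect --format '{{.State.Pid}}' $(docker ps -q)); do
    ls -l /proc/"$pid"/fd 2>/dev/null | grep '(deleted)'
done

# if a handle turns up, copy it out before stopping anything, e.g.
# cp /proc/12345/fd/7 /tmp/recovered/frigate.db
```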

I am new enough to docker to not know how to get the best out of this.

I am pulling the /opt folders from my last good backup - six days ago. So... what can I do to make the best use of the docker containers that are all still running? Gathering info/files/configs to save me recovery time?
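
To be concrete about "gathering info", the best I have come up with so far is dumping each running container's inspect output, logs, and writable-layer diff before anything restarts, along these lines (sketch, untested):

```sh
# Snapshot what the running containers know before a restart loses it.
# docker inspect preserves the exact mounts, env vars, networks and image tags,
# which should make rebuilding the compose files much faster.
mkdir -p /root/rescue
for c in $(docker ps --format '{{.Names}}'); do
    docker inspect "$c" > "/root/rescue/$c.json"
    docker logs "$c"    > "/root/rescue/$c.log" 2>&1
    docker diff "$c"    > "/root/rescue/$c.diff"   # files changed in the container's writable layer
done
```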
