Do you guys expose the docker socket to any of your containers or is that a strict no-no? What are your thoughts behind it if you don't? How do you justify this decision from a security standpoint if you do?

I am still fairly new to docker but I like the idea of something like Watchtower. Even though I am not a fan of auto-updates and probably wouldn't use that feature, I still find it useful to get a notification when some container needs an update. However, Watchtower needs access to the docker socket to do its work, and I have read a lot about how that is a bad idea that can result in root access to your host filesystem from within a container.

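To be concrete, the kind of setup I mean looks roughly like this (I haven't deployed this; the monitor-only option name would need checking against Watchtower's docs):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    environment:
      # Monitor-only mode is meant to notify without auto-updating;
      # verify the exact variable name in Watchtower's documentation.
      - WATCHTOWER_MONITOR_ONLY=true
    volumes:
      # This is the contentious part: anything that can reach the socket
      # can drive the Docker daemon, which is effectively root on the host.
      # Mounting it :ro doesn't help much, since the API stays fully usable.
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```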
There are probably other containers as well, especially in this whole monitoring and maintenance category, that need that privilege, so I wanted to ask how other people handle this situation.

Cheers!

[–] glizzyguzzler@piefed.blahaj.zone 3 points 1 day ago* (last edited 1 day ago) (1 children)

So I've found that if you use the user: option with a username (user: UserName), it requires that username to also exist inside the container. If you do it with a UID/GID (user: 1500:1500), it just runs the container's process as the UID/GID you provide instead of the default user (likely root, 0). For many containers it just works; for linuxserver containers (a group that produces containers for lots of stuff) I think it biffs it - those are way jacked up. I put the containers that won't play ball in an LXC container (via the Incus GUI), or for simple permission fixes I just make a permissions-fixing version of the container (runs as root, but only executes the commands I provide) to fill a volume with data at the right permissions, then load that volume into the real container. Luckily jellyfin doesn't need that.

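The general shape of that permissions-fixing trick is something like this (images, paths and the UID/GID here are just placeholders; the depends_on condition needs a reasonably recent docker compose):

```yaml
services:
  fix-perms:
    image: busybox
    user: "0:0"                       # root, but it only runs the one command below
    command: chown -R 1500:1500 /srv/app-data
    volumes:
      - /mnt/ssd/app-data:/srv/app-data
    restart: "no"

  app:
    image: example/app:latest         # the container that won't play ball
    user: "1500:1500"
    volumes:
      - /mnt/ssd/app-data:/data
    depends_on:
      fix-perms:
        condition: service_completed_successfully
```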
I give jellyfin read-only access (via :ro in the volumes:) to my media stuff because it doesn't need to write to it. If your use-case needs :rw that's fine too, just keep a backup (even if you use :ro!).

Here's my docker-compose.yml; I gave jellyfin its own IP with macvlan. It's pretty janky and I'm still working on it, but you can have jellyfin use your server's IP by deleting everything after jellyfin-nw: (but keep jellyfin-nw: itself!) in both the networks: section and the services: section. Delete the mac_address: in the services: section too. In the ports: part that 10.0.1.69 would be the IP of your server (or in this case, what I declare the jellyfin container's IP to be) - it makes it so the container can only bind to the IP you provide; otherwise it can bind to anything the server has access to (as far as I understand).

And of course, I have GPU acceleration working here with some embedded Intel iGPU. Hope this helps!

# *** NETWORKS ***
 
networks:  
  jellyfin-nw:  
    # In docker, `macvlan` gives the container what amounts to its own interface on the LAN, similar to a separate physical NIC
    driver: macvlan  
    driver_opts:  
        parent: 'br0'  
    #    mode: 'l2'  
    name: 'doc0'  
    ipam:  
        config:  
          - subnet: "10.0.1.0/24"  
            gateway: "10.0.1.1"  

# *** SERVICES ***
 
services:  
    jellyfin:  
        container_name: jellyfin  
        image: ghcr.io/jellyfin/jellyfin:latest  
        environment:  
          - TZ=America/Los_Angeles  
          - JELLYFIN_PublishedServerUrl=https://jellyfin.guzzlezone.local/  
        ports:  
          - '10.0.1.69:8096:8096/tcp'  
          - '10.0.1.69:7359:7359/udp'  
          - '10.0.1.69:1900:1900/udp'  
        devices:  
          - '/dev/dri/renderD128:/dev/dri/renderD128'  
        #  - '/dev/dri/card0:/dev/dri/card0'  
        volumes:  
          - '/mnt/ssd/jellyfin/config:/config:rw,noexec,nosuid,nodev,Z'  
          - '/mnt/cache/jellyfin/log:/config/log:rw,noexec,nosuid,nodev,Z'  
          - '/mnt/cache/jellyfin/cache:/cache:rw,noexec,nosuid,nodev,Z'  
          - '/mnt/cache/jellyfin/config-cache:/config/cache:rw,noexec,nosuid,nodev,Z'  
          # Media links below  
          - '/mnt/spinner/movies:/data/movies:ro,noexec,nosuid,nodev,z'  
          - '/mnt/spinner/shows:/data/shows:ro,noexec,nosuid,nodev,z'  
          - '/mnt/spinner/music:/data/music:ro,noexec,nosuid,nodev,z'  
        restart: unless-stopped  
        # Security stuff  
        read_only: true  
        tmpfs:  
          - /tmp:uid=2200,gid=2200,rw,noexec,nosuid,nodev  
        # mac address is 02:42 followed by 10.0.1.69 in hex: each number between the .s maps to a hex byte in the mac address
        # it's how docker assigns addresses, so there will never be a mac address collision
        mac_address: 02:42:0A:00:01:45  
        networks:  
            jellyfin-nw:  
                # Docker is pretty jacked up and can't get an IP via DHCP so manually specify it  
                ipv4_address: 10.0.1.69  
        user: 2200:2200  
        # gpu acceleration needs the render group; find the group number for your server with `getent group render | cut -d: -f3`
        group_add:  
          - "109"  
        security_opt:  
          - no-new-privileges:true  
        cap_drop:  
          - ALL  

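A quick way to double-check that MAC derivation (02:42 prefix, then each octet of the IP converted to hex):

```sh
printf '02:42:%02X:%02X:%02X:%02X\n' 10 0 1 69
# -> 02:42:0A:00:01:45
```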
Lastly, I thought I should add the external stuff needed to make the hardware acceleration work and to create the user:

# For jellyfin low power (LP) intel QSV stuff  
# if trouble see https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux  
sudo apt install -y firmware-linux-nonfree #intel-opencl-icd  
sudo mkdir -p /etc/modprobe.d  
sudo sh -c "echo 'options i915 enable_guc=2' >> /etc/modprobe.d/i915.conf"  
sudo update-initramfs -u  
sudo update-grub  

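# Create a matching unprivileged host user, so UID 2200 (what user: in the compose file runs as) exists on the server  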
APP_NAME="jellyfin"  
APP_PID=2200  
sudo useradd -u $APP_PID $APP_NAME  

The jellyfin user isn't added to the render group; instead, the group is added to the container via group_add: in the docker-compose.yml file.

[–] 5ymm3trY@discuss.tchncs.de 1 points 16 hours ago (1 children)

I have set all this up on my Asustor NAS, so things like apt install are not applicable in my case. Nevertheless, thank you very much for your time and expertise with regard to users and volumes. What is your strategy for networks in general? Do you set up a separate network for each and every container unless the services have to communicate with each other? I am not sure I understand your network setup in the Jellyfin container.

> In the ports: part that 10.0.1.69 would be the IP of your server (or in this case, what I declare the jellyfin container's IP to be) - it makes it so the container can only bind to the IP you provide, otherwise it can bind to anything the server has access to (as far as I understand).

With the macvlan driver the virtual network interface of your container behaves like its own physical network interface which you can assign a separate IP to, right? What advantage does this have exactly, or what potential problems does it solve?

I wanted Jellyfin on its own IP so I could think about implementing VLANs. I haven't yet, and I'm not sure what I did is even needed. But I did do it! You very likely don't need to do it.

There are likely guides on enabling Jellyfin hardware acceleration on your Asustor NAS - so just follow them!

I do try to set up separate networks for each service.

On one server I have a monolithic docker compose file with a ton of networks defined to keep services from talking to the internet or each other where it's not useful (the pdf converter is prevented from talking to the internet or the Authentik database, for example). That approach makes the most sense there and gives the most control.

On this server I have each service split up with its own docker compose file. The network bit makes more sense for services that have an external database and other pieces: it lets me set things up so only the service can talk to its database, and the database cannot reach the internet at large (by adding internal: true to the networks: section). In this case, yes, the pdf converter can talk to other services and I'd need to block its internet access at the router somehow.

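Roughly, that database-isolation bit looks like this (service names and images are just illustrative):

```yaml
services:
  app:
    image: example/app:latest
    networks:
      - proxy-nw     # exposed side: the reverse proxy / LAN can reach it
      - app-db-nw    # private side: only for talking to its database
  app-db:
    image: postgres:16
    networks:
      - app-db-nw    # only on the internal network

networks:
  proxy-nw:
  app-db-nw:
    internal: true   # no route out: the database can't reach the internet
```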
The monolithic method gets more annoying to deal with as the number of services grows, by virtue of a gigantic docker compose file and the up/down time (especially for services that don't acknowledge shutdown commands). But it lets me do fine-grained networking within the docker compose file.

With each service on its own, it exposes a port and other things talk to it from there. So instead of an internal docker network letting Authentik talk to a service, Authentik just looks up the service's address. I don't notice any perceptible difference in lag.
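
Concretely, each split-out stack just publishes a port and everything else reaches it at the server's address, something like this (names made up):

```yaml
services:
  some-service:
    image: example/service:latest
    ports:
      # Published on the host; another stack (e.g. Authentik) would then
      # point at http://<server-ip>:8080 instead of sharing a docker network.
      - '8080:8080'
```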