https://community-scripts.github.io/ProxmoxVE/scripts?id=jellyfin
This is the way I'd imagine. I used this for Plex and this should make iGPU a lot easier.
I run jellyfin in an LXC, so first get jellyfin installed. Personally I would separate jellyfin from your other docker containers; I have a separate VM for my podman containers. I need jellyfin up 100% of the time, so that's why it's separate.
Work on the first problem, getting jellyfin installed. I wouldn't use docker, just follow the steps for installing it on Ubuntu directly.
Second, to get the unprivileged lxc to work with your nas share, follow this forum post (rough sketch at the end of this comment): https://forum.proxmox.com/threads/tutorial-unprivileged-lxcs-mount-cifs-shares.101795/
Thirdly, read through the jellyfin docs for hardware acceleration. It's always best practice to not just run scripts blindly on your machine.
Lastly, take a break if you can't figure it out. When I'm stuck I always need to take a day and just think stuff over, and I usually figure out why it's not working by just doing that.
If you need any help let me know!
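Edit: since the CIFS bit trips people up, here's the rough shape of what that post walks you through. Treat this as a sketch: the paths, IP, credentials file, and the uid/gid numbers are placeholders from memory, so check the post for the exact mapping. The idea is to mount the share on the Proxmox host and then bind-mount it into the unprivileged LXC:
# on the host: somewhere permanent for the share
mkdir -p /mnt/lxc_shares/nas_media
# /etc/fstab entry on the host, roughly:
//192.168.0.4/media /mnt/lxc_shares/nas_media cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,credentials=/root/.smbcreds 0 0
mount /mnt/lxc_shares/nas_media
# then hand that directory to the container as a bind mount (100 is whatever your container's ID is):
pct set 100 -mp0 /mnt/lxc_shares/nas_media,mp=/mnt/media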
So I got Jellyfin running last night as an unprivileged LXC using a community script. It's accessible via web browser, and I could connect my NAS. Now I'm having NAS-server connection issues and "fatal player" issues on certain items. I appreciate the support, I'm going to need a lot of it haha
curl doesn't work on my machine, most install scripts don't work, nano edits crash, and mounts are inconsistent.
If your system is that fucked, I would wipe it and start over. And don't run any scripts or extra setup guides, they're not necessary.
Personally I run all my containers in a Debian VM because I haven't bothered migrating them to anything proxmox native. But gpu accel should work fine if you follow the directions from jellyfin: https://jellyfin.org/docs/general/post-install/transcoding/hardware-acceleration/
Just make sure you follow the part about doing it in docker.
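For an Intel iGPU the gist of that page is just passing /dev/dri into the container and then enabling QSV/VAAPI in the Jellyfin dashboard. Something like this, with the host paths as placeholders:
docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -p 8096:8096 \
  -v /path/to/config:/config \
  -v /path/to/cache:/cache \
  -v /path/to/media:/media \
  jellyfin/jellyfin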
That's where I'm at, dude. I bought into the idea of Proxmox because I was led to believe that it makes docker deployment easier...but I'm thinking it would actually work if I just used a VM
Like docker directly on proxmox? Docker on proxmox isn't going to be any better than docker on anything else.
VMs and LXC are where proxmox has its best integration.
Docker in a VM on proxmox, while maybe not the recommended way of doing things, works quite well though.
I don't know if containers on proxmox is easy, but containers in a Debian VM is trivial.
It may be better now but I’ve always had problems with Docker in LXC containers; I think this has to do with my storage backend (Ceph) and the fact that LXC is a pain to use with network mounts (NFS or SMB); I’ve had to use bind mounts and run privileged LXCs for anything I needed external storage for.
Proxmox is about managing VMs and LXCs. I’d just create a VM and do all your docker in there. Perhaps make a second VM so you can shuffle containers around while doing upgrades.
If you plan to have your whole setup be exclusively Docker and you have no need for VMs or LXCs, then Proxmox might be a bunch of overhead you don’t need.
I use the LXCs for simple stuff that does a bare-metal type install within them, and I use the VMs for critical services like OPNSense firewall/routers. I also have a Proxmox cluster across three machines so I can live-migrate VMs during upgrades and prevent almost any downtime. For that use case it’s rock solid. It’s a great product and it offers a lot.
If you just need a single machine and only Docker, it’s probably overkill.
Well, the plan was to use a couple VMs for niche things that I'd love to have and many services. But if I can't get Proxmox working as advertised, I'll throw most of that out of the window
The easiest solution if you want to have managed VMs IMHO is to just make a large VM for all your docker stuff on Proxmox and then you get the best of both worlds.
Abstracting docker into its own VM isn’t going to add THAT much overhead, and the convenience of Proxmox for management of the other VMs will make that situation much easier.
LXC for docker can be made to work, but it’s fiddly and it probably won’t gain you much in the long run.
Now, all these other issues you seem to be having with the Proxmox host itself; are you sure you have networking set up correctly, etc? curl should be working no problem; I’m not sure what’s going on there.
That's good to know at least. I was getting anxious last night thinking that I signed up for something I'd never get running. So curl is working now...not sure why it wasn't earlier, but I've used it since and it is confirmed working. And networking (as in internet connectivity) is working, but now I'm struggling with the NAS mount: it was working perfectly at first, but now it's randomly shifting between "available" and "unknown".
How should Jellyfin be set up, lxc or vm?
Either way. I prefer lxc, personally, but to each their own. lxc I think is drastically easier, in part because you don't need to pass through the whole GPU....
Is there a way to enable iGPU to pass to an lxc or VM without editing a .conf in nano?
You don't need to pass through the whole iGPU; you just need to give the LXC access to the render and video groups. But yes, editing the conf is easiest. I originally wrote out a bunch here, then remembered there is a great video.
https://www.youtube.com/watch?v=0ZDr5h52OOE
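In case the video ever disappears, this is roughly what ends up in /etc/pve/lxc/<vmid>.conf for an Intel iGPU (from memory; the device numbers can differ, so check ls -la /dev/dri on the host, and I believe newer Proxmox versions can also do this from the container's Resources tab as a device passthrough):
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
Then inside the container, add the jellyfin user to the render and video groups.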
My Synology NAS is mounted to the host, but making mount points to the lxc doesn’t actually connect data
Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
This is done from the host, not inside the LXC.
Does your host see the mounted NAS? After you added the mount point, did you fully stop the container and start it up again?
Edit: You can just install curl/wget/etc BTW, it's just Debian in there.
apt install curl
Edit 2: I must have glossed over the mount part.
Don't add your network storage manually; do it through proxmox as storage by going to Datacenter > Storage > Add and entering the details there. This will make things a lot easier.
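Edit 3: For reference, the CLI equivalent of that GUI step is pvesm. Roughly (storage name, IP, and share/export here are placeholders):
pvesm add cifs NAS --server 192.168.0.4 --share media --username youruser --password 'yourpass'
# or, for NFS:
pvesm add nfs NAS2 --server 192.168.0.4 --export /volume2/docker
pvesm status
Either way proxmox mounts it under /mnt/pve/<storagename>.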
Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
I'd love to check that, but you lost me...
So the NAS was added like you suggested; I can see the NAS's storage listed next to local data. How does one command an lxc or vm to use it though?
This line right here shares it with the LXC, I'll break it down for you:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
pct is the proxmox container command; you're telling it to set a mount point (mp0, mp1, mp2, etc). The path on the host is /mnt/pve/yourmountname. The path inside the container is on the right, mp=/your/path/. So inside the container, if you ran ls in the directory /your/path/, it would list the files in /mnt/pve/yourmountname.
The yourmountname part is the name of the storage you added. You can go to the shell at the host level in the GUI, go to /mnt/pve/, then enter ls and you will see the name of your mount.
So much like I was mentioning with the GPU, what you're doing here is sharing resources with the container, rather than needing to mount the share again inside your container. Which you could do, but I wouldn't recommend it.
Any other questions I'll be happy to help as best as I can.
Edit: forgot to mention, if you go to the container and go to the resources part, you'll see "Mount Point 0" and the mount point you made listed there.
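If you'd rather check from the CLI, pct config with your container's ID prints the same thing, e.g.:
pct config 100
and the mp0 line should show up in that output.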
Are there different rules for a VM with that command? I made a 2nd NAS share point as NFS (SMB has been failing, I'm desperate, and I don't know the practical differences between the protocols), and Proxmox accepted the NFS, but the share is saying "unknown." Regardless, I wanted to see if I could make it work anyway so I tried 'pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker'
102 being a VM I set up for docker functions, specifically transferring docker data currently in use to avoid a lapse in service or user data.
Am I doing this in a stupid way? It kinda feels like it
For the record, I prefer NFS
And now I think we may have the answer....
OK so that command is for LXCs, not for VMs. If you're doing a full VM, we'd mount NFS directly inside the VM.
Did you make an LXC or a VM for 102?
If it's an lxc, we can work out the command and figure out what's going on.
If it's a VM, we'll get it mounted with NFS utils, but how will depend on what distribution you've got running on there (different package names and package managers).
Ah, that distinction makes sense...I should've thought of that
So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven't really connected anything, and I don't care how it's done as long as performance is fine)
Ok, we can remove it as an SMB mount, but fair warning: it takes a few bits of CLI to do this thoroughly.
First, remove the SMB entry in the storage section (Datacenter > Storage, select it, Remove). That said - I like to be sure, so let's do a few more things to check nothing is lingering:
systemctl list-units "*.mount" - see if the old share still shows up as a mount unit.
umount -R /mnt/pve/thatshare - totally fine if this throws an error.
cat /proc/mounts - a whooole bunch of stuff will pop up. Do you see your network share listed there? If you ever added it to /etc/fstab by hand, let's go ahead and delete that line too: nano /etc/fstab, find the line if it's still there, and remove it. ctrl+x then y to save.
Ok, you should be all clear. Let's go ahead and reboot one more time just to clear out anything if you had to make any further changes. If not, let's re-add.
Go ahead and add in the NAS using NFS in the storage section like you did previously. You can mount to that same directory you were using before. Once it's there, go back into the Shell, and let's do this again: ls -la /mnt/pve/thenameofyourmount/
Is your data showing up? If so, great! If not, let's find out what's going on.
Now let's add your container mount back. You'll need to add that mount point in again with: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media (however you had it mounted before in that second step).
Now start the container, and go to the console for the container. ls -la /whereveryoumountedit - if it looks good, your JF container is all set and now working with NFS! Go back to the options section, and enable "Start at Boot" if you'd like it to.
Onto the VM, what distribution is installed there? Debian, fedora, etc?
Well, now the jelly lxc is failing to boot "run_buffer: 571 Script exited with status 2 Lxc_init: 845 failed to run lxc.hook.pre-start for container "101""
But the mount seems stable now. And the VM is Debian 12
That usually means something has changed with the storage, I'd bet there is a lingering reference in the .conf to the old mount.
The easiest? Just delete the container and start clean. That's what's nice about containers, by the way! The harder route would be mounting the filesystem of the container and taking a look at some logs. Which route do you want to go?
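If you want to try the harder route first, this is roughly where I'd start poking (container ID is yours, 101):
pct config 101                # look for a stale mp0/mp1 line pointing at the old SMB path
nano /etc/pve/lxc/101.conf    # fix or remove that line if it's there
pct start 101
# if it still refuses to start, this writes a verbose log of what the pre-start hook choked on:
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log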
For the VM, it's really easy. Go to the VM and open up the console. If you're logging in as root, run the commands as is; if you're logging in as a user, we'll need to add a sudo in there (and maybe install some packages / add the user to the sudoers group).
apt update && apt upgradeapt install nfs-commonmkdir /mnt/NameYourMountsudo mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMountls -la /mnt/NameYourMount. If you have an issue here, pause and come back and we'll see whats going on.nano /etc/fstab192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0ctrl+x then yls -la /mnt/NameYourMount to confirm you're all setI solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double presses letters, makes command lines rough).
So just to recap where I am: main NAS data share is looking good, jelly's LXC seems fine (minus transcoding, "fatal player error"), my "docker" VM seems good as well. Truly, you're saving the day here, and I can't thank you enough.
What I can't make sense of is that I made 2 NAS shares: "A" (main, which has been fixed) and "B" (currently used docker configs). "B" is correctly connected to the docker VM now, but "B" is refusing to connect to the Proxmox host which I think I need to move Jellyfin user data and config. Before I go down the process of trying to force the NFS or SMB connection, is there any easier way?
Great!
Transcoding we should be able to sort out pretty easily. How did you make the lxc? Was it manual, did you use one of the proxmox community scripts, etc?
For transferring all your JF goodies over, there are a few ways you can do it.
If both are on the NAS, and I believe you said you have a synology, you can go to the browser at http://nasip:5000/ and just copy around what you want, provided it's stored on the NAS as a mount and not inside the container. If it's inside the container only, it's going to be a bit trickier, like mounting the host as a volume on the container, copying to that mount, then moving things around. But even Jellyfin says it's complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that could be rough.
The other option is to bring your docker container over to the new VM, but then you've got a new complication in needing to pass through your GPU entirely rather than giving the lxc access to the host's resources, which is much simpler IMO.
I used the community script's lxc for jelly. With that said, the docker compose I've been using is great, and I wouldn't mind just transferring that over 1:1 either...whichever has the best transcoding and streaming performance. Either way, I'm unfortunately going to need a bit more hand-holding
Friend, thank you. My users and I greatly appreciate it. You just taught me how to solve one of the biggest problems I've been having. Just tested a movie through Jellyfin after using that cli.
Got any pointers for migrating config files from my NAS's docker containers to Proxmox's LXCs/VMs?
No worries!
So if you've got docker containers going already, you don't need them to be LXCs.
So why not keep them docker?
Now there are a couple of approaches here. A VM will have a bit higher overhead, but offers much better isolation than lxc. Conversely, lxc is lightweight but with less host isolation.
If we're talking the *arr stack? Meh, make it an lxc if you want. Hell, make it an lxc with dockge installed, so you can easily tweak your compose files from the web, convert a docker run to compose, etc.
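If you go that route, dockge itself is just one container. From memory the run looks something like this (port and stacks path are the defaults I remember; double-check against the dockge docs before using):
docker run -d --name dockge --restart unless-stopped \
  -p 5001:5001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/dockge/data:/app/data \
  -v /opt/stacks:/opt/stacks \
  -e DOCKGE_STACKS_DIR=/opt/stacks \
  louislam/dockge:1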
If you have those configs (and their accompanying data) stored on the NAS itself - you don't have to move them. Let's look at that command again...
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
So let's say your container data is stored at /opt/dockerstuff/ on your NAS, with subdirectories of dockerapp1 and dockerapp2. Let's say your new lxc is number 101. You have two options:
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff,mp=/opt/dockerstuff
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff/dockerapp1,mp=/opt/dockerstuff/dockerapp1
pct set 101 -mp1 /mnt/pve/NAS/opt/dockerstuff/dockerapp2,mp=/opt/dockerstuff/dockerapp2
Either will get you going
I think I'm getting a grip on some of the basics here. I was trying to make a new mount for my NAS's docker data...separate drive and data pool. In the process of repeated attempts to get the SMB mount accepted, I noticed my NAS's storage suddenly isn't working as intended.
'cat /etc/pve/storage.cfg' still shows the NAS, but 'pvesm status' says "unable to activate storage...does not exist or is unreachable"
I thought it was related to too much resource usage, but that's not the case
What do you get putting in:
showmount <ip address of NAS>
"Hosts on 192.168.0.4:" As a novice, I get the feeling that means it's not working
If you've got nothing under it, yeah.
OK, what I'd probably do is shut down proxmox, reboot your nas, wait for the nas to be fully up and running (check if you can access it from your regular computer over the LAN), then boot up the proxmox server.
Then run that command again, you should see a result.
It's possible you've got some conflicting stuff going on if you did manual edits for the storage, which may need to be cleaned up.
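If it still comes up empty after the reboot, a couple of other quick checks from the host shell (the IP is the one from your output, swap in yours):
showmount -e 192.168.0.4   # lists what the NAS is actually exporting over NFS
pvesm status               # shows whether proxmox thinks each storage is active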
I restarted everything like you suggested, same 'showmount' result unfortunately...I double checked the SMB mount in the datacenter, and the settings look correct to me. The NAS's storage icon shows that it's connected, but it seems like that doesn't actually mean it's *firmly* connected
Ok, let's take a step back then and check things this way.
In the shell (so Datacenter, the host, then Shell), if you enter ls -la /mnt/pve/thenameofyourmount/, do you get an accurate and current listing of the contents of your nas?
Yes! I do
There is a helper script for the jellyfin LXC. From memory I can't help much, but I suggest searching for that. I think the default specs for disk space and RAM were weak, but setup was easy enough. After the initial helper script, you will need to learn how to mount the NAS into the LXC as well.
I want to say iGPU makes things easier, not because of experience but only because I tried passing through an Nvidia card and the instructions all insinuated this was more difficult than any other option
If you're going LXC, it's not going to matter much if you just map GIDs and provide the LXC access to the host.
Side bonus: with multiple LXCs, they can all share that GPU. This is what I do; I have a couple of JF instances among other containers that use the GPUs.
Edited to add: Well, nvidia itself can be a pain. But that'd be because nvidia.
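For anyone curious why: an nvidia card in an LXC usually means lines along these lines in /etc/pve/lxc/<vmid>.conf (this is from memory, the device major numbers vary, so check ls -la /dev/nvidia* on the host), plus installing the same driver version inside the container without its kernel module:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file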