this post was submitted on 05 Nov 2025
45 points (97.9% liked)

Selfhosted


Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I'm trying to migrate services from my NAS (currently running in Docker) to this machine.

How should Jellyfin be set up, LXC or VM? I don't have a preference, but I do plan on running several Docker containers (assuming I can get this working within 28 days), in case that makes a difference. I tried WunderTech's setup guide, which used one LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn't working for me: curl doesn't work on my machine, most install scripts fail, nano edits crash, and mounts are inconsistent.
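(Aside: a quick way to narrow down whether the LXC's networking or DNS is what's breaking curl and the install scripts - these are generic commands, not from the WunderTech guide, and the hostname is just an example:)

    ping -c 3 1.1.1.1            # raw connectivity from inside the LXC
    ping -c 3 deb.debian.org     # DNS resolution
    cat /etc/resolv.conf         # which nameserver the container actually got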

My Synology NAS is mounted to the host, but adding mount points to the LXC doesn't actually connect the data. For example, if my NAS's media is in /data/media/movies or /data/media/shows and the host's SMB mount is /data/, choosing the LXC mount point /data/media should work, right?
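(For reference, the usual way to bind-mount a host path into a container is either one pct command on the Proxmox host or one line in the container's config - CT ID 101 here is just a placeholder:)

    pct set 101 -mp0 /data/media,mp=/data/media

    # equivalent line in /etc/pve/lxc/101.conf
    mp0: /data/media,mp=/data/media

(If the mount shows up but looks empty or everything is owned by "nobody", unprivileged-container UID mapping on the SMB mount is a common culprit.)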

Is there a way to pass the iGPU through to an LXC or VM without editing a .conf in nano? When I tried to make the suggested edits, the LXC froze for over 30 minutes and seemingly nothing happened, as the edits don't persist.
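(For reference, the edits such guides usually ask for amount to two lines in the container's config on the host, again with CT ID 101 as a placeholder - they can also be appended from the host shell with echo/tee instead of nano:)

    # /etc/pve/lxc/101.conf
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

(Recent Proxmox releases also expose a "Device Passthrough" entry under the container's Resources tab in the web UI, which avoids nano entirely - worth checking whether your version has it.)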

Any suggestions for resource allocation? I've been looking for guides or a formula for how much to give an LXC or VM, to no avail.
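(There isn't really a formula; allocations can be changed at any time without reinstalling, so a common approach is to start small and bump things up if a service struggles. As an illustration, with placeholder IDs:)

    pct set 101 -cores 4 -memory 4096    # container: 4 cores, 4 GiB RAM
    qm set 100 -cores 4 -memory 8192     # VM: 4 cores, 8 GiB RAM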

If you suggest command lines, please keep them simple as I have to manually type them in.

Here's the hardware: Intel i5-13500, 64GB Crucial DDR5-4800, ASRock B760M Pro RS, 1TB WD SN850X NVMe.

[–] LazerDickMcCheese@sh.itjust.works 1 points 3 hours ago (1 children)

So should I be disabling some hardware decoding options then?

[–] curbstickle@anarchist.nexus 1 points 3 hours ago (1 children)

Might be a better question for someone who knows JF's ffmpeg configs better, but I think the HEVC option up top should be checked and the range-extended HEVC (RExt) option at the bottom should be unchecked. I think you should have AV1 support too.

Worst case, start with H.264 and work down the list.

[–] LazerDickMcCheese@sh.itjust.works 1 points 2 hours ago (1 children)

Great point actually, time for c/jellyfin I think. Would you mind helping me with the transfer of config and user data? Is "NFS mount NAS docker data to host" > "pass NFS to jelly LXC" > "copy data from NAS folder to LXC folder" the right idea?
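(A minimal sketch of that flow, assuming an NFS export from the Synology and placeholder CT ID, IP, and paths - the destination depends on how Jellyfin is installed inside the LXC, so check where its config actually lives first:)

    # on the Proxmox host
    mkdir -p /mnt/nas-docker
    mount -t nfs 192.168.1.10:/volume1/docker /mnt/nas-docker
    pct set 101 -mp1 /mnt/nas-docker,mp=/mnt/nas-docker

    # inside the Jellyfin LXC, with the Jellyfin service stopped
    cp -a /mnt/nas-docker/jellyfin/config/. /var/lib/jellyfin/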

[–] curbstickle@anarchist.nexus 2 points 2 hours ago (1 children)

That may also be a good one for c/jellyfin, but what I'd suggest is seeing if you can leverage a backup tool: export and download, then import, all from the web UI. I know there's a built-in backup function, and I recall a few plugins that handle backups as well.

Seems to me that might be the most straightforward method - but again, probably better asked in a more Jellyfin-focused comm. I've moved that LXC between a bunch of machines at this point, so snapshots and backups via Proxmox Backup Server are all I need.
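(On the Proxmox side, backing up and restoring a container is roughly a one-liner each way - "pbs" is a placeholder storage name and 101/105 are placeholder CT IDs:)

    vzdump 101 --storage pbs --mode snapshot               # back up CT 101 to the PBS storage
    pct restore 105 <backup-volume> --storage local-lvm    # restore it as CT 105 on another node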

[–] LazerDickMcCheese@sh.itjust.works 2 points 2 hours ago (1 children)

Yeah, it seems like transplanting LXCs, VMs, and Docker is fairly pain-free... where I really shot myself in the foot was starting on an underpowered NAS, and network transfers are clearly not my friend.

I'm not familiar with the backup stuff, but I remember hearing about it being added recently. I'll look into it, thanks for the recommendation.

You've taught me a lot in just a couple of days. The overwhelming/anxiety-inducing part of dealing with Proxmox for me is still passing data through from outside devices. VMs aren't bad at all, but everything else seems like a roll of the dice as to whether the machine will allow the connection or not.

[–] curbstickle@anarchist.nexus 2 points 1 hour ago (1 children)

It definitely is, especially if you get a cluster going. FWIW, my media is all on a Synology NAS (well, technically two, but one is a backup) that I got secondhand through work, so your setup isn't the wrong approach (imo) by any stretch.

What it comes down to with the connection is how you look at it - with a VM, it's a full-fledged system, all on its own, that just happens to live inside another computer. A container, though, is an extension of the host, so think of it less like a VM and more like resource sharing, and you'll start to see where the different approaches have different advantages.

For example, I have transcode nodes running on my Proxmox cluster. If I ran JF as a VM, I'd need another GPU to do that - but since both JF and my transcode node are containers, they get to share that resource happily. What's the right answer is always going to depend on individual needs, though.
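(As an illustration of that sharing - not necessarily the exact setup above - the same passthrough lines can simply appear in more than one container's config, and both containers see the iGPU:)

    # /etc/pve/lxc/101.conf (Jellyfin) and /etc/pve/lxc/102.conf (transcode node), placeholder IDs
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir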

And glad I could be of some help!

[–] LazerDickMcCheese@sh.itjust.works 2 points 57 minutes ago

In case you want to keep following, I did make that post in c/jellyfin