Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Are there different rules for a VM with that command? I made a 2nd NAS share point as NFS (SMB has been failing, I'm desperate, and I don't know the practical differences between the protocols), and Proxmox accepted the NFS, but the share is saying "unknown." Regardless, I wanted to see if I could make it work anyway so I tried 'pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker'
102 being a VM I set up for docker functions, specifically transferring docker data currently in use to avoid a lapse in service or user data.
Am I doing this in a stupid way? It kinda feels like it
For the record, I prefer NFS
And now I think we may have the answer....
OK, so that command is for LXCs, not for VMs. If you're doing a full VM, we'd mount NFS directly inside the VM.
Did you make an LXC or a VM for 102?
If it's an LXC, we can work out the command and figure out what's going on.
If it's a VM, we'll get it mounted with NFS utils, but how is going to depend on what distribution you've got running on there (different package names and package managers).
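The distinction can be sketched as a quick branch. This is a hedged example: the container ID and paths come from earlier in the thread, and the NAS IP is a placeholder, not anything confirmed here.

```shell
# Hedged sketch: how mounting differs between an LXC and a full VM.
# IDs, paths, and the NAS IP below are placeholders from the thread.
LXC_CMD='pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker'   # host-side bind mount (LXC only)
VM_CMD='mount -t nfs 192.168.1.100:/volume2/docker /mnt/docker'      # run inside the VM's own guest OS
# pct only exists on a Proxmox host, which is a handy way to tell where you are:
if command -v pct >/dev/null 2>&1; then
  echo "Proxmox host detected; for a container you'd run: $LXC_CMD"
else
  echo "No pct here; for a VM you'd mount inside the guest: $VM_CMD"
fi
```

The point is that `pct set ... -mpN` is the host bind-mounting a directory into a container, while a VM has its own kernel and must mount the NFS export itself.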
Ah, that distinction makes sense...I should've thought of that
So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven't really connected anything, and I don't care how it's done as long as performance is fine)
Ok, we can remove it as an SMB mount, but fair warning: it takes a few bits of CLI to do this thoroughly.
`systemctl list-units "*.mount"` - that said, I like to be sure, so let's do a few more things.
`umount -R /mnt/pve/thatshare` - totally fine if this throws an error.
`cat /proc/mounts` - a whole bunch of stuff will pop up. Do you see your network share listed there? If so, unmount it as above. Note that /proc/mounts is a read-only view generated by the kernel, so you can't edit it directly; if a stale entry keeps coming back, check /etc/fstab for a lingering line instead: `nano /etc/fstab`, remove the line if it's there, then `ctrl+x` then `y` to save.
Ok, you should be all clear. Let's go ahead and reboot one more time to clear out anything, if you had to make any further changes. If not, let's re-add.
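If you'd rather script the check than eyeball the whole mount table, here's a minimal sketch (the share name is a placeholder for whatever yours is called):

```shell
# Hedged sanity check: confirm the old share is no longer mounted.
# /proc/mounts is generated by the kernel, so we only ever read it.
if grep -q 'thatshare' /proc/mounts 2>/dev/null; then
  echo "still mounted - run umount again"
else
  echo "share is gone"
fi
```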
Go ahead and add the NAS using NFS in the storage section like you did previously. You can mount to that same directory you were using before. Once it's there, go back into the Shell, and let's do this again:
`ls -la /mnt/pve/thenameofyourmount/` - is your data showing up? If so, great! If not, let's find out what's going on.
Now let's add back your container mount. You'll need to add that mount point back in again with:
`pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media` (however you had it mounted before in that second step). Now start the container, and go to the console for the container.
`ls -la /whereveryoumountedit` - if it looks good, your JF container is all set and now working with NFS! Go back to the options section and enable "Start at Boot" if you'd like. Onto the VM: what distribution is installed there? Debian, Fedora, etc.?
Well, now the jelly lxc is failing to boot "run_buffer: 571 Script exited with status 2 Lxc_init: 845 failed to run lxc.hook.pre-start for container "101""
But the mount seems stable now. And the VM is Debian 12
That usually means something has changed with the storage; I'd bet there is a lingering reference in the .conf to the old mount.
The easiest fix? Just delete the container and start clean. That's what's nice about containers, by the way! The harder route would be mounting the filesystem of the container and taking a look at some logs. Which route do you want to go?
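For reference, the "lingering reference" lives in the container's config file. This is a hypothetical example, not your actual config: on a real Proxmox host the file would be /etc/pve/lxc/101.conf, and a `mpN:` line pointing at a share that no longer exists is exactly the kind of thing that breaks the pre-start hook.

```shell
# Hypothetical sample of an LXC config (real one: /etc/pve/lxc/101.conf).
cat > /tmp/101.conf <<'EOF'
arch: amd64
hostname: jellyfin
mp0: /mnt/pve/NAS/media,mp=/media
EOF
# List every mount-point line so you can spot a stale one:
grep '^mp' /tmp/101.conf
```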
For the VM, it's really easy. Go to the VM and open up the console. If you're logging in as root, run the commands as-is; if you're logging in as a user, we'll need to add a sudo in there (and maybe install some packages / add the user to the sudoers group).
`apt update && apt upgrade`
`apt install nfs-common`
`mkdir /mnt/NameYourMount`
`mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount`
`ls -la /mnt/NameYourMount` - if you have an issue here, pause and come back and we'll see what's going on.
`nano /etc/fstab` and add the line:
`192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0`
`ctrl+x` then `y` to save, then `ls -la /mnt/NameYourMount` to confirm you're all set.
I solved the LXC boot error; there was a typo in the mount (my keyboard sometimes double presses letters, which makes command lines rough).
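On the fstab entry above, a quick way to sanity-check the format before rebooting (the IP and paths are the thread's placeholders): every fstab line has exactly six whitespace-separated fields - device, mountpoint, fstype, options, dump, pass.

```shell
# Hedged format check on the example fstab entry from the thread.
LINE='192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0'
echo "$LINE" | awk '{print NF}'   # a well-formed entry prints 6
```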
So just to recap where I am: main NAS data share is looking good, jelly's LXC seems fine (minus transcoding, "fatal player error"), my "docker" VM seems good as well. Truly, you're saving the day here, and I can't thank you enough.
What I can't make sense of is that I made 2 NAS shares: "A" (main, which has been fixed) and "B" (currently used docker configs). "B" is correctly connected to the docker VM now, but "B" is refusing to connect to the Proxmox host, which I think I need in order to move Jellyfin user data and config. Before I go down the process of trying to force the NFS or SMB connection, is there any easier way?
Great!
Transcoding we should be able to sort out pretty easily. How did you make the lxc? Was it manual, did you use one of the proxmox community scripts, etc?
For transferring all your JF goodies over, there are a few ways you can do it.
If both are on the NAS - I believe you said you have a Synology - you can go to the browser at http://nasip:5000/ and just copy around what you want, if it's stored on the NAS as a mount and not inside the container. If it's inside the container only, it's going to be a bit trickier: mounting the host as a volume on the container, copying to that mount, then moving things around. Even Jellyfin says it's complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that could be rough.
The other option is to bring your docker container over to the new VM, but then you've got a new complication in needing to pass through your GPU entirely, rather than giving the LXC access to the host's resources, which is much simpler IMO.
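If you end up doing the copy from a shell instead of the Synology browser UI, the core step is just an archive-preserving copy. This is a hedged sketch with temporary stand-in paths, not your real share paths; on the real system you'd point it at the two mounted shares, and Jellyfin should be stopped first so the config files aren't changing mid-copy.

```shell
# Hedged sketch of the copy step with plain cp (rsync -a behaves the same
# if installed). SRC/DST are stand-ins for the two mounted shares.
SRC=/tmp/jf-old-config
DST=/tmp/jf-new-config
mkdir -p "$SRC" "$DST"
echo '<sample/>' > "$SRC/system.xml"   # stand-in for a real config file
cp -a "$SRC/." "$DST/"                 # -a preserves permissions and timestamps
ls "$DST"
```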
I used the community script's lxc for jelly. With that said, the docker compose I've been using is great, and I wouldn't mind just transferring that over 1:1 either...whichever has the best transcoding and streaming performance. Either way, I'm unfortunately going to need a bit more hand-holding
LXC is going to be better, IMO. And we can definitely get hardware acceleration going.
So first, let's run this from the console of the LXC: `ls -la /dev/dri`
Is there something like card0 and renderD128 listed?