Pete90

joined 2 years ago
[–] Pete90@feddit.de 4 points 2 years ago* (last edited 2 years ago)

Ah, I did not know that. So I guess I will create several VLANs with different subnets. This works as I intended it: traffic coming from one VM has to go through OPNsense.

Now I just have to figure out if I'm being too paranoid. Should I simply group several devices together (e.g. 10 = servers, 20 = PCs, 30 = IoT; this is what I mostly see being used), or should I sacrifice usability for more fine-grained segregation (each server gets its own VLAN)? Seems like overkill, now that I think about it.
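For anyone curious, a minimal sketch of what the grouped layout could look like on the Linux side (bridge name, VLAN IDs and subnets here are examples, not my actual config):

```
# Sketch only: tagged sub-interfaces for the grouped layout (names/IDs are examples)
ip link add link vmbr0 name vmbr0.10 type vlan id 10   # 10 = servers
ip link add link vmbr0 name vmbr0.20 type vlan id 20   # 20 = PCs
ip link add link vmbr0 name vmbr0.30 type vlan id 30   # 30 = IoT
ip link set vmbr0.10 up
# each VLAN then gets its own subnet and firewall rules on the OPNsense side
```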

[–] Pete90@feddit.de 1 points 2 years ago* (last edited 2 years ago)

Never mind, I am an idiot. Your comment gave me pause, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. THANKS!

[–] Pete90@feddit.de 1 points 2 years ago

Both machines are easily capable of reaching around 2.2Gbps. I can't reach the full 2.5Gbps even with iperf3. I tried some tuning, but that didn't help, so it's fine for now. I used iperf3 -c xxx.xxx.xxx.xxx, nothing else.
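In case it helps anyone else, these are the kinds of standard iperf3 flags worth trying when a single stream won't saturate a 2.5GbE link (IP redacted as above):

```
iperf3 -c xxx.xxx.xxx.xxx -P 4    # four parallel streams
iperf3 -c xxx.xxx.xxx.xxx -R      # reverse mode: server sends, tests the other direction
iperf3 -c xxx.xxx.xxx.xxx -t 30   # longer run to smooth out bursts
```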

The slowdown MUST be related to ZFS, since LVM as a storage base can reach the "full" 2.2Gbps when used as an SMB share.

[–] Pete90@feddit.de 1 points 2 years ago

It's videos, pictures, music and other data as well. I'll try playing around with compression today and see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
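If anyone wants to replicate the compression test, the commands are standard ZFS (the pool/dataset name is made up here):

```
zfs get compression tank/nas       # check the current setting
zfs set compression=off tank/nas   # disable for new writes
# note: this only affects newly written data; existing blocks stay compressed
```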

[–] Pete90@feddit.de 1 points 2 years ago

The disk is owned by the PVE host and then given to the container (not a VM) as a mount point. I could use PCIe passthrough, sure, but using a container seems to be the more efficient way.
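For reference, handing a host directory to an LXC container works via a Proxmox bind mount point, something like this (container ID and paths are examples):

```
# Bind-mount a host path into container 100 (ID and paths are examples)
pct set 100 -mp0 /tank/nas,mp=/mnt/nas
```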

[–] Pete90@feddit.de 1 points 2 years ago (5 children)

I meant megabytes (MB; I hope that's correct, I always mix them up). I transferred large video files, both when the file system was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500 MB and 1.5 GB in size.

[–] Pete90@feddit.de 3 points 2 years ago

I don't think it's the CPU as I am able to reach max speed, just not using ZFS...

[–] Pete90@feddit.de 2 points 2 years ago* (last edited 2 years ago) (7 children)

Good point. I used fio with different block sizes:

fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/sda

4K = IOPS=41.7k, BW=163MiB/s (171MB/s)
8K = IOPS=31.1k, BW=243MiB/s (254MB/s)
32K = IOPS=13.2k, BW=411MiB/s (431MB/s)
512K = IOPS=809, BW=405MiB/s (424MB/s)
1M = IOPS=454, BW=455MiB/s (477MB/s)

I'm gonna be honest though, I have no idea what to make of these values. Seemingly, the drive is capable of maxing out my network. The CPU shouldn't be the problem; it's an i7-10700.
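The numbers are at least internally consistent: bandwidth is just IOPS times block size. A quick sanity check against the 4K row:

```shell
# BW ≈ IOPS × block size: 41.7k IOPS × 4 KiB ≈ 163 MiB/s, matching fio's report
awk 'BEGIN { printf "%.0f MiB/s\n", 41700 * 4 / 1024 }'
# prints: 163 MiB/s
```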

[–] Pete90@feddit.de 6 points 2 years ago (3 children)

Tubearchivist works well for me and integrates with jellyfin.

[–] Pete90@feddit.de 5 points 2 years ago (2 children)

Tubearchivist works great for me. Downloader, database and player, all in one. Even integration with jellyfin is possible; not sure about plex, though.

[–] Pete90@feddit.de 2 points 2 years ago

Ah, thank you for clearing that up, much appreciated!

[–] Pete90@feddit.de 1 points 2 years ago

Excellent, I'll probably do that then. Come to think of it, only one container needs write access, so I should be good to go. Users/permissions will be the same, since it's docker and I have one user for it. Awesome!
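A sketch of that layout with bind mounts (image names and paths are hypothetical): every container gets the share read-only except the single writer.

```
# Only one container mounts the share read-write; everyone else gets :ro
docker run -d -v /mnt/nas:/data:ro jellyfin/jellyfin   # read-only consumer
docker run -d -v /mnt/nas:/data writer-image           # the one writer (hypothetical image)
```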
