InvertedParallax
Hopefully IBM kills Red Hat with their shit touch like they do everything else and puts them out of our misery.
Similar to yours, I originally didn't have many small files, but I turned on Sonarr metadata and now there are tons of 1 KB files everywhere.
I think ZFS keeps them compacted though.
So far, this seems pretty simple: set volblocksize=64K, you get 64KiB blocks in your zvol, and that’s that. But recordsize is a bit trickier: the blocks in a dataset are dynamically sized, and recordsize sets the maximum size for blocks in that dataset—not a fixed size.
https://klarasystems.com/articles/tuning-recordsize-in-openzfs/
So I wasn't worried about the small files in the beginning; the major reason to have a smaller recordsize is if you want to make small accesses within a file, not if you want to access small files.
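A quick way to see what I mean, as a rough sketch (the path and sizes are made up, and you'd want a cold cache, e.g. export/import the pool between runs, or the ARC hides everything): random 4 KiB reads inside one big file drag a whole record off disk per read, so 1M records hurt that workload, while small files don't care because ZFS already sizes their single block to the file.

```python
import os
import random
import time

# Hypothetical test file on the dataset being tested; should be much larger
# than 4 KiB (a few GiB of video works fine).
PATH = "/tank/media/video/testfile.mkv"
READS = 1000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
start = time.monotonic()
for _ in range(READS):
    # Each 4 KiB pread forces ZFS to fetch the whole record containing the
    # offset, so the cost scales with recordsize, not with the 4 KiB asked for.
    off = random.randrange(0, size - 4096)
    os.pread(fd, 4096, off)
os.close(fd)
print(f"{READS} random 4 KiB reads in {time.monotonic() - start:.2f}s")
```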
That's fine, or someone with a brain comes along and systemds PipeWire into decent software; either way.
Isn't it cheaper to just buy a used Lada and tape painted bricks to it?
Have a video dataset with 1M recordsize, primarycache=metadata, secondarycache=metadata, and a general dataset as parent with 128K recordsize, primarycache=secondarycache=all (the default), compression=lz4 or zstd or something.
Works like a monster; I don't worry about things like SRTs and such, though your symlinks idea looks interesting.
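Since I'm scripting everything in Python anyway, the layout is roughly this, sketched (pool and dataset names are made up, adjust to taste):

```python
import subprocess

def zfs(*args):
    """Thin wrapper around the zfs CLI; raises if the command fails."""
    subprocess.run(["zfs", *args], check=True)

# Parent dataset: general-purpose defaults, normal caching, cheap compression.
zfs("create",
    "-o", "recordsize=128K",
    "-o", "compression=lz4",
    "-o", "primarycache=all",
    "-o", "secondarycache=all",
    "tank/media")

# Video child: big records for big sequential files, and cache metadata only
# so streaming a movie doesn't evict everything useful from ARC/L2ARC.
# Compression is just inherited from the parent; lz4 is basically free on
# incompressible video anyway.
zfs("create",
    "-o", "recordsize=1M",
    "-o", "primarycache=metadata",
    "-o", "secondarycache=metadata",
    "tank/media/video")
```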
I'm reworking my entire system to get off the filesystem structure anyway and use Python and some other DB, possibly reading from Sonarr for metadata seeding, but I haven't gotten to it yet.
Actually, you make a good point: what would be nice is if Sonarr put NFOs in a different structure, but since I'm going to read Sonarr metadata anyway I can just delete them.
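The seeding part is about as simple as this sketch, assuming Sonarr's v3 REST API (the URL, key, and the fields I pull out are just examples; check the API docs for what your instance actually returns):

```python
import requests

# Hypothetical Sonarr instance and API key (Settings -> General in the UI).
SONARR = "http://localhost:8989"
API_KEY = "changeme"

resp = requests.get(f"{SONARR}/api/v3/series",
                    headers={"X-Api-Key": API_KEY},
                    timeout=10)
resp.raise_for_status()

for series in resp.json():
    # Feed whatever DB you end up using from this instead of walking the
    # filesystem or parsing NFOs.
    print(series["id"], series["title"], series["path"])
```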
If you nuked half of Texas, how could you tell which half is which?
I guess one side would have somewhat fewer epic assholes.
Doesn't this /c have a rule against breaking opsec?!?!