I have had a btrfs raid1c3 across 3x20TB disks myself. It was really unhelpful when I used AOSP (via USB), as btrfs didn't tell me why it made the filesystem read-only... not using AOSP would have helped.
Recently I switched over to a real server, with an HBA and a direct connection to the disks.
Then I googled, and the following points were made:
btrfs: Super bad because it silently runs out of allocatable space and starts returning ENOSPC even though the disks look far from full, and the workaround of a regular rebalance is itself completely not recommended (see the sketch after these points). Apparently it also misrepresents the actual free disk space.
ZFS: Super bad because it will never be in the mainline Linux tree and is hard to maintain; apparently "just hype".
https://www.reddit.com/r/zfs/comments/sfo1tq/linus_tech_tips_fails_at_using_zfs_properly_loses/
https://storytime.ivysaur.me/posts/why-not-zfs/
https://github.com/openzfs/zfs/labels/Type%3A%20Defect
And then we had that corruption bug in ZFS. Backups are most important!
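For what it's worth, the btrfs ENOSPC thing is at least observable before it bites. Roughly the checks involved, assuming a hypothetical mount point of /mnt/pool (not my actual layout):

    # Show chunk allocation vs. real usage; btrfs can have all raw space
    # allocated to chunks while actual "Used" is much lower, which is when
    # writes start failing with ENOSPC.
    btrfs filesystem usage /mnt/pool

    # Per-device error counters; a sudden read-only flip usually leaves
    # a trail here and in dmesg.
    btrfs device stats /mnt/pool

    # Filtered balance: only rewrite chunks that are less than 75% full to
    # reclaim unallocated space - much cheaper than the full unfiltered
    # balance people warn against running routinely.
    btrfs balance start -dusage=75 -musage=75 /mnt/pool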
But then people on the ZFS IRC channel tell me to instead use multiple different filesystems and just hope that one of them doesn't break, and to start with ext4, because it's the easiest to repair...
[...]
So in the end, what I read boils down to: use ext4 or XFS on a single disk, but keep 2 offline backups offsite.
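If I do go the single-disk ext4/XFS route, the checksumming those filesystems lack would have to come from the backup layer. A minimal sketch of what I have in mind, with hypothetical paths /mnt/data and /mnt/backup, pairing rsync with a checksum manifest so silent corruption is at least detectable:

    # Build a checksum manifest of the live data so bit rot can be
    # detected later (ext4/XFS won't notice it on their own).
    find /mnt/data -type f -print0 | xargs -0 sha256sum > ~/data.sha256

    # Mirror to the (normally offline) backup disk; -aHAX preserves
    # hardlinks, ACLs and xattrs, --delete keeps the copy exact.
    rsync -aHAX --delete /mnt/data/ /mnt/backup/

    # Later, verify the live copy against the manifest; rerun against
    # the backup with paths adjusted before taking it offsite again.
    sha256sum --quiet -c ~/data.sha256

A copy of the manifest would live with each backup too, so any one copy can be checked on its own.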
What's your verdict? My server is as follows:
64GB registered ECC RAM (single-bit correction)
Intel i3-9100
10x20TB HDD installed - 10 slots empty
2x2TB SSD
SuperMicro X11SCL-F Motherboard
My data is unique in the sense that I put a lot of time into it, and I would not want to redo all that work at any point.