The original post: /r/datahoarder by /u/BlueSkull on 2024-07-04 13:35:51.
Long story short, I'm looking for a cheap NAS solution that can potentially saturate a bonded 2x 10Gb SFP+ link (with a 4TB NVMe; saturating it isn't strictly necessary when accessing the spinning drives).
I will also need the same machine to link a few VLANs, so it must run Linux and be capable of doing NAT and OpenVPN (not at 10Gb though, maybe 2.5Gb at most).
I'm wondering if I can use an R86S Pro with one (or a few) USB DAS attached to it. But I do have a few concerns:
- The CPU it sports, the i3-N305, has no ECC support. With only the on-die ECC of LPDDR5 (running fairly slow; I think the CPU only supports up to 4800MT/s, so link ECC is hardly necessary), will this be reliable enough for a NAS? I will NOT be running zfs or anything with a huge memory cache. Also, some say it's roughly as powerful as an E5-2670 v3, so not bad at all.
- The DAS would have 5 bays. It could be either USB3.1 --> USB3.1-to-SATA3 bridge --> SATA3 RAID controller --> 5x SATA3, or USB3.1 --> hub --> 5x USB3.1 --> 5x USB3.1-to-SATA3 bridges --> 5x SATA3. Since I don't plan to do zfs or unraid, I'll just go with RAID5, and I don't care whether it's HW RAID or SW RAID. With the former route, the single SATA3 link between the USB bridge and the RAID controller would be the bottleneck, capping things at some 550MB/s. With the latter route, the host has to address multiple USB devices, adding multiplexing overhead, and the redundancy traffic would also travel over the USB bus, so I get around 8Gbps (practical USB3.1 limit) x 0.8 (4+1 RAID5 efficiency) x maybe 0.9 (USB multiplexing efficiency), so around 720MB/s (rough math in the first sketch after this list). I just don't know how much multiplexing between 5 USB SATA controllers will actually slow down the bus; maybe the 0.9 figure is too optimistic.
- For a 5-bay (anything more than 4 bays) hub-based enclosure (route 2 above), there have to be two 1:4 hub controllers, one downstream of the other. Will this kind of cascading negatively impact performance? Since I will run the RAID uncached, access time is determined by the slowest drive.
- For future expansion, I might add new groups of drives. How does route 1 above compare with route 2 in terms of expandability? Say I want to add another 5 drives without using zfs or unraid. If I went with route 1, I can't add drives to the fixed HW RAID, so the new storage would only increase capacity, not speed. If I went with route 2, the added drives, if on a separate USB controller, could speed things up, but only if I rebuild the RAID, right? Meaning I have to back up everything and wipe everything before I can add the new drives, and I have to do this each time I add drives, and apparently it takes linearly longer each time (see the second sketch after this list).
- Is it possible to have a partition available over both NFS and SMB? I would like some shares to go to TVs and Windows/Android devices, which require SMB, and some to Linux workstations, which need much lower latency and hence NFS.
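Here's the back-of-the-envelope math behind the route 1 vs route 2 numbers above, as a small Python sketch; the 0.8 and 0.9 efficiency factors are my own guesses, not measurements:

```python
# Rough throughput estimate for the two DAS routes; all factors are guesses, not measurements.

SATA3_PRACTICAL_MB_S = 550        # practical ceiling of a single SATA3 link
USB31_PRACTICAL_GBPS = 8          # practical USB3.1 payload rate
RAID5_EFFICIENCY = 4 / 5          # 4 data drives + 1 parity
USB_MUX_EFFICIENCY = 0.9          # guess for 5 bridges sharing one USB bus

# Route 1: USB --> SATA bridge --> HW RAID --> 5x SATA; bottleneck is the single SATA3 hop.
route1_mb_s = SATA3_PRACTICAL_MB_S

# Route 2: USB --> hub --> 5x USB-SATA bridges; bottleneck is the shared USB bus.
usb_payload_mb_s = USB31_PRACTICAL_GBPS * 1000 / 8   # Gbps -> MB/s
route2_mb_s = usb_payload_mb_s * RAID5_EFFICIENCY * USB_MUX_EFFICIENCY

print(f"Route 1 (HW RAID behind one SATA3 link): ~{route1_mb_s:.0f} MB/s")
print(f"Route 2 (5 bridges behind a hub):        ~{route2_mb_s:.0f} MB/s")   # ~720 MB/s
```

With these guesses route 2 comes out ahead, but if the multiplexing factor drops much below 0.7 the two routes are basically a wash.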
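And the rough math behind the "takes linearly longer each time" worry for route 2; drive size and copy speed below are assumptions I picked purely for illustration:

```python
# Time to back up and restore the whole array each time another 5-drive group is added
# (route 2, no zfs/unraid, so the RAID5 gets rebuilt from scratch every expansion).
# Drive size and copy speed are illustrative assumptions only.

DRIVE_TB = 8                  # assumed capacity per drive
COPY_MB_S = 500               # assumed sustained backup/restore speed
DATA_DRIVES_PER_GROUP = 4     # 4 data + 1 parity per 5-drive group

for groups in range(1, 5):
    usable_tb = groups * DATA_DRIVES_PER_GROUP * DRIVE_TB
    # Full backup out plus restore back in, converted to hours.
    hours = 2 * usable_tb * 1e6 / COPY_MB_S / 3600
    print(f"{groups} group(s): ~{usable_tb} TB usable, migration ~{hours:.0f} h")
```

So with those assumptions, each migration grows from roughly a day and a half of copying at one group to nearly a week by the fourth.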
For those asking me to buy a full NAS: I wish I could afford one. The ones with 10Gb connectivity that can actually use 10Gb are way above my budget. The R86S Pro is essentially free since I need it anyway for my 10Gb network, and the drives, well, I have cheap sources for them.