I’m looking at different options for a NAS/RAID array that is tolerant not just to hard drive failures but also to hardware/firmware and board failures. I’ve used a RAID array in the past that was built into the motherboard; the motherboard failed and I had to eBay another one to get the array back up and running. Then I bought a 2-bay NAS that was only compatible with drives up to 1.5TB. I’ve also used external drives for backup, since I’ve been burned by hardware/firmware/software issues related to RAID arrays. Are there any PCI RAID cards, NAS boxes, software RAID setups, or other options where the hard drives would still be readable by other hardware if the board failed? Maybe a software RAID solution? Any thoughts would be appreciated.
If you know Linux, I recommend going with some form of software RAID. A lot of people might recommend ZFS, but I would recommend btrfs on Linux. With btrfs you can add and remove drives of any size at will, unlike ZFS, and you don’t need to worry about vdevs and such. Simple, easy to use, and simple to upgrade. Just use btrfs, set data to raid1 and metadata to raid1c3, and you’ll have a rock-solid system (see the sketch below). You also won’t have to worry about DKMS or kernel changes breaking your data storage. And before someone mentions it: yes, btrfs raid5 had a write hole, but that was fixed in kernel 6.2.
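A minimal sketch of that setup, assuming three blank disks at /dev/sdb through /dev/sdd and a mount point at /mnt/pool (all placeholder names):

```sh
# Make a btrfs filesystem across three disks: data stored as raid1
# (two copies), metadata as raid1c3 (three copies, needs >= 3 disks).
mkfs.btrfs -d raid1 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/pool   # any member device mounts the whole array

# Later: add a drive of any size, then rebalance so existing data
# gets redistributed across all devices.
btrfs device add /dev/sde /mnt/pool
btrfs balance start /mnt/pool
```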
Another interesting future option might be bcachefs. It just got merged into the mainline kernel and has amazing features.
I’m going to say avoid btrfs; it’s still basically in beta. I want to see wide use in industry and the kind of functionality its competitors have, à la mdadm and ZFS with its vdevs.
openSUSE and its commercial sibling SUSE Linux Enterprise have been defaulting to btrfs for almost a decade. The “btrfs is beta” meme is a dead horse. It’s a great filesystem for what it was designed to do.
ZFS RAID-Z is pretty good for this, I think. You hook up the drives from one “pool” to a new machine, and ZFS can detect them, see that they constitute a pool, and import it.
I think it still stores some internal references to which drives are in the pool, but if you add the drives via the /dev/disk/by-id directory when making the pool, it ought to be using IDs that are stable at least across Linux machines.
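Roughly what moving a pool between boxes looks like, assuming a pool named tank (a placeholder):

```sh
# On the old machine, if it still boots, release the pool cleanly.
zpool export tank

# On the new machine: scan for importable pools, searching the
# stable by-id device paths rather than sdX names.
zpool import -d /dev/disk/by-id

# Import the pool it found; -f forces the import if the old host
# never got the chance to export it (e.g. the board died).
zpool import -d /dev/disk/by-id -f tank
```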
There’s also always Git Annex for managing redundancy at the file level instead of inside the filesystem.
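A rough sketch of the file-level idea, assuming a repo in ~/data and an external drive already configured as a remote named usbdrive (both hypothetical):

```sh
cd ~/data && git init && git annex init "main copy"

# Require at least two copies of every file across known repos.
git annex numcopies 2

git annex add photos/
git commit -m "add photos"

# Send the file contents to the external drive's repo.
git annex copy photos/ --to usbdrive

# Show which repositories currently hold a given file.
git annex whereis photos/example.jpg
```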
Thanks, reading up on ZFS now on Ars https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
Sounds like I could dedicate a server machine to run RAID-Z1, -Z2, or -Z3 with ZFS on Linux or TrueNAS? Or were you thinking of something a bit different for a setup?
That could work fine, probably? Or you could use it on the same machine as other stuff.
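For what it’s worth, creating such a pool is a one-liner. A sketch assuming four disks and a pool named tank (the device IDs are placeholders):

```sh
# RAID-Z2: the pool survives any two simultaneous drive failures.
# Using by-id paths keeps the pool portable across machines.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK_SERIAL_1 \
  /dev/disk/by-id/ata-DISK_SERIAL_2 \
  /dev/disk/by-id/ata-DISK_SERIAL_3 \
  /dev/disk/by-id/ata-DISK_SERIAL_4

# Verify health and layout.
zpool status tank
```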