If you already thought ZFS topology was a complex topic, get ready to have your mind blown.

The new OpenZFS 2.1.0 release is compatible with FreeBSD 12.2-RELEASE and up, and with Linux kernels 3.10 through 5.13. It offers several general performance improvements, as well as a few entirely new features, mostly targeting enterprise and other extremely advanced use cases. Today, we're going to focus on arguably the biggest feature OpenZFS 2.1.0 adds: the dRAID vdev topology. dRAID has been under active development since at least 2015 and reached beta status when it was merged into OpenZFS master in November 2020. Since then, it's been heavily tested in several major OpenZFS development shops, meaning today's release is "new" to production status, not "new" as in untested.

Distributed RAID (dRAID) overview

Further Reading: ZFS 101 - Understanding ZFS storage and performance

Distributed RAID (dRAID) is an entirely new vdev topology we first encountered in a presentation at the 2016 OpenZFS Dev Summit. When creating a dRAID vdev, the admin specifies a number of data, parity, and hotspare sectors per stripe. These numbers are independent of the number of actual disks in the vdev. We can see this in action in the following example, lifted from the dRAID Basic Concepts documentation:

zpool create mypool draid2:4d:1s:11c wwn-0 wwn-1 wwn-2 ... wwn-A

In the above example, we have 11 disks, wwn-0 through wwn-A. We created a single dRAID vdev with 2 parity devices, 4 data devices, and 1 spare device per stripe; in condensed jargon, a draid2:4:1. Even though there are 11 total disks in the draid2:4:1, only six are used in each data stripe, and one in each physical stripe.
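If you'd like to poke at that notation without dedicating 11 real drives, the same layout can be built on sparse file-backed vdevs. This is only a rough sketch for experimentation, not a production recipe; the pool name and file paths below are made up for the example:

# Create eleven 1 GiB sparse files to stand in for disks (hypothetical paths; brace expansion assumes bash/zsh)
truncate -s 1G /tmp/draid-disk-{0..10}

# Build the draid2:4:1 from the documentation example: 2 parity, 4 data,
# and 1 distributed spare per stripe, across 11 children
zpool create testpool draid2:4d:1s:11c /tmp/draid-disk-{0..10}

# Inspect the resulting layout
zpool status testpool

Even though the vdev has 11 children, each logical stripe still only touches six of them (four data plus two parity), with one disk's worth of capacity in each physical stripe held in reserve as the spare.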
In a world of perfect vacuums, frictionless surfaces, and spherical chickens, the on-disk layout of a draid2:4:1 would look something like this:

[diagram: idealized on-disk layout of a draid2:4:1]

Effectively, dRAID is taking the concept of "diagonal parity" RAID one step farther. The first parity RAID topology wasn't RAID5; it was RAID3, in which parity lived on a fixed drive rather than being distributed throughout the array. RAID5 did away with the fixed parity drive and distributed parity throughout all of the array's disks instead, which offered significantly faster random write operations than the conceptually simpler RAID3, since it didn't bottleneck every write on a single fixed parity disk.

dRAID takes this concept of distributing parity across all disks, rather than lumping it all onto one or two fixed disks, and extends it to spares. If a disk fails in a dRAID vdev, the parity and data sectors which lived on the dead disk are copied to the reserved spare sector(s) for each affected stripe.
Let's take the simplified diagram above and examine what happens if we fail a disk out of the array. The initial failure leaves holes in most of the data groups (in this simplified diagram, stripes):

[diagram: the vdev after a single disk failure, with holes in most data groups]

But when we resilver, we do so onto the previously reserved spare capacity:

[diagram: the same vdev after resilvering onto the distributed spare capacity]
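In command-line terms, that rebuild onto the reserved spare capacity is driven with the ordinary zpool tooling. The sketch below assumes a pool named tank and a failed disk wwn-5; dRAID exposes its distributed spares as named devices (draid2-0-0 here, following the draid<parity>-<vdev>-<spare> naming used in the documentation):

# See which disk has faulted
zpool status tank

# Rebuild onto the vdev's first distributed spare; -s asks for a
# sequential (rebuild-style) resilver rather than a healing one
zpool replace -s tank wwn-5 draid2-0-0

# Watch progress until the vdev is back to full redundancy
zpool status -v tank

Depending on how the system is configured, the ZFS event daemon may kick the distributed spare in automatically; the explicit zpool replace above is the manual form.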