RAID Calculator – Capacity, Redundancy & Drive Requirements

Calculate usable storage, raw capacity, and fault tolerance for every common RAID level: RAID 0, 1, 1E, 3, 4, 5, 5E, 5EE, 6, 10, 50, 60, ZFS RAID-Z1/Z2/Z3, RAID-DP, and JBOD. See how many drives you need, how much space you get, and how many failures you can survive. Runs entirely in your browser – no data uploaded.
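
The arithmetic behind these results is straightforward. The sketch below (TypeScript, with illustrative names such as raidCapacity rather than the calculator's actual source) shows the per-level rules this page describes, assuming identical drives; real arrays lose a little extra space to metadata, and the fault-tolerance numbers are guaranteed minimums (mirrored layouts can often survive more, depending on which disks fail).

```ts
// Illustrative capacity rules for a few common levels (not the calculator's real code).
type Level = "RAID0" | "RAID1" | "RAID5" | "RAID6" | "RAID10" | "JBOD";

interface ArrayResult {
  rawTB: number;          // sum of all drive capacities
  usableTB: number;       // what is left after mirroring/parity
  faultTolerance: number; // guaranteed number of drive failures survivable
}

function raidCapacity(level: Level, drives: number, driveTB: number): ArrayResult {
  const raw = drives * driveTB;
  switch (level) {
    case "RAID0":
    case "JBOD":
      return { rawTB: raw, usableTB: raw, faultTolerance: 0 };
    case "RAID1": // every drive holds a full copy
      return { rawTB: raw, usableTB: driveTB, faultTolerance: drives - 1 };
    case "RAID10": // striped mirror pairs, N even
      return { rawTB: raw, usableTB: raw / 2, faultTolerance: 1 };
    case "RAID5": // one drive's worth of parity
      return { rawTB: raw, usableTB: (drives - 1) * driveTB, faultTolerance: 1 };
    case "RAID6": // two drives' worth of parity
      return { rawTB: raw, usableTB: (drives - 2) * driveTB, faultTolerance: 2 };
    default:
      throw new Error(`unsupported level: ${level}`);
  }
}

console.log(raidCapacity("RAID5", 6, 4)); // { rawTB: 24, usableTB: 20, faultTolerance: 1 }
```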

RAID Levels – What they are, pros & cons, practical drive counts & tips

Short, practical summaries. Use Calculate to set the calculator to that level.

RAID 0

Read: Very high · Write: Very high · Rebuild: N/A · Capacity: 100% · Resilience: None

Nature: Pure striping; any disk failure loses the array.

  • Pros: Max throughput; simple; full capacity.
  • Cons: Zero fault tolerance; backups essential.

Use cases: Scratch/temp data, non-critical high-speed workloads.

Practical minimum drives: 2
Practical maximum: ~12–16 (risk scales with N)
Sweet spot: 2–8
Spare guidance: N/A

RAID 1

Read: High · Write: Single-disk · Rebuild: Fast · Capacity: ~50% · Resilience: High

Nature: Identical copies (mirrors). Survives a single-disk failure per mirror.

  • Pros: Simple; fast reads; quick predictable rebuilds.
  • Cons: 50% efficiency; write ≈ single disk.

Use cases: Small servers, boot volumes, simple 2–4 drive NAS.

Practical minimum drives: 2
Practical maximum: 2–4 per set
Sweet spot: 2–4
Spare guidance: Optional; ~1 per 8–12 drives

RAID 5

Read: High · Write: Parity penalty · Rebuild: Moderate risk · Capacity: ~(N−1)/N · Resilience: 1 disk

Nature: Striping with single distributed parity. One disk may fail; rebuilds read all remaining disks.

  • Pros: Good efficiency; common; easy scaling.
  • Cons: Parity write penalty; rebuild risk with very large disks.

Use cases: Read-heavy NAS, media libraries. Prefer RAID 6/Z2 for very large disks.

Practical minimum drives: 4 (3 works; 4+ safer)
Practical maximum: ~8–10 (bigger → consider RAID 6)
Sweet spot: 4–10
Spare guidance: ≥1 per 8–12 drives
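
For a concrete feel for the ~(N−1)/N figure, here is a quick worked example (drive count and size assumed for illustration):

```ts
// RAID 5 with N identical drives: usable = (N - 1) * driveSize.
const n = 8, driveTB = 10;
console.log({
  rawTB: n * driveTB,          // 80
  usableTB: (n - 1) * driveTB, // 70
  efficiency: (n - 1) / n,     // 0.875 → 87.5%
});
```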

RAID 6

Read: Good · Write: Heavier parity · Rebuild: Safer than RAID 5 · Capacity: ~(N−2)/N · Resilience: 2 disks

Nature: Like RAID 5 but with two parity blocks; safer rebuilds on large disks.

  • Pros: Strong protection with big/nearline disks.
  • Cons: More parity overhead; heavier writes.

Use cases: Large-disk arrays, SMB file servers, capacity-first with safety.

Practical minimum drives: 6 (4 works; 6–12 sensible)
Practical maximum: ~12 (larger → RAID 60)
Sweet spot: 6–12
Spare guidance: ≥1 per 8–12 drives
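
The efficiency cost of the second parity block shrinks as the array grows, which is why RAID 6 is usually paired with wider arrays. A small comparison sketch (drive counts assumed for illustration):

```ts
// Storage efficiency: RAID 5 = (N - 1) / N, RAID 6 = (N - 2) / N.
for (const n of [4, 6, 8, 12]) {
  const r5 = ((n - 1) / n * 100).toFixed(1);
  const r6 = ((n - 2) / n * 100).toFixed(1);
  console.log(`${n} drives: RAID 5 ${r5}% vs RAID 6 ${r6}%`);
}
// 6 drives: 83.3% vs 66.7%; 12 drives: 91.7% vs 83.3%.
```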

RAID 10

Read: High · Write: High (mixed) · Rebuild: Fast · Capacity: ~50% · Resilience: High

Nature: Mirror pairs striped together (even drive count). Excellent random and mixed writes; very fast rebuilds.

  • Pros: Great mixed performance; resilient; quick rebuilds.
  • Cons: 50% efficiency; needs even N.

Use cases: Databases, VMs, application servers needing steady low-latency writes.

Practical minimum drives: 4 (even)
Practical maximum: ~12
Sweet spot: 4–12 (even)
Spare guidance: ≥1 per 8–12 drives
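
Capacity and fault tolerance follow directly from the mirror pairs; the numbers below use assumed drive counts and sizes:

```ts
// RAID 10 with N drives (N even): usable = (N / 2) * driveSize.
// One failure is always survivable; up to N/2 if each failure hits a different mirror pair.
const drives = 8, sizeTB = 4;
console.log({
  usableTB: (drives / 2) * sizeTB, // 16
  worstCaseFailures: 1,
  bestCaseFailures: drives / 2,    // 4
});
```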

RAID 50

Read: High (parallel) · Write: Parity groups · Rebuild: Group-local · Capacity: High · Resilience: 1 per group

Nature: Several RAID 5 groups striped together. Failures are handled per group, giving better rebuild dynamics than one giant RAID 5.

  • Pros: Scales capacity/perf; parallel rebuilds.
  • Cons: Single parity per group; group sizing matters.

Use cases: High-capacity arrays needing faster rebuilds than monolithic RAID 5.

Practical minimum drives: 6 (≥2 groups of ≥3)
Practical maximum: ~24 (groups of 3–10)
Sweet spot: 2–3 groups × (3–8 drives)
Spare guidance: ≥1 per 8–12 drives
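
Capacity follows the group layout: one drive's worth of parity per RAID 5 group. A sketch with an assumed 12-drive, 3-group layout:

```ts
// RAID 50: g groups of RAID 5 → usable = (N - g) * driveSize.
// Survives one failure per group, but not two failures inside the same group.
const total = 12, groups = 3, sizeTB = 8;
console.log({
  usableTB: (total - groups) * sizeTB, // 72
  guaranteedFailures: 1,
  bestCaseFailures: groups,            // 3, if spread one per group
});
```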

RAID 60

Read: High (parallel) · Write: Dual parity · Rebuild: Safer (per group) · Capacity: Medium-high · Resilience: 2 per group

Nature: Several RAID 6 groups striped; best for large arrays and very large disks.

  • Pros: Strong protection at scale; balanced trade-offs.
  • Cons: Less efficient than RAID 50; more complex.

Use cases: Enterprise/high-capacity arrays where safety during rebuild is key.

Practical minimum drives: 8 (≥2 groups of ≥4)
Practical maximum: ~24 (groups of 4–10)
Sweet spot: 2–3 groups × (4–8 drives)
Spare guidance: ≥1 per 8–12 drives
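
The same group arithmetic applies with two parity drives per group; the figures below assume a 16-drive, 2-group layout:

```ts
// RAID 60: g groups of RAID 6 → usable = (N - 2 * g) * driveSize.
// Each group tolerates two failures.
const total = 16, groups = 2, sizeTB = 12;
console.log({
  usableTB: (total - 2 * groups) * sizeTB, // 144
  guaranteedFailures: 2,
  bestCaseFailures: 2 * groups,            // 4, if spread across groups
});
```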

RAID-Z1 (ZFS)

Read: High · Write: Parity · Rebuild: Moderate risk · Capacity: ~(N−1)/N · Resilience: 1 disk

Nature: Single-parity ZFS vdev with checksums, scrubs, snapshots, self-healing.

  • Pros: ZFS integrity; good efficiency; features.
  • Cons: Prefer Z2 on very large disks for rebuild safety.

Use cases: Smaller ZFS pools, read-heavy datasets; move to Z2 as disks grow.

Practical minimum drives: 4 (3 works; 4–8 preferred)
Practical maximum: ~8
Sweet spot: 4–8 per vdev (consistent widths)
Spare guidance: ≥1 per 8–12 drives

RAID-Z2 (ZFS)

Read: Good · Write: Dual parity · Rebuild: Safer · Capacity: ~(N−2)/N · Resilience: 2 disks

Nature: Dual-parity ZFS vdev; sensible default for high-capacity disks.

  • Pros: Safer rebuilds; end-to-end checksums and snapshots.
  • Cons: Lower efficiency than Z1; parity write cost.

Use cases: General-purpose ZFS pools with multi-TB drives; most home/SMB ZFS setups.

Practical minimum drives: 6 (4 works; 6–12 preferred)
Practical maximum: ~12
Sweet spot: 6–12 per vdev
Spare guidance: ≥1 per 8–12 drives

RAID-Z3 (ZFS)

Read: Good · Write: Triple parity · Rebuild: Safer (large pools) · Capacity: ~(N−3)/N · Resilience: 3 disks

Nature: Triple-parity ZFS vdev for very large pools or critical uptime.

  • Pros: Very strong protection; large-pool friendly.
  • Cons: Lower efficiency; higher parity overhead.

Use cases: Big ZFS pools and/or mission-critical data with long rebuild windows.

Practical minimum drives: 9 (5 works; 9–15 realistic)
Practical maximum: ~15
Sweet spot: 9–15 per vdev
Spare guidance: ≥1 per 8–12 drives
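
Across RAID-Z1/Z2/Z3 the sizing rule has the same shape, with one, two or three drives' worth of parity per vdev. The helper below is an illustration only; real ZFS pools lose a few extra percent to padding, metadata and reserved slop space, so treat the result as an upper bound:

```ts
// Rough RAID-Z sizing per vdev: usable ≈ (N - p) * driveSize, p = 1/2/3 for Z1/Z2/Z3.
function raidzUsable(drives: number, parity: 1 | 2 | 3, driveTB: number): number {
  return (drives - parity) * driveTB;
}
console.log(raidzUsable(8, 2, 14)); // 8-wide RAID-Z2 of 14 TB drives ≈ 84 TB before ZFS overhead
```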

RAID-DP (NetApp)

Read: Good · Write: Dual parity · Rebuild: Safer · Capacity: ~(N−2)/N · Resilience: 2 disks

Nature: Vendor-tuned double parity (conceptually like RAID 6) for safe rebuilds on large disks.

  • Pros: Strong protection; tuned rebuild behavior.
  • Cons: Parity write cost; platform-specific.

Use cases: NetApp arrays that need safe rebuilds on large disks.

Practical minimum drives: 6 (4 works; 6–12 typical)
Practical maximum: ~12
Sweet spot: 6–12
Spare guidance: ≥1 per 8–12 drives

RAID 1E

Read: High · Write: Mirror penalty · Rebuild: Layout-dependent · Capacity: ~50% · Resilience: Varies

Nature: Mirrored stripes supporting odd drive counts; fault tolerance depends on which disks fail.

  • Pros: Works with odd N; better than simple mirrors for odd counts.
  • Cons: Uneven failure tolerance; less common.

Use cases: Builds with an odd drive count that need mirror-like behavior.

Practical minimum drives: 3
Practical maximum: ~7
Sweet spot: 3–7
Spare guidance: ≥1 per 8–12 drives
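
Because every block is stored twice and striped across all members, usable space is roughly half of raw even with an odd drive count (example figures assumed):

```ts
// RAID 1E: two copies of each block striped over N drives (N may be odd).
const n1e = 5, tb = 4;
console.log({ rawTB: n1e * tb, usableTB: (n1e * tb) / 2 }); // 20 raw, 10 usable
```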

RAID 3

Read: Good · Write: Parity bottleneck · Rebuild: Stressful · Capacity: ~(N−1)/N · Resilience: 1 disk

Nature: Dedicated parity disk with byte-level striping; largely historical due to bottleneck.

  • Pros: Simple concept.
  • Cons: Heavy write bottleneck; rarely recommended today.

Use cases: Legacy environments only; consider RAID 5/6 instead.

Practical minimum drives: 3
Practical maximum: ~8
Sweet spot: 3–8 (niche)
Spare guidance: ≥1 per 8–12 drives

RAID 4

Read: High · Write: Parity bottleneck · Rebuild: Stressful · Capacity: ~(N−1)/N · Resilience: 1 disk

Nature: Dedicated parity disk with block-level striping; superseded by RAID 5/6.

  • Pros: Predictable layout.
  • Cons: Parity bottleneck; uncommon now.

Use cases: Legacy only; consider RAID 5/6.

Practical minimum drives: 3
Practical maximum: ~8
Sweet spot: 3–8 (niche)
Spare guidance: ≥1 per 8–12 drives

RAID 5E

Read: High · Write: Parity · Rebuild: Faster (spare area) · Capacity: High · Resilience: 1 disk

Nature: RAID 5 with reserved spare area to accelerate rebuilds.

  • Pros: Potentially faster rebuilds than classic R5.
  • Cons: Implementation-specific; still single parity.

Use cases: Controllers that support 5E and workloads that need quicker rebuilds without moving to RAID 6.

Practical minimum drives: 5 (allow spare area)
Practical maximum: ~10
Sweet spot: 5–10
Spare guidance: Built-in + optional 1 global / 8–12 drives

RAID 5EE

Read: High · Write: Parity · Rebuild: Smoother · Capacity: High · Resilience: 1 disk

Nature: RAID 5 with spare capacity interleaved to reduce hotspots during rebuild.

  • Pros: Smoother rebuilds; retains efficiency.
  • Cons: Controller-specific; still single parity.

Use cases: Controllers that support 5EE and need faster, smoother rebuilds.

Practical minimum drives: 5
Practical maximum: ~10
Sweet spot: 5–10
Spare guidance: Interleaved + optional 1 global / 8–12 drives

JBOD

Read: Single-disk · Write: Single-disk · Rebuild: N/A · Capacity: 100% · Resilience: None

Nature: No RAID. Disks independent/concatenated; disk failure loses that disk’s data.

  • Pros: Full raw capacity; simple.
  • Cons: No redundancy; backups critical.

Use cases: Non-critical data, backup targets (with external redundancy).

Practical minimum drives: 1
Practical maximum: Controller dependent
Sweet spot: N/A (use-case driven)
Spare guidance: N/A

Frequently Asked Questions

What does the calculator show?

It shows total raw capacity, usable capacity after parity and hot spares, redundancy (how many drive failures you can survive), and simple read/write/rebuild indicators. Results appear in GB or TB with exact bytes, using your chosen unit base (IEC 1024 or SI 1000).

Which RAID levels and options are supported?

RAID 0, 1, 1E, 3, 4, 5, 5E, 5EE, 6, 10, 50, 60, ZFS RAID-Z1/Z2/Z3, RAID-DP, and JBOD. You can also pick IEC/SI units, set RAID 50/60 group size, use global hot spares (where supported), copy results/tables, and share a permalink.

Is RAID a replacement for backups?

No. RAID helps keep systems online when a drive fails. It will not protect you from accidental deletion, malware, corruption, or theft. Always maintain separate, versioned backups—ideally offline or off-site.

Why is usable capacity less than the advertised drive size?

Drive vendors label capacity in SI units (1 GB = 1,000,000,000 bytes) while most operating systems report IEC (1 GiB = 1,073,741,824 bytes). Filesystems and metadata also consume space. The calculator shows exact bytes in either base so the numbers make sense.
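
The gap is just base-1000 versus base-1024 accounting; a one-liner makes the "missing" space visible (the 10 TB figure is an example):

```ts
// A drive labeled "10 TB" (SI, powers of 1000) reported in TiB (IEC, powers of 1024).
const bytes = 10 * 1000 ** 4;
console.log((bytes / 1024 ** 4).toFixed(2), "TiB"); // "9.09 TiB"
```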

Can I mix drives of different sizes or speeds?

You can, but the array usually uses the size of the smallest drive in each set, so larger drives waste capacity. Mixing speeds or interfaces can slow the whole array. For best results, keep drives matched.
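
As a quick illustration of the smallest-drive rule (drive sizes assumed):

```ts
// Most arrays size every member to the smallest drive in the set.
const driveTBs = [4, 4, 6, 8];
const perDrive = Math.min(...driveTBs);                       // 4 TB counted per drive
const counted = perDrive * driveTBs.length;                   // 16 TB enters the RAID math
const wasted = driveTBs.reduce((a, b) => a + b, 0) - counted; // 6 TB unused
console.log({ counted, wasted });
```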

What do the read/write/rebuild indicators mean?

They are simple indicators of how many drives can help on reads and roughly how much parallelism you can expect for writes or rebuilds. Real-world performance still depends on controller, cache, filesystem, and workload.

Does RAID work with SSDs and NVMe drives?

Yes. SSDs and NVMe drives make arrays very fast. Use matched SSDs, enable TRIM where supported, and check your controller/filesystem for SSD awareness. For heavy writes, prefer RAID 10 or parity arrays with a good write cache.

What should I watch for with SSD or NVMe arrays?

SSDs rebuild quickly but can wear out at similar times if they are the same age and model. Monitor health (SMART), leave spare capacity (over-provisioning), and keep backups. NVMe bandwidth can bottleneck at the controller—plan PCIe lanes/slots carefully.

Should I use hardware, motherboard, or software RAID?

Desktop “RAID” (e.g. Intel RST) often depends on drivers and can be difficult to migrate. Software RAID (mdadm, Windows Storage Spaces, ZFS) is portable and widely documented. Dedicated hardware RAID adds features but ties you to that card. For most desktops and homelabs, modern software RAID or ZFS is the simplest and most flexible.

Why are large drives riskier to rebuild?

Bigger drives take longer to rebuild, which leaves more time for a second failure or an unrecoverable read error. Dual or triple parity (RAID 6/Z2/Z3/DP), regular scrubs, and monitoring drive health greatly reduce this risk.
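
A back-of-the-envelope sketch shows why this matters. The URE spec and drive sizes below are assumptions (one error per 10^14 bits is a commonly quoted spec for desktop-class drives; enterprise drives usually quote 10^15), and real-world rates are often better than the spec, but the trend is the point:

```ts
// Expected unrecoverable read errors while rebuilding a single-parity array:
// the rebuild must read every byte on every surviving drive.
const survivingDrives = 7, driveTB = 12;   // e.g. an 8 x 12 TB RAID 5 after one failure
const ureBitsPerError = 1e14;              // assumed desktop-class spec
const bitsRead = survivingDrives * driveTB * 1e12 * 8;
console.log("expected UREs ≈", (bitsRead / ureBitsPerError).toFixed(1)); // ≈ 6.7
// Dual parity, enterprise-class drives and regular scrubs all pull this number down.
```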

Can I expand an array later?

It depends. Many classic RAID sets are hard to reshape. ZFS grows by adding vdevs; changing an existing RAID-Z vdev’s width depends on your platform/version. Always plan layout up front and check your system’s documentation.

What happens if the RAID controller fails?

With true hardware RAID, arrays may only be recoverable on the same model controller. Software RAID and ZFS are portable across systems as long as you keep metadata. Document your configuration and keep controller firmware/driver copies if using hardware RAID.

Does the filesystem matter?

RAID handles block layout, but the filesystem handles data integrity and features. ZFS and Btrfs add checksums, snapshots, and scrubs on top of RAID-like redundancy. Traditional filesystems (NTFS, ext4, XFS) rely only on the RAID layer for redundancy.