What is RAID?

RAID (Redundant Array of Inexpensive Disks) is a set of storage technology standards for combining disk drives into a team, providing a high level of storage availability. RAID is not a backup solution. It is used to improve disk I/O performance and reliability (fault tolerance).

Hardware RAID vs. Software RAID

RAID can be deployed in both software and hardware. Software and hardware RAID are commonly compared on the following features:

Cost, complexity, write-back caching (BBU, battery backup unit), performance, overhead (CPU, RAM, etc.), disk hot swapping, hot-spare support, /boot partition support, open-source factor, faster rebuilds, and higher write throughput.

Can RAID Array Fail?

Yes. The entire RAID array can fail, taking down all your data (hardware RAID cards do die). Use tapes or other servers that can hold copies of the data but don't allow much interaction with it, and move your data offsite. Another option is to use two or three RAID cards combined together to protect your data; this ensures you can get your data back when one of your RAID cards dies.

 Types of RAID Levels:

RAID 0, RAID 1, RAID 5, RAID 1+0, RAID 0+1


RAID 0 (Striping)

  • Minimum number of hard disks: 2

Advantages:

  • High performance
  • Easy to implement
  • No parity overhead
  • Good read/write performance

Disadvantages:

  • No fault tolerance, because there is no redundancy
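To make the striping idea concrete, here is a minimal Python sketch of how RAID 0 might map logical blocks onto disks. The round-robin, one-block-at-a-time mapping is an assumption for illustration; real arrays stripe in larger chunks, but the placement idea is the same.

```python
# Minimal sketch of RAID 0 block placement (assumed round-robin striping).
# Real controllers stripe in larger "chunks", but the mapping idea is the same.

def raid0_location(block: int, n_disks: int) -> tuple[int, int]:
    """Return (disk index, block offset on that disk) for a logical block."""
    return block % n_disks, block // n_disks

# With 2 disks, logical blocks alternate between disk 0 and disk 1.
for block in range(4):
    disk, offset = raid0_location(block, n_disks=2)
    print(f"logical block {block} -> disk {disk}, offset {offset}")
```

Because consecutive blocks land on different spindles, sequential reads and writes are spread across all disks, which is where RAID 0's performance comes from.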


RAID 1 (Mirroring)


  • Minimum number of hard disks: 2 (2N). All data is written to both mirrored disks.


Advantages:

  • Fault tolerant: if one disk fails, data can be retrieved from the working disk with no data loss
  • Easy to recover data
  • High read performance
  • Easy to implement

Disadvantages:

  • Low write performance
  • Very costly

Suggested Users

Small databases and Critical applications
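The mirroring behaviour described above can be sketched in a few lines of Python. The `Raid1` class and its methods are hypothetical, purely for illustration: every write goes to all mirrors, and a read can be served by any surviving disk.

```python
# Minimal sketch of RAID 1 mirroring: writes go to every mirror,
# reads are served by any disk that has not failed.

class Raid1:
    def __init__(self, n_mirrors: int = 2):
        self.disks = [dict() for _ in range(n_mirrors)]  # block -> data
        self.failed = set()                              # indices of dead disks

    def write(self, block: int, data: bytes) -> None:
        for disk in self.disks:
            disk[block] = data          # identical copy on every mirror

    def read(self, block: int) -> bytes:
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                return disk[block]      # any surviving mirror will do
        raise IOError("all mirrors failed")

array = Raid1()
array.write(0, b"payload")
array.failed.add(0)          # simulate losing the first disk
print(array.read(0))         # still served from the surviving mirror
```

This also shows why reads are fast (any mirror can answer) while writes are slower (every mirror must be updated).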

RAID 5 (Striping with Distributed Parity)

Stripes the data at the block level across several drives, with parity distributed equally among the drives. The parity information allows recovery from the failure of any single drive.

  • Block-level striping
  • Minimum disk requirement: 3
  • Capacity calculation: N + 1 (N data disks plus one disk's worth of parity; e.g. N = 2)
  • Practical limit of about 15 disks, though up to 32 disks are theoretically possible


If 3 × 36 GB drives would hold the necessary data, then 4 × 36 GB drives would be needed to implement RAID 5 and maintain a total of 108 GB of available data space.
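The capacity arithmetic above can be sketched as a small helper; `raid5_usable_gb` is a hypothetical name used only for this example.

```python
# RAID 5 usable capacity: one disk's worth of space is consumed by parity,
# so usable space is (n_disks - 1) * disk_size.

def raid5_usable_gb(n_disks: int, disk_size_gb: int) -> int:
    if n_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (n_disks - 1) * disk_size_gb

print(raid5_usable_gb(4, 36))  # 4 x 36 GB drives -> 108 GB usable
```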


Advantages:

  • Good read performance
  • Fault tolerant and redundant because of the distributed parity
  • Can tolerate the loss of one drive
  • Provides more usable disk space than RAID 1

Disadvantages:

  • A disk failure has a medium impact on throughput
  • If two disks fail, data is lost
  • More complex to design

Suggested servers

Mid-size financial databases and applications

Parity

Parity is an error correction technique commonly used in certain RAID levels. It is used to reconstruct data on a drive that has failed in an array.

There are two types of parity bits: even parity bit and odd parity bit. An even parity bit is set to 1 if the number of ones in a given set of bits is odd (making the total number of ones even). An odd parity bit is set to 1 if the number of ones in a given set of bits is even (making the total number of ones odd).
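A quick Python sketch of the two parity-bit definitions above; `even_parity_bit` and `odd_parity_bit` are names made up for this example.

```python
# Even/odd parity bits for a value: the even-parity bit makes the total
# count of 1s even; the odd-parity bit makes it odd.

def even_parity_bit(value: int) -> int:
    return bin(value).count("1") % 2     # 1 exactly when the count of 1s is odd

def odd_parity_bit(value: int) -> int:
    return 1 - even_parity_bit(value)    # 1 exactly when the count of 1s is even

print(even_parity_bit(0b00000111))  # three 1s -> even parity bit is 1
print(odd_parity_bit(0b00000111))   # three 1s -> odd parity bit is 0
```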

 Parity Block

The parity block is used by certain RAID levels; redundancy is achieved through the use of parity blocks. If a single drive in the array fails, the data blocks and a parity block from the working drives can be combined to reconstruct the lost data.

Given the diagram below, where each column is a disk, assume A1 = 00000111, A2 = 00000101, and A3 = 00000000. Ap, generated by XORing A1, A2, and A3, will then equal 00000010. If the second drive fails, A2 will no longer be accessible, but it can be reconstructed by XORing A1, A3, and Ap:


A1 XOR A3 XOR Ap = 00000101
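The worked example above can be verified directly with Python's bitwise XOR operator: the parity block is the XOR of the data blocks, and any one lost block is the XOR of the remaining blocks with the parity.

```python
# Verify the parity example: Ap = A1 XOR A2 XOR A3, and a lost block
# is recovered by XORing the surviving blocks with the parity block.

a1, a2, a3 = 0b00000111, 0b00000101, 0b00000000
ap = a1 ^ a2 ^ a3                    # parity block

print(format(ap, "08b"))             # 00000010

recovered_a2 = a1 ^ a3 ^ ap          # "drive 2" failed; rebuild its block
print(format(recovered_a2, "08b"))   # 00000101
assert recovered_a2 == a2
```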

RAID 1+0 (Striped Mirroring)

RAID 10 is also called RAID 1+0

  • Also called a stripe of mirrors
  • Requires a minimum of 4 disks
  • Disk 1 and Disk 2 are mirrored as a group, and the mirrored groups are then striped; hence if Disk 1 fails, there is no disruption in data access.



Advantages:

  • High fault tolerance
  • High I/O performance
  • Faster rebuilds compared to RAID 0+1
  • Under certain circumstances, a RAID 10 array can sustain multiple simultaneous drive failures

Disadvantages:

  • Very expensive
  • High overhead
  • Very limited scalability


RAID 0+1 (Mirrored Striping)


  • RAID 01 or RAID 0+1 is also called mirrored striping: two (or more) stripe sets of several disks are mirrored onto each other.
  • Minimum disk requirement: 4



Advantages:

  • Fault tolerant
  • High I/O performance
  • Performance and availability are the same as RAID 10

Disadvantages:

  • One failing disk invalidates a whole stripe set; two disks failing on either side of the mirror render the volume unusable
  • Recovery takes longer, as the volume's whole content needs to be re-mirrored to the repaired stripe set








How does RAID 10 compare to RAID 5?

RAID 10 performs better than RAID 5 in both reads and writes, since it does not need to manage parity.

Cost is the only drawback of RAID 10 compared to RAID 5; otherwise it provides higher data security and performance.

How is RAID 1+0 different from RAID 0+1?

Suppose we have 20 disks with which to form a RAID 1+0 or RAID 0+1 array.

a) If we choose to do RAID 1+0 (RAID 1 first and then RAID 0), we divide those 20 disks into 10 sets of two. We then turn each set into a RAID 1 array and stripe data across the 10 mirrored sets.

b) If, on the other hand, we choose to do RAID 0+1 (i.e. RAID 0 first and then RAID 1), we divide the 20 disks into 2 sets of 10 each. We then turn each set into a RAID 0 array containing 10 disks and mirror those two arrays. So, is there a difference at all? The storage is the same, the drive requirements are the same, and based on testing there is not much difference in performance either. The difference is actually in the fault tolerance. Let's look at the two setups mentioned above in more detail:

RAID 1+0:

Drives 1+2 = RAID 1 (Mirror Set A)

Drives 3+4 = RAID 1 (Mirror Set B)

Drives 5+6 = RAID 1 (Mirror Set C)

Drives 7+8 = RAID 1 (Mirror Set D)

Drives 9+10 = RAID 1 (Mirror Set E)

Drives 11+12 = RAID 1 (Mirror Set F)

Drives 13+14 = RAID 1 (Mirror Set G)

Drives 15+16 = RAID 1 (Mirror Set H)

Drives 17+18 = RAID 1 (Mirror Set I)

Drives 19+20 = RAID 1 (Mirror Set J)


Now, we do a RAID 0 stripe across sets A through J. If drive 5 fails, only mirror set C is affected. It still has drive 6, so it continues to function and the entire RAID 1+0 array keeps functioning. Now, suppose that while drive 5 is being replaced, drive 17 fails; the array is still fine, because drive 17 is in a different mirror set. So, the bottom line is that in the above configuration at most 10 drives can fail without affecting the array, as long as they are all in different mirror sets.
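A small sketch of this fault-tolerance rule, assuming the exact pairing listed above (drives 1+2, 3+4, ..., 19+20): the array survives as long as no mirror set loses both of its drives. `raid10_survives` is a hypothetical helper used only for illustration.

```python
# RAID 1+0 survival check for the 20-disk layout above:
# the array is alive as long as no mirror pair loses both disks.

def raid10_survives(failed: set[int], n_disks: int = 20) -> bool:
    pairs = [(d, d + 1) for d in range(1, n_disks, 2)]  # (1,2), (3,4), ... (19,20)
    return all(not (a in failed and b in failed) for a, b in pairs)

print(raid10_survives({5, 17}))   # different mirror sets -> True
print(raid10_survives({5, 6}))    # both disks of mirror set C -> False
print(raid10_survives({1, 3, 5, 7, 9, 11, 13, 15, 17, 19}))  # 10 failures, one per set -> True
```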



In simple words


Fault tolerance is higher than in RAID 0+1, since either disk of any mirrored pair can fail without harming the volume (provided no mirrored pair loses both disks, of course). In an "ideal" failure, half of all participating disks, one on either side of each mirror, can fail without the volume becoming unavailable. Recovery time is also lower compared to RAID 0+1, since only one disk pair needs to be re-synced.


Now, let’s look at what happens in RAID 0+1:

 RAID 0+1:

Drives 1+2+3+4+5+6+7+8+9+10 = RAID 0 (Stripe Set A)

Drives 11+12+13+14+15+16+17+18+19+20 = RAID 0 (Stripe Set B)

Now, these two stripe sets are mirrored. If one of the drives, say drive 5, fails, the entire stripe set A fails. The RAID 0+1 array is still fine, since we have stripe set B. If drive 17 then also goes down, the array is down. One can argue that this is not always the case and that it depends on the type of controller you have: a smart controller could continue to stripe to the other 9 drives in stripe set A when drive 5 fails, and if drive 17 later fails, it could use drive 7, since drive 7 would hold the same data. If a controller can do that, then theoretically RAID 0+1 would be as fault tolerant as RAID 1+0. Most controllers do not do that, though.
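The same style of sketch for RAID 0+1, assuming the typical controller behaviour described above (any member failure degrades the whole stripe set): the array dies as soon as both stripe sets have at least one failed disk. `raid01_survives` is a hypothetical helper for illustration.

```python
# RAID 0+1 survival check for the 20-disk layout above: stripe set A is
# disks 1-10, stripe set B is disks 11-20. Assuming a typical controller
# that fails a whole stripe set on any member failure, the array survives
# only while at least one stripe set is fully intact.

def raid01_survives(failed: set[int]) -> bool:
    set_a_degraded = any(d in failed for d in range(1, 11))
    set_b_degraded = any(d in failed for d in range(11, 21))
    return not (set_a_degraded and set_b_degraded)

print(raid01_survives({5}))       # only stripe set A degraded -> True
print(raid01_survives({5, 17}))   # one failure on each side -> False
```

Comparing this with the RAID 1+0 check makes the fault-tolerance difference concrete: the failure set {5, 17} kills RAID 0+1 but leaves RAID 1+0 running.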

 In simple words

Fault tolerance is lower than with RAID 1+0: one failing disk invalidates a whole stripe set, and two disks failing on either side of the mirror render the volume unusable. Furthermore, recovery takes longer, as the volume's whole content needs to be re-mirrored to the repaired stripe set.




