How to set up RAID1 on Ubuntu 10.04.1 Server

How to create a RAID array using Ubuntu software RAID, covering RAID 0, 1, 5 and 6.

Ubuntu 9.10 provides a very easy way to build a RAID array. You can set it up entirely from the Ubuntu installer's interface; no CLI is required anymore!

Note: Be aware of the fragile state of RAID support in Ubuntu and what it takes to get a reliable RAID setup (https://wiki.ubuntu.com/ReliableRaid); that said, most of these issues have been fixed since Ubuntu 8.10.


RAID is a method of making multiple hard drives act as one. There are two purposes of RAID:

  • Expanding drive capacity: RAID 0. If you have 2 x 500 GB HDDs, the total space becomes 1 TB.
  • Preventing data loss in case of drive failure: RAID 1, RAID 5, and RAID 6. You can combine RAID 0 with another RAID level, e.g. RAID 0 + 1 becomes RAID 10.

There are 3 ways to create RAID:

  1. Software RAID: the RAID is created entirely by software.
  2. Hardware RAID: a dedicated controller is used to build the RAID. Hardware RAID is faster, adds no CPU overhead, and can be used with any OS.
  3. FakeRAID: since hardware RAID is very expensive, many motherboard manufacturers use multi-channel controllers with special BIOS features to perform RAID. This implementation is faster than software RAID. Read FakeRaidHowto for details.

The RAID software included with current versions of Linux (and Ubuntu) is based on the ‘mdadm’ driver and works very well.


After a successful install, you should also manually fix two shortcomings in the default configuration:

  • Install the GRUB boot loader on the second drive (this step is not needed if you use Ubuntu 9.10)
  • Update the startup script to detect a failed drive


Install Ubuntu until you get to partitioning the disks


Partitioning the disk

Warning: the /boot filesystem cannot use any softRAID level other than 1 with the stock Ubuntu bootloader. If you want to use some other RAID level for most things, you’ll need to create separate partitions and make a RAID1 device for /boot.

Warning: this will remove all data on hard drives.

1. Select “Manual” as your partition method


2. Select your hard drive, and agree to “Create a new empty partition table on this device?”

(Screenshots: ubuntu_raid_02.png, ubuntu_raid_03.png)

3. Select the “FREE SPACE” on the 1st drive, then select “Automatically partition the free space”

(Screenshots: ubuntu_raid_04.png, ubuntu_raid_05.png)

4. Ubuntu will create 2 partitions: / and swap, as shown below:


5. On the / partition, select “bootable flag” and set it to “on”


6. Repeat steps 2 to 5 for the other hard drive

As you can see, Ubuntu 9.10 makes RAID creation very easy: there is no need to define partitions manually anymore! Ubuntu 9.10 also uses ext4, the latest Linux filesystem.
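If you ever need to script the same layout outside the installer, the steps above can be sketched with parted. This is only a sketch: the device names and the 95%/100% split between / and swap are assumptions, not what the installer actually computes.

```shell
# Sketch: reproduce the installer's layout (one / partition plus swap) on a
# disk and flag both the RAID membership and the bootable flag from step 5.
# WARNING: this destroys all data on the target disk.
prep_disk() {
    d="$1"                                   # e.g. /dev/sda (assumed name)
    parted -s "$d" mklabel msdos             # new empty partition table
    parted -s "$d" mkpart primary 1MiB 95%   # future / (RAID member)
    parted -s "$d" mkpart primary 95% 100%   # future swap (RAID member)
    parted -s "$d" set 1 raid on             # mark partition 1 for RAID
    parted -s "$d" set 1 boot on             # bootable flag, as in step 5
}
# Run as root once per drive in the mirror:
#   prep_disk /dev/sda
#   prep_disk /dev/sdb
```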

Configuring the RAID

  1. Once you have completed your partitioning, select “Configure Software RAID” on the main “Partition Disks” page
  2. Select “Yes”
  3. Select “Create new MD drive”
  4. Select the RAID type: RAID 0, RAID 1, RAID 5 or RAID 6
  5. Enter the number of devices: RAID 0 and RAID 1 need 2 drives, RAID 5 needs 3, and RAID 6 needs 4.
  6. Enter the number of spare devices; enter 0 if you have no spare drive.
  7. Select which partitions to use. Generally they will be sda1 and sdb1, or hda1 and hdb1. The numbers will usually match, and the different letters denote different hard drives.
  8. At this point the installation may become unresponsive; this is the hard drives already syncing. Repeat steps 3 to 7 for each pair of partitions you have created.
  9. Once done, select “Finish”.


Ubuntu 9.10 will automatically format your partitions.
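The installer drives mdadm for you behind the scenes. On an already-running system, the same RAID1 creation can be sketched from the command line; the device names /dev/md0, /dev/sda1 and /dev/sdb1 below are assumptions to adjust for your disks.

```shell
# Sketch: create a 2-disk RAID1 array by hand, roughly what the installer does.
make_mirror() {
    # --level=1: mirroring; --raid-devices=2: two members, no spares
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0                                # format the new array
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # remember it at boot
    update-initramfs -u                               # rebuild the initramfs
}
# Run as root:  make_mirror
```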

Boot Loader

Several problems were reported with previous versions of Ubuntu, but Ubuntu 9.10 has already fixed them. In case your next HDD won’t boot, simply install GRUB on it:

  • # grub-install /dev/sdb
  • # grub-install /dev/sdc

Boot from Degraded Disk

If the default HDD fails, RAID will ask you whether to boot from the degraded disk. As a best practice, especially if your server is in a remote location, you may want to make this happen automatically.

Since Ubuntu 8.10 there is a feature to boot automatically if the default RAID disk fails. Simply:

  1. Edit the file /etc/initramfs-tools/conf.d/mdadm
  2. Change “BOOT_DEGRADED=false” to “BOOT_DEGRADED=true”


  • Additionally, this can be specified on the kernel boot line with bootdegraded=[true|false]
  • You can also run # dpkg-reconfigure mdadm rather than editing the file by hand!
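The edit to /etc/initramfs-tools/conf.d/mdadm can also be done non-interactively. A minimal sketch using sed, written as a function over an arbitrary file path so it can be tried safely:

```shell
# Sketch: flip BOOT_DEGRADED from false to true in an mdadm initramfs conf file.
enable_boot_degraded() {
    conf="$1"    # on Ubuntu this is /etc/initramfs-tools/conf.d/mdadm
    sed -i 's/^BOOT_DEGRADED=false/BOOT_DEGRADED=true/' "$conf"
}
# On a live system, as root:
#   enable_boot_degraded /etc/initramfs-tools/conf.d/mdadm
#   update-initramfs -u    # apply the change to the initramfs
```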

Now you have completed building a RAID array in just a few minutes!

Test your RAID now!

The most important part of building a RAID array is … TESTING whether it works! Simply follow these steps:

  1. Shut down your server
  2. Remove the power and data cables from your first drive
  3. Start your server and see whether it can boot from the degraded disk!
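If you would rather not pull cables, mdadm can simulate the failure in software. A sketch, with /dev/md0 and /dev/sda1 as assumed names:

```shell
# Sketch: mark one mirror member as faulty and watch the array go degraded.
simulate_failure() {
    mdadm --manage /dev/md0 --fail /dev/sda1    # pretend sda1 just died
    cat /proc/mdstat                            # status should show a "_" now
    # Undo the test: remove the member, re-add it, and let the array resync.
    mdadm --manage /dev/md0 --remove /dev/sda1
    mdadm --manage /dev/md0 --add /dev/sda1
}
# Run as root:  simulate_failure
```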


Swap space doesn’t come up, error message in dmesg

Provided the RAID is working fine, this can be fixed with:

* sudo update-initramfs -k all -u

Using mdadm

Checking the status of your RAID

Two useful commands to check the status are:

* cat /proc/mdstat

This will show output similar to:

  Personalities : [raid1] [raid6] [raid5] [raid4]
  md5 : active raid1 sda7[0] sdb7[1]
        62685504 blocks [2/2] [UU]
  md0 : active raid1 sda1[0] sdb1[1]
        256896 blocks [2/2] [UU]
  md6 : active raid5 sdc1[0] sde1[2] sdd1[1]
        976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

From this information you can see that the available personalities on this machine are raid1, raid4, raid5 and raid6, which means this machine is set up to use RAID devices in any of those configurations.

You can also see in the three example meta devices that there are two raid 1 mirrored meta devices. These are md0 and md5. You can see that md5 is a raid1 array and made up of disk /dev/sda partition 7, and /dev/sdb partition 7, containing 62685504 blocks, with 2 out of 2 disks available and both in sync.

The same can be said of md0 only it is smaller (you can see from the blocks parameter) and is made up of /dev/sda1 and /dev/sdb1.

md6 is different in that we can see it is a RAID 5 array, striped across 3 disks. These are /dev/sdc1, /dev/sde1 and /dev/sdd1, with a 64k “chunk” size, which is basically a “write” size. Algorithm 2 shows it uses write algorithm pattern 2, which is “left disk to right disk” writing across the array. You can see that all 3 disks are present and in sync.

* sudo mdadm --query --detail /dev/md*

(where * is the array number)
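The `[UU]` status field is easy to check from a script. A minimal sketch that reads mdstat-formatted text on stdin (so it can be tried against a saved copy) and reports whether any array is missing a member:

```shell
# Sketch: print DEGRADED if any md status field (e.g. "[UU]") contains "_",
# which marks a missing or failed member; print OK otherwise.
# Reads /proc/mdstat-style text on stdin.
check_degraded() {
    if grep -q '\[[U_]*_[U_]*\]'; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}
# On a live system:  check_degraded < /proc/mdstat
```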

Disk Array Operation

Note: You can add, remove disks or set them as faulty without stopping an array.

1. To stop an array, type:

  • $ sudo mdadm --stop /dev/md0
  • Where /dev/md0 is the array device.

2. Remove a Disk from an Array

  • $ sudo mdadm --remove /dev/md0 /dev/sda1
  • Where /dev/md0 is the array device and /dev/sda1 is the faulty partition.

3. Add a Disk to an Array

  • $ sudo mdadm --add /dev/md0 /dev/sda1
  • Where /dev/md0 is the array device and /dev/sda1 is the new partition.
  • Note: This is not the same as “growing” the array!

4. Start an Array: to reassemble (start) an array that was previously created:

  • $ sudo mdadm --assemble --scan
  • mdadm will scan for defined arrays and start assembling them. Use this to track the status:
  • $ cat /proc/mdstat


Thanks to the Ubuntu 9.10 team for making RAID building so easy.

Source | Help.ubuntu.com

Second howto: http://blog.foobaria.com/2010/05/installing-ubuntu-1004-desktop-with.html


