[plug] RAID Setup

Patrick Coleman blinken at gmail.com
Tue Apr 8 23:40:19 WST 2008


On Tue, Apr 8, 2008 at 7:48 PM, Phillip Bennett <phillip at mve.com> wrote:
> Hi everyone,
>
>  I'm looking into building a RAID setup for our new Virtual Machine host.
>  I'm trying to condense around 4 or so servers into one box with Xen, using
>  ATA over Ethernet as the storage solution.
>
>  I've found a great case and setup to use (via http://xenaoe.org) that
> houses
>  15 drives.  I can fill it up for very little cost, but then I'm wondering
>  what kind of RAID to use...  We don't have huge space demands and I'm sure
>  anything will be faster than our current storage solution.  However, I
>  *would* like some decent speed and redundancy.

As it happens, I'm setting up something similar at work at the
moment. The plan (which I mentioned in the iSCSI talk I gave a few
months ago) is to have two servers with 24 x 1TB drives each,
mirroring over the network using DRBD. We've bought one of them, so I
took the opportunity to do some benchmarking of the different
combinations of RAID levels.
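
For the curious, the DRBD side of the plan is just a resource
definition along these lines - a minimal sketch only, with made-up
hostnames, addresses and device names:

resource vmstore {
  protocol C;
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/md1;        # the local RAID array
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/md1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}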

The disks were 1TB Western Digital SATA2 HDDs, as found at:
<http://tinyurl.com/5k655n>. Note that after reading around I wouldn't
buy these drives again; I'd probably try the 1TB Samsungs instead.
They are connected to three 3ware 9650SE-8LPML RAID controllers.
Again, for the next system I'd probably get Areca controllers instead,
as their performance is (apparently) better.

One aim of the system is to be resilient to a controller failure, so
the arrays mentioned below are constructed across the three
controllers; e.g. a 3-drive array takes one disk from each controller,
and a 6-drive array takes two disks from each. A 2-drive array is
arranged so that each of its disks is on a different controller.
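
To give an idea of what that looks like in practice, a 3-drive RAID5
across the controllers is put together with mdadm something like this
(the /dev/sdX names are made up - they depend on how the 3ware cards
export the disks, which are configured as individual units rather
than hardware arrays):

# one disk from each of the three controllers
mdadm --create /dev/md2 --level=5 --raid-devices=3 \
    /dev/sda /dev/sdi /dev/sdq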

The system is a dual-quad-core 2.33GHz Xeon with 8GB of DDR-667 RAM.

I was testing with sequential writes directly to the block device,
writing much more data than the system's memory (8GB) to eliminate
the page cache, though it appears the page cache isn't used anyway
when you're not writing via a filesystem. The command used was:

dd if=/dev/zero of=/dev/md1 bs=1M count=50000

which should write 50G of zeros to the RAID device being tested. I
tried testing with software that does random writes for a more
real-world benchmark but got some weird results.
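
If anyone wants to have a go at the random-write side themselves,
something like fio should do it - this is only a sketch, not
necessarily the tool or settings I used, and note it writes straight
over the block device:

# random 4k writes direct to the block device for two minutes
fio --name=randwrite --filename=/dev/md1 --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=120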

The system was Debian Etch, with a stock 2.6.18 distro kernel. I tried
a custom 2.6.24 kernel but got similar or slightly lower write speeds.

The 'storsave profile' on the RAID cards was set to 'Performance' for
most of these tests, which disables the write-cache journal and thus
can lead to data loss if you lose power to the server (even with a
battery on the RAID controllers). You probably wouldn't want to do
this on a proper production system. Re-enabling the write journal
dropped the speeds by about 10-20%, iirc.
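
From memory the tw_cli incantation for that is along the lines of the
following, run once per unit - the controller and unit numbers are
examples, so check 'tw_cli show' for your own layout:

# set the storsave policy on unit 0 of controller 0 to 'performance'
tw_cli /c0/u0 set storsave=perform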

All of this was software RAID; I didn't benchmark the hardware RAID
as it wouldn't let us survive an entire controller failure (3ware
doesn't let you build an array across controllers on this card).

Anyway, the numbers:

6-drive RAID6:  185-208MB/s write
RAID6+0 (stripe of four 6-drive RAID6 arrays): 320MB/s write
3-drive RAID5: 113-114MB/s write
RAID5+0 (stripe of eight 3-drive RAID5 arrays): 303MB/s write
24-drive RAID: 890MB/s
2-drive RAID1: 70MB/s (ish, don't have that figure here)
RAID1+0: 825MB/s (ditto).

So basically, if you want the speed and can sacrifice half your raw
storage (a sticking point with my boss atm :), go with RAID1+0.
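
For reference, the RAID1+0 layout is just nested md devices - roughly
like this, with made-up device names, repeating the mirror step for
all twelve pairs (mdadm's native --level=10 would also work, but
nesting keeps the controller layout explicit):

# each mirror has its two disks on different controllers
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda /dev/sdi
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdj
# ...continue up to /dev/md21, then stripe across the twelve mirrors
mdadm --create /dev/md1 --level=0 --raid-devices=12 \
    /dev/md1[0-9] /dev/md2[01]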

Also interesting is how RAID6 is slightly faster than RAID5; I would
guess that the benefit of having more spindles in each RAID set
outweighs the cost of the extra parity calculations.

I did test RAID 0+1 (a mirror of two 12-drive RAID0 arrays) but
didn't write the figure down; from memory the speed wasn't fantastic.

Cheers,

Patrick


-- 
http://www.labyrinthdata.net.au - WA Backup, Web and VPS Hosting


