[plug] RAID Setup

Phillip Bennett phillip at mve.com
Wed Apr 9 17:14:26 WST 2008


>----- Original Message ----- 
>From: "Craig Foster" <Craig at fostware.net>
>To: <plug at plug.org.au>; "Phillip Bennett" <phillip at mve.com>
>Sent: Tuesday, April 08, 2008 4:46 PM
>Subject: RE: [plug] RAID Setup
>
>> -----Original Message-----
>> From: plug-bounces at plug.org.au [mailto:plug-bounces at plug.org.au] On
>> Behalf Of Phillip Bennett
>> Sent: Tuesday, 8 April 2008 7:49 PM
>> To: plug at plug.org.au
>> Subject: [plug] RAID Setup
>>
>> Hi everyone,
>>
>> I'm looking into building a RAID setup for our new Virtual Machine
>> host.
>> I'm trying to condense around 4 or so servers into one box with Xen,
>> using
>> ATA over Ethernet as the storage solution.
>>
>> I've found a great case and setup to use (via http://xenaoe.org) that
>> houses
>> 15 drives.  I can fill it up for very little cost, but then I'm
>> wondering
>> what kind of RAID to use...  We don't have huge space demands and I'm
>> sure
>> anything will be faster than our current storage solution.  However, I
>> *would* like some decent speed and redundancy.
>>
>> So: How would you set something like this up?  RAID5, or maybe some
>> groups
>> of RAID0+1 or RAID 1+0?  What would give me the best speed as well as
>> resiliency?
>
>RAID5+1 is our current choice for both speed and resiliency.
>Software RAID5 on hardware mirrored Initio or 3Ware SATA RAID
>controllers (with a hot spare on each controller).
>
>R5 in case a controller fails, and put the R1 on hardware (not that crap
>firmware) so mirroring is handled by accelerated/dedicated processors.
>
>RAID5 by controllers means adding more space is simply a matter of adding
>2xHDD + 1xHS HDD + 1xR1 controller and expanding the volume over the new
>"drive".
>
>Hot spares are a good choice as failures never seem to be nice enough to
>wait until a good time :P
>
>I'm still open to suggestions on better options on RAID controllers as
>3ware is expensive, initio isn't normally carried by every computer
>wholesaler, and Adaptec (rings bells with PHBs) are crap at the moment.
>
>> I was thinking of starting with 15 disks, using two for hot spares,
>> leaving
>> 13 disks between 2 RAID sets - one of 8 and one of 5 disks.  The
>> smaller set
>> would be RAID5 and used to host the virtual machine images and some
>> less
>> used files (PC images, drivers etc..) and the larger set would be for
>> our
>> company data.  I'm just at a loss for how to do it best.  At the price
>> I can
>> get the parts, I was actually thinking of mirroring the 15 disk
>> enclosure
>> entirely (ie Software RAID1) to shut the boss up about a single point
>> of
>> failure.
>
>Remember to mirror across multiple resources, meaning put in a NIC and
>either a dedicated switch or crossover cable for each mirrored AoE
>enclosure. NICs and switches die as well. :P
>
>>
>> Does anyone see any problems with this or have any suggestions?
>>
>> Thanks in advance,
>> Phil.
>>
>
>CraigF.

Thanks for the info, guys,

I've given this some thought and have the following (final?) setup in mind 
for you to poke holes in. :)

Disk Host:  (There will be two of these, both identical)

* Running Linux on a small form factor board inside the disk enclosure.  This 
machine will be doing nothing but hosting the disks.  The VM Host will 
access these disks as ATA over Ethernet devices.  There will be four bonded 
ethernet ports for speed and reliability: two ports onboard and two on a 
second card, for extra resiliency.  (There's a rough sketch of the bonding 
and AoE export commands after this list.)

* 2 x Hardware RAID5 across 6 x 500GB disks, each with a hot spare, giving a 
total of 2 x 2.5TB usable ((6 - 1) x 500GB per array) in two RAID5 arrays.  
Given 8 ports per RAID controller, assume two controllers, one for each 
array.
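
Just so it's concrete, here's roughly what I had in mind on the disk host 
side -- treat it as a sketch only, since the interface names, bonding mode 
and shelf/slot numbers are all placeholders and the real settings will 
depend on the switch and how the AoE traffic behaves:

    # Disk host: bond the four GigE ports (mode is a guess, needs testing)
    modprobe bonding mode=balance-rr miimon=100
    ifconfig bond0 up
    ifenslave bond0 eth0 eth1 eth2 eth3

    # Export each hardware RAID5 volume over AoE with vblade (from the
    # vblade package).  This enclosure exports as shelf 0; the second
    # enclosure would export as shelf 1 so the VM host can tell the
    # mirrors apart.
    vbladed 0 0 bond0 /dev/sda    # first controller's RAID5 array
    vbladed 0 1 bond0 /dev/sdb    # second controller's RAID5 array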


VM Host: (Only one for now, but more to come later on)

* ATA over Ethernet to the disk hosts.  Use Software RAID1 to mirror the 
matching volume from each enclosure for redundancy (rough mdadm sketch 
after this list).

* There will be 2 bonded (Gbit) ethernet ports for speed.  Hopefully this 
will be fast enough.
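
Roughly, on the VM host I'm picturing something like this (again just a 
sketch -- the bonding mode and the /dev/etherd names assume the first 
enclosure exports as shelf 0 and the second as shelf 1):

    # VM host: bond the two GigE ports and load the AoE initiator
    modprobe bonding mode=balance-rr miimon=100
    ifconfig bond0 up
    ifenslave bond0 eth0 eth1
    modprobe aoe
    aoe-discover                  # from aoetools; devices appear under /dev/etherd/

    # Mirror the matching LUN from each enclosure with md RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/etherd/e0.0 /dev/etherd/e1.0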

*********

I figure this would give the ultimate reliability.  If a disk goes, the RAID 
card will flag it and rebuild onto the hot spare.  If a controller goes, the 
other enclosure will take care of it.  If a whole enclosure goes, the other 
one will be fine.  If the VM host dies, I can boot the VMs on a separate 
machine.
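
For that last case, the idea is that each domU config only references the 
shared storage, so the same file can be started on whichever host is still 
alive.  Something like this (names and paths are made up for illustration):

    # /etc/xen/fileserver.cfg -- hypothetical guest living on the shared volume
    name       = "fileserver"
    memory     = 1024
    disk       = [ 'file:/srv/vmstore/fileserver.img,xvda,w' ]
    vif        = [ 'bridge=xenbr0' ]
    bootloader = "/usr/bin/pygrub"

    # then, on any surviving host:
    # xm create /etc/xen/fileserver.cfg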

What do you think?  Are there any huge bottlenecks I'm not seeing?  Will the 
software RAID1 be good enough?

IMPORTANT: I'm going to be using GFS2 as the file system so that more than 
one server can access the volumes at once (still as software RAID1).  Will 
this cause a problem for the RAID1?
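
For reference, this is roughly how I'd expect to set it up (the cluster 
name, filesystem name, mount point and journal count are just placeholders, 
and it assumes the cluster locking stack is already running on every node 
that mounts it):

    # two journals because two nodes will mount it; gfs2_jadd can add more later
    mkfs.gfs2 -p lock_dlm -t vmcluster:vmstore -j 2 /dev/md0
    mount -t gfs2 /dev/md0 /srv/vmstore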

Thanks in advance,
Phil. 



