[plug] Raid 0 software? hardware?

Jon L. Miller jlmiller at mmtnetworks.com.au
Sun Apr 3 07:55:58 WST 2005


Craig, I find your comments on WD vs Seagate interesting; my experience
has been the exact opposite. WD drives have been running my NetWare
server and several Linux boxes for five years, yet the Seagates seem to
fail at around the one-year mark.

JLM

On 2 Apr 2005 at 23:02, Craig Ringer wrote:

> On Sat, 2005-04-02 at 20:32 +0800, Cameron Patrick wrote:
> 
> > I believe there's a "dmraid" driver which provides a layer on top of
> > the standard Linux software RAID system, allowing it to access data
> > stored in the proprietary format that the "RAID" card firmware uses.
> > I've never used it so I'm not sure how it compares to pure Linux
> > software RAID, but I'd imagine it would be at least as good as any
> > drivers supplied by the card vendor.
> 
> If it's anything like the older 'md' driver, I imagine it'd be pretty
> good.
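> 
> I haven't tried dmraid myself either, so treat this as a sketch from
> the man page rather than tested advice, but the basic incantation
> looks to be roughly:
> 
>   dmraid -r    # list any "fakeraid" sets described by the firmware
>                # metadata on the disks
>   dmraid -ay   # activate all discovered sets via device-mapper
> 
> versus a plain 'mdadm --assemble --scan' for a native md array.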
> 
> > > I'm also kind of concerned that you mention that you're using RAID 0. 
> > > RAID 0 across four disks is a death sentence to your data, even with 
> > > high quality "enterprise" SCSI/SATA disks. With "consumer" level 
> > > ATA/SATA disks I'd expect total loss of all data on the array within a 
> > > year - at least going by my luck with disks.
> > 
> > It's worth noting that you should keep backups even if you are using
> > one of the "reliable" RAID levels. 
> 
> I couldn't agree more. Double disk failures happen. People's servers get
> stolen or catch fire. OSes go insane and decide that writing gibberish
> over the disk is fun. Filesystems get corrupted. RAID controllers fail
> and start corrupting writes. RAID won't save you from any of these.
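> 
> Even a dumb nightly rsync to a second machine beats no backup at all.
> Something like this (host and paths invented for the example, adjust
> to taste):
> 
>   rsync -a --delete /srv/data/ backuphost:/backup/data/
> 
> A plain mirror like that won't save you from a deletion you don't
> notice for a week, but it does cover the fire/theft/controller-gone-mad
> cases that RAID can't.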
> 
> > However losing 1 in 4 drives in a
> > year sounds a lot worse than my luck with cheapo ATA drives.
> 
> I agree - I still don't understand it, but it just keeps happening.
> I've had particularly terrible results with Western Digital disks. I've
> had three 120GB WDs at home - two of which have died. One was a warranty
> replacement for a previously failed WD 120GB. Each failure happened in a
> different system, and both systems were well cooled.
> 
> I've had so many WD disks die at work that I just call Austin and say
> "got another dead one, I'll send it in with the courier for the next
> order". Of the original 3 disk 120GB WD RAID array I had three fail (I
> was later able to confirm I got a defective batch of disks). The server
> now has an array of five 250GB WDs in it (4xRAID 5 + hot spare). I think
> I've lost three of those over time now. The server is well cooled and
> the disks are not under exceptionally heavy loads, so unless vibration
> is killing them (and there's not that much vibration...) or something I
> just don't know what could be going on. The 2x80GB Seagate disks the OS
> RAID 1 sits on have been dead reliable the entire time, as has the
> Maxtor in my home machine.
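> 
> (For the record, an array like that is a one-liner with mdadm. Device
> names invented for the example:
> 
>   mdadm --create /dev/md0 --level=5 --raid-devices=4 \
>         --spare-devices=1 /dev/sd[b-f]
> 
> and md will start rebuilding onto the spare automatically when a
> member dies.)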
> 
> So ... with a track record like this, yes, I do assume disks will die,
> and die quickly. These failures aren't confined to any particular
> system or site, and they don't appear to be related to cooling or power
> problems. The only pattern is the disk manufacturer.
> 
> Even with "normal" disk reliability, I wouldn't be too surprised to see
> one disk out of four die within a year. It's certainly not a chance I'd
> take with anything I cared about even slightly. I'd be reluctant to use
> RAID 0 even across "enterprise" SCSI disks, frankly ...
> it's just asking for trouble. RAID 0 is great for things like giant
> working areas for video editing, but I wouldn't use it to store anything
> important, ever.
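> 
> Quick back-of-envelope, assuming (purely for illustration) a 5% chance
> of any given disk dying within a year and independent failures - a
> four-disk stripe then loses everything with probability 1 - (1-p)^4:
> 
>   awk 'BEGIN { p = 0.05; printf "%.1f%%\n", (1 - (1-p)^4) * 100 }'
>   # prints 18.5% - nearly a one-in-five chance per year
> 
> and a bad batch can push the per-disk rate far higher than 5%.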
> 
> -- 
> Craig Ringer
> 
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://www.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.linux.org.au
