[plug] Re: drive issues

Jon Miller jlmiller at mmtnetworks.com.au
Wed Apr 9 19:38:45 WST 2003



Jon L. Miller, MCNE, CNS
Director/Sr Systems Consultant
MMT Networks Pty Ltd
http://www.mmtnetworks.com.au

"I don't know the key to success, but the key to failure
 is trying to please everybody." -Bill Cosby



>>> craig at postnewspapers.com.au 6:33:46 PM 9/04/2003 >>>
> I should have been clearer about what I posted, but I thought this may help.  The important issue is multiple devices and queuing of instructions.

I'm well aware of the present and future limitations of ATA and SCSI as
interface formats. This machine does not need high disk performance, but
instead lots of capacity, capacity which would be prohibitively
expensive with SCSI drives. The machine does need RAID (every server
does) but for a variety of reasons doesn't yet have it.

> This scenario is common in desktop computers where you connect a single device to a single adapter and perform data transfers. There is practically no difference between the two interfaces, this holds for bandwidth as well as resource usage (CPU) as both interfaces use the most efficient way to transfer data, namely DMA. This means that there is no point in purchasing a generally speaking more expensive SCSI based system when the cheaper ATA interface would do an equally good job. 

Not quite - SCSI drives tend to be 10k RPM rather than 7.2k RPM, have
bigger buffers, and are much more thoroughly tested. Then again, unless
you like getting 36 GB for the price of 200 GB of ATA storage, it's not
for you.

JLM> Depends on your needs; today most places need both capacity and grunt, especially if the network is larger than 10 people.  It comes down to the data that is stored on the server (if all the data is documents - word processing, spreadsheets of reasonable size - a slower server would be fine).  The real problem comes when graphics files are being read or written.  If the server has an IDE subsystem, then (as I've seen myself) this can cause the server to perform poorly because the bandwidth is chewed up by the file size.  Granted, the file is broken up into x number of pieces, but it is all of those pieces that suck up the bandwidth as the data is broken up to be written to disk or loaded into memory.

> Connectivity: The ATA interface can only address two devices while SCSI can address eight devices (Narrow SCSI), 16 devices (Wide SCSI), 32 (Very Wide SCSI) or 126 (FireWire). There are also many peripherals available to SCSI only and not ATA. 

True. I've never had more than 8 drives in a box, and don't plan to
either. SCSI becomes important for /BIG/ stuff, but a small server
should have no need. I do think that RAID of some form is important, and
until recently the only real hardware RAID was SCSI based. Thankfully
for our wallets, this is no longer the case. Ever looked at the price of
three 120 GB SCSI drives intended for RAID 5 use, knowing that they could
be 4200 RPM for all you care performance-wise? Some applications don't
/need/ lots of grunt from the disks, just huge capacities.
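To put a number on the capacity side of that trade-off: RAID 5 spends one drive's worth of space on parity, whatever the array size. A quick sketch (the drive counts and sizes are just the figures from above):

```python
def raid5_usable_gb(drive_count: int, drive_gb: float) -> float:
    """RAID 5 stores one drive's worth of parity spread across the
    array, so usable capacity is (n - 1) drives, not n."""
    if drive_count < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drive_count - 1) * drive_gb

# Three 120 GB drives in RAID 5:
print(raid5_usable_gb(3, 120))  # 240 GB usable, 120 GB goes to parity
```

So the 3-drive array above yields 240 GB usable; adding a fourth 120 GB drive adds a full 120 GB because the parity overhead stays fixed at one drive.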


JLM>  Some of the servers I've had to build when I worked for a large IT firm had as many as 12 or more hot-swap drives and 2-4 DLT units in the one box.  Another system was a SAN unit with 1.3TB of 18GB drives in it, with 4-6 PSUs, hooked up to a server with 12 drives in it. Again, it comes down to what the needs are.  This unit was running multiple RAID sets (striping and redundancy) and was hooked to the server via a fibre channel card.  Even with huge-capacity drives, the server has to be able to deliver the data in a timely manner.  If it's just a huge amount of online storage capacity you need, then a SAN with a Gbit data transfer rate would be the answer.  Price is always the issue, but the bottom line is you get what you pay for.

> Bandwidth: The demand for high transfer rates in servers can not be met using current ATA interfaces based on the two devices per adapter limit and even if it could carry more devices there simply isn't enough bandwidth and flexibility available for serious server application. 

Actually, one ATA drive can't possibly flood the ATA bus, and you just
use multiple buses with one drive per bus. In 99% of cases two drives
can't flood ATA133 either.
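Back-of-envelope arithmetic on that claim. The ~45 MB/s sustained-transfer figure below is an assumption typical of 7200 RPM ATA drives of this era, not a measured number:

```python
# Can n drives saturate a shared ATA/133 channel?
ATA133_BUS_MBPS = 133       # shared channel bandwidth, MB/s
DRIVE_SUSTAINED_MBPS = 45   # assumed per-drive sustained rate (illustrative)

def bus_saturated(drives: int) -> bool:
    """True if the drives' combined sustained rate exceeds the bus."""
    return drives * DRIVE_SUSTAINED_MBPS > ATA133_BUS_MBPS

print(bus_saturated(1))  # False: one drive is nowhere near 133 MB/s
print(bus_saturated(2))  # False: 90 MB/s still fits
print(bus_saturated(3))  # True: three drives would exceed the channel
```

Burst transfers from the drive's cache can briefly hit the full interface rate, which is why the hedge is "sustained" - but for streaming workloads the arithmetic above is the limiting case.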

JLM> Wouldn't that depend on what is being stored or read, the number of instructions in the queue, the amount of memory, etc.?  I've seen poorly designed servers come to a halt when data is being saved to an IDE subsystem.  I supported a hotel that had a very poorly designed IDE server as their main server, and every time a batch of transactions came through there was a huge wait before it would free itself.
Once the subsystem was changed, this problem went away.

SCSI can only have one command or data transfer on the bus at a time,
and can be inefficient with many drives. As I'm finding with my new SATA
raid controller on another box, this is /not/ true with independent SATA
buses.

JLM>  But there is no advantage to using SATA currently, since the bus is PCI and the drives are still only ATA100-compatible - why pay more for the same?  I'll wait for the subsystem to change.

> Efficiency: The ATA devices lack the intelligence to perform command queuing as well as their SCSI counterparts which can queue up to 256 commands per logical unit. SCSI hard disk drives aimed at the extreme performance server market have had a lot of research and development time on optimizing seek patterns and rescheduling commands to minimize seek times and maximize throughput. This may not be evident by looking at desktop benchmarks but under heavy server loads, this is evident.
> Also, SCSI hard disk drives generally tend to be designed to work well in RAID-systems where I/O load is spread across multiple drives. 
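The queuing point in that quote is easy to illustrate with a toy simulation: servicing queued requests in elevator (sorted sweep) order cuts total head travel compared with first-come-first-served. The cylinder numbers here are invented for the example:

```python
# Toy model: total head travel over a request queue, FIFO vs. an
# elevator sweep (service requests above the head going up, then the
# rest going down). This is what command queuing lets a drive do.
def head_travel(start: int, requests: list[int]) -> int:
    travel, pos = 0, start
    for cyl in requests:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
start = 53                                    # current head position

fifo = head_travel(start, queue)
swept = head_travel(start,
                    sorted(c for c in queue if c >= start)
                    + sorted((c for c in queue if c < start), reverse=True))

print(fifo, swept)  # 640 299 - the sweep travels less than half as far
```

The real win shows up exactly where the quote says: under heavy concurrent load, when the queue is deep enough to give the reordering something to work with.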



> Dependability: Most high-end SCSI hard drives are quite expensive but there are good reasons for it. They can sustain higher temperatures and stay mechanically functional despite the thermal expansion of the metal parts, and generally have better build quality. The net result is that they are the natural choice for enterprise server applications. Connectors suitable for hot-swapping drives in RAID-systems are something only SCSI boasts, and help in maintaining large disk arrays where down-time is unacceptable.

SATA supports hot-swap quite nicely. Tried it out the other day on my
new dual Xeon box's RAID controller.

JLM> So I've heard, but give me a server with hot-swap drives, several hot-swap PSUs, and hot-swap NICs and we can rock and roll.  Once you've worked on one, it's pretty hard to accept anything less than that (IMO).  But like the man said, "champagne taste on beer pocket money".






