[plug] SATA & Debian
Craig Ringer
craig at postnewspapers.com.au
Thu Jan 15 01:18:10 WST 2004
I'll respond with a few comments, but it's late and I'm tired (can't
sleep, as usual). Of course, I'm no expert at the best of times.
Tim Bowden wrote:
> 1. SATA looks like it is no longer a goer. Thanks for the advice
> Craig. If I can get about the same performace from PATA with software
> raid then that's good enough.
It's likely that you will; I suspect your bottlenecks are more likely to
be the drives or PCI bus than the ATA interface. CPU and memory may also
play a big part if you're using software RAID, but I haven't used it
enough to really say.
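If you want to see where the ceiling actually is before spending money, a crude sequential-throughput check with dd is often enough. A sketch (ddtest.bin is just an example scratch-file name; run it on the filesystem you intend to serve from):

```shell
# Crude sequential write benchmark; dd's summary line reports the rate.
dd if=/dev/zero of=ddtest.bin bs=1M count=64
sync
# Read it back. The page cache will inflate this figure unless the file
# is larger than RAM, so raise count accordingly for an honest number.
dd if=ddtest.bin of=/dev/null bs=1M
rm -f ddtest.bin
```

Compare the per-drive figure against what the array and the network deliver and the bottleneck usually identifies itself.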
> 3. I realise the lan will be something of a bottleneck but nothing is
> going to be done about that yet. Most of the computers on the network
> are laptops, so the file transfers tend to be copying entire job
> directories to and from the server (up to 1 gig for the larger jobs but
> usually less than 50mb).
A single laptop is often capable of maxing out a 10/100 ethernet connection
- my laptop's disk manages ~16MB/s, and it's only a 4200rpm drive. That said -
it all comes down to how much it /really/ matters. After all, if you
wanted to eliminate the network bandwidth bottleneck you'd need:
- a 10/100 core switch with at least one gigabit port
- a gigabit NIC in the server attached to PCI-X, 64-bit/66MHz PCI, or
directly to the southbridge
- seriously fast disk storage, especially if being accessed by multiple
clients at the same time. Wouldn't be surprised if you needed at least 4
drive RAID5 or RAID10.
Somehow, I doubt it'll be worth the money.
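To spell out the arithmetic behind the bottleneck claim: 100Mbit ethernet moves at most about 12MB/s of payload, which the ~16MB/s laptop disk above already exceeds. Rough figures only:

```shell
# 100Mbit/s divided by 8 bits/byte ~= 12MB/s, before protocol overhead.
link_mbytes=$((100 / 8))
disk_mbytes=16   # the ~16MB/s laptop-disk figure mentioned above
if [ "$disk_mbytes" -gt "$link_mbytes" ]; then
    echo "One laptop disk (${disk_mbytes}MB/s) can saturate a ~${link_mbytes}MB/s link"
fi
```

So on a 10/100 LAN the wire, not the server's disks, is the first thing a single client will hit.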
> I am planning on using case fans to
> maximum effect.
You'll get at least one pre-installed in a good case. My home Lian Li
fulltower has 2x80mm filtered intake fans blowing across the drive cage
and two 80mm exhaust fans, in addition to the PSU fan. As my lounge gets
_really_ hot in summer, this is a good idea - and I usually disable the
rear exhaust fans for winter.
> is the standard cooling that comes
> with Intel & AMD box sets good enough for 24*7 operation?
I've never had an issue with them. After all, AMD and Intel don't want
warranty returns due to early failure of a CPU, do they? I suspect the
standard coolers only become an issue for overclockers or people
installing in appallingly ventilated environments.
> 5. While serving files will be the main use of this machine it will
> also double as an internal web and database server (& perhaps mail).
> Eventually this will start to put a significant load on the disks as use
> and content grows so HDD speed is important.
It depends a lot on the nature of the load. I have little experience
with the load types you're talking about, beyond the initial heavy
testing I did before deployment and occasional special conditions (full
system network backups, etc) with the big server at work. I have a few
ideas but do take them as just that - ideas, and second hand advice.
If you'll be streaming data from it to clients, doing database access,
random access from the web server, /and/ mail, I suspect you are likely
to hit performance issues (read latencies on random reads mainly) with
any single-volume drive setup. Certainly if you're doing all of this
fairly heavily. If your mail, DB, and web access is only light or your
streaming I/O is relatively infrequent then it may not be such an issue.
I've found that while our server at work performs fine with one or two
heavy read/write streams, as soon as we go above that the performance
begins falling significantly. Essentially it looks like the (3 disk)
RAID array has to start seeking around serving requests in turn -
certainly the total throughput falls significantly with each new I/O
stream above two or three. Additionally, if I try any kind of random I/O
(say heavy IMAP access from several clients) while doing streaming I/O
the random I/O performance can be appalling. My RAID5 write performance
is also _very_ poor compared to read performance - I hear that using 4+
disk RAID5 instead of 3 disk helps this issue a lot, though. Because of
this, I now keep our mail, databases, etc (/var basically) on the RAID1
"OS" volume and the big stuff on a separate RAID5 volume. With that
setup, I get a responsive OS, apps, db and mail server even when the
RAID5 volume is being thrashed by Production.
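For what it's worth, that split layout is simple to express with Linux software RAID. A sketch with mdadm - the device names and mount points are examples only, substitute your real partitions, and note these commands destroy whatever is on them:

```shell
# Hypothetical layout: hda/hdc mirror the OS and /var; four drives on
# separate channels (hde/hdg/hdi/hdk) form the bulk RAID5 volume.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
# Then in /etc/fstab: md0 carries / and /var (mail, databases),
# md1 is mounted somewhere like /data for the big streaming stuff.
```

Keeping each PATA drive on its own channel matters here - two busy drives sharing a cable will fight over it.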
You can go right up to a system with a RAID1 volume for OS, web, DB, and
mail plus a 4xRAID5 or RAID10 setup for streaming I/O - but are you ever
likely to need it? Will it be worth the money?
It really depends on when /eventually/ is, how clear an idea you have of
your expected loads, whether you will want a hot-spare later, what your
budget is now, etc. Perhaps that 2 disk RAID1 7200RPM system will be
good enough that it's not worth spending the significantly greater
amount of cash on a more "serious" machine. The POST ran well enough on
a single 1.2GHz Athlon with an 80GB 7200rpm disk (and good tape backups)
for a long time. Our move to all-digital publishing and thin clients
ended that one, though, as our storage needs went through the roof as
did our CPU and memory requirements on the server.
> 7. HDD's. My preference is for Seagate drives going off my own
> experience and comments from the list in the past, but if I'm going to
> go with PATA with 8mb cache then WD JB's are the most economical. What
> sort of experiences have people had with the latest Western Digital JB
> drives? I know WD went through a patch a while back where you wouldn't
> want to touch on of their drives but perhaps that was too long ago now
> (thinking back it may have even been four or five years ago.
I went through a massive rash of failures in them about a year ago and
haven't bought any since. That said, my experience appears not to have
been widespread and I may have bought a bad production run.
My core server at work is very happy running on Seagate Barracuda V
drives, and I get great results at home with a pair of Maxtor 120GB 8MB
cache drives. Both significantly outperform the single surviving (older)
WD JB I still have in my home machine.
Craig Ringer