[plug] large swap space and raid array

Denis Brown dsbrown at cyllene.uwa.edu.au
Fri Jan 30 10:28:45 WST 2004


Thanks for those thoughts, Craig.

At 23:55 29/01/2004 +0800, you wrote:
>On Thu, 2004-01-29 at 23:47, Craig Ringer wrote:
> > On Thu, 2004-01-29 at 23:37, Denis Brown wrote:
> > > Okay, split up the swap into different partitions across different disks!
> > > Err... yes, but here I have all my "disk eggs" in one RAID array 
> "basket."
> > > So I have this gigantic SCSI drive staring out at me, all /dev/sda of it.
> >
> > That's actually going to be a problem.
>
>Sorry. I should perhaps say "that, depending on how much swapping you
>actually do, could be a performance problem."

As you say, with sensible limits I might be okay.   In many ways I suppose 
it's a case of trial and error - as long as data is safe, I can implement 
alternative schemes until the right combo appears.   I'll also check with 
colleagues in the imaging field to see what they do about this.   The SGI 
users would have LVM native anyway, iirc.
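
Craig's ulimit point is worth recording here too.   Per-user caps live in 
/etc/security/limits.conf (via pam_limits); a rough sketch, with the group 
name and the numbers purely illustrative:

    # /etc/security/limits.conf
    # Cap per-process address space (value in KB) so a runaway job
    # hits its own ceiling instead of dragging the box into swap:
    @users    hard    as       1048576
    # And cap how many processes one user may spawn:
    @users    hard    nproc    128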

The ideal might be to implement a totally separate drive for swap - there is a 
spare IDE channel going begging, now that I think of it...
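
If I go that way, it's mkswap plus one line in /etc/fstab.   Device name 
assumed below - I haven't checked what the spare channel comes up as:

    # /etc/fstab - dedicated swap on its own spindle (hypothetical /dev/hdc1).
    # Higher pri= values are used first; if two swap areas share a
    # priority, the kernel stripes across them.
    /dev/hdc1    none    swap    sw,pri=10    0 0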

> > > Given the sheer amount of physical memory, is a
> > > smaller swap a safe choice (i.e. 2GB?)
> >
> > Maybe ... but there's no harm in having lots anyway.
>
>BTW, my server at work has 2GB of RAM, and it's barely touched swap in
>its history. I set up sensible ulimits for users, so a runaway process
>can't thrash the swap, and it works wonderfully. Of course, I'm working
>with many light tasks, not a few heavyweight memory hogs, so it's a
>rather different situation.
>
>BTW - just out of curiosity - what's the hardware spec on the machine?

IBM x235; dual power supplies, more fans than I've seen in Hardly Normal's, 
with dual 2.8GHz Xeons, 2.5 GB RAM, ServeRAID 6i SCSI Ultra320 controller, 
three by 146.something GB U-320 10Krpm drives (server spec, not DeathStars) 
and three Gb NICs.   The three NICs will fire one each into the two 
workstations, with the third connecting to the outside world.   1.5kVA UPS on 
the server and workstation.

The workstations are also pretty spec'y, though single-CPU and with less 
emphasis on local disk storage.   The servers will do file serving and batch-mode
image analysis, along with running a database for the study data.   The 
workstations' duties mainly involve a lot of manual "cleaning" of MRI scan 
image data, normalising the data to anatomical landmarks (so all the brain 
images share a common orientation and baseline) and displaying the 
results.   As well, there will be the usual desktop applications and 
vanilla statistical packages, etc.

It's all a long way from the Xerox 820 word processor we started out with 
here :-)

What about backup?   Harumphf!   I wanted an LTO autoloader system directly 
attached - quite cheap at govt/educational discount rates - but that got 
knocked down.   So we're going to pipe data across a 100Mb network to a 
shared Windows-based system with attached LTO.   Yeah, right.   I'm also 
going to link the two systems together (one at Royal Perth, the other at
Fremantle Hospital) using Coda or somesuch so that in the early days, 
before the data volume grows too large, I can implement mutual off-site 
storage.   Have to throttle the link of course and run it at midnight 
:-)   There is only a 10Mb link between the hospitals.   Does that redefine 
the term "network saturation" I wonder?  :-)
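
The throttling itself should be the easy bit: rsync's --bwlimit from cron 
would do the strangling.   Something like this, with paths and hostname 
invented (the 500 is KB/s, comfortably under the 10Mb pipe):

    # /etc/crontab - push the day's image data off-site at midnight,
    # bandwidth-capped so the inter-hospital link stays usable.
    0 0 * * *   root   rsync -az --bwlimit=500 /data/mri/ fremantle:/backup/rph/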

If they get into fMRI (functional imaging) then the data volumes per 
subject just become unbelievable - terabytes of the stuff overall.   Now 
that could get interesting from a data management perspective.

Cheers,
Denis

