[plug] large swap space and raid array
Denis Brown
dsbrown at cyllene.uwa.edu.au
Thu Jan 29 23:37:23 WST 2004
Dear PLUG list members,
Basic rule of thumb regarding Linux swap space is to have 2x the amount of
physical memory, according to the HOW-TOs. If multiple disks are
available, swap can be split across disks. I have a system with 2.5G
physical memory so theoretically I should be running about 5G of swap.
Some threads on Debian developers list mid last year talk of the un-wisdom
of >2G and of the auto-RAID-o'ing of multiple swap partitions.
But, the 2.4 kernel is supposed to baulk at >2G and in fact artificially
limit any larger space to 2G anyway. So if I set up a 5G swap, I'd waste
3G. No big deal, but if I don't have to, don't have to.
Okay, split up the swap into different partitions across different disks!
Err... yes, but here I have all my "disk eggs" in one RAID array "basket."
So I have this gigantic SCSI drive staring out at me, all /dev/sda of it.
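For what it's worth, my understanding is that if I *did* have separate
spindles, the kernel will interleave pages across multiple swap areas when
they're given equal "pri=" values in /etc/fstab -- a poor man's RAID-0 for
swap, no md device needed. Something like this (partition names purely
hypothetical, since everything here is really /dev/sda):

```shell
# /etc/fstab -- hypothetical two-disk example, NOT my actual layout.
# Equal pri= values make the kernel round-robin pages across both
# swap areas; higher-priority areas are used before lower ones.
/dev/sda2   none   swap   sw,pri=1   0   0
/dev/sdb2   none   swap   sw,pri=1   0   0
```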
I'm getting a feeling in my water that LVM as mentioned by Craig Ringer
and others on this list, might be the sane way of going. But I didn't
want any extra complications :-) Kernel 2.6 maybe, but I'm a stability
freak and 2.6 might be a little bleeding-edge for my liking. Not seen
any ref's yet for swap size limits under 2.6 but that may just be 'net
myopia on my part.
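If I did bite the bullet and go LVM, my understanding (untested on my
part, and the device/volume names below are hypothetical) is that carving
out a swap volume would look roughly like:

```shell
# Sketch only -- untested; /dev/sda3 stands in for a spare partition.
pvcreate /dev/sda3            # mark the partition as an LVM physical volume
vgcreate vg0 /dev/sda3        # create a volume group from it
lvcreate -L 2G -n swap0 vg0   # carve out a 2G logical volume for swap
mkswap /dev/vg0/swap0         # write the swap signature
swapon /dev/vg0/swap0         # enable it (add to /etc/fstab to persist)
```

The attraction being that the volume could be resized later without
repartitioning the whole array.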
Thoughts appreciated and TIA,
Denis
PS: application is medical imaging with delightfully large memory- and
cpu-hungry processes. Three drives in RAID-5 giving 280GB all up plus
another 140GB invested in the parity. ATM it's a write-back cache scheme
and this is probably safe as I have a grunty UPS on each of the major
boxes. Normal file system choice is ext3 (hopefully never need the
journal in anger). Given the sheer amount of physical memory, is a
smaller swap a safe choice (i.e. 2GB)?