[plug] 2.4.24 mremap root exploit

Craig Ringer craig at postnewspapers.com.au
Thu Feb 19 07:31:52 WST 2004


On Thu, 2004-02-19 at 06:46, William Kenworthy wrote:
> This is the second 2.6 I've tried, and the slowest yet (2.6.2 with mm
> patches, other was vanilla).  It very much depends on the type of
> loading, but for jobs that take a week or more at 100% cpu, an extra
> couple of days is a problem.

I can see how that could be an issue, yeah.

Hmm... well, we don't really run any seriously large CPU-bound batch
jobs here beyond compiles, so I don't expect to notice a significant
difference one way or the other. The core server here is more affected
by multitasking efficiency and I/O performance.

Our I/O stalling issues under heavy write loads still appear to be
present. Writes appear to be somewhat faster and generally bog the
system less, but they still cause long delays in read service. I gather
2.6 has pluggable I/O schedulers, so some investigation may be needed
there.
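
If I do dig into that, my understanding is that on 2.6 the scheduler is
picked at boot with the elevator= parameter - so something like the
following in the bootloader config. The kernel version, root device and
paths below are just placeholders, not a tested recipe:

  # grub: append to the kernel line
  kernel /boot/vmlinuz-2.6.2-mm1 root=/dev/sda1 ro elevator=deadline

  # lilo: add to the image stanza in /etc/lilo.conf
  append="elevator=deadline"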

The real issue is that I'm seeing long read service delays on a
different disk array from the one being written to - and I just don't
think there's an excuse for that. If I'm writing heavily to /home, app
launch times go through the roof if the app isn't cached in memory -
despite the fact that /usr etc are on a separate array. We're talking
waiting for an xterm to appear, here. Even listing /tmp took 5 seconds
while I was writing a 2GB dummy file to /home. 
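
For anyone who wants to reproduce that, this is roughly the test - the
paths and sizes are specific to this box, so adjust to taste:

  # heavy write to /home (on the RAID5 array), in the background
  dd if=/dev/zero of=/home/tmp/dummy bs=1M count=2000 &

  # meanwhile, time reads that should only touch the other array
  time ls /tmp
  time cat /usr/bin/xterm > /dev/null   # only meaningful if not cached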

Oddly, it doesn't seem to be as bad the other way around - writing to
the RAID1 array while reading from the RAID5. I guess it could be a
3ware driver issue rather than a core kernel issue - I don't have any
other storage devices to test with. It's not as bad under 2.6 as it was
under 2.4, though - bad, but no longer absolutely awful.

WRT the write speedup, BTW, further testing indicates that the kernel
must be buffering writes in memory now. A 1GB write to our RAID 5 array
(dd if=/dev/zero of=test bs=1M count=1000) took about 8 seconds to
complete, which is just absurd. Gkrellm (and continued poor read
performance) indicates that the write continues for a fair few seconds
after dd terminates.
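
To get a number that isn't mostly write-back cache, the next step is
probably to time the write plus the flush to disk rather than just dd
returning - something along these lines (same caveat about this being a
live box):

  # time dd *and* the sync, so buffered data is actually on disk
  time sh -c 'dd if=/dev/zero of=test bs=1M count=1000 && sync'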

I'm currently playing with Bonnie++ to see what thrashing the system
turns up. As the box is running live services, the benchmarks won't be
fair, but they should be somewhat informative nonetheless.
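
For the record, the sort of invocation I'm playing with - directory,
size and user are just what suits this box, and the -s figure should be
at least twice RAM so the page cache can't hide the I/O:

  # run as an unprivileged user against the RAID5 array, 2GB working set
  bonnie++ -d /home/tmp -s 2048 -u nobody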

Craig Ringer



