[plug] gigabit lan

Adrian Chadd adrian at creative.net.au
Fri Jan 11 12:49:22 WST 2008


On Fri, Jan 11, 2008, Kev wrote:

> >Cat 6 has better noise rejection than Cat 5 and 5E.
> >  
> 
> Cat6 now in use - no change in performance.

Well, unless you're rolling massive bundles of ethernet and/or
operating in a very electrically noisy environment, Cat6 won't
buy you anything. Properly terminated Cat5e will run gige fine
over its rated distance. I do this at home with >15 metre runs.

Cat6 isn't even shielded, mind, it's just twisted pair. The next
step up - the stuff running 10GE - is individually shielded
pairs, IIRC.

> One previous suggestion might be a problem though, and that's the pci 
> bus.  Both machines have AGP graphics (pci bus), onboard sound (pci bus) 
> and all the hard drives are PATA, which I /think/ is on the pci bus 
> also, isn't it(?).  The on-board nics are also on the pci bus I 
> believe.  The mobo with the nVidia nic is a LanParty nf3 Ultra-D with an 
> AMD Athlon64 3200+ CPU and 2gig of RAM.  The other machine (Marvell nic) 
> has an Asus_A7N8X-E_Deluxe mobo with an AMD Athlon 2500+ CPU and 1gig of 
> RAM.
> 
> You've all been very forthcoming with helpful info, which will take me a 
> while to work my way through, particularly given that I'm also involved 
> in moving to Albany at the mo.  I'll be off the air spasmodically, but 
> will eventually get back with results - if any.

Remember to start with standard experimental-design logic.
You want to vary one thing at a time and make sure you're not
suffering from hidden-variable problems.

How I approach network throughput issues:

* Check cabling ends, but unless you have L1 testing equipment you're SoL;
* You could try swapping cables or switches, or connecting the machines
  point-to-point rather than through a switch, but all you can measure at this
  point is "better than before" rather than "what's the maximum I can get"; and
  I've seen crazy broken ethernet cards that spoke fine via a gige switch but
  not when directly connected, so you need to test both!
* Run iperf or something similar, which is about as low to the metal as you're
  going to get (see the sketch after this list). Monitor CPU and interrupt
  usage, and make sure the CPU isn't maxing out during the test. If you're
  maxing out your CPU during the test then the CPU, not the network, may be
  what's limiting the throughput you measure;
* Try to eliminate the CPU as a bottleneck. Check the error counters
  (netstat -in; ifconfig; other commands where applicable - see the second
  sketch after this list) and see if there are any errors. They can hint at
  cabling faults or switch misconfigurations;
* Once you've controlled for CPU and OS-noticeable errors, you should be aiming
  for:
  + 300-400 Mbit/s FDX on standard 33 MHz/32-bit PCI, with full-size frames.
    That's roughly 35,000 packets a second each way (so 70,000 pps aggregate;
    400 Mbit/s over 1500-byte frames works out to about 33,000 frames/sec).
    Having a managed switch here helps. :)
  + Double that for 66 MHz PCI, and full line rate for faster PCI buses.
* If you're not seeing the above speeds and you're not maxing out your CPU,
  then:
  + Could be cabling;
  + Could be switch;
  + One of the two test machines can't hack the pressure :)
* At this point you've got a baseline that doesn't involve errors or a maxed-out
  CPU, so -now- I'd play with cabling and switches more thoroughly.
* You can set up iperf to do full-duplex and half-duplex tests - so run it in
  both directions and see if one direction is significantly slower than the
  other. That can give you clues about which end is artificially constraining
  the throughput.

That's a good place to start. Change one thing at a time. Realise that
your NICs are all potentially onboard pieces of crap. I keep some Intel gige
NICs here specifically to deal with crappy onboard motherboard NICs.




Adrian



