<div dir="ltr"><div>heh yeah LACP/balance are more trouble than their worth<br><br><a href="http://streakwave.com.au/store/ubiquiti-networks/edgeswitch/es-24-250w-edgeswitch-24-gigabit-ports-af-at-24v-poe-250-watt-rackmount">http://streakwave.com.au/store/ubiquiti-networks/edgeswitch/es-24-250w-edgeswitch-24-gigabit-ports-af-at-24v-poe-250-watt-rackmount</a><br><br></div>their pretty cheap btw, not quite sub 500 tho<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 17 November 2014 20:02, Patrick Coleman <span dir="ltr"><<a href="mailto:blinken@gmail.com" target="_blank">blinken@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hey Paul,<br>

It's been a while since I configured this, but I wanted to elaborate
on what Brad was saying before you go out and spend any money -

On Mon, Nov 17, 2014 at 10:55 AM, Paul Del <p@delfante.it> wrote:
> I kept it pretty simple because as long as it's faster than a single port I
> would be happy
> If I can get the speed of 2x gigabit ports that would be excellent

You won't get this with balance-rr and a cheap switch. balance-rr != LACP:

- balance-rr just blats _outgoing_ packets from the server round-robin
over the bonded link; LACP is not used. For 2x1Gb/s links you will
achieve 2Gb/s _outbound_; the switch will be terribly confused by this
as it will see the same MAC on several interfaces, and cheaper models
will probably just dump all traffic destined for the server down one
of the bonded links only. Even cheaper switches (and some expensive
ones) will get really confused and you'll start losing packets.

On a decent managed switch you should be able to configure some sort
of algorithm based on a hash of the source/destination addresses and
TCP ports so (for traffic towards the server) each _TCP session_ is
switched down a different bonded link.

You will need to configure this manually on both the server and (if
you require inbound traffic balancing) the switch. The key advantage
to balance-rr however is that the switch doesn't have to be involved;
if you only require outbound traffic balancing it sometimes works well
even with unmanaged switches.
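
For reference, the server end of balance-rr is just the Linux bonding
driver. A rough, untested sketch using iproute2, assuming your NICs are
eth0/eth1 and picking an arbitrary address:

  # load the bonding driver and create a round-robin bond
  modprobe bonding
  ip link add bond0 type bond mode balance-rr miimon 100
  # slaves have to be down before they can be enslaved
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set bond0 up
  ip addr add 192.168.1.10/24 dev bond0   # substitute your own address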

If a link fails and one side doesn't notice - the classic case being a
fibre media converter between the two ends - traffic will continue to
be blatted out that interface and every second outbound packet will be
lost.

This gets you 2Gb/s outbound from the server, and at most 1Gb/s
inbound for a given TCP session (eg. file download from a NAS, but
you'll be able to run 2x 1Gb/s downloads simultaneously).
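
You can see that behaviour with a couple of parallel TCP streams - a
quick sketch with iperf3 (hypothetical hostname, assumes iperf3 is
installed on both ends):

  # on one end (eg. the NAS)
  iperf3 -s

  # on the other end: two parallel streams, then the same in reverse
  iperf3 -c nas.local -P 2
  iperf3 -c nas.local -P 2 -R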

- LACP is the Link Aggregation Control Protocol, key word here being
protocol. The idea here is that you put the Linux bonding driver into
802.3ad mode[1] and then it sends discovery packets down the bonded
interfaces. The managed switch on the other end detects this, and if
LACP is enabled an aggregate link will be negotiated between the
server and the switch. You don't need to manually specify the
interfaces on the switch end, and if one link fails both sides will
agree to not transmit data down that link. This is convenient if
you're cleaning up the server room and need to reroute cables (I kid,
I kid, that's what active/backup application failover was invented
for).
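
On the Linux side that's essentially the same bond setup with a
different mode - again an untested sketch, and the matching
port-channel/LAG configuration on the switch is entirely
vendor-specific:

  # 802.3ad bond; hash policy here is an assumption (layer3+4 = IPs + ports)
  ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast \
      xmit_hash_policy layer3+4
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set bond0 up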

The algorithm on both ends is generally a hash on the
source/destination addresses and TCP ports as above, so in both
directions you will get 1Gb/s for any given TCP session, but if you
have several TCP sessions they will in most cases be balanced across
both links.
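
Once the bond is up, the bonding driver's status file shows whether
LACP actually negotiated and what each slave link is doing:

  cat /proc/net/bonding/bond0   # mode, hash policy, LACP partner info, per-slave state
  ip -d link show bond0         # similar detail via iproute2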

So, the key point here is that the only thing that will get you 2Gb/s
in every case is a 10Gb/s network adapter :) LACP, balance-rr etc were
designed for a server with several hundred clients - with a few
hundred un-natted TCP connections LACP will (generally) balance
traffic nicely across a pair of 1Gb/s links in both directions (lots
of qualifiers in that sentence ;).

I wasn't kidding on the 10Gb/s adapter front either - a pair of PCIe
adapters (one for your desktop, one for the NAS) + transceivers + a
fibre lead may well come in under $500 if you shop around[2].

Cheers,

Patrick


1. https://www.kernel.org/doc/Documentation/networking/bonding.txt
2. http://www.ebay.com/bhp/10gb-network-card
<div class="HOEnZb"><div class="h5">_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" target="_blank">http://www.plug.org.au/membership</a><br>

--
Regards,

Adon Metcalfe