Closer Look: EtherChannel Load Balancing

Overview

  • EtherChannel performs load balancing of traffic across links in the bundle.
  • Traffic is not necessarily distributed equally between all links.
  • How traffic is forwarded over an EtherChannel link is based on the results of a hashing algorithm.
  • Different options can be used to calculate the hash. The options vary from platform to platform.
  • Common options include:
    • Source IP address (src-ip)
    • Destination IP address (dst-ip)
    • Source and Destination IP address (src-dst-ip)
    • Source MAC address (src-mac)
    • Destination MAC address (dst-mac)
    • Source and Destination MAC address (src-dst-mac)
    • Source port number (src-port)
    • Destination port number (dst-port)
    • Source and Destination port number (src-dst-port)
  • The default option varies between switch platforms, but it is usually src-dst-ip.
  • The load balancing algorithm is applied to the whole switch. It is not possible to have different load balancing methods for different EtherChannels.
  • If only one address (MAC or IP) or port number is used, the switch looks at one or more low-order bits of that value.
  • If two fields are used (e.g., source and destination IP), the switch XORs them to compute the hash value.
  • The hash value is used to decide over which link in the bundle to send the frame.
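
As a rough sketch (the real hash is computed in hardware and is platform-specific, so this is only illustrative), src-dst-ip link selection can be modeled like this:

```python
import ipaddress

def link_index(src: str, dst: str, num_links: int) -> int:
    """XOR the two IP addresses and use the low-order bits as a link index."""
    xor = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    bits = (num_links - 1).bit_length()      # e.g. 4 links -> 2 bits
    return (xor & ((1 << bits) - 1)) % num_links

# Addresses that differ only in the low bits can land on different links:
print(link_index("192.168.1.1", "192.168.1.2", 4))  # XOR ends in 11 -> 3
```

Note that bidirectional traffic between the same pair of hosts hashes to the same link in both directions, since XOR is symmetric.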

  • To achieve optimal traffic distribution, bundle a power-of-two number of links (2, 4, or 8); only then do the hash results divide evenly across the links.
  • For example, if there are four links in an EtherChannel, the algorithm will look at the last 2 bits. This means four indexes: 00, 01, 10, and 11. Each link in the bundle will get assigned one of these indexes. The ratio is 1:1:1:1.
  • If you use three links, 2 bits (four XOR results) are still needed to make a distinction. The four results are then spread over the three links in 2:1:1 ratio.
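
The uneven 2:1:1 split can be seen by mapping the four 2-bit hash results onto three links. Real switches program a bucket-to-port table in hardware; simple modulo stands in for it here.

```python
from collections import Counter

# Hash results 00, 01, 10, 11 spread over three member links:
buckets = [result % 3 for result in range(4)]
print(Counter(buckets))   # link 0 receives two buckets, links 1 and 2 one each
```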

Example:

Say an EtherChannel has four links, and the hash results are distributed as follows: 00 (Eth1/1), 01 (Eth1/2), 10 (Eth1/3), and 11 (Eth1/4). Which link would packets from 192.168.1.1 to 192.168.1.2 use? How about packets from 10.1.1.101 to 10.1.1.103?

11000000.10101000.00000001.00000001 (192.168.1.1)
11000000.10101000.00000001.00000010 (192.168.1.2)
>> XOR = 11

The packets would use Eth1/4.

00001010.00000001.00000001.01100101 (10.1.1.101)
00001010.00000001.00000001.01100111 (10.1.1.103)
>> XOR = 10

The packets would use Eth1/3.
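
Both results can be checked in a few lines of Python, assuming (as in the worked example) that the 2-bit index is simply the low-order bits of the XOR of the full addresses:

```python
import ipaddress

# Link assignment from the example: 2-bit index -> interface
links = {0b00: "Eth1/1", 0b01: "Eth1/2", 0b10: "Eth1/3", 0b11: "Eth1/4"}

def pick(src: str, dst: str) -> str:
    xor = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return links[xor & 0b11]        # keep only the two low-order bits

print(pick("192.168.1.1", "192.168.1.2"))   # Eth1/4
print(pick("10.1.1.101", "10.1.1.103"))     # Eth1/3
```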


Lab Demonstration

Let's view EtherChannel load balancing in action. Below is the topology.



1. Confirm the EtherChannel load balancing method.

SW1#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address


2. Clear the interface counters.

SW1#clear counters
Clear "show interface" counters on all interfaces [confirm]
*Aug 29 15:05:49.478: %CLEAR-5-COUNTERS: Clear counter on all interfaces by console


3. Perform an extended ping from PC1 to PC3.

PC1#ping        
Protocol [ip]:
Target IP address: 172.16.1.203
Repeat count [5]: 10000
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 172.16.1.203, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
<...output omitted...>
Success rate is 100 percent (10000/10000), round-trip min/avg/max = 1/1/13 ms


4. Verify counters on SW1 for both EtherChannel interfaces.

SW1#show interface eth1/1 | include packets output
     10112 packets output, 1147982 bytes, 0 underruns

SW1#show interface eth1/2 | include packets output
     10 packets output, 1454 bytes, 0 underruns

Notice that most of the traffic went over the eth1/1 interface. How will packets be distributed for traffic between PC2 and PC3?


5. Clear the interface counters again.

SW1#clear counters
Clear "show interface" counters on all interfaces [confirm]
*Aug 29 15:12:28.560: %CLEAR-5-COUNTERS: Clear counter on all interfaces by console


6. Perform an extended ping from PC2 to PC3.

PC2#ping      
Protocol [ip]:
Target IP address: 172.16.1.203
Repeat count [5]: 10000
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 172.16.1.203, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
<...output omitted...>
Success rate is 100 percent (10000/10000), round-trip min/avg/max = 1/1/1 ms


7. Verify counters on SW1 for both EtherChannel interfaces.

SW1#show interface eth1/1 | include packets output
     60 packets output, 4260 bytes, 0 underruns

SW1#show interface eth1/2 | include packets output
     10007 packets output, 1140881 bytes, 0 underruns

Notice that most traffic went over the eth1/2 interface now. What would happen if the load balancing method is changed to dst-ip?


8. Change the load balancing method on SW1 from src-dst-ip to dst-ip.

SW1(config)#port-channel load-balance dst-ip

SW1#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Destination MAC address
  IPv4: Destination IP address
  IPv6: Destination IP address


9. Clear the interface counters on SW1 once again.

SW1#clear counters
Clear "show interface" counters on all interfaces [confirm]
*Aug 29 15:14:22.632: %CLEAR-5-COUNTERS: Clear counter on all interfaces by console


10. Perform an extended ping from PC1 to PC3.

PC1#ping      
Protocol [ip]:    
Target IP address: 172.16.1.203
Repeat count [5]: 10000      
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 172.16.1.203, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
<...output omitted...>
Success rate is 100 percent (10000/10000), round-trip min/avg/max = 1/1/2 ms


11. Verify the counters on SW1 for both EtherChannel interfaces.

SW1#show interface eth1/1 | include packets output
     95 packets output, 6547 bytes, 0 underruns

SW1#show interface eth1/2 | include packets output
     10009 packets output, 1141394 bytes, 0 underruns


12. Clear the interfaces counters on SW1 yet again.

SW1#clear counters
Clear "show interface" counters on all interfaces [confirm]
*Aug 29 15:28:08.037: %CLEAR-5-COUNTERS: Clear counter on all interfaces by console


13. Perform an extended ping from PC2 to PC3.

PC2#ping
Protocol [ip]:
Target IP address: 172.16.1.203
Repeat count [5]: 10000
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 172.16.1.203, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
<...output omitted...>
Success rate is 100 percent (10000/10000), round-trip min/avg/max = 1/1/1 ms


14. Verify the counters on SW1 for both EtherChannel interfaces.

SW1#show interface eth1/1 | include packets output
     74 packets output, 5139 bytes, 0 underruns

SW1#show interface eth1/2 | include packets output
     10007 packets output, 1140915 bytes, 0 underruns


Now that the load balancing method has been changed to dst-ip, all traffic toward a given destination uses the same link, regardless of the source. Since the only input to the hash calculation is the destination IP address, both PC1 and PC2 reach PC3 (172.16.1.203) over eth1/2.
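
This behavior is easy to model: with dst-ip, the source address is simply not part of the hash input. The sketch below uses modulo 2 as a stand-in for the real 1-bit hardware hash over a two-link bundle.

```python
import ipaddress

def dst_ip_index(dst: str, num_links: int = 2) -> int:
    """Pick a member link from the destination IP alone (illustrative only)."""
    return int(ipaddress.ip_address(dst)) % num_links

# Every source sending to 172.16.1.203 gets the same index:
print(dst_ip_index("172.16.1.203"))
```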

There is a command (available at least on Catalyst 6500 Series switches) to view which physical interface within an EtherChannel is selected for a particular packet. Unfortunately, I can't show it here because the emulator used for this demonstration doesn't support the command.

Example:

PFC-3B#show etherchannel load-balance hash-result interface port-channel 1 ip 10.1.1.1 10.2.2.2

Computed RBH: 0x1
Would select Gig3/2 of Po1

See: Troubleshoot Packet Flow in Cisco Catalyst 6500 Series Virtual Switching System 1440
