Troubleshooting EtherChannel

Hi laz,
thanks for your reply.

The network I’m troubleshooting consists of many IPTV clients. My problem is that when IPTV traffic comes in via IBSE01, some of the IPTV02 clients (IPTV traffic reaches the clients via an EtherChannel) experience jerking and freezing, while all channels for the IPTV01 clients are good.

When we change the incoming traffic to IBSE02, all channels for the IPTV02 clients are good, and some of the IPTV01 clients experience jerking and freezing. Based on this observation, I suspect that one of the EtherChannel physical interfaces has a problem.

What do you think?

thanks

Hello Izwan

With the behaviour you described, I believe you are correct to suspect that something is going on with the EtherChannel link. The first thing I would check is whether any of the physical EtherChannel links are reaching capacity. You can do this by looking at the output of the show interface command on NPE01 or NPE02 for the physical interfaces of the EtherChannel. This will give you a rough idea of what may be going on, but the best thing to do is to view the traffic from a traffic monitoring system using SNMP or NetFlow. That way you’ll get a good view of the traffic over time and see whether any links are reaching capacity. If so, you may have to adjust the load balancing algorithm on the EtherChannel link.
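As a rough sketch of the checks above (interface names here are examples; substitute the actual member interfaces of your bundle):

```
! Check utilization on each physical member of the EtherChannel
NPE01# show interfaces GigabitEthernet0/1 | include rate
NPE01# show interfaces GigabitEthernet0/2 | include rate

! If one member is saturated while others are idle, a more granular
! hashing method may distribute the flows better, for example:
NPE01(config)# port-channel load-balance src-dst-ip

! Verify which algorithm is active
NPE01# show etherchannel load-balance
```

The exact load-balance keywords available depend on the platform, so check what your switch offers before choosing one.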

If no links are reaching capacity, then it is not a congestion issue, but possibly a delay issue. Delay may come as a result of suboptimal routing. Are you using multicast for your IPTV implementation? Is multicast routing correctly and optimally configured?
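If multicast is in use, these commands give a quick health check of the multicast path (the group address is just an example; use one of your IPTV groups):

```
! Check PIM neighbors and the multicast routing table
SW1# show ip pim neighbor
SW1# show ip mroute

! Look at a specific IPTV group
SW1# show ip mroute 239.1.1.1

! Verify IGMP snooping state toward the clients
SW1# show ip igmp snooping groups
```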

Once you identify if the problem is the EtherChannel or not, then we can go on to the next step of further troubleshooting the problem. Let us know how you come along.

I hope this has been helpful!

Laz

Hello,

Can someone please help me understand this?:

Why do the port speeds have to match in Etherchannel configurations?

A possible answer I could think of is that interfaces with different speeds can reorder the messages.

But that answer may not be complete, because if load-balancing is enabled, then frames from the same device are always sent over the same physical interface, so reordering doesn’t happen.
Source: on page 254 of the Official Cert Guide Volume 1, Wendell Odom writes:
"(…) the various load distribution algorithms do share some common goals: To cause all messages in a single application flow to use the same link in the channel, rather than being sent over different links. Doing so means that the switch will not inadvertently reorder the messages sent in that application flow by sending one message over a busy link that has a queue of waiting messages, while immediately sending the next message out an unused link. (…) "

If load-balancing is not enabled, then the messages can be reordered, but why is that a problem?

A possible answer is that for an application using UDP, the application on the other end won’t reorder the messages correctly, which results in confusing output for the end user: audio content from a later point in the original file is played before audio content from an earlier point, and so on. (So a short section from later in the video is played before a section from earlier in the video, etc.)

But this answer isn’t completely accurate either, because for an application using TCP, why would that cause any issues? Wouldn’t the receiving application wait for the messages with the earlier sequence numbers to arrive in order to play everything in the order that the sender intended it?

So the complete answer I could think of (and please correct me if I’m wrong) is that by enforcing the rule that all interfaces must have the same speed, we ensure that messages sent using UDP aren’t reordered. If all applications would use TCP, then there would be no reason to enforce this rule.
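On many Catalyst platforms you can actually see the flow-to-link mapping that Odom describes: the switch will tell you which member link a given source/destination pair hashes to. The addresses below are examples, and the command’s availability is platform dependent:

```
! Ask the switch which member link a given flow would hash to
SW1# test etherchannel load-balance interface port-channel 1 ip 10.1.1.10 10.2.2.20
```

The same source/destination pair always maps to the same member interface, which is why frames within one flow are never sprayed across links and never reordered by the channel itself.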

What do you think? I’d be interested to read any responses.

Thanks.
Attila

Hello Attila

The requirement that port speeds have to match for EtherChannel is more a matter of design rather than a matter of “what would happen if…” Engineers that designed this feature required that the speeds of all interfaces be the same. Otherwise, the feature will not bundle the interfaces and simply will not work.

So what is the reason for this design? Because we can’t actually experiment to determine how a fully functioning EtherChannel bundle with different interface speeds would behave, we can only speculate.

I don’t believe the reordering of the packets is the issue. That, as you mention, is taken care of by upper-layer protocols such as TCP. Even with UDP, there are mechanisms higher up that will deal with out-of-order packets.

The issue I believe has more to do with things like:

  1. Consistent throughput: When port speeds are consistent across all member links, the aggregated link can deliver a predictable and consistent level of throughput. If there were different port speeds, it would lead to uneven performance across the EtherChannel, causing congestion on slower links and underutilization of faster links.
  2. Load balancing: The various algorithms that can be used for this depend upon the speeds being the same across all physical links, otherwise they will be less effective, leading to a less balanced traffic distribution, potentially causing bottlenecks.

There may be other reasons as well, but these are the most important that I believe led engineers to design EtherChannel in this way.

I hope this has been helpful!

Laz


Hi,

After I established a port-channel with auto speed configuration, I changed the speed on one side to 10 and on the other to 100, but I didn’t see a reason line in the output of the sh etherchannel 1 detail command. The port-channel also didn’t appear as down or anything; everything seems OK. Why?

Thanks

Hello Görgen

So are you trying to recreate the Interface Configuration Mismatch section of the lesson?

Can you give us some more information about your particular setup? When you changed the speed of one side to 10Mbps, did you change the speed on the port-channel interface or on one or more of the individual physical interfaces? And when you did that, did you not see the same results as in the lesson? Can you confirm the config you have on each side and share with us the output you get from the show etherchannel 1 detail command?
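For reference, this is roughly how the mismatch in the lesson would be recreated on the physical members (interface names are examples; adapt them to your topology):

```
! On SW1, force one physical member to 10 Mbps
SW1(config)# interface GigabitEthernet0/1
SW1(config-if)# speed 10

! On SW2, force the corresponding member to 100 Mbps
SW2(config)# interface GigabitEthernet0/1
SW2(config-if)# speed 100

! Then compare the state on both sides
SW1# show etherchannel 1 detail
SW1# show etherchannel summary
```

Note that if the link physically renegotiates and goes down/up, the symptoms may differ from a pure configuration mismatch, which is why the exact config matters here.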

Let us know so that we can help you further.

I hope this has been helpful!

Laz

Hello,
If you only have access to one of the switches, is there a way to know which of the two is the master or to know the priority that the other switch has configured?

Hello Diego

First of all, let me clarify that the concept of priorities you are referring to is used only with LACP. For more information about these priorities, take a look at this NetworkLessons note.

The switch with the lowest system priority is allowed to make decisions about what ports actively participate in EtherChannel. I assume that’s what you mean when you refer to the “master” switch.

So can you find out which of the two devices has the lowest system priority if you have access only to one device? Well, yes. Using the show lacp neighbor command. Here is an example of the output of this command:

SW2#show lacp neighbor 
Flags:  S - Device is requesting Slow LACPDUs 
        F - Device is requesting Fast LACPDUs
        A - Device is in Active mode       P - Device is in Passive mode     

Channel group 1 neighbors

Partner's information:

                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi0/1     SA      32768     5254.0005.8000  23s    0x0    0x1    0x2     0x3D  
Gi0/2     SA      32768     5254.0005.8000  17s    0x0    0x1    0x3     0x3D  

You can see the priority of the remote device is set to 32768. To check the priority of the local device, use the show lacp sys-id command like so:

SW2#show lacp sys-id 
32768, 5254.0018.8000

If the system priority is the same, it is the lowest MAC address that is used. So using these commands, you can see which device becomes the decision-maker for establishing the aggregation.
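If you want to influence this decision rather than just observe it, you can change the local system priority. A minimal sketch (the value 4096 is just an example; the default is 32768, and lower wins):

```
! Lower the local system priority so this switch makes the decisions
SW2(config)# lacp system-priority 4096

! Verify the new priority and system ID
SW2# show lacp sys-id
```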

I hope this has been helpful!

Laz

I keep reading conflicting information regarding max number of interfaces for PAgP and LACP.

Is it 8 active interfaces for both and 8 standby and does this apply to any interface speed?

Thanks

Hello Ahmed

For the most part, both PAgP and LACP as protocols support up to 8 active links. But a port-channel bundle can contain up to 16 links. In such a case, only 8 links will be active at any one time, while the rest remain in standby, meaning they stay idle until one of the active links fails, at which point a standby link becomes active.

Keep in mind, however, that these limitations are not only protocol dependent, but also platform dependent. For example, the Nexus 7K and 9K support up to 16 active links for LACP (PAgP is not supported at all). Some Catalyst 9500 and 9600 series devices also support more than 8 active links. You’ll have to check the documentation of each device for specifics.
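On platforms that support it, you can cap the number of active links explicitly, with the remaining members held as hot-standby. A sketch, assuming an LACP port-channel numbered 1:

```
! Allow at most 8 active links; extra members become hot-standby
SW1(config)# interface port-channel 1
SW1(config-if)# lacp max-bundle 8

! Verify: bundled members show the P flag, hot-standby members show H
SW1# show etherchannel summary
```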

So the conflicting information you found may have been a platform-based difference rather than an issue with the protocol itself.

These limitations are not related to the interface speed. They are enforced whether you are using FastEthernet, GigabitEthernet, or TenGigabitEthernet. But remember, the interface speed must be the same on all participating links for EtherChannel to work correctly.

I hope this has been helpful!

Laz