Troubleshooting EtherChannel

Hi laz,
thanks for your reply.

The network I’m troubleshooting consists of a lot of IPTV clients. My problem is that when IPTV traffic comes in via IBSE01, some of the IPTV02 clients (IPTV traffic reaches the clients via the EtherChannel) experience jerking and freezing, while all channels are good for the clients on IPTV01.

When we change the incoming traffic to IBSE02, all channels are good for the clients at IPTV02, and some IPTV01 clients experience jerking and freezing. Based on this observation, I suspect that one of the EtherChannel’s physical interfaces has a problem.
(topology image attached)

what do you think?

thanks

Hello Izwan

With the behaviour you described, I believe you are correct to suspect something is going on with the EtherChannel link. The first thing I would check is whether any of the physical EtherChannel links are reaching capacity. You can do this by looking at the output of the show interface command on NPE01 or NPE02 for the physical interfaces of the EtherChannel. This will give you a rough idea of what may be going on, but the best thing to do is to view the traffic from a traffic monitoring system using SNMP or NetFlow. That way you’ll get a good view of the traffic over time and of whether any links are reaching capacity. If so, you may have to adjust the load balancing algorithm on the EtherChannel link.
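For reference, the checks above might look something like this on the NPE switches (the interface name here is a placeholder; substitute the actual members of your bundle):

```
! Check utilization and errors on each physical member of the bundle
show interfaces GigabitEthernet0/1
show interfaces counters errors

! See which algorithm the switch currently uses to distribute traffic
show etherchannel load-balance

! If one member link is consistently saturated, a different hashing
! method (e.g. source+destination IP) may spread the flows better:
port-channel load-balance src-dst-ip
```

The available load-balance methods vary by platform, so check what your specific switch supports before changing it.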

If no links are reaching capacity, then it is not a congestion issue, but possibly a delay issue. Delay may come as a result of suboptimal routing. Are you using multicast for your IPTV implementation? Is multicast routing correctly and optimally configured?
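If multicast is indeed in use, a few quick spot checks can confirm whether it is configured correctly (the group address below is just an example; use your actual IPTV group):

```
! Verify PIM is enabled on the relevant interfaces
show ip pim interface

! Inspect the multicast routing table for the IPTV group
show ip mroute 239.1.1.1

! Check IGMP membership on the access side
show ip igmp groups
```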

Once you identify if the problem is the EtherChannel or not, then we can go on to the next step of further troubleshooting the problem. Let us know how you come along.

I hope this has been helpful!

Laz

Hello,

Can someone please help me understand this?:

Why do the port speeds have to match in EtherChannel configurations?

A possible answer I could think of is that interfaces with different speeds can reorder the messages.

But that answer may not be complete, because if load-balancing is enabled, then frames from the same device are always sent over the same physical interface, so reordering doesn’t happen.
Source: on page 254 of the Official Cert Guide Volume 1, Wendell Odom writes:
"(…) the various load distribution algorithms do share some common goals: To cause all messages in a single application flow to use the same link in the channel, rather than being sent over different links. Doing so means that the switch will not inadvertently reorder the messages sent in that application flow by sending one message over a busy link that has a queue of waiting messages, while immediately sending the next message out an unused link. (…) "

If load-balancing is not enabled, then the messages can be reordered, but why is that a problem?

A possible answer is that for an application using UDP, the application on the other end won’t reorder the messages correctly, which results in confusing output for the end user, such as audio content from later in the original file being played before audio content from earlier in the file. (So a short section from the end of the video is played before the section that should precede it, etc.)

But this answer isn’t completely accurate either, because for an application using TCP, why would that cause any issues? Wouldn’t the receiving application wait for the messages with the earlier sequence numbers to arrive in order to play everything in the order that the sender intended it?

So the complete answer I could think of (and please correct me if I’m wrong) is that by enforcing the rule that all interfaces must have the same speed, we ensure that messages sent using UDP aren’t reordered. If all applications would use TCP, then there would be no reason to enforce this rule.

What do you think? I’d be interested to read any responses.

Thanks.
Attila

Hello Attila

The requirement that port speeds match for EtherChannel is more a matter of design than a matter of “what would happen if…”. The engineers who designed this feature required that the speeds of all interfaces be the same; otherwise, the feature will not bundle the interfaces and simply will not work.

So what is the reason for this design? Because we can’t actually experiment to determine how a fully functioning EtherChannel bundle with different interface speeds would behave, we can only speculate.

I don’t believe the reordering of the packets is the issue. That, as you mention, is taken care of by upper-layer protocols such as TCP. Even with UDP, there are mechanisms higher up that will deal with out-of-order packets.

The issue I believe has more to do with things like:

  1. Consistent throughput: When port speeds are consistent across all member links, the aggregated link can deliver a predictable and consistent level of throughput. If there were different port speeds, it would lead to uneven performance across the EtherChannel, causing congestion on slower links and underutilization of faster links.
  2. Load balancing: The various algorithms that can be used for this depend upon the speeds being the same across all physical links; otherwise, they become less effective, leading to a less balanced traffic distribution and potentially causing bottlenecks.
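To make the flow-to-link mapping concrete, here is a minimal Python sketch of a hash-based load-distribution scheme. This is an illustration only, not Cisco's actual algorithm; real switches typically XOR low-order address bits in hardware, and which fields are hashed depends on the configured method and platform.

```python
def choose_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Map a flow (src, dst) to one member link of the bundle.

    Mimics the XOR-of-low-order-bits idea: XOR the last octets of
    the source and destination IP addresses, then take the result
    modulo the number of member links.
    """
    src_octet = int(src_ip.split(".")[-1])
    dst_octet = int(dst_ip.split(".")[-1])
    return (src_octet ^ dst_octet) % num_links


# Every frame of a given flow hashes to the same link, which is why
# frames within one flow are never reordered across the bundle.
link = choose_link("10.0.0.1", "10.0.0.2", 4)
assert link == choose_link("10.0.0.1", "10.0.0.2", 4)
```

Notice that the hash knows nothing about link speed: if the member links had different speeds, a flow that happened to hash onto a slower link would stay pinned there regardless of congestion, which ties back to the consistency argument above.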

There may be other reasons as well, but these are the most important that I believe led engineers to design EtherChannel in this way.

I hope this has been helpful!

Laz
