WRED (Weighted Random Early Detection)

Got it. So, just to sum up in my head what you are saying: when congestion hits, even though one may have bandwidth guarantees configured in the policy, tail drop will occur for that configured queue if it begins to exceed its configured bandwidth. However, WRED can minimize the pain for that queue by randomly dropping packets as the queue exceeds the minimum threshold and when it hits the maximum.

Hello Network23

Not quite. Tail drop will occur only when the queue is completely full, that is, once the average queue depth exceeds the maximum threshold. Random drops begin earlier, when the average queue depth reaches the minimum threshold.

Remember that a queue will only begin to fill up if congestion occurs on the link. Congestion will occur if traffic surpasses the physical speed of the interface, or if it reaches some policy that limits traffic on that link. Only then will the queue begin to fill up, and only when the average queue depth reaches the minimum threshold will random drops start to occur…
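To put the thresholds into configuration terms, here is a minimal sketch (the policy name and the numbers are made up for illustration; random-detect precedence is the IOS form for setting explicit per-precedence thresholds):

policy-map WRED-EXAMPLE
 class class-default
  bandwidth percent 50
  random-detect
  ! precedence 0: min threshold 20 packets, max threshold 40 packets,
  ! mark probability denominator 10 (drop 1 in 10 packets at the max)
  random-detect precedence 0 20 40 10

Below an average queue depth of 20, nothing is dropped; between 20 and 40, random drops ramp up; above 40, every packet is dropped.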

Does that make sense?

I hope this has been helpful!

Laz

Yes it does and thanks again!


I wanted to share this snippet of a policy that is applied to a 10G interface. This interface has never experienced congestion, and it resides in a site that serves as a backup. I was applying a new policy to another interface on this device, and I ran show policy-map interface on a different one because I wanted to see the output; I noticed that there were random drops and tail drops for class-default. This is strange because it has never been congested. Any thoughts on this? Below is the configuration for the policy-map (I scrubbed its name):

policy-map xxx
 class realtime
  priority percent 38
 class call_signaling
  bandwidth percent 2
 class network_mgmt
  bandwidth percent 5
 class critical_data_1
  bandwidth percent 15
  random-detect dscp-based
 class critical_data_2
  bandwidth percent 10
 class scavenger
  bandwidth percent 1
 class bulk_data_1
  bandwidth percent 5
  random-detect dscp-based
 class class-default
  bandwidth percent 24
  random-detect

Hello Network23

“Congestion” can be considered one of two things:

  1. When there are no QoS thresholds configured, congestion will occur when the traffic approaches and attempts to exceed the physical data rate of the interface itself.
  2. When there are QoS thresholds configured (such as the bandwidth percent 24 command in the class-default config in your post), and those thresholds are approached and exceeded, we can consider this “congestion” although it is based on a limitation that you have administratively defined.

So in your case, this 10G port may never have reached its physical threshold, but it will have reached one or more of the thresholds set up in some of the classes in your policy-map configuration shown in your post. When it does reach those percentages, WRED will indeed kick in.
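As a quick way to confirm this (the interface name below is just a placeholder, and the exact output layout varies by platform and IOS release), you can watch the per-class counters:

show policy-map interface TenGigabitEthernet0/0/0

Under each class that has random-detect configured, look for the mean queue depth, the per-DSCP minimum/maximum thresholds, and the random drop and tail drop counters. If the random drop counter is climbing, WRED is engaging because that class's queue has been filling, even though the link itself is nowhere near 10 Gbps.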

I hope this has been helpful!

Laz

Hello,

That could explain why there were still drops with no link congestion. The other strange thing is that I would have expected that class to pull bandwidth from other classes, since the bandwidth is available.

Hello Network23

According to this Cisco documentation, the random-detect command will start dropping packets based on the maximum threshold configured within the class. WRED does not “borrow” bandwidth from other classes, even if it is available.

I hope this has been helpful!

Laz

What I meant to say is that a class can use bandwidth from other classes if there is available bandwidth, but that would only occur during times of congestion.

Thanks for following up on the documentation. Could you send me the link to that doc? Appreciate it and have a great weekend!!!

Hello Network

I apologize, I forgot to add the link! I went looking for it again but was unable to find it. I believe it was somewhere in this command reference, but I was unsuccessful in locating it…

I’ll keep looking, and let you know if I find it…

Laz

Hi,

There is one sentence that’s a bit confusing. You mentioned:
The MPD (25%) is the number of packets that WRED drops when we hit the maximum threshold (45)
As per my understanding, if we hit the maximum threshold, all packets will be dropped, so what exactly is the function of the MPD here?

Thanks
Hisham

Hello Hisham

Yes, I see your confusion. I believe that this will clarify what is meant by this sentence:

For the specific example in the lesson:

  • for average queue depth values between 0 and 20, there are zero drops.
  • for average queue depth values between 20 and 45, there is an increasing number of drops depending on the queue depth.
  • for average queue depth values exceeding 45, all packets are dropped.

For queue depth values between 20 and 45, the percentage of dropped packets goes from 0 to a maximum of 25%. The MPD is that value of 25%. This is also illustrated in the diagram in the lesson.

Indeed, as you say, for any value that exceeds an average queue depth of 45, all packets will be dropped.
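To put some numbers on it, using the lesson’s values (minimum threshold 20, maximum threshold 45, MPD 25%), the drop probability rises linearly between the two thresholds:

drop probability = (average queue depth − 20) / (45 − 20) × 25%

For example, at an average queue depth of 32.5 (halfway between the thresholds), the drop probability is 12.5 / 25 × 25% = 12.5%. It reaches the full 25% only right at the maximum threshold of 45; beyond that, WRED stops being “random” and everything is dropped.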

I hope this has been helpful!

Laz


I’ve read elsewhere something that seems to suggest that the bandwidth configured when enabling WRED somehow influences the weight of the traffic matched in that policy-map, implying that the more bandwidth is reserved, the less likely that matched traffic is to be dropped.

To put it into an example, suppose I match two classes of traffic in a policy-map: class A gets bandwidth percent 50 and random-detect, and class B gets bandwidth percent 30 and random-detect, something like this:
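(The policy and class names below are just placeholders for illustration.)

policy-map EXAMPLE
 class CLASS-A
  bandwidth percent 50
  random-detect
 class CLASS-B
  bandwidth percent 30
  random-detect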

Everything else equal, will traffic matched in class A be less likely to be dropped?

Hello Dustin

You are correct, the bandwidth percentage used in such a configuration will indeed affect the likelihood of dropping packets for each case. But don’t confuse the term “weight” as you used it with the weighting used in the WRED mechanism.

Remember that WRED acts upon packets that are in the queue. That means that congestion is already happening, and packets are momentarily stored in the queue waiting to be sent out. If you limit the bandwidth for a particular type of packet using a policy map, then you increase the likelihood of a packet of that type being queued. You are, in turn, increasing the likelihood of that queue filling up, reaching the thresholds defined by WRED, and thus eventually dropping packets.

So the bandwidth command in the policy map indirectly affects WRED simply because the smaller the bandwidth, the more likely the filling up of the queue, which in turn increases the probability of WRED dropping packets.

I hope this has been helpful!

Laz

Thanks Laz, that makes more sense than what I had read elsewhere. They seemed to imply that there was some behind-the-scenes weighting that was happening due to the bandwidth percent given.

But I believe they were just abstracting what you described: limiting the bandwidth percent affects the likelihood of traffic being queued and therefore of being dropped by WRED, not because of some weighting mechanism, but simply due to the nature of WRED and queues.


How do we test WRED with a traffic generator?

For example, I have this config on an IOS XR router:

class cos2_class_ctrl
 bandwidth remaining percent 60
 random-detect dscp 24,26,48,56 1152 packets 1536 packets
 random-detect dscp 25,27,28,29,30,31 576 packets 768 packets
 queue-limit 1536 packets
!

How can we match and test the WRED threshold values with a traffic generator?
Minimum threshold: 1152 packets; maximum threshold: 1536 packets.

Hello Rajesh

In order to test your configuration, you can configure a traffic generator to generate traffic with specific DSCP values at a range of packet rates. Make sure the traffic generator supports setting the DSCP values that you have configured. You can then gradually increase the traffic rate from the traffic generator and observe the WRED behavior. Start with a rate below the minimum threshold of 1152 packets and increase it beyond the maximum threshold of 1536 packets.

While increasing the traffic rate, you can monitor the packet drops on the router using the show policy-map interface command. You should see an increase in packet drops as you approach and exceed the maximum threshold. The rate of packet drops should increase gradually as WRED aims to prevent congestion by selectively dropping packets before the queue is full.

Compare the packet drop rate with the traffic rate and DSCP values to verify that WRED is working as expected. Given your thresholds, you should observe that packets with DSCP values 25, 27, 28, 29, 30, and 31 (minimum/maximum thresholds of 576/768 packets) start being dropped earlier and with a higher probability than packets with DSCP values 24, 26, 48, and 56 (thresholds of 1152/1536 packets) as the queue fills between the minimum and maximum thresholds.

What traffic generator should you use? iPerf is an excellent traffic generator that can be used for this purpose. More about iPerf can be found here:
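As a concrete starting point, here are a couple of illustrative iperf3 commands (this assumes a version of iperf3 that supports the -S/--tos option; the ToS byte is the DSCP value shifted left by two bits, so DSCP 24 becomes ToS 96, and <receiver-ip> is a placeholder):

# on the receiving host
iperf3 -s

# on the sending host: a UDP stream marked DSCP 24 (ToS 96);
# rerun with increasing -b rates to push the queue past the thresholds
iperf3 -c <receiver-ip> -u -b 500M -S 96 -t 60

# the same stream marked DSCP 25 (ToS 100) for comparison
iperf3 -c <receiver-ip> -u -b 500M -S 100 -t 60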

You can also find out more about WRED and various verification commands here:

If you do implement this test, feel free to let us know how you get along!

I hope this has been helpful!

Laz

Hi Laz,
So the minimum and maximum threshold values in packets map to pps values in the traffic generator… i.e., I should generate traffic at pps rates in the range between the minimum and maximum thresholds?
Thanks
Rajesh L

Hello Rajesh

You can experiment with various pps values to see what data rates or packet rates will trigger the lower and upper thresholds. Let us know how you get along in your experimentation.

I hope this has been helpful!

Laz

Would you please explain how dropping packets decreases the TCP window size?

Hello Emad

TCP is designed to function this way. Remember that TCP is a reliable protocol, which means that each segment that is sent is acknowledged, so the sender is informed when data has successfully reached the receiver. If the sender is informed that for some reason a sent segment did not reach its destination (i.e., segments were dropped), it will respond by reducing the window size to one segment, as described in the lesson.

So TCP by design will respond to dropped segments by decreasing the window size.
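As a rough illustration (classic slow start behavior; the exact response depends on the TCP variant and on whether the drop is detected by a timeout or by duplicate ACKs):

window: 1 → 2 → 4 → 8 → 16 segments (doubling each round trip)
a segment is dropped and the retransmission times out
window: back to 1 segment, then 1 → 2 → 4 … again

This shrinking of the window is exactly what WRED takes advantage of: by randomly dropping a few packets from a few TCP flows early, it nudges those senders to slow down before the queue fills completely.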

I hope this has been helpful!

Laz