You are correct, the bandwidth percentage used in such a configuration will indeed affect the likelihood of dropping packets for each case. But don’t confuse the term “weight” as you used it with the weighting used in the WRED mechanism.
Remember that WRED acts upon packets that are in the queue. That means that congestion is already happening, and packets are momentarily stored in the queue, waiting to be sent out. If you limit the bandwidth for a particular type of packet using a policy map, then you increase the likelihood of a packet of that type being queued. You are, in turn, increasing the likelihood of that queue filling up, reaching the thresholds defined by WRED, and thus eventually dropping packets.
So the bandwidth command in the policy map indirectly affects WRED: the smaller the bandwidth, the more likely the queue is to fill up, which in turn increases the probability of WRED dropping packets.
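To make the mechanism concrete, here is a simplified sketch of the WRED drop decision in Python. Keep in mind this is only an illustration: real IOS computes an exponentially weighted average of the queue depth, and the thresholds and mark-probability denominator used below are just example values, not taken from any particular configuration.

```python
import random

def wred_drop(avg_queue_depth, min_th, max_th, mark_prob_denominator):
    """Return True if WRED decides to drop a packet.

    Below min_th nothing is dropped; between min_th and max_th the drop
    probability ramps linearly up to 1/mark_prob_denominator; at or above
    max_th every packet is dropped (tail-drop behavior).
    """
    if avg_queue_depth < min_th:
        return False
    if avg_queue_depth >= max_th:
        return True
    max_p = 1.0 / mark_prob_denominator
    drop_prob = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < drop_prob
```

This shows why a smaller bandwidth guarantee matters: it does not change the drop logic itself, it just makes the queue depth climb toward the thresholds sooner.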
Thanks Laz, that makes more sense than what I had read elsewhere. They seemed to imply that there was some behind-the-scenes weighting that was happening due to the bandwidth percent given.
But I believe they were just abstracting what you described: limiting the bandwidth percent affects the likelihood of traffic being queued and therefore being dropped by WRED. It is not due to some weighting mechanism, just the nature of WRED and queues.
In order to test your configuration, you can configure a traffic generator to generate traffic with specific DSCP values at a range of packet rates. Make sure the traffic generator supports the DSCP values that you have configured. You can then gradually increase the traffic rate from the traffic generator and observe the WRED behavior. Start with a rate below the minimum threshold of 1152 packets and increase it beyond the maximum threshold of 1536 packets.
While increasing the traffic rate, you can monitor the packet drops on the router using the show policy-map interface command. You should see an increase in packet drops as you approach and exceed the maximum threshold. The rate of packet drops should increase gradually as WRED aims to prevent congestion by selectively dropping packets before the queue is full.
Compare the packet drop rate with the traffic rate and DSCP values to verify that WRED is working as expected. You should observe that packets with higher DSCP values (24, 26, 48, 56) have a higher drop probability than packets with lower DSCP values (25, 27, 28, 29, 30, 31) when the queue is between the minimum and maximum thresholds.
What traffic generator should you use? iPerf is an excellent traffic generator that can be used for this purpose. More about iPerf can be found here:
You can also find out more about WRED and various verification commands here:
If you do implement this test, feel free to let us know how you get along!
Hi Laz,
Should the minimum and maximum threshold values be used as pps values in the traffic generator? That is, should I send traffic at a pps rate somewhere between the minimum and maximum thresholds?
Thanks
Rajesh L
You can experiment with various pps values to see what data rates or packet rates will trigger the lower and upper thresholds. Let us know how you get along in your experimentation.
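Note that the WRED thresholds are queue depths measured in packets, not rates, so which pps values push the queue past them depends on how much faster you send than the interface can drain. Here is a rough fluid-approximation sketch; the 1152-packet minimum threshold comes from the configuration discussed earlier in this thread, while the 1000 pps service rate is purely an assumed placeholder.

```python
def queue_depth_over_time(arrival_pps, service_pps, seconds, max_depth):
    """Fluid approximation: the backlog grows at (arrival - service)
    packets per second, floored at zero and capped at the queue limit."""
    depth = 0.0
    samples = []
    for _ in range(seconds):
        depth += arrival_pps - service_pps
        depth = max(0.0, min(depth, max_depth))
        samples.append(depth)
    return samples

# Example: sending 1200 pps into a queue drained at an assumed 1000 pps
# grows the backlog by 200 packets per second, so it would cross a
# 1152-packet minimum threshold after about 6 seconds.
```

In practice the drain rate depends on the interface speed, packet sizes, and the bandwidth guarantee, so treat this only as a way to reason about the orders of magnitude involved.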
TCP is designed to function this way. Remember that TCP is a reliable protocol which means that each segment that is sent is acknowledged, meaning that the sender is informed when data has successfully reached the receiver. If the sender is informed that for some reason a sent segment did not reach its destination (i.e. segments are dropped), it will respond by reducing the window size to one segment, as described in the lesson.
So TCP by design will respond to dropped segments by decreasing the window size.
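As a toy illustration of that behaviour, here is a simplified, Tahoe-style model of the window collapse described above. Real TCP stacks also maintain ssthresh and switch to congestion avoidance, which this sketch omits.

```python
def cwnd_trace(events):
    """Toy model: the congestion window doubles each round trip
    (slow start) until a loss is signalled, at which point it
    collapses back to a single segment."""
    cwnd = 1
    trace = []
    for lost in events:  # one entry per round trip: True if a drop occurred
        trace.append(cwnd)
        cwnd = 1 if lost else cwnd * 2
    return trace

# cwnd_trace([False, False, False, True, False]) -> [1, 2, 4, 8, 1]
```

This is exactly why WRED works well with TCP: dropping a few packets early causes senders to slow down before the queue overflows completely.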
I understand that WRED will give preferential treatment to higher-priority traffic and will be more “strict” towards low-priority.
However, if we have two traffic classes and therefore two queues - one for high-priority traffic and one for low-priority traffic - wouldn't dropping something from the low-priority queue not help the high-priority queue at all? After all, they are two separate classes/queues, each with its own % of BW reserved.
In other words, I would understand this if low and high-priority traffic were in the same queue and the lower-priority one would be dropped more often, but they’re in separate queues, aren’t they? So, dropping something from the low-priority queue wouldn’t really help the high-priority queue to avoid congestion, or not?
Your thought process is correct. When you have two different traffic profiles as seen in the lesson, they are actually using the same queue! Each individual packet is treated differently based on its IP precedence and the queue depth thresholds you have configured, but all of the packets exist within the same queue. Does that make sense?
There is more to this story. You could configure WRED for a single queue where you queue multiple packets, or have WRED for multiple queues. For example:
class-map match-any PREC
 match ip precedence 3
 match ip precedence 5
!
policy-map WRED_SINGLE
 class PREC
  bandwidth 1000
  random-detect
  random-detect precedence 3 20 45 25
  random-detect precedence 5 30 60 20
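As a rough sketch of how the two precedence profiles share one queue in the WRED_SINGLE policy above: both precedences see the same queue depth, but each is checked against its own thresholds. The threshold numbers mirror the policy; the exponentially weighted averaging of queue depth that IOS actually performs is omitted.

```python
# (min_threshold, max_threshold, mark_probability_denominator) per
# precedence, mirroring the WRED_SINGLE policy
PROFILES = {3: (20, 45, 25), 5: (30, 60, 20)}

def drop_probability(precedence, shared_queue_depth):
    """Both precedences share one queue depth, but each has its own
    thresholds, so precedence 3 starts dropping earlier than 5."""
    min_th, max_th, denom = PROFILES[precedence]
    if shared_queue_depth < min_th:
        return 0.0
    if shared_queue_depth >= max_th:
        return 1.0
    return (1.0 / denom) * (shared_queue_depth - min_th) / (max_th - min_th)

# At a shared depth of 25 packets, precedence 3 already has a non-zero
# drop probability while precedence 5 is still untouched.
```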
With a single queue, one type of packet would be dropped before the other. Within the queue, precedence 5 packets (with their higher thresholds) have an advantage over precedence 3 packets. You can also configure it like this:
class-map match-all PREC5
 match ip precedence 5
class-map match-all PREC3
 match ip precedence 3
!
policy-map WRED_MULTIPLE
 class PREC3
  bandwidth 1000
  random-detect
  random-detect precedence 3 20 45 25
 class PREC5
  bandwidth 1000
  random-detect
  random-detect precedence 5 30 60 20
Each queue has its own bandwidth, so you could say that they don't affect each other, but that's not entirely true. (W)RED is about congestion avoidance. If you drop some packets and prevent congestion, it affects the entire interface, so if you drop packets from one queue, your other queue might be able to transmit more because there is less congestion.
Also, on switches your queues share a common physical buffer space. By dropping packets from one queue, you save buffer space which packets in other queues can use.