Bandwidth vs Bandwidth Remaining

This topic is to discuss the following lesson:

Hi Rene, for the bandwidth remaining percentage config, if other classes such as AF11, AF21, and AF31 are not in use, can the Default class take the unused bandwidth from these 3 classes, or will the bandwidth remaining percentage for AF11 (8 Mbps), AF21 (16 Mbps), or AF31 (24 Mbps) always be reserved for them and unusable by Default or any other class?

Hello Sandeep

Remember that these values refer to bandwidth guarantees that exist during congestion. In other words, this is reserved bandwidth for that particular type of traffic.

Now if there is no AF11, AF21, or AF31 traffic at a particular time, then all other configured traffic types (EF and class-default) will share the full bandwidth of the interface as needed. But such traffic will be served in a best-effort manner; no guaranteed bandwidth is provided beyond what has been configured.

So if you were to examine the results of such a situation in the lab, you would see that any additional bandwidth available would show up under the class-default.

But be aware that this is not guaranteed bandwidth that you are seeing, but best effort.

I hope this has been helpful!


Remember, when configuring these values (whether percentage or percentage remaining), they define the minimum guaranteed bandwidth.
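To make this concrete, here is a minimal sketch of the kind of policy being discussed (the class names and percentages are illustrative, and the sketch assumes class maps matching the corresponding DSCP values already exist):

```
policy-map BANDWIDTH-REMAINING
 class EF
  priority percent 20
 class AF11
  bandwidth remaining percent 10
 class AF21
  bandwidth remaining percent 20
 class AF31
  bandwidth remaining percent 30
 class class-default
  bandwidth remaining percent 40
```

During congestion, each AF class is guaranteed its share of whatever is left after the priority queue is served. When an AF class is idle, its share is redistributed to the active classes rather than being held in reserve.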


First of all, thank you for the great work you’re doing with NetworkLessons. After reading this lesson I have one doubt. If I understood it correctly from the “Introduction to QoS” lesson, QoS can only be applied to trunk interfaces or to routed interfaces, according to the following words from Lazaros in that section’s forum:

“QoS functions at Layer 3 and Layer 2. Layer 3 QoS will operate when routing packets, as the QoS information is found within the header of the IP packet. At layer 2, QoS information is found within the 802.1Q tag, or the VLAN tag. This VLAN tag exists only on frames that traverse trunk ports. Since frames that enter and exit an access port do not have VLAN tags which contain the tagging information, QoS based on markings cannot be implemented at access ports. There is an exception to this rule, which is access ports that are configured with an additional voice VLAN. This is because voice VLAN frames do have a VLAN tag and can be handled differently than other frames”

However, in the current lesson it’s being applied to non-routed, non-trunk interfaces. Did I miss something? The interfaces used in this lesson are configured as follows:

interface GigabitEthernet1/0/27
interface GigabitEthernet1/0/48
 service-policy output BANDWIDTH-REMAINING

So there is no switchport mode trunk to convert them to trunks, nor do they have IP addresses that would make them routed interfaces.

Another question: what’s the difference between the priority command used in the previous lesson and the police rate command used in this one? Both assign the traffic to an LLQ up to a certain maximum per second, don’t they?

Additionally, I’d like to point out that I’ve detected a minor error: the sum “10 + 20 + 30 = 70 Mbps” is wrong. Those values sum to 60, not 70.

Hello José

Yes, I see the confusion. Let me clarify my statement. QoS mechanisms that use DSCP and CoS values to make their decisions require that the DSCP or CoS information exists. For Layer 2 QoS, only tagged frames can participate, because the CoS value lives inside the 802.1Q tag; therefore, only trunk ports (which carry tagged frames) can apply Layer 2 QoS policies. For Layer 3 QoS, where DSCP values are used, a policy map like the one in this lesson can queue packets with a particular priority based on the DSCP values found in the IP header. Typically, a Layer 2 port will not be able to “read” information in the Layer 3 header; however, some switch models with the appropriate IOS are able to read those Layer 3 DSCP values and queue accordingly.
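For example, a hedged sketch of Layer 3 classification on such a switch (the class and policy names are made up, and the exact syntax depends on the platform):

```
class-map match-all EF
 match dscp ef
class-map match-all AF31
 match dscp af31
!
policy-map DSCP-QUEUING
 class EF
  priority percent 20
 class AF31
  bandwidth remaining percent 30
```

Because this policy classifies on DSCP in the IP header rather than CoS in a VLAN tag, it can work even on a non-trunk port, provided the platform is able to inspect Layer 3 markings.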

The priority command in the QoS LLQ lesson is used to define the priority queue within the LLQ scheme; all packets that match bypass the CBWFQ scheduler. The police rate command here simply defines a rate at which to police traffic. The important thing is that it is placed under a priority level command. The priority level command enables you to configure multiple priority queues. In this case, Rene has only created one, so it is essentially the same as using the priority command in the LLQ lesson.
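Side by side, the two approaches look something like this (the rates are illustrative and the exact syntax varies by platform and IOS version):

```
! LLQ lesson style: priority with a built-in conditional policer
policy-map LLQ-STYLE
 class EF
  priority percent 20
!
! This lesson's style: a priority level with an explicit policer
policy-map PRIORITY-LEVEL-STYLE
 class EF
  priority level 1
  police rate percent 20
```

One practical difference worth noting: the implicit policer attached to the priority command generally only takes effect during congestion, whereas an explicit police statement enforces its rate unconditionally.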

Yes, thanks for that, I’ll let Rene know to make the correction.

I hope this has been helpful!



With the first policy-map (Bandwidth), class EF drops almost 138,138 packets.
With the second policy-map (Bandwidth-Remaining), the same class drops almost 57,684 packets.
Shouldn’t these be almost the same, since in both cases EF is policed at 20% of the bandwidth?

Hello Konstantinos

The number of drops shown is the total that has accumulated by the time the command is issued. If you wait longer and reissue the command, the number will increase. So the values you see for the first policy map and the second policy map are simply counters that had accumulated when the command was issued. They have no correlation to the actual percentage of dropped packets and cannot simply be compared.

I hope this has been helpful!


True or False questions:

  1. Without a policer on the LLQ, the LLQ is guaranteed 100% bandwidth, regardless of the presence of a configured bandwidth.

  2. Without a policer on the LLQ, there is no point in defining bandwidth on the CBWFQs, because those would not be guaranteed.

  3. Without a policer on the LLQ, the only reason for defining a bandwidth on the LLQ is to “manipulate” the remaining bandwidth. The LLQ will still get 100% if it is not policed.

  4. If there is no traffic for the LLQ, then the remaining bandwidth is 100%.
    (If True, question 2 must be False.)

Forgive me, I tend to ask questions that everyone else takes for granted. It’s a disorder I think.

Hello Kevin

I’ll do my best to respond to each question below:

1. False. The LLQ feature allows you to prioritize specific traffic types in a queue to ensure timely delivery of important or delay-sensitive traffic. However, configuring LLQ on a network device does not guarantee that the queue will receive 100% of the available bandwidth.

You can configure bandwidth allocation for different classes within the LLQ, but the actual bandwidth used by the LLQ is subject to the availability of network resources and the presence of competing traffic. If there is no contention for bandwidth, the LLQ traffic may consume as much bandwidth as it requires, but during times of congestion, the network device will try to allocate bandwidth according to the configured values.

2. False. CBWFQ is a mechanism used to allocate bandwidth fairly among different traffic classes. It ensures that each traffic class receives a minimum amount of bandwidth during periods of congestion. Defining bandwidth for CBWFQ classes is still useful even without a policer on the LLQ.

While it is true that LLQ prioritizes specific traffic types and may consume more bandwidth if left unconstrained, CBWFQ classes are still guaranteed their configured minimum bandwidth. If the LLQ traffic does not fully utilize the available bandwidth, the remaining bandwidth will be fairly distributed among other CBWFQ classes.

3. True. If you don’t apply a policer to the LLQ, it can consume as much bandwidth as it needs, as long as there is no contention for bandwidth. However, defining a bandwidth value for the LLQ still has a purpose – it allows you to “reserve” a specific amount of bandwidth for the priority queue, which in turn influences the remaining bandwidth allocation for other CBWFQ classes.

The scheduler will use the defined bandwidth value for the LLQ to calculate the remaining bandwidth that can be allocated to other CBWFQ classes. Even though the LLQ can consume more than the configured bandwidth if it’s not policed, having a bandwidth value assigned helps manage the distribution of network resources among other traffic classes.
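A hypothetical worked example of that calculation, assuming a 100 Mbps interface (all figures are made up for illustration):

```
priority queue configured at 20 Mbps (unpoliced)
remaining bandwidth for the scheduler  = 100 - 20 = 80 Mbps
bandwidth remaining percent 30 on AF31 -> 0.30 x 80 = 24 Mbps guaranteed
bandwidth remaining percent 70 on default -> 0.70 x 80 = 56 Mbps guaranteed
```

If the unpoliced LLQ actually consumes more than 20 Mbps, the other classes will receive less in practice, but these are the minimums the scheduler uses for its calculations.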

4. True. If there is no traffic for the LLQ, then it won’t consume any bandwidth. In this scenario, the entire available bandwidth (100%) can be utilized by the remaining CBWFQ classes, based on their configured bandwidth allocations and the network’s current traffic demands.

LLQ is designed to prioritize delay-sensitive or critical traffic, but when there is no such traffic present, other classes will have the opportunity to use the full capacity of the available bandwidth.

I hope this has been helpful!