Regarding BW guarantees on CBWFQ

Hello All,
When allocating bandwidth to different classes using CBWFQ, is there a setting in Cisco IOS that defines whether a class can or cannot borrow bandwidth from other queues when its traffic exceeds its allocated/guaranteed bandwidth?
The assumption is that all classes have the same priority settings.
I’m working with Juniper, and I’m being told that bandwidth can essentially only be guaranteed if queue sharing is disabled. I’d like to confirm whether Cisco documentation describes something similar.

Hello Patrik

This was interesting to research because it seems that Cisco doesn’t publish the details of how its CBWFQ algorithm achieves its QoS functions. All the information I found was based on experimentation and analysis that various users have performed.

To answer your question directly, it seems there is no way to explicitly allow or disallow the use of other queues (when they are empty) once a queue has exceeded its allocated bandwidth. However, by default, if some queues are empty and do not need their bandwidth for a short period of time, that unused bandwidth is proportionally allocated across the other classes.

Remember, the CBWFQ scheduler, based on the class configurations, guarantees each class a minimum percentage of the link’s bandwidth during congestion. If additional bandwidth is available, a class can use a higher percentage, as described above.
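To make this concrete, here is a minimal CBWFQ sketch (the class names, match criteria, and percentages are purely illustrative, not taken from any particular deployment):

    ! classify traffic into two illustrative classes
    class-map match-any VOICE
     match dscp ef
    class-map match-any BUSINESS
     match dscp af31
    !
    policy-map WAN-OUT
     class VOICE
      ! guaranteed a minimum of 30% of the link during congestion;
      ! may use more whenever other queues are idle
      bandwidth percent 30
     class BUSINESS
      ! guaranteed a minimum of 20% during congestion
      bandwidth percent 20
     class class-default
      ! everything else shares whatever remains
      fair-queue
    !
    interface GigabitEthernet0/1
     service-policy output WAN-OUT

As far as I can tell, nothing in this syntax toggles whether a class may borrow idle bandwidth from the other queues; the borrowing behavior described above is simply how the scheduler behaves by default.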

I hope this has been helpful!

Laz


Thank you for researching this topic! I suppose the detailed behavior of each vendor can sometimes only be inferred through testing.
The issue I’m facing with a customer involves the seeming lack of a BW guarantee when using CBWFQ on a Juniper device. JTAC responded that this is to be expected unless overflowing queues are prevented from sharing the BW of queues deemed to be “available.”
Despite what JTAC said, our recent lab tests nevertheless seem to indicate that a BW guarantee is in effect, so it’s unclear what’s causing the difference with production traffic. I assume bursty traffic is somehow involved, but I’m curious over what span the BW allocation is actually guaranteed, for example, across one or multiple time intervals (Tc) of committed bursts (Bc).
I suspect this level of detail might not be disclosed by the vendor, and that it depends on factors such as shaping vs. line rate.
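(For reference, what I mean by Tc and Bc: in standard token-bucket terms, Bc = Tc × CIR, so a 100 Mb/s guarantee with a 5 ms Tc would correspond to roughly 500 kb, about 62.5 kB, per interval. Those numbers are just an example rather than our actual values; the question is whether the guarantee holds within each individual Tc or only as an average across many of them, which is where bursty traffic could make a visible difference.)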

Hello Patrik

Hmm, that’s curious.

So did they make a suggestion as to how you can achieve what you need? It seems strange to simply say that there is no bandwidth guarantee unless overflowing queues are prevented from sharing. How do you prevent them from sharing, and is that an option for your topology? Is there a configuration parameter that controls that? CBWFQ by definition guarantees a minimum bandwidth to each class when there is congestion.

In any case, I’d be interested to hear how this plays out. If it’s not too much trouble, keep us posted and let us know of the resolution…

Thanks!

Laz

Hello Laz,
I let go of this case and handed it over to a separate team, so it’s been a while, but let me update you on what I’ve been told. It seems we were facing the issue due to oversubscribing a trunk interface. For example, we may have had 15 users accommodated under a 1G link, each shaped at 100M, which amounts to 1.5G of committed shaping against a 1G trunk. Aggregate utilization during peak times was still less than 10% of the 1G interface, but this type of oversubscription apparently prevents CBWFQ BW allocation guarantees for QoS. Note that users still have full access according to their shaped subinterfaces.
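For context, the setup was roughly along these lines (a simplified Junos sketch; the interface, unit, and profile names are made up and only one of the fifteen customer units is shown):

    interfaces {
        ge-0/0/0 {
            per-unit-scheduler;
            vlan-tagging;
            unit 101 {
                vlan-id 101;
            }
            # ... and so on for the remaining customer units
        }
    }
    class-of-service {
        traffic-control-profiles {
            # each customer subinterface is shaped to 100 Mb/s
            TCP-100M {
                shaping-rate 100m;
            }
        }
        interfaces {
            ge-0/0/0 {
                unit 101 {
                    output-traffic-control-profile TCP-100M;
                }
            }
        }
    }

With fifteen such units, the shaping rates add up to well beyond the 1G physical interface, even though actual utilization stayed low.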

The original suggestions from JTAC were to choose one of the following two options.

  1. Adding an “exact” option to the allocated BW percentage, which essentially shapes the traffic at the class level. This guarantees bandwidth, but it also strictly reserves the allocation, so other overflowing queues cannot access it when it is underutilized.
  2. Adding a “rate-limit” option, which essentially polices bandwidth to the allocated percentage. With this option, other overflowing queues can use the rate-limited queue’s bandwidth as long as that queue is not congested. Of course, the rate-limited class cannot access the bandwidth of other empty queues either. (A rough sketch of both options follows below.)
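In Junos scheduler terms, the two options map to something like this (the scheduler names, the forwarding class, and the 30% figure are placeholders I’m using for illustration; only the “exact” and “rate-limit” keywords come from JTAC’s suggestion):

    class-of-service {
        schedulers {
            # option 1: shaped at exactly 30%; the allocation is reserved
            # and cannot be borrowed by other overflowing queues
            SCH-GOLD-EXACT {
                transmit-rate percent 30 exact;
            }
            # option 2: policed at 30%; other queues may use the unused
            # share, but this class cannot borrow from others
            SCH-GOLD-RATELIMIT {
                transmit-rate percent 30 rate-limit;
            }
        }
        scheduler-maps {
            # only one of the two schedulers would actually be referenced
            SMAP-GOLD {
                forwarding-class assured-forwarding scheduler SCH-GOLD-EXACT;
            }
        }
    }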

Reference to the options:

However, ultimately JTAC advised that we could add a “guaranteed-rate” option alongside the “transmit-rate” used to allocate BW per class, so long as the sum of all these guaranteed rates does not exceed the physical trunk BW. In our case this was not an issue at all, as only a minority of circuits required QoS and we essentially had several circuits shaped above their needs.
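If it helps anyone else, the resulting structure was roughly as follows. This is again a simplified sketch with made-up names and example rates; at least on the platforms I’ve worked with, guaranteed-rate is configured in the traffic-control-profile next to the shaping-rate rather than on the transmit-rate line itself:

    class-of-service {
        traffic-control-profiles {
            TCP-QOS-CUSTOMER {
                # per-customer cap stays at 100 Mb/s
                shaping-rate 100m;
                # committed rate for this unit; the sum of all guaranteed
                # rates across the trunk must stay within the 1G trunk BW
                guaranteed-rate 40m;
                scheduler-map SMAP-GOLD;
            }
        }
        interfaces {
            ge-0/0/0 {
                unit 101 {
                    output-traffic-control-profile TCP-QOS-CUSTOMER;
                }
            }
        }
    }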

I still find it somewhat surprising that BW guarantees within each class will not work unless the sum of all the guarantees defined across all subinterfaces of a trunk remains below the trunk BW. Basically, that is saying there is no way to benefit from the statistical multiplexing characteristics of Ethernet.

Nevertheless, for our particular use case, this was not a challenge. I’m not sure if there are further workarounds or whether this is a platform-specific issue.

Best regards,
-Patrik

Hello Patrik

We really appreciate you sharing these results with us as they are very useful for future readers of the topic. It’s interesting to see how different vendors deal with different issues. The solution JTAC gave is ingenious, to say the least, and it should be sufficient for your needs.

Thanks again!

Laz
