Hello everyone. I’m new here on the portal, but I’ve found it very useful for everyday work. I’m having trouble limiting client bandwidth in IOS XR on a 9006. I am configuring the policing the same way I did on other platforms, but it is not working. Can someone help me with this configuration?
At first glance, your configuration looks good. From your output, it seems that all traffic conformed to the policer and was simply transmitted. What troubles me, and what is probably central to your question, is that the stated rate is 3965118 kbps, or roughly 3.9 Gbps, which is beyond the 3 Gbps of the policer. In addition, we see N/A in the Transmitted value. Note also that all of the traffic is considered conforming.
I suggest you consider the following to continue troubleshooting:
Ensure that traffic is being transmitted and is actually passing through and getting to its destination. Note that your access list is matching all IPv4 traffic. Is there any IPv6 traffic on the network?
Reduce the police rate in the policy-map to a very low number like 1 Mbps and make the exceed action transmit (if this is a production network, such a configuration will not affect user traffic), and see if all of that excess traffic is still considered conforming or exceeding. This will show whether it is simply a traffic-volume issue or some misconfiguration.
I notice that you’ve applied the policer to an Ethernet Bundle. QoS on bundles is measured differently, and may actually exceed the maximum of the policer depending on how load balancing across the individual links takes place. Take a look at the following Cisco documentation to see this in detail:
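As a rough sketch of the low-rate test described above, the IOS XR configuration might look like this (the policy name, direction, and bundle interface are placeholders, not taken from your configuration):

```
policy-map TEST-POLICE
 class class-default
  police rate 1 mbps
   conform-action transmit
   exceed-action transmit
  !
 !
 end-policy-map
!
interface Bundle-Ether1
 service-policy input TEST-POLICE
!
```

With both actions set to transmit, nothing is dropped; you can then check the conform/exceed counters with `show policy-map interface Bundle-Ether1` to see whether traffic beyond 1 Mbps is actually being counted as exceeding.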
This is the first time I’m using this forum.
I’m studying for the Cisco ENCOR exam.
I’m now reading about QoS.
I always learned that you can only add QoS when you have a 1:1 connection.
I don’t read anything about this here. Why?
Great to have you with us! I’m not sure what you mean when you say 1:1 connection, however, QoS involves many mechanisms used to ensure that specific types of traffic get priority whenever there is congestion on the network. QoS includes things like markings of the headers of IP packets, CoS values in the tags of frames on Layer 2, as well as policing and shaping.
QoS markings in IP packets and frames are simply the way that data is categorized on the network. They are used to identify which traffic should receive special treatment, or priority. Shaping and policing are policies that are applied on specific interfaces and that, when those interfaces experience congestion, begin to function based on the markings of the packets/frames being processed, as well as on other configured parameters.
More about these can be found in the lessons within the QoS course:
Thanks for this course; it is very complete and easy to understand. I have some questions.
Is congestion avoidance only for TCP traffic?
In 3.2, you said congestion management is named “queueing”. Is this Cisco naming, or is it universal?
What is the difference between “tail drop” and “queue starvation”?
I have seen a question above about the behaviour of shaping when downloading a file, for example. Does it make sense to say “shaping in the downlink”? Isn’t it just the ISP’s policing that takes effect?
I see that traffic policies are always configured on an interface. Is it possible for the link to get congested but not the interface? Because in that case, the QoS management will not take effect.
Yes. Congestion avoidance relies on the TCP window size mechanisms kicking in when segments are purposefully dropped. This mechanism will not work for UDP traffic.
In this section, Rene mentions that congestion management is achieved by using queueing mechanisms. This is something that is performed by all networking equipment and is not exclusive to Cisco.
Tail drop occurs when you have a single queue that becomes full, and the next packet to arrive cannot enter the queue, and is thus dropped.
Queue starvation occurs when you have multiple queues on an interface, one of which is the priority queue. The priority queue is always served first, and the rest of the queues must wait. However, if you have enough priority traffic, it may happen that the scheduler is so busy serving the priority queue that the other queues are never served. When the non-priority queues get full, the next packets trying to enter them will be dropped, resulting in queue starvation. To resolve this, a limit is set on the priority queue to ensure that queue starvation is avoided.
Now both tail drop and queue starvation essentially look the same (packets trying to enter a full queue are dropped), but they occur due to different circumstances, and in different queueing scenarios, and this is what makes them different.
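The difference can be pictured with a toy simulation (purely illustrative; this is not how a router implements queueing):

```python
from collections import deque

def enqueue(queue, packet, limit):
    """Tail drop: a packet arriving at a full queue is discarded."""
    if len(queue) >= limit:
        return False  # dropped
    queue.append(packet)
    return True

# Tail drop on a single queue of size 3: the 4th packet is dropped.
q = deque()
results = [enqueue(q, p, limit=3) for p in range(4)]
print(results)  # [True, True, True, False]

# Queue starvation: a strict-priority scheduler always serves the
# priority queue first, so the normal queue is never drained.
prio, normal = deque(), deque()
drops = 0
for tick in range(10):
    enqueue(prio, f"voice-{tick}", limit=100)   # priority traffic never stops
    if not enqueue(normal, f"data-{tick}", limit=3):
        drops += 1                              # normal queue overflows
    # scheduler: serve the priority queue if anything is waiting
    if prio:
        prio.popleft()
    elif normal:
        normal.popleft()
print(drops)  # 7 -- the normal queue filled up and was never served
```

In both cases the drop itself looks identical (a full queue rejects the arriving packet); what differs is why the queue is full.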
Both policing and shaping can be configured on the ISP, on an enterprise’s edge router, or on both! It really depends on what you want to achieve. For example, your ISP may apply policing on both upload and download data, which means they will drop packets that exceed the limits they place. In order to avoid losing those packets, you can set up an additional shaping policy on your edge device which will ensure that data will be sent to the ISP at speeds below their policing thresholds, while still attempting to avoid dropping packets by employing queueing. (See this section to refresh your memory on the difference between policing and shaping). So the words you use to describe it depend on what has been configured and on what device(s).
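For instance, if the ISP polices your upstream at 100 Mbps, a shaper on the enterprise edge running IOS might look like this (the 95 Mbps value and the interface name are made-up placeholders, with the rate set slightly below the ISP’s policing threshold):

```
policy-map SHAPE-TO-ISP
 class class-default
  shape average 95000000
!
interface GigabitEthernet0/0
 service-policy output SHAPE-TO-ISP
```

Excess traffic is then buffered and delayed by your own shaper rather than dropped by the ISP’s policer.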
QoS mechanisms will “kick in” only when there is congestion. If there is no congestion, all arriving packets are immediately served. Congestion that occurs on the interface in an outbound direction can have QoS mechanisms applied. Congestion can occur in an inbound direction as well, but there is no way to employ QoS as an interface is obligated to simply receive whatever traffic it is sent, and has no way of queuing anything if more traffic arrives than it can handle. It is the responsibility of the interface on the other end to employ QoS so that incoming traffic on the local interface does not overwhelm it.
Now you are making a distinction between a link and an interface, but these are essentially the same. The actual wire (if that’s what you mean by link) will not be able to carry anything more than whatever the interface sends. The wire is not a limiting factor (unless you are trying to run GigabitEthernet over a category 3 UTP cable!), and even if it was, there are no mechanisms on the wire itself to employ QoS. All the intelligence and mechanisms occur at the interfaces.
The first thing you should do is determine how much bandwidth your voice and video will require. For a remote site with only four IP phones, 30 Mbps is way too much. If you’re using the G.711 codec, each conversation, along with headers, will consume close to 90 kbps according to Cisco. For four conversations, that’s a maximum of 360 kbps. Now an HD video transmission shouldn’t take more than about 4 or 5 Mbps, while lower-quality video will take much less than that. That should give you an idea of how much bandwidth you should reserve on the link.
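The arithmetic above can be checked quickly (the 90 kbps per G.711 call and the 5 Mbps per HD video stream are the estimates from the text):

```python
G711_CALL_KBPS = 90      # one G.711 call including headers (Cisco estimate)
HD_VIDEO_KBPS = 5000     # rough upper bound for one HD video stream

calls = 4
voice_kbps = calls * G711_CALL_KBPS
total_kbps = voice_kbps + HD_VIDEO_KBPS

print(voice_kbps)        # 360 kbps for all four calls
print(total_kbps / 1000) # 5.36 Mbps -- far below a 30 Mbps reservation
```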
As for the application of QoS to the IP phones, you must identify the traffic you want to prioritize, and then apply the QoS mechanisms to do this.
For the first, you can take a look at these lessons which talk about the classification of traffic, ways in which you can mark this traffic for special treatment, and methods of doing this at both Layer 2 and Layer 3, depending upon your topology.
The classification and marking will be applied on the switch, so that voice traffic can be identified. The next thing you have to do is apply QoS mechanisms at the appropriate location on the network in order to provide the desired special treatment of the voice packets. Specifically, you want to apply this to the VPN connection. You can do this by either applying shaping or policing. You can find out more about these, including their differences at the following lessons:
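As a rough sketch of what the classification and marking step might look like on an IOS device (the class and policy names, the NBAR match, and the interface are placeholder choices for illustration):

```
class-map match-any VOICE
 match protocol rtp audio
!
policy-map MARK-VOICE
 class VOICE
  set dscp ef
!
interface GigabitEthernet0/1
 service-policy input MARK-VOICE
```

Once voice packets carry DSCP EF, the shaping or policing policy further along the path can match on that marking to give them the desired treatment.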
While these lessons deal with QoS on IOS devices, you can also find out how QoS can be applied to Cisco ASA devices, so that QoS can be implemented on your VPN:
1) Based on bandwidth characteristics, we divide the queues: the first queue gets 50% of the bandwidth, the second gets 20%, and the remainder goes to the third. My question is how packets are processed during congestion. Say queue 1 is for voice traffic, queue 2 is for data traffic, and queue 3 is for any other traffic. When congestion occurs, the router puts voice packets into queue 1, data packets into queue 2, and so on. But we also prioritize queue 1, so its traffic is transmitted first, and only when queue 1 is empty does the router move on to queue 2. If voice packets keep arriving and the router keeps putting them into queue 1, won’t queue 2 packets never get processed?
2) If the whole bandwidth of queue 1 is used by just two voice packets, what happens to the other packets waiting to be transmitted, and in that case how does queue 2 get its chance to transmit its data packets?
What you describe assumes that the second queue will only be processed if the first queue is empty, and the third queue will be processed only if the first two are empty, and so on. But this is not the case.
If the first queue is set to 50% bandwidth, for example, then 50% of the bandwidth of the interface will be used to guarantee traffic at that rate. If traffic exceeding 50% of the bandwidth tries to use that queue in a specific period of time, it won’t be allowed to do so, and those packets will have to wait in the queue while the other queues are served as well, each one with the percentage of bandwidth that it has been configured with.
Think of it as a percentage of usage over time. In one second, the first queue, if it is set to 50% of the bandwidth, will be able to guarantee transmission at a priority for 0.5 seconds. Anything else that comes into that queue beyond this within the 1 second period will have to wait for the other queues to be served.
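That mental model can be written out (a deliberately crude picture; a real scheduler interleaves at packet granularity rather than in contiguous blocks of time):

```python
# Bandwidth percentages configured on three queues
shares = {"q1_voice": 0.50, "q2_data": 0.20, "q3_other": 0.30}

# Over a 1-second congested interval, each queue is guaranteed
# transmission time proportional to its configured share.
interval = 1.0  # seconds
guaranteed = {queue: share * interval for queue, share in shares.items()}
print(guaranteed)  # {'q1_voice': 0.5, 'q2_data': 0.2, 'q3_other': 0.3}
```

Traffic arriving in queue 1 beyond its 0.5 seconds of guaranteed service within that interval simply waits while the other queues receive their own shares.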
1) I still have not understood marking. Suppose, as in the topology, the switch has done both classification and marking of a packet, and the packet then reaches the router. Why would the router do classification and marking again (if it wants to), when this has already been done?
2) Through classification, the switch knows “this is IP traffic” or “this is VoIP traffic”, so what does marking mean? What I actually want to know is what information the router gets from marked packets, the way classification differentiates the traffic flowing through the router.
3) Suppose we have an internetwork; at which router is classification and marking done?
4) Is the scheduler (round robin or CBWFQ) already configured, or do we have to configure it?
5) How are packets sent out per queue based on the scheduler? How will the router know to send 2 packets from queue 1, then 3 from queue 2, and so on?
6) If a segment is lost, the window size is reduced by half. Does that mean that if 2 segments are lost simultaneously out of 4 segments sent by the sender, the window size is reduced to zero segments? Am I right? How and when will the receiver get these lost segments? Will the lost segments be included in the next group of segments? Suppose the window size shrinks to 2 segments; will the next update then carry only the 2 lost segments? Or, if the window size is 1 segment, will it carry one lost segment, with the remaining segment in the next update once the window size has doubled?
Classification refers to the process by which we determine what the traffic is. (voice, telnet, video, web, email, routing protocol… all kinds of traffic.) Based on this classification, you can then mark traffic, or have QoS features applied directly to the traffic.
Marking refers to the process by which you modify parameters in the IP header (DSCP) and the Ethernet Header (CoS) to identify particular frames and packets. Routers and switches can then be configured to act upon these markings in a particular way.
Acting upon classification and/or markings is what QoS does, and this can be applied at any router or switch whether on your internal network, or on an ISP network.
For information concerning scheduling, configuration, and their defaults, take a look at this Cisco Documentation.
We have an ASR9001 running IOS XR. We have heavy traffic on it: over 8 Gbit out and 4 Gbit in.
Last week we reached the physical limit of the 10 Gbit port, and services stopped working well.
Is it possible to set up some kind of QoS to prioritize at least TCP ACKs, VoIP, SSH, and BGP? I’ve tried to apply a policy-map but I’m unable to get it working. The ASR refuses to apply the policy to the interface.
I get the following error: “Priority 4 or above is not supported on this line card”
class-map match-any BGP
match access-group ipv4 ACL-BGP
priority level 7
set dscp cs7
This error has to do with what the specific line card supports. You are assigning an egress priority level of 7 in this class map, however, according to this Cisco documentation about Modular QoS for the ASR series routers running IOS XR, any release prior to 5.3.2 supports Priority levels P1, P2, and P3 only. It’s a matter of IOS version and platform.
Take a look at the documentation and attempt to adjust your configuration accordingly. Let us know how it works out for you…
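As a sketch of the adjustment, the priority level could be dropped into the supported 1–3 range (the policy-map name and the policer percentage below are placeholders, and on many ASR 9000 line cards a priority class also requires a policer):

```
class-map match-any BGP
 match access-group ipv4 ACL-BGP
 end-class-map
!
policy-map EGRESS-QOS
 class BGP
  police rate percent 10
  !
  priority level 1
  set dscp cs7
 !
 class class-default
 !
 end-policy-map
```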
Each port supports four egress queues, one of which (queue 1) can be the egress expedite queue. These queues are configured by a queue-set. All traffic leaving an egress port flows through one of these four queues and is subjected to a threshold based on the QoS label assigned to the packet.
So it seems that yes, each egress interface does contain four egress queues. You can find out more details about this and how they function at the following link from where the above quote was taken.
In the shaping section and in the diagram where Rene says:
Above you can see we have 20 moments where we send for 10 ms. 20 x 10 = 200 ms in total. We have 20 pauses of 40 ms, 20x 40 = 800 ms in total.
CIR is 200 Mbps, and because of VoIP we want to minimize the one-way delay by using a Tc of 10 ms. To achieve this Tc value using the formula, we need Bc = Tc × CIR = 0.010 s × 200,000,000 bps = 2,000,000 bits.
If I have understood correctly, the IOS will automatically calculate that in these 20 moments where we send for 10 ms each time, we need to send 10 million bits each to get × 20 = 200 Mbits?
Thank you in advance.
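For reference, the token-bucket arithmetic in the question can be checked like this (using the figures from the quoted examples):

```python
CIR_BPS = 200_000_000        # committed information rate: 200 Mbps
TC_MS = 10                   # interval Tc in milliseconds

# Bc = Tc * CIR (Tc converted from ms to seconds)
bc = CIR_BPS * TC_MS // 1000
print(bc)                            # 2000000 bits per 10 ms interval

intervals_per_second = 1000 // TC_MS
print(intervals_per_second * bc)     # 200000000 bits/s = the CIR

# The diagram instead shows 20 sending moments per second
# (10 ms sending + 40 ms pause -> Tc = 50 ms), so each burst
# must carry 200 Mbit / 20 = 10 Mbit to sustain the CIR.
print(CIR_BPS // 20)                 # 10000000 bits per burst
```

Note that the two scenarios use different Tc values: with Tc = 10 ms there are 100 intervals of 2 Mbits each, while the diagram’s 20 moments per second correspond to Tc = 50 ms and 10 Mbits per burst; both average out to the same 200 Mbps CIR.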