Introduction to QoS (Quality of Service)

This topic is to discuss the following lesson:

Great article as always Rene.

Hi Rene,
When you are talking about the customer transmitting (the "shaping" part of the lesson), I believe you are talking about uploading?
What if a customer is downloading? How can we avoid the ISP dropping the traffic?

Thanks

Hello Sims

When talking about shaping on the customer side, it can be applied to both uploading and downloading. When implementing it on the customer side, the interface can be configured for shaping since it has memory allocated for both an input queue and an output queue.

So this can be done for both uploading and downloading.
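
As a rough illustration (the interface names and the 50 Mbps rate below are placeholder assumptions, and exact support varies by platform), shaping is typically applied outbound: toward the ISP for the upload direction, and outbound on the LAN-facing interface for the download direction:

! Shape traffic sent toward the ISP (upload direction) to the contracted rate
policy-map SHAPE-UPLOAD
 class class-default
  shape average 50000000
!
! Shape traffic sent toward the LAN (download direction) after it arrives from the ISP
policy-map SHAPE-DOWNLOAD
 class class-default
  shape average 50000000
!
interface GigabitEthernet0/0
 description Link to ISP
 service-policy output SHAPE-UPLOAD
!
interface GigabitEthernet0/1
 description Link to LAN
 service-policy output SHAPE-DOWNLOAD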

I hope this has been helpful!

Laz

Hi,
The LAN is connected to Gi0/1, and Gi0/0 is connected to the ISP.

If I have a QoS policy (for download) applied on my router, for example VLAN 10 (5 Mbps), VLAN 20 (3 Mbps), and VLAN 30 (the remainder), the policy will be enforced when the traffic reaches my router.
What will happen to the same traffic on the ISP's router?
Thanks

Hello Sims

In the example you describe, I am assuming that you have a layer 3 switch for example, where you set up a QoS policy on the SVIs of each individual VLAN being used internally on the network. (You can also use a router with subinterfaces, each on one of the mentioned VLANs). Also, I am assuming that you are talking specifically about shaping or limiting the traffic on each VLAN as indicated in your text.

So if you have a shaping or rate limiting policy on each of those VLANs, then the policy will be applied only to traffic that goes through those interfaces. This information is not conveyed in any way to the ISP router, so the same traffic that traverses the ISP router and enters your L3 switch will not be policed at the ISP. This means that if more traffic comes through the ISP than the shaping limits allow, the L3 switch must take care of this extra traffic using its interface queueing mechanism, that is, the memory allocated for packet buffers at each interface. If the queues are saturated, then packets will be dropped.
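
As a minimal sketch of what such a per-VLAN policy could look like on a router with subinterfaces (the interface numbers and rates below simply mirror your example, and the exact syntax differs on an L3 switch with SVIs), shaping is applied outbound toward each VLAN, which is the download direction for your users:

policy-map SHAPE-5M
 class class-default
  shape average 5000000
policy-map SHAPE-3M
 class class-default
  shape average 3000000
!
interface GigabitEthernet0/1.10
 encapsulation dot1Q 10
 service-policy output SHAPE-5M
!
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 service-policy output SHAPE-3M
!
! VLAN 30 is left unshaped and simply uses whatever capacity remains
interface GigabitEthernet0/1.30
 encapsulation dot1Q 30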

I hope this has been helpful!

Laz

Hi,
As per your first video, most implementations use 1 Gb edge switches and a 10 Gb backbone, and the issue arises with Internet traffic. So where will we enforce the QoS?
The Internet router or the edge switch?

Thanks

Hello Sims

In the specific example in the first video, QoS parameters can be configured on the switch so that frames that are being transmitted from the PC have a low priority and the voice packets being transmitted from the phone will have a high priority. These markings or classifications are configured in the switch.

The router must also be configured to manage the packets in the appropriate way, giving priority to those marked as such.

So generally speaking, the marking or classification for QoS of the frames/packets is done by the switch and the prioritisation on the link is managed by the router.
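
As a rough sketch of that division of labour (the port numbers, VLANs, and the 256 kbps value are placeholder assumptions, and the switch commands shown are for classic "mls qos" platforms), the switch trusts the phone's markings while the router gives voice strict priority on its uplink:

! Switch: trust the markings only if a Cisco phone is detected on the port
mls qos
interface GigabitEthernet0/5
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 100
 mls qos trust device cisco-phone
 mls qos trust cos
!
! Router: give packets marked EF strict priority (LLQ) on the outgoing link
class-map match-any VOICE
 match dscp ef
policy-map WAN-EDGE
 class VOICE
  priority 256
 class class-default
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE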

I hope this has been helpful.

Laz

Hi Rene, thanks for the guidelines about video traffic below:

One-way delay: 200 – 400 ms
Jitter: 30 – 50 ms
Loss: 0.1% – 1%

I was wondering, can it also work for surveillance cameras?

So that I can do basic calculations of how much bandwidth is required when building a surveillance system. Thank you.

Hello Wallace

The one-way delay for video that Rene mentions is usually for videoconferencing or video telephony, where there is a two-way conversation taking place and any longer delays would become bothersome. For video that flows in only one direction, like broadcast video or surveillance cameras, delay is not that important. So there is more leeway for the delay; however, jitter and packet loss will still have an effect on the quality of the video being sent.

I hope this has been helpful!

Laz

Hi Rene and Team,

This may sound silly. I am reading about queuing and congestion management. It seems that the queue scheduler only handles one queue at a time.

Does it clear all the packets in a queue before moving to another?
If the scheduler only handles one queue at a time, why is there a need to limit the bandwidth? We might as well give the queue all the bandwidth it needs so that it can clear the queue faster?

But again, that doesn't seem right, and it seems like the queues are handled concurrently, hence the need to limit the bandwidth for each queue…

I can’t twist my head around this…

Interesting talk about Congestion Avoidance. It is a new concept for me.

You explain the congestion avoidance mechanism as randomly dropping TCP segments (L4), but at the end you say it drops packets. So I guess you are still referring to L4 despite mentioning packets… aren't you?

Hello Juan

Congestion avoidance is a mechanism that functions within the framework of queuing. Queuing takes place at Layer 3 and thus it has to do exclusively with IP packets. Now, when it says that the congestion avoidance tool will drop TCP segments, it should more correctly read "it will drop IP packets, which in turn causes the encapsulated TCP segments to be dropped, in the hope of reducing the window size…"

So if you drop the IP packet, you’re essentially dropping the encapsulated TCP segment contained within it.
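
If it helps to visualize it, here is a minimal sketch of how that random dropping (WRED) can be enabled within a queuing policy; the class name, DSCP values, and percentages are placeholder assumptions:

! Classify data traffic by its DSCP markings (example values)
class-map match-any DATA
 match dscp af11 af12 af13
!
policy-map WAN-OUT
 class DATA
  bandwidth percent 50
  ! Randomly drop IP packets (and the TCP segments inside them) as the queue
  ! fills, before it overflows, so that TCP senders slow down
  random-detect dscp-based
!
interface GigabitEthernet0/0
 service-policy output WAN-OUT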

I hope this has been helpful!

Laz

Is round-trip delay the same as round-trip time? I ask because we usually measure round-trip time by doing a ping to a host.
Another question is whether we can get the one-way delay and jitter using only the Cisco IOS CLI.

Hello Juan

Round-trip delay, round-trip time, and round-trip latency are all the same thing. So yes, you can measure the round-trip delay simply by using a ping to a host.

Now, as far as I know, there is no way to get an instantaneous measure of jitter or one-way delay directly from Cisco IOS. However, there are some tools that allow you to measure and record these values. For jitter, both IP SLAs as well as Cisco Performance Monitor are capable of recording jitter values over a period of time. More on these can be found here:



As for one-way delay, there are some additional tools that you can use. These are usually employed for voice and video networks to test the suitability of a network to carry such time-sensitive services. A detailed description of some of these tools can be found below:
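
In the meantime, here is a minimal sketch of the IP SLA UDP jitter probe mentioned above; the responder address, port, and codec are placeholder assumptions, and the one-way delay values it reports are only meaningful if both routers have NTP-synchronized clocks:

! On the far-end router
ip sla responder
!
! On the measuring router: send synthetic voice-like probes and record
! jitter, round-trip time and, with synchronized clocks, one-way delay
ip sla 10
 udp-jitter 192.0.2.10 16384 codec g711ulaw
 frequency 60
ip sla schedule 10 life forever start-time now

The results can then be viewed with "show ip sla statistics 10".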

I hope this has been helpful!

Laz

Do I put QoS on the access ports? If so, why? What is the consequence of putting it on trunk ports?

Hello David

QoS functions at Layer 3 and Layer 2. Layer 3 QoS will operate when routing packets, as the QoS information is found within the header of the IP packet. At Layer 2, QoS information is found within the 802.1Q tag, or the VLAN tag. This VLAN tag exists only on frames that traverse trunk ports. Since frames that enter and exit an access port do not have a VLAN tag to carry this marking information, QoS based on these markings cannot be implemented on access ports. There is an exception to this rule, which is access ports that are configured with an additional voice VLAN. This is because voice VLAN frames do have a VLAN tag and can be handled differently than other frames.
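
To illustrate that exception (the port numbers and VLANs are placeholders, and the commands shown are for classic "mls qos" switches), compare a trunk port with an access port that carries a voice VLAN:

! Trunk port: frames carry 802.1Q tags, so their CoS marking can be trusted
interface GigabitEthernet0/24
 switchport mode trunk
 mls qos trust cos
!
! Access port with a voice VLAN: the phone's frames are tagged, so CoS trust
! still applies to the voice traffic even though this is an access port
interface GigabitEthernet0/5
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 100
 mls qos trust cos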

This documentation further describes the default Ingress QoS behaviour for access ports.

Enabling QoS on trunk ports will provide rules for handling frames and sending them over the trunk whenever there is congestion. Based on the information in the VLAN tags, priorities are set and acted upon. More info about this can be found at this lesson:

I hope this has been helpful!

Laz

I am trying to wrap my head around how the traffic flows and how QoS is passed on from one device to another. Let's say that I have a QoS config on my switch located at site A that marks voice traffic. The VoIP server is located at a remote site, so the next hop of the packet will be site A's router, which is then connected via, say, a T1 back to the DC where the VoIP server is located. Question: the QoS marking coming from the switch is at Layer 2, so how does the router examine the packet and tag it as a priority on the Layer 3 side prior to forwarding it to the DC? I've been looking for some example in the hope that it would shed some light. Hope this question makes sense.

Thank you, in advance for your time on this question.

Hello Benjamin

It’s important to understand that QoS mechanisms take place at both Layer 2 and Layer 3.

At Layer 2, a frame contains all the QoS information in the 802.1Q tag. Remember that this tag only exists on links that are trunks or on voice ports. This means that a frame at layer 2 can only have QoS information if it is travelling over a trunk or if it has just exited an IP phone connected to a switch port configured with a Voice VLAN. Such QoS mechanisms allow switches to correctly prioritize their frames over trunks so that congestion will not affect their transmission. Such QoS mechanisms are local to the switch.

Layer 3 mechanisms are controlled in the IP header, specifically in the ToS field. These are read by routers and allow them to route voice packets with the necessary priority to avoid delay.

Now it is possible to configure a switch to convert the QoS information in L2 to QoS information for L3 so that routed packets will be treated as they should based on the QoS info found at L2.
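
As a sketch of that conversion on a classic "mls qos" switch (the map values shown are just one common choice, not a requirement), the CoS-to-DSCP map rewrites the Layer 3 marking based on the trusted Layer 2 CoS:

mls qos
! Map CoS 5 (voice) to DSCP 46 (EF) instead of the default of 40;
! the remaining values follow the usual CoS x 8 pattern
mls qos map cos-dscp 0 8 16 24 32 46 48 56
!
interface GigabitEthernet0/5
 mls qos trust cos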

But ultimately, to get to your question of how a router tags a packet as priority, it really depends on how you have set up your QoS.

Voice packets are usually marked at both L2 and L3 at the source, such as an IP phone. The phone will place the appropriate QoS markings so that the network (if properly configured) will treat the packet with the appropriate priority. Devices such as switches and routers can then be configured to appropriately prioritize such packets, or to even modify the QoS markings of those packets (or can even be configured to ignore them!).

The process of examining the traffic and identifying to what kind of application it belongs is called classification. You can find out more about it here:


The process by which classified traffic’s QoS indicators can be modified is called QoS marking, and more about this can be found here:

How the classified and marked traffic is then managed by network devices can be seen in various QoS mechanisms including queuing, shaping, and policing, all of which can be found in lessons found in the QoS course:

I hope this has been helpful!

Laz
