Hi Rene,
Thanks for the QoS explanation. How would I implement this in a broadband (PPPoE) environment where RADIUS assigns the up/down bit rate for users?
Rohan
Hi Rohan,
You mean on the client side? Or on the provider side where you can attach a policy for each PPPoE session?
Rene
Hello Rene,
Yes. On the provider side.
Rohan
Rene,
Apparently, the attributes returned by RADIUS are assigned automatically to PPPoE sessions, as long as the policy is not attached directly to any interface.
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 64000
RADIUS attribute:
sub-qos-policy-in = SHAPE_AVERAGE
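On the RADIUS side, a per-user entry could look roughly like this, assuming a FreeRADIUS-style users file and the Cisco AV-pair convention (the username, password, and policy names below are placeholders; the exact attribute syntax depends on your RADIUS server and dictionary):
# Hypothetical FreeRADIUS users entry (sketch only)
pppoe-user1  Cleartext-Password := "secret"
        Service-Type = Framed-User,
        Framed-Protocol = PPP,
        Cisco-AVPair += "ip:sub-qos-policy-in=SHAPE_AVERAGE",
        Cisco-AVPair += "ip:sub-qos-policy-out=SHAPE_AVERAGE"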
Rohan
@ReneMolenaar @lagapidis @andrew Hi. I need to simulate a WAN environment in the lab. Specifically, I need to increase the delay between two routers to a maximum of, let's say, 200 ms from the 1/2 ms I have now. Is this possible between Cisco routers without using an external simulator?
Hello Deep
There are a few ways that come to mind to do this without the use of a simulator. They are not quite as elegant as one might want, but they're worth a shot.
The first is to increase the processing time for the packets. You could, for example, use a second server to generate extra traffic and then shape the interface to a low value (there is a rough sketch of this after these suggestions). This would delay the packets but not drop them. You are limited by the interface queue sizes, so you may end up dropping traffic before you introduce enough delay on the target flow, but it's worth a try.
Secondly, if you have a couple of spare routers with serial ports, you can connect them together and place them between your source and destination. You can configure a low clock rate and introduce a bottleneck.
Thirdly, you could use a PC with two network cards and open-source firewall software such as Monowall, where you can configure a pipe and set the latency you want for the traffic.
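As a rough illustration of the first approach, the sketch below shapes the egress interface down to a very low rate and enlarges the queue so packets sit in the shaper instead of being dropped (the interface name, rate, and queue depth are placeholders you would adjust for your lab):
! Sketch only: very low shaped rate plus a deep queue to introduce delay
policy-map ADD-DELAY
 class class-default
  shape average 64000
  queue-limit 4096 packets
!
interface GigabitEthernet0/1
 service-policy output ADD-DELAY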
I hope this has been helpful!
Laz
Thanks Laz for the reply. I could easily create more than 200 ms of delay in Linux, but it was not that easy on Cisco. I was getting loss when I tuned the interfaces aggressively earlier. I limited the bandwidth on one interface (policed) and applied outbound shaping (it's only outbound anyway) to get more than 100 ms of delay on directly connected interfaces. It works. Thanks a lot.
Hello Rene/Laz,
I have a question and I am going to use the below topology for my question.
We have a few remote sites that are connected to the headquarters through private circuits. We are now increasing the bandwidth at each remote site. Therefore, I need to increase the bandwidth in the QoS shaper on the edge router at each site and test the bandwidth to make sure the service provider has increased the bandwidth on their side. I was thinking of installing Jperf on a PC at each remote site to test the bandwidth, but that does not seem like a very convenient solution. I have to come up with something easier and faster, since I have to do a whole bunch of sites. I also thought about running speed tests on remote-site PCs, but speed tests are not always accurate. Is there any test that I can run from the remote-site routers to test the bandwidth?
Thanks in advance.
Azm
Hi Azm,
On a router, you could use IP SLA or TTCP, but they don't generate any real throughput. You can't really use those tools to test your bandwidth.
I think your best option is iperf, not jperf… no need to hassle with a GUI and Java. With iperf, you can set up a receiver at your HQ and then paste the same command on a PC behind each branch router to transmit.
It's not as ideal as using a router as the transmitter, but it does allow you to actually test the bandwidth up to a high rate.
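For example, assuming iperf2 on both PCs, something along these lines (the server address and test duration are just placeholders):
# On a PC at HQ, start the receiver/server:
iperf -s
# On a PC at the branch, send TCP traffic towards HQ for 30 seconds, reporting every 5 seconds:
iperf -c 10.0.0.10 -t 30 -i 5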
Rene
Hello Rene,
I have a 3-tier MQC config example here, but I don't know how to interpret it properly and have some questions to ask.
In the below example, in the level-1 policy-map:
In the below example, in the level-2 policy-map:
In the below example, in the level-3 policy-map:
====Config example====
interface GigabitEthernet0/0.5
bandwidth 9856
encapsulation dot1Q 5
ip address 10.1.1.1 255.255.255.252
no cdp enable
service-policy output Level1
policy-map Level1
class class-default
shape average 10000000 40000 0 account user-defined 28
service-policy Level2
policy-map Level2
class Level3-Voice-Traffic
priority
queue-limit 2048 packets
service-policy Level3-Voice
class Level3-Any-Traffic
bandwidth remaining percent 60 account user-defined 28
queue-limit 1024 packets
random-detect dscp-based
random-detect exponential-weighting-constant 1
random-detect dscp 26 384 512 10
random-detect dscp 28 256 384 10
service-policy Level3-any
policy-map Level3-Voice
class Level3-Voice-Traffic
police 4000000 500000 500000 conform-action set-dscp-transmit ef exceed-action drop
policy-map Level3-any
class Level3-Any-Traffic
police 6000000 750000 750000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit af32
class-map Level3-Voice-Traffic
match access-group name Voice-Traffic
class-map Level3-Any-Traffic
match any
ip access-list extended Voice-Traffic
permit ip any any dscp ef
permit ip any 10.150.0.0 0.0.255.255
==============================
Regards,
Ray
Hello Laz,
I am trying to configure a port on a 3850 as a routed port going to an upstream service provider's private cloud. If I configure a QoS shaper on the routed uplink port, how is it going to behave?
I have another question, and I am going to use the below configuration as the reference. What would be the impact of using fair-queue here? Someone said that if I use fair-queue, I am not going to get the full 100 Mb of bandwidth because fair-queue will cause flow drops. Would you please explain this to me?
policy-map WAN_LINK
class VOICE
priority percent 33
class NETWORK
bandwidth percent 7
class CRITICAL-APP
bandwidth percent 35
fair-queue
random-detect dscp-based
class class-default
bandwidth percent 25
fair-queue
random-detect dscp-based
policy-map 100Mb_SHAPER
class class-default
shape average 100000000
service-policy WAN_LINK
Thanks in advance.
Best regards,
Azm Uddin
Hello AZM
An excellent lesson that will probably have all the info you need for this specific implementation is the following:
Look it over and if you have any more specific questions, please feel free to ask!!
As for the second question, the fair-queue command specifies the number of dynamic queues to be reserved for use by the class-default class as part of the default class policy. The number of queues can be specified, but if it is not, as in your example, a default value is chosen depending on the bandwidth of the interface.
For more information about this command and its parameters, take a look at this Cisco documentation.
When used along with the bandwidth command, as it is in the example you gave, then Cisco gives the following explanation:
If a default class is configured with the bandwidth policy-map class configuration command, all unclassified traffic is put into a single FIFO queue and given treatment according to the configured bandwidth. If a default class is configured with the fair-queue command, all unclassified traffic is flow classified and given best-effort treatment. If no default class is configured, then by default the traffic that does not match any of the configured classes is flow classified and given best-effort treatment. Once a packet is classified, all of the standard mechanisms that can be used to differentiate service among the classes apply.
(Taken from here)
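As a simple sketch of what your default class could look like with an explicit number of flow queues (the queue count of 256 is only an example value, and some platforms accept fair-queue only without a count):
policy-map WAN_LINK
 class class-default
  bandwidth percent 25
  fair-queue 256
  random-detect dscp-based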
I hope this gives you some guidance as to the use of the command and the effect it will have in the specific configuration.
I hope this has been helpful!
Laz
Hello Laz,
I am sorry for the confusion. I was asking the question from a shaping-performance perspective. Since I am configuring the shaper on a 3850 routed port, is the 3850 routed port going to perform shaping and queuing the same way a router like an ISR 4331 or 4321 would, or, since it is a switch, does it lack the full-blown shaping and queuing capability of a router?
Best Regards,
Azm Uddin
Hello AZM
I don't have any direct experience with the differences in performance between queuing and shaping mechanisms on a routed port on an L3 switch and similar configurations on an ISR. The methodology and logic behind the configuration are similar in each case, but the performance will be affected more by the resources each platform provides, such as the memory available for each queue, the ASIC architecture, and raw CPU power. Some of the new 6800 series Catalyst switches have exceptional hardware for queuing, providing up to 500 MB (yes, megabytes) of queuing buffer per port!
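In any case, regardless of platform you can watch the MQC counters to see how the shaper and the queues are actually behaving; the interface name below is just an example:
show policy-map interface GigabitEthernet1/0/1 output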
I'll give it over to @ReneMolenaar to answer more specifically for you, as he has probably had more experience than me in these matters.
I hope this has been helpful!
Laz
Thank you so much Laz…
What is the name of the terminal app that you use in your videos?
thank you
Hi Mohanad,
Nowadays, I use SecureCRT for everything. It's not free, but it looks good and clean, you can run (Python) scripts through it, it has a decent session manager, etc.
A free alternative is xshell.
Rene
Oh, OK.
I have a paid version at my work, so how can you change the background to the quad-ruled style?
Like this?
Hello Mohanad
SecureCRT is indeed quite versatile and can do quite a lot. In order to configure the colour scheme and backgrounds, you can check out this link from VanDyke systems, the creator of the software.
I hope this has been helpful!
Laz
Hi Team,
While using traffic shaping, suppose the default class in a policy-map is reserved only 10% of the bandwidth of a 1G physical interface, and at the same time there is no congestion on that physical interface. Does the router drop packets if the traffic exceeds the reserved bandwidth, i.e. 10% of the physical interface?
And if the queue-limit is increased to the maximum, does the excess traffic classified as default traffic get buffered even if there is no congestion on the interface? (A rough sketch of the scenario is below.)
Note: The platforms considered here are Cisco IOS-XE and IOS-XR.
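A rough sketch of the kind of policy I have in mind, IOS-XE style (names and values are placeholders, not a recommendation):
! Default class shaped to 10% of the 1 Gbps interface, with a large queue-limit
policy-map WAN-OUT
 class class-default
  shape average percent 10
  queue-limit 8192 packets
!
interface GigabitEthernet0/0/0
 service-policy output WAN-OUT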