Micro Bursts and Hardware logic

Hello everyone,

I couldn’t find a better place to ask this question.

In the CCIE Written Exam there is a topic called “micro bursts”.

First of all, I get the big idea behind them and how they impact the network.
What I couldn’t understand or figure out is what they actually are.
To better understand them, I should first understand what normal bursts are, but I just can’t find any good explanation that satisfies the level of knowledge I need about them, so I hope you could answer really clearly and with as much information as you can:

  1. What are bursts?
  2. When will I see them, and when do they appear on the link?
  3. How do infrastructure devices like routers and switches process them?
  4. How do I capture a burst using Wireshark?
  5. How big should burst traffic be on a Metro Ethernet service provider network running IS-IS with MPLS?
  6. What are the main cons of using bursts that are carefully limited by the administrator’s configuration?
  7. What are the pros?

After I get the whole picture and the necessary information about those, I might need to ask some further questions about the CCIE topic, micro bursts:

  1. What are micro bursts?
  2. How do they differ from normal bursts?
  3. Could I capture them with Wireshark, let alone with a dedicated traffic analysis tool?
  4. How are they processed on routers and switches?
  5. How do they impact network performance?
  6. For what reason would they appear on the network, and how often?
  7. How can I diagnose microbursts if I have the right analysis tools, and how do I solve the problems they cause?
  8. If question 7 is already solved by the way modern routers/switches are engineered, is that because they have larger memory buffers?
  9. How big are the four queue buffers that each NIC on a 6500 switch owns?

Thank you very much. There are some related topics tied to these questions that might also help me better understand QoS and policy engineering, which I don’t really understand well, because they are usually explained for people who have no knowledge of real hardware and how it works. Since I am a graduated practical engineer and have learned a lot about computing hardware, I can’t match those logical explanations with how things actually work from the point of view of the NIC, route processor, supervisor module, and so on, when people present them in a way meant for an audience with no clue about hardware.

So I hope someone can help me understand this better!
Thank you very much

Hello Nitay

I appreciate your focus on details, and your thoroughness in your questions. In this post, I’ll be able to give you an overview of the specific topic on the CCIE exam topics list. Specifically, this topic is found under the “1.1.c Explain General Network Challenges” section and is called “1.1.c [iv] Impact of micro burst”. (See the Cisco topics list here.) For a more detailed explanation of all of the questions you pose, you can use the site’s Lesson Ideas page to suggest lesson topics that will be able to cover all of your inquiries.

For the purposes of the exam, it is important to note that this topic is found in section 1.1 Network Theory, so there is not much technical or configuration implementation involved; a good understanding of the theory is what matters here.

A micro burst is a spike of traffic that takes place in a very short interval of time, typically anywhere from a microsecond to less than a second. This causes a network interface to become temporarily oversubscribed and to drop traffic. While bursty traffic is normal in networks, these extremely short spikes can be more than the buffer or interface can handle. The main problem is that typical network monitoring systems do not pick up these events, since they usually monitor and record traffic averaged over several minutes or more, so any extreme spikes will not be recorded. Even though the traffic doesn’t show up on such systems, microbursts can usually be identified as drops in interface statistics that occur frequently and are evenly distributed over time. When these drops coincide with higher monitored traffic, one can infer that microbursts are to blame. To pinpoint such problems, multiple network analysis and monitoring tools and high-frequency traffic analysis are needed to determine which traffic is causing the microbursts.
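To see why the averaged counters hide this, here is a minimal Python sketch with assumed numbers (a 5-minute polling interval, 50 Mbps of background traffic, and a single 10 ms burst at 1.5 Gbps); none of these figures come from a real device:

```python
# Minimal sketch with assumed numbers: why a polled average hides a microburst.
link_bps = 1_000_000_000          # 1 Gbps interface
poll_interval_s = 300             # assumed 5-minute polling interval
background_bps = 50_000_000       # assumed 50 Mbps of steady traffic
burst_bps = 1_500_000_000         # burst throughput of 1.5 Gbps
burst_duration_s = 0.010          # burst lasts 10 ms

# Total bits counted during the polling interval
total_bits = (background_bps * (poll_interval_s - burst_duration_s)
              + burst_bps * burst_duration_s)
avg_bps = total_bits / poll_interval_s

print(f"5-minute average: {avg_bps / 1e6:.2f} Mbps "
      f"({avg_bps / link_bps:.1%} of line rate)")    # ~50 Mbps, looks healthy
print(f"Rate during burst: {burst_bps / 1e6:.0f} Mbps "
      f"({burst_bps / link_bps:.0%} of line rate)")  # 150% of line rate, drops occur
```

The averaged figure barely moves, even though the interface was oversubscribed by 50% for those 10 ms.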

I hope this has been helpful!

Laz

Thanks for your response, @lagapidis.

I’ve read your explanation a bunch of times, along with the original source it comes from, and couldn’t understand it, since it seemed like complete nonsense and not logical at all.

For example:

This seems like nonsense to me.
How could a little spike of burst traffic, which probably equals something like 10 bits of data, overload the interface?

Again, how could such a small amount of data, which probably equals about 10 bits, overload the linecard’s buffer any more than a normal burst of 1500 bytes would?

I understand how traffic analysis tools won’t catch the microbursts, but why does the explanation change the subject from a microburst to extreme spikes? Isn’t that a whole different behavior of data streams?

Hello Nitay

When we speak about microbursts, the “micro” part of the word doesn’t refer to the amount of data being sent, but to a very small period of time. The characteristic of a microburst is a very large or extreme amount of data (much more than just 10 bits or 1500 bytes) in a very short time.

For example, if a microburst of, say, 15 megabits of data occurs on a GigabitEthernet port over 10 ms, it will completely oversubscribe the port and fill up any buffers the port may have. This is because 15 megabits of data over 10 ms is equal to a throughput of 1500 Mbps. So both the bandwidth of the port (1000 Mbps) and its buffers (which are far too small to absorb the excess) would be oversubscribed, and frames would be lost. But this happens only for 10 ms, so you only get drops during that small window of time. Does that clarify it for you?
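Here is the same arithmetic written out as a small Python sketch; the 2-megabit buffer size is purely an assumption for illustration, not a figure from any real linecard:

```python
# Sketch of the numbers above: a 15-megabit burst arriving over 10 ms on a 1 Gbps port.
port_bps = 1_000_000_000        # GigabitEthernet line rate
burst_bits = 15_000_000         # 15 megabits arriving in the burst
burst_window_s = 0.010          # burst lasts 10 ms
buffer_bits = 2_000_000         # hypothetical 2-megabit egress buffer (assumption)

offered_bps = burst_bits / burst_window_s
print(f"Offered rate during the burst: {offered_bps / 1e6:.0f} Mbps")   # 1500 Mbps

# Bits the port cannot serialize while the burst lasts
excess_bits = burst_bits - port_bps * burst_window_s      # 5 Mbit of excess
dropped_bits = max(0.0, excess_bits - buffer_bits)        # whatever the buffer can't absorb
print(f"Excess during the burst: {excess_bits / 1e6:.1f} Mbit, "
      f"dropped after buffering: {dropped_bits / 1e6:.1f} Mbit")
```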

I hope this has been helpful!

Laz

Thank goodness, your explanation finally made it clear for me.
I had understood the “microburst” characteristic in the wrong way, so nothing was making any sense.

So let me get it right:
A microburst is a stream of data that oversubscribes the buffer of the device (switch/router), and it could happen in which of the options below:

  1. A 1 Gig link transferring, let’s say, 200 Mbps, where the output interface for that stream is a FastEthernet link, so the buffer gets that harsh microburst of 200 Mbps, which is twice the bandwidth.

  2. Two or more users send 60 Mbps of burst traffic over their links toward the same output interface, which is a FastEthernet link.

So is the second option correct as an example of a microburst? Could one burst of 60 Mbps be defined as a “micro burst” on its own, or does the burst have to be higher than the speed of the outgoing interface for the stream?

Are there any other examples of how a microburst could occur, besides mine?

Thank you very much for your patience and help!

Hello Nitay

Glad I could be of help. Your first example describes a situation where you would have sustained high traffic. However, a microburst is characterized by its extremely small period of time. Specifically, a microburst is a throughput of traffic that is higher than the interface can handle for a very small amount of time.

Let’s say you have two switches connected to each other in the distribution layer of a network. It just happens that many users are downloading information and making requests simultaneously. This causes the link between the switches to experience speeds exceeding the port speed for several milliseconds, causing some frames to be lost, but only during those few milliseconds. After that, the speeds are well below that of the port. Here’s an example of what microburst traffic looks like on a graph.
[image: graph of microburst traffic]
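Regarding your earlier question about catching these in Wireshark: a capture does record every frame with a timestamp, so you can export timestamps and frame lengths and bucket them into very small time bins yourself (Wireshark’s I/O Graph can do something similar if you set a very small interval). Below is only a rough Python sketch of that idea; the sample packet list, bin size, and line rate are all invented for illustration:

```python
# Rough sketch: bucket packet arrivals into 1 ms bins and flag bins that exceed line rate.
# 'packets' would normally be exported from a capture (timestamp in seconds, frame length
# in bytes); the values below are invented sample data for illustration.
packets = [(0.000050 * i, 1514) for i in range(12)] + [(0.450000, 64)]

BIN_S = 0.001                    # 1 ms bins
LINE_RATE_BPS = 100_000_000      # assumed FastEthernet egress port

bins = {}
for ts, length in packets:
    bins[int(ts / BIN_S)] = bins.get(int(ts / BIN_S), 0) + length * 8

for bin_index, bits in sorted(bins.items()):
    rate_bps = bits / BIN_S
    if rate_bps > LINE_RATE_BPS:
        print(f"Possible microburst at t={bin_index * BIN_S * 1000:.0f} ms: "
              f"{rate_bps / 1e6:.0f} Mbps within that 1 ms bin")
```

Bins whose computed rate exceeds the egress line rate mark the moments where buffering, and possibly dropping, would have occurred.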

I hope this has been helpful!

Laz

Thank you Lazaros,
I think I’ve got the idea of this topic after reading your explanation more carefully and thinking about the actual process behind it.

At the deeper technical level, I can see that the CPU and memory controller of the device will try to send a high amount of data toward the outgoing interface, which can only transmit it at its usual speed.
If by any chance the CPU and memory controller try to push, within that 1 ms window, a high amount of data like 100 megabits that was processed through several incoming interfaces at the same time (because the CPU cycles and memory controller operate at a much higher speed than any single interface of the device), then the outgoing interface, say a GigabitEthernet port that can only serialize about 1 megabit per 1 ms, gets overwhelmed: its NIC buffer is flooded with the other 99 megabits, most of which will probably get dropped, while the rest has to wait for the following 1 ms windows until it can be transmitted on the link.

So if my technical understanding is correct, then I could say that I finally understand this topic to its full extent, along with the important logic behind it.
