Rationale behind manual jitter calculation

Hello,

I’d like to ask about how to calculate jitter.

I know that jitter is variation in one-way delay, and I also know that networking devices can be configured to report jitter. But what I’m interested in is what the logic is behind the manual calculation of jitter.

One way to manually calculate jitter is to issue a few pings via the Command Prompt (if you’re using Windows), and then average the time difference between each pair of consecutive replies:

Can someone please give a mathematical explanation of why this works?
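For reference, here is a minimal sketch in Python of that consecutive-difference calculation (the RTT values are made-up placeholders, not taken from a real capture):

# Sketch: estimate jitter from ping output as the average of the absolute
# differences between consecutive round trip times (RTTs).
rtts_ms = [29, 31, 28, 30]          # made-up values, e.g. parsed from "time=...ms" lines

diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]   # [2, 3, 2]
jitter_ms = sum(diffs) / len(diffs)

print(jitter_ms)                    # about 2.33 ms for these values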

I also gave it a try and this is what I’ve come up with:

Here, I’m issuing 3 pings. Letters “x”, “y”, and “z” stand for the round trip times (i.e. the two-way delays), and letters “a” to “f” stand for the one-way trips (i.e. the one-way delays). As you can see, the time it takes for the first packet (“a”) to be received (by the “Destination”) plus the time it takes for the second packet (“b”) to be received (by the “Source”) is equal to “x”, so the round trip time (i.e. two-way delay) is:

a + b = x = 50 ms

The difference between “x” and “y” is “c”, and “c” is the time it takes for a packet to arrive at the “Destination” (so not a round trip time, i.e. not a two-way delay, but a one-way delay). In other words, the time that elapsed between the first round trip and the second round trip is “c”.

So the way we calculate every second packet’s (one-way) delay (i.e. “c”, “e”, etc.) is by taking the difference between two consecutive round trip times (each of which is the sum of two one-way delays). Of course, when I say “every second packet’s”, I mean starting from the third packet: “c” is the third packet, the second packet after “c” is “e”, and the second packet after “e” would be “g” (not shown in the picture), etc.

So that’s how we can tease out the one-way delay of packets “c”, “e”, etc.

Then, we take their average. We do that by adding them all up and dividing the sum by the number of values we added. In this example, we’ve added 2 values, so we divide by 2. That gives us the average one-way delay of 10 ms, which is the jitter.

Please note that I’ve just made up some easy numbers for the example. For instance, “z” could have been larger than “y”: “z” could have been 40 ms and “y” 30 ms, but that wouldn’t have changed the result, because their difference would still be 10 ms.
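Putting numbers to it (assuming the figure uses x = 50 ms, y = 40 ms, and z = 30 ms):

x - y = 50 - 40 = 10 ms (this is “c” in my notation)
y - z = 40 - 30 = 10 ms (this is “e”)
(10 ms + 10 ms) / 2 = 10 ms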

Is this the solution?

Also, let’s say I issued this ping:

C:\Users\test>ping 1.1.1.1

Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=29ms TTL=58
Reply from 1.1.1.1: bytes=32 time=29ms TTL=58
Reply from 1.1.1.1: bytes=32 time=29ms TTL=58
Reply from 1.1.1.1: bytes=32 time=29ms TTL=58

Ping statistics for 1.1.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 29ms, Maximum = 29ms, Average = 29ms

If the maximum RTT is 29ms, doesn’t that mean that the jitter must be smaller than 29ms? Because one instance of an RTT is the sum of two consecutive one-way delays, so if their sum is 29ms, then each of them individually must be less than 29ms.

Thanks.
Attila

Hello Attila

Let me try to make the concept a little clearer for you. Jitter is the deviation from true periodicity. True periodicity is the attribute of being completely periodic. In other words, true periodicity is achieved when the time between events remains exactly the same.

Translating that to the receiving of packets, if you receive 10 packets, and you receive them at intervals of exactly 29 milliseconds every time, then you have a jitter of zero. This is because there has been zero deviation from true periodicity.
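As a quick illustration, here is a sketch in Python with made-up arrival timestamps based on that 29 millisecond example:

# Sketch: perfectly periodic arrivals (29 ms apart) show zero deviation, i.e. zero jitter.
arrivals_ms = [0, 29, 58, 87, 116, 145, 174, 203, 232, 261]   # 10 packets, 29 ms apart

intervals = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
print(intervals)                        # [29, 29, 29, 29, 29, 29, 29, 29, 29]
print(max(intervals) - min(intervals))  # 0, no deviation from true periodicity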

Now if you receive 10 packets at varying time intervals, then there is a deviation, therefore there is jitter. But the most challenging question is, how do you measure it? Well, the truth is that there are different ways to measure it. If you want to get academic, take a look at the various metrics you can use to measure jitter here.

When using a utility like ping to measure it, you must take several issues into consideration. Are you measuring the jitter of the round-trip ping or of the arrival of packets at the destination? Are you measuring jitter over a period of time, or over a certain number of packets? Are you measuring the jitter of each packet compared to the previous and next packet or compared to all of the packet arrival intervals? What is the value of the true periodicity that you are comparing arrival times to, the average or some absolute value? These are questions that must be answered before you manually measure jitter.
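As a small illustration of how much the choice of definition matters, here is a sketch in Python that measures the same made-up arrival data in two different ways and gets two different numbers:

# Sketch: the same arrival data measured with two different jitter definitions
# gives two different values. Timestamps (ms) are made up for illustration.
import statistics

arrivals_ms = [0, 29, 61, 88, 121, 149]
intervals = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]   # [29, 32, 27, 33, 28]

# Definition 1: average absolute difference between consecutive intervals
jitter_consecutive = statistics.mean(abs(b - a) for a, b in zip(intervals, intervals[1:]))

# Definition 2: average absolute deviation of each interval from the mean interval
mean_interval = statistics.mean(intervals)
jitter_vs_mean = statistics.mean(abs(i - mean_interval) for i in intervals)

print(jitter_consecutive, jitter_vs_mean)   # approximately 4.75 and 2.16 for these timestamps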

The links you shared make some assumptions before they actually do the calculations, assumptions that they don’t make clear from the beginning. However, the purpose of these manual calculations of jitter is not really to monitor a real network; that would not be useful in any way. Rather, the purpose is to gain a deeper understanding of what jitter is.

Ultimately, you should use network monitoring tools to gain a view of the real jitter on a network and to determine if it is causing problems that need fixing. More info on jitter, how to measure it, and how to deal with it using various tools can be found in these lessons:

I hope this has been helpful!

Laz


Hello Laz,

Thank you very much for the thorough explanation - as always. :slight_smile:

If I understood you correctly, the point is that there are different ways of measuring jitter, and all of those different ways can result in different values. If the measurements are done correctly, then all of those different values can say something useful about the network (despite the fact that the values don’t match).

In light of this, I’d like to rephrase my question.

Let’s say “x” is the time it takes for the packet sent by the Source to arrive at the Destination (“a”), plus the time it takes for the packet sent by the Destination to arrive at the Source (“b”). So “x” is the round trip time of the first ping (a+b=x). “y” is the round trip time of the second ping (c+d=y). When we take the difference of “x” and “y”, do we get “c”?

So can we calculate how long it took the third packet (“c”) to arrive at the Destination this way? The logic being, that the time that elapsed between the arrival of the second packet (“b”) and the arrival of the fourth packet (“d”) is equal to “c”. Or is this line of reasoning completely on the wrong track?

Thanks.
Attila

Hello Attila

Yes, that is correct. The resulting values will be useful for us to determine the state of the network. Ultimately, though, we should rely on network monitoring systems to do the work in real time rather than having us do it manually using pings.

No. In mathematics, to solve for a particular variable, we must constrain the rest of the variables. Here, in your example of x-y=c, y is composed of both c and d, and d is also a variable. It is not constrained. To put numbers to the example, let’s say:

x = 50
c + d = y = 40
but let’s say c = 22 and d = 18, which makes the above equation correct.

So x - y = 50 - 40 = 10 but that does not equal c.

The above would be correct with these values too:

  • c=25 and d=15
  • c=19 and d=21
  • c=17 and d=23

and so on…
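To put that in code form, here is a tiny sketch in Python using the numbers above:

# Sketch: x - y stays the same no matter how c and d split up y,
# so knowing x - y tells us nothing about the one-way delay c.
x, y = 50, 40

for c in (22, 25, 19, 17):
    d = y - c                                          # d adjusts so that c + d still equals y
    print(f"c={c}, d={d}, c+d={c + d}, x-y={x - y}")   # x - y is always 10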

The idea of finding the difference between the ping round trip times (i.e. x - y and y - z) is one of the ways in which you can calculate jitter. It’s a description of the deviation from periodicity that we mentioned before. However, there is no relation between that deviation and the one-way delay indicated by c. Where did you get this approach from?

I hope this has been helpful!

Laz