IP Precedence and DSCP Values

Hi There,

In the IP Precedence section, you mentioned something like:
With the old 5-bit type of service bits, you could flip some switches and have an IP packet that requested low delay and high throughput. With the “newer” 4-bit type of service bits, you have to choose one of the 5 options. Good thinking, but the type of service bits have never really been used…

  1. Can you tell me what the term “you could flip some switches” means?
  2. Also, can you explain why it is a problem when type of service bits 3 and 4 are both set to a value of 1?
    bit 3: 1 = Low Delay
    bit 4: 1 = High Throughput
    Now, I might be wrong, but I want to understand: when throughput is higher, is the delay in forwarding a packet lower? Or am I confusing myself?

Please explain.


Hello Adi

The term “you could flip some switches” that Rene used in the lesson is just a metaphorical way of saying “you could change some settings.” In this context, it refers to changing the value of the ToS bits in the IP header of a packet to request specific service characteristics. The point of the comment is that with the old scheme, each bit was an independent flag, so you could set both bit 3 and bit 4 to 1 and end up with an IP packet that requests both low delay and high throughput.
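The “switches” are simply individual bits in the ToS byte. As a rough Python sketch of the idea (the `tos_byte` helper is hypothetical; the bit positions follow RFC 791, with bit 0 as the most significant bit):

```python
# RFC 791 ToS byte layout (bit 0 = most significant bit):
#   bits 0-2: IP Precedence
#   bit 3:    Delay        (1 = request low delay)
#   bit 4:    Throughput   (1 = request high throughput)
#   bit 5:    Reliability  (1 = request high reliability)
# The helper below is a hypothetical illustration, not a real library API.

PRECEDENCE_FLASH = 0b011  # one of the eight IP Precedence levels

def tos_byte(precedence, low_delay=False, high_throughput=False,
             high_reliability=False):
    """Build a ToS byte by 'flipping switches' (setting individual flag bits)."""
    tos = precedence << 5      # precedence occupies the top three bits
    if low_delay:
        tos |= 0b00010000      # bit 3: low delay
    if high_throughput:
        tos |= 0b00001000      # bit 4: high throughput
    if high_reliability:
        tos |= 0b00000100      # bit 5: high reliability
    return tos

# The "contradictory" packet from the lesson: both flags flipped at once.
value = tos_byte(PRECEDENCE_FLASH, low_delay=True, high_throughput=True)
print(f"{value:08b}")  # prints 01111000 (precedence 011, D=1, T=1)
```

Because each flag is an independent bit, nothing in the encoding stops you from requesting both at once; the contradiction only shows up when a router tries to honor both requests.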

The problem with setting both bit 3 and bit 4 to a value of 1 is that it might create a contradiction in the network. Low Delay and High Throughput are often seen as mutually exclusive in networking.

High throughput means that the network is trying to maximize the amount of data that can be sent over a given period of time, which might involve using techniques like buffering or packet scheduling that could introduce delays. On the other hand, low delay means that the network is trying to minimize the time it takes for a packet to travel from the source to the destination, which might involve bypassing these techniques and therefore reducing the overall throughput.

In an ideal scenario, high throughput would indeed mean low delay as more packets are being forwarded efficiently. However, in a real-world scenario, achieving high throughput might involve techniques that could potentially introduce delays. Hence, it’s generally considered a best practice to prioritize either throughput or delay, but not both. Does that make sense?

I hope this has been helpful!