GLBP (Gateway Load Balancing Protocol)

Hello Shivam

Load balancing and load sharing are two terms used frequently in networking. They are often used interchangeably, but they do have slightly different meanings.

Load Balancing: This is a technique that distributes network traffic across multiple servers or paths to ensure no single server or path is overwhelmed. This can be done using various algorithms such as Round Robin, Least Connections, and IP Hash. The goal of load balancing is to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Load balancing is usually done by a dedicated device, like a load balancer.
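To make the Round Robin algorithm mentioned above concrete, here is a minimal Python sketch. The server names are hypothetical; a real load balancer would of course also handle health checks, persistence, and so on.

```python
from itertools import cycle

# Hypothetical server pool. Round robin simply hands each new
# request to the next server in a fixed rotation.
servers = ["srv-a", "srv-b", "srv-c"]
rotation = cycle(servers)

# Assign six incoming requests in turn.
assignments = [next(rotation) for _ in range(6)]
print(assignments)
# ['srv-a', 'srv-b', 'srv-c', 'srv-a', 'srv-b', 'srv-c']
```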

Load Sharing: This is similar to load balancing, but it doesn’t necessarily distribute traffic evenly. Instead, it allows you to manually or automatically allocate traffic based on current network conditions, capacity, or other factors. In load sharing, traffic is distributed among multiple paths based on the bandwidth of the links. If one link has more bandwidth, it can carry more traffic compared to a link with less bandwidth.
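The bandwidth-proportional idea can be sketched in Python. The link names and bandwidth values below are made up for illustration; the point is only that a 1000 Mbps link carries twice the traffic of a 500 Mbps link.

```python
def share_load(flows, links):
    """Split a number of flows across links in proportion to bandwidth.

    `links` maps link name -> bandwidth in Mbps (hypothetical values).
    """
    total_bw = sum(links.values())
    items = list(links.items())
    shares = {}
    assigned = 0
    for name, bw in items[:-1]:
        n = round(flows * bw / total_bw)
        shares[name] = n
        assigned += n
    # The last link takes the remainder so every flow is accounted for.
    shares[items[-1][0]] = flows - assigned
    return shares

print(share_load(300, {"Gi0/1": 1000, "Gi0/2": 500}))
# {'Gi0/1': 200, 'Gi0/2': 100}
```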

Both of these concepts can be applied to technologies such as EtherChannel, routing protocols such as OSPF, EIGRP, and BGP, or to FHRP technologies such as VRRP, HSRP, and GLBP.

Remember, these terms are often used loosely, so ultimately, the meaning and definition of these terms should be determined based on the context of their use.

Now for the specific case of GLBP, the term Load Balancing is actually part of the protocol’s name, and in this context, it means evenly distributing the load that a default gateway would experience across two or more physical devices.

The threshold configuration has several components. First, you must determine how much of a decrement you will have for the failure of each link. In this particular case, Rene has configured a decrement of 20 for the weighting of each link, and this was done with the following configurations:

SW2(config)#track 2 interface GigabitEthernet 0/2 line-protocol
SW2(config)#track 3 interface GigabitEthernet 0/3 line-protocol

SW2(config)#interface Vlan1
SW2(config-if)#glbp 1 weighting track 2 decrement 20
SW2(config-if)#glbp 1 weighting track 3 decrement 20

So if either Gi0/2 or Gi0/3 fails, the weight is decremented by 20. If both fail, the weight is decremented by 40.
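The weighting arithmetic itself is simple enough to sketch in a couple of lines of Python, using the values from Rene's configuration:

```python
# GLBP weighting arithmetic: each failed tracked object subtracts
# its configured decrement from the base weight.
BASE_WEIGHT = 100  # GLBP's default weighting
DECREMENT = 20     # per-link decrement from the config above

def current_weight(failed_links):
    return BASE_WEIGHT - DECREMENT * failed_links

print(current_weight(0))  # 100
print(current_weight(1))  # 80
print(current_weight(2))  # 60
```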

Keeping that in mind, Rene configured lower and upper thresholds of 70 and 90, which determine when the AVF role is given up or regained, with the following command:

SW2(config-if)#glbp 1 weighting 100 lower 70 upper 90

Why those values? Well, let’s see the logic:

  • If no links fail, the weight remains at the default of 100, and SW2 remains the AVF.
  • If one link fails, the weight is 100 - 20 = 80, and SW2 remains the AVF.
  • If both links fail, the weight is 100 - 20 - 20 = 60. This is less than the lower threshold of 70, so SW2 gives up the AVF role.
  • If one link comes back, the weight is now 100 - 20 = 80. This is still below the upper threshold of 90, so nothing changes; SW2 does not become the AVF.
  • If the second link comes back, the weight is now 100. This is greater than the upper threshold of 90, therefore SW2 becomes the AVF again.
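The walkthrough above is really a small hysteresis state machine, which can be sketched in Python. The threshold values are the ones from Rene's command; the function name is mine:

```python
# Sketch of GLBP's lower/upper threshold (hysteresis) logic:
# a gateway gives up the AVF role when its weight drops below
# `LOWER`, and only regains it once the weight rises above `UPPER`.
LOWER, UPPER = 70, 90

def update_avf(is_avf, weight):
    if is_avf and weight < LOWER:
        return False   # fell below the lower threshold: give up AVF
    if not is_avf and weight > UPPER:
        return True    # rose above the upper threshold: reclaim AVF
    return is_avf      # between the thresholds: no change

# Replay the scenario from the bullet list above:
avf = True
for weight in [100, 80, 60, 80, 100]:
    avf = update_avf(avf, weight)
    print(weight, avf)
# 100 True, 80 True, 60 False, 80 False, 100 True
```

Note how the gap between the two thresholds is what prevents SW2 from flapping in and out of the AVF role when a single link goes up and down.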

How do you decide on these values? You must first decide what behavior you want in each failure scenario. Here, Rene has decided that:

  • If one link fails, SW2 should keep the same status. That is, if it is the AVF, it remains the AVF; if it is not the AVF, it does not become the AVF.
  • If both links fail, SW2 should not be the AVF, therefore it gives up this role.
  • If both links come back, SW2 should become the AVF again, so it will preemptively take over that role.

Based on this desired behavior, the decrement, lower, and upper values are chosen. Does that make sense?

I hope this has been helpful!

Laz
