Hello Rene,
After executing the command, these are my results from the spoke 2 and hub routers:
hub:
Miami#show ip bgp neighbors 10.100.252.114 advertised-routes
BGP table version is 5, local router ID is 10.100.254.22
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 0.0.0.0 198.136.220.33 0 32768 i
*> 10.6.0.2/32 10.100.252.6 0 0 65020 i
*> 10.100.0.4/32 0.0.0.0 0 32768 i
*> 10.114.0.2/32 10.100.252.114 0 0 65114 i
Total number of prefixes 4
spoke 2:
tampa#show ip bgp neighbors 10.100.252.1 advertised-routes
BGP table version is 6, local router ID is 10.114.0.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
r> 0.0.0.0 10.100.252.1 0 0 65000 i
*> 10.6.0.2/32 10.100.252.6 0 65000 65020 i
*> 10.100.0.4/32 10.100.252.1 0 0 65000 i
*> 10.114.0.2/32 0.0.0.0 0 32768 i
Total number of prefixes 4
I’ve uploaded my entire lab and I’m open to suggestions, thanks. The SMY hub is the primary route and the Miami hub is the backup route in case SMY fails.
SMY hub:
hostname SMY
!
boot-start-marker
boot-end-marker
!
!
no aaa new-model
memory-size iomem 5
no ip icmp rate-limit unreachable
ip cef
!
no ip domain lookup
!
multilink bundle-name authenticated
!
archive
log config
hidekeys
!
ip tcp synwait-time 5
!
interface Loopback10
ip address 10.102.0.4 255.255.255.255
!
interface Tunnel0
ip address 10.102.252.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map multicast dynamic
ip nhrp network-id 102
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
no ip split-horizon
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface FastEthernet0/0
ip address 64.238.201.93 255.255.255.240
!
router bgp 65016
no synchronization
bgp log-neighbor-changes
network 0.0.0.0
network 10.102.0.4 mask 255.255.255.255
neighbor 10.102.252.6 remote-as 65020
neighbor 10.102.252.6 timers 7 21
neighbor 10.102.252.114 remote-as 65114
neighbor 10.102.252.114 timers 7 21
no auto-summary
!
ip route 0.0.0.0 0.0.0.0 64.238.201.94
!
end
Miami hub:
hostname Miami
!
ip cef
!
no ip domain lookup
!
interface Loopback0
ip address 10.100.0.4 255.255.255.255
!
interface Loopback1
ip address 10.100.254.22 255.255.255.248
!
interface Tunnel0
description Miami HUB
no ip address
no ip redirects
ip mtu 1400
ip nhrp map multicast dynamic
ip nhrp network-id 100
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
no ip split-horizon
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface Tunnel1
ip address 10.100.252.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map multicast dynamic
ip nhrp network-id 102
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface FastEthernet0/0
ip address 198.136.220.36 255.255.255.224
duplex auto
speed auto
!
router bgp 65000
no synchronization
bgp log-neighbor-changes
network 0.0.0.0
network 10.100.0.4 mask 255.255.255.255
neighbor 10.100.252.6 remote-as 65020
neighbor 10.100.252.6 timers 7 21
neighbor 10.100.252.114 remote-as 65114
neighbor 10.100.252.114 timers 7 21
no auto-summary
!
ip route 0.0.0.0 0.0.0.0 198.136.220.33
!
end
spoke 1:
hostname homedale
!
interface Loopback0
ip address 10.6.0.2 255.255.255.255
!
interface Tunnel0
description Link to Miami Hub
ip address 10.100.252.6 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map 10.100.252.1 198.136.220.36
ip nhrp map multicast 198.136.220.36
ip nhrp network-id 100
ip nhrp nhs 10.100.252.1
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface Tunnel2
description to SMY hub
ip address 10.102.252.6 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map 10.102.252.1 64.238.201.93
ip nhrp map multicast 64.238.201.93
ip nhrp network-id 102
ip nhrp nhs 10.102.252.1
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface FastEthernet0/0
ip address 204.114.124.36 255.255.255.240
duplex auto
speed auto
!
router bgp 65020
no synchronization
bgp router-id 10.6.0.2
bgp log-neighbor-changes
network 10.6.0.2 mask 255.255.255.255
neighbor 10.100.252.1 remote-as 65000
neighbor 10.100.252.1 timers 7 21
neighbor 10.102.252.1 remote-as 65016
neighbor 10.102.252.1 timers 7 21
no auto-summary
!
ip route 0.0.0.0 0.0.0.0 204.114.124.33
!
end
spoke 2:
hostname tampa
!
interface Loopback0
ip address 10.114.0.2 255.255.255.255
!
interface Tunnel0
description link to Miami Hub
ip address 10.100.252.114 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map 10.100.252.1 198.136.220.36
ip nhrp map multicast 198.136.220.36
ip nhrp network-id 100
ip nhrp nhs 10.100.252.1
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface Tunnel2
description link to SMY hub
ip address 10.102.252.114 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map 10.102.252.1 64.238.201.93
ip nhrp map multicast 64.238.201.93
ip nhrp network-id 102
ip nhrp nhs 10.102.252.1
ip nhrp shortcut
ip nhrp redirect
ip tcp adjust-mss 1360
tunnel source FastEthernet0/0
tunnel mode gre multipoint
!
interface FastEthernet0/0
ip address 64.112.157.234 255.255.255.240
duplex auto
speed auto
!
router bgp 65114
no synchronization
bgp router-id 10.114.0.2
bgp log-neighbor-changes
network 10.114.0.2 mask 255.255.255.255
neighbor 10.100.252.1 remote-as 65000
neighbor 10.100.252.1 timers 7 21
neighbor 10.102.252.1 remote-as 65016
neighbor 10.102.252.1 timers 7 21
no auto-summary
!
ip forward-protocol nd
ip route 0.0.0.0 0.0.0.0 64.112.157.233
!
end
In your first post, you talked about not seeing the network from spoke 1 on spoke 2, right? But on spoke 2, we now see 10.6.0.2/32?
Are you missing anything else?
Rene
PS - when you paste configs, would you please sanitize them (remove any junk that is not needed) and use the code button? I just did it for your configs; it makes them a lot easier to read.
It is possible to create a DMVPN topology where you have spoke routers assigned to different BGP ASes. You can use the neighbor <peer-group-name> remote-as command multiple times, with each peer group referencing different spoke routers and each peer group in a different AS.
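For example, a minimal sketch on a hub like Miami, reusing the spoke tunnel addresses from your lab (the peer-group names are made up for illustration):

router bgp 65000
 neighbor SPOKES-AS65020 peer-group
 neighbor SPOKES-AS65020 remote-as 65020
 neighbor SPOKES-AS65114 peer-group
 neighbor SPOKES-AS65114 remote-as 65114
 neighbor 10.100.252.6 peer-group SPOKES-AS65020
 neighbor 10.100.252.114 peer-group SPOKES-AS65114

Each additional AS means another peer group and another remote-as statement on the hub.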
This configuration is fine if you have two, five, or even ten spokes, but what if you have 500? It’s not that scalable. So in general, the rule of thumb for larger networks is to either use iBGP across your entire DMVPN topology with a single AS, or use eBGP with the hub in one AS and all the spokes in another AS, just like in the lesson.
You can group your spokes so that each group corresponds to a single AS, but the benefits of this may be somewhat limited. The groupings must have some characteristic in common, such as belonging to a particular city, or department, or company, so that this grouping and the resulting routing can have more meaning and usefulness.
Yes, that is correct. Remember that unlike other routing protocols, BGP requires that the exact network and subnet mask combination already exist in the routing table before it will advertise that route via the network command. Since there is a static default route in the routing table, it will be advertised by BGP because of this command.
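On the Miami hub from your configs, that combination looks like this (a minimal sketch showing only the relevant lines):

ip route 0.0.0.0 0.0.0.0 198.136.220.33
!
router bgp 65000
 network 0.0.0.0

The static route puts 0.0.0.0/0 in the routing table, so the network 0.0.0.0 statement is able to advertise it to the spokes.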
You could use this command on the hub to get the same results as those in the lesson with the route-map applied. Essentially, spokes will receive only the default route from the hub in both cases.
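Assuming the command in question is aggregate-address 0.0.0.0 0.0.0.0 summary-only (discussed below), a minimal sketch on the Miami hub would be:

router bgp 65000
 aggregate-address 0.0.0.0 0.0.0.0 summary-only

The summary-only keyword suppresses the more specific prefixes, so the spokes only see the 0.0.0.0/0 aggregate.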
Having said that, filtering all routes except the default route on the hub is generally not recommended in a DMVPN Phase 3 scenario with BGP routing because it can result in:
Loss of granularity: All spoke routers will receive only a default route, losing the granularity of the original routes. This may result in suboptimal routing and traffic tromboning, where traffic is unnecessarily sent to the hub before reaching the correct destination.
NHRP resolution failures: DMVPN Phase 3 relies on NHRP to establish direct spoke-to-spoke tunnels. Suppressing specific routes may cause NHRP resolution failures, preventing the formation of direct spoke-to-spoke tunnels and forcing all traffic through the hub.
Instead of using the aggregate-address 0.0.0.0 0.0.0.0 summary-only command, or using a route-map to filter out all but the default route, it is recommended to use other methods to filter or suppress specific routes such as:
Route filtering: Use route-maps, prefix-lists, or distribute-lists to filter only the specific routes you want to suppress or allow (see the sketch after this list).
Route summarization: Use the aggregate-address command but with the appropriate address and mask values to summarize only the desired routes, allowing more granular control over the advertised routes.
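As a rough sketch of the filtering approach (the prefix, neighbor, and prefix-list name here are just examples based on your lab; adjust them to whatever you actually need to suppress), you could block a single prefix toward one spoke while still advertising everything else:

ip prefix-list SUPPRESS-EXAMPLE seq 5 deny 10.6.0.2/32
ip prefix-list SUPPRESS-EXAMPLE seq 10 permit 0.0.0.0/0 le 32
!
router bgp 65000
 neighbor 10.100.252.114 prefix-list SUPPRESS-EXAMPLE out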
The part about the loss of granularity and sub-optimal routing makes sense, for example, if each spoke had its own internet connection for traffic to be sent out but was receiving the default route from the hub.
But why would NHRP resolution failure occur? I assumed that as long as the hub learns routes from spokes, and spokes try to send unicast traffic via the hub to each other, then an NHRP redirect would be sent back to construct an inter-spoke tunnel when necessary.
Yes, you are correct. Intuitively, we tend to want more specific routes in order to achieve more efficient routing. However, in the case of DMVPN Phase 3, whether you have specific routes or only a default route to the hub, the NHRP request occurs in the very same manner. Once a spoke has an NHRP resolution for a particular destination, it updates the NHRP cache and CEF table, which take precedence over the routing table, allowing direct spoke-to-spoke communication.
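If you want to watch this happen in your lab, you can compare the routing table, the NHRP cache, and the CEF entry for a spoke loopback such as 10.114.0.2 before and after sending spoke-to-spoke traffic (commands only, your output will differ):

show ip route 10.114.0.2
show ip nhrp
show dmvpn detail
show ip cef 10.114.0.2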
Just something that I’ve noticed when configuring BGP.
Is it correct that for each destination that is either installed into the RIB using NHRP, or whose next hop is changed, a sub-entry is created under the relevant spoke entry?
And one more thing. Why does NHRP also install a /32 route for the tunnel overlay IP address?
The output of the show dmvpn command shows multiple entries for the 2.2.2.2 spoke. The first entry has an attribute of DT1 while the rest have an attribute of DT2. What does this mean and why does it happen?
Well, looking at the legend, we can see that the D means dynamic (as opposed to statically assigned). The T1 and T2 indicators are the important factor here:
T1 (Route Installed): This attribute typically appears for the primary NHRP mapping. It indicates that the route for this particular network has been installed in the routing table of the router. This entry signifies that the hub has a direct route to the spoke via the NHRP network ID and that this route is actively being used for routing traffic.
T2 (Next-Hop Override): These additional entries with the ‘DT2’ attribute represent the ‘next-hop override’ feature in DMVPN. This feature allows the hub to direct traffic between spokes directly, bypassing the hub for data packets, allowing for ‘spoke-to-spoke’ communication.
Under what circumstances would multiple DT2 entries appear? Any event that changes the next hop or the route of a spoke-to-spoke communication may generate a new DT2 entry. This includes making changes to the configuration of the routers. Because it takes time for stale entries to be eliminated, they remain in the NHRP cache for a while.
NHRP installs a /32 route for the tunnel overlay IP address to ensure that the network knows the exact path to the specific host on the other end of the tunnel. This is particularly important in a dynamic multipoint VPN (DMVPN) where there could be many spoke sites. The /32 host route allows for direct communication between the hub and the specific spoke without having to go through other spokes, which can improve efficiency and reduce latency.
We often see this behavior in scenarios where point-to-point or point-to-multipoint topologies are present (for example, advertising point-to-multipoint networks in OSPF). Remember, DMVPN (depending on the phase) is just a point-to-point or point-to-multipoint GRE tunnel.