This topic is to discuss the following lesson:
Hi Rene,
Thanks for your video. What if I want a hub and spoke topology, but I still want communication between the spokes (through the hub)?
In this example, we see that communication between spokes is now impossible.
Thanks
Nicolas
Hello Nicolas
I think there may be some confusion about what we mean when we say hub-and-spoke.
The default behavior of an SD-WAN topology is full mesh, meaning that all sites can communicate with each other directly. The hub-and-spoke topology that Rene mentioned in the lab refers to restricting each vEdge to communicate ONLY with the central site and not with any other vEdge device. This is not to be confused with hub-and-spoke topologies such as DMVPN, for example.
What you are suggesting is a third option of operation, which is to force the SD-WAN topology to function as a hub-and-spoke topology and have all communication between sites routed through the hub site rather than directly between vEdge devices, correct? Kind of like a DMVPN Phase 1 situation.
You can direct traffic within the SD-WAN topology to force it to behave in a hub-and-spoke manner using various methods, including VPN topology configuration, routing, data policies, and control policies that manipulate OMP route and TLOC advertisement to influence the SD-WAN overlay. SD-WAN was not designed for this, but it can be done. What method you choose really depends upon what you want to achieve.
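Just to make the control-policy approach more concrete, here is a rough, untested sketch of a vSmart policy (the names HUB_AND_SPOKE, SPOKES, and HUB_TLOCS are placeholders, and the site-list and tloc-list would have to be defined under policy lists first). It filters the spokes’ TLOCs from each other and rewrites the spokes’ routes to point at the hub’s TLOCs, so spoke-to-spoke traffic is relayed through the hub:
policy
 control-policy HUB_AND_SPOKE
  sequence 10
   match tloc
    site-list SPOKES
   !
   action reject
   !
  !
  sequence 20
   match route
    site-list SPOKES
   !
   action accept
    set
     tloc-list HUB_TLOCS
    !
   !
  !
  default-action accept
 !
!
apply-policy
 site-list SPOKES
  control-policy HUB_AND_SPOKE out
 !
!
Applied outbound toward the spoke sites, something like this keeps spoke-to-spoke tunnels from forming while still allowing spoke-to-spoke traffic via the hub.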
The question I have however is why would you want to do this? The purpose of this particular lesson is to restrict communication between vEdge devices, not to actually create a hub and spoke topology where traffic is routed through the hub. If you can answer the why, then we can then move on to further discuss the appropriate solution.
I hope this has been helpful!
Laz
Hello Laz,
Thank you for this detailed answer.
Imagine I have only an MPLS underlay with different SPs (many MPLS SPs). Two sites, each attached to a different MPLS SP, need to communicate with each other through the hub (which has connectivity to both MPLS SPs), and they must be able to receive each other’s routes, for example. Adding a default route could be a solution, right?
Thanks!
Nicolas
Hello Nicolas
OK, I understand. So by design, your initial “underlay” network is such that there are parts of the network that can’t communicate directly. You have multiple MPLS service providers, and a particular site, connected to two (or more) of those MPLS networks, acts as the “hub” for communication between entities in those networks.
So if you have a topology where two vEdge devices cannot communicate directly over the underlay due to the restrictions you describe, but can each communicate with the main site where the controllers are hosted, the Cisco SD-WAN fabric will inherently handle the situation to a certain extent. The system’s OMP and TLOC properties will play a role in determining viable paths.
If direct communication between two vEdges is not possible due to underlay restrictions, the tunnel establishment will fail. However, even if a direct path isn’t available, the vEdge devices are aware (thanks to OMP and TLOC information) of other vEdges they can communicate with. In your scenario, both restricted vEdges can communicate with the main site. Thus, when they need to exchange data, the traffic will inherently use the main site as a relay point since a direct path isn’t viable. This is part of the SD-WAN’s inherent path decision mechanism.
So based on the OMP path attributes, the vEdge will choose the best available path. If the direct path is unavailable, it will select another path, like through the main site.
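If you want to verify what each vEdge has actually learned and which tunnels came up, a few vEdge CLI commands are useful here (output omitted):
show omp routes
show omp tlocs
show bfd sessions
The first two show the prefixes and TLOCs received from the vSmart and the paths they resolve to, while show bfd sessions shows which data plane tunnels are actually up.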
I hope this has been helpful!
Laz
Hello,
Thanks for the lesson, but I do have a doubt. I am labbing a HUB and Spoke topology on SD-WAN and I am having a hard time with the design. Here is my idea:
I can get the HUB up on the SD-WAN fabric overlay, with all control connections successfully established. But when I get to the SPOKE, I run into the following issue:
- The Spoke router can ping 68.1.1.1 (the HUB) and also 214.15.10.254 (G0/0/0.214 on the HUB), but I cannot ping the vBond or any of the other control elements from the Spoke.
- The same applies vice versa: I can ping 68.1.1.1 from the control elements but not 68.1.1.2 (the Spoke). Why?
I configured a default static route pointing to 68.1.1.1 on the Spoke router. I was expecting the simplest routing: the Spoke tries to reach 214.15.10.2 (the vBond) and the packets get to the HUB (which is happening). Then the HUB sees that the destination of the packets is directly connected and proceeds to route them. Why is this not happening, if both subinterfaces are in the same VPN/VRF?
I appreciate any help; I have been stuck on this for many hours. Thanks,
Jose
Hello Jose
I can give you some guidelines that will help you in your troubleshooting process, and hopefully get you unstuck from your hours of searching.
Short answer: you’re trying to hairpin controller traffic through a WAN Edge, and that won’t work the way a normal router would. In Cisco SD‑WAN, controllers must be reachable from each WAN Edge directly over the transport (VPN 0). A WAN Edge is not a general-purpose transit router in VPN 0, so it won’t route and forward your spoke’s underlay traffic to a service-side controller that’s behind or reachable through the hub. Let’s analyze each communication:
- Spoke can ping 68.1.1.1 (hub) because that’s the hub’s transport IP (VPN 0) and the two WAN edges have underlay reachability.
- Spoke can ping 214.15.10.254 (hub subinterface) because that address is directly on the hub itself, so the hub can respond locally.
- Spoke cannot ping 214.15.10.2 (vBond behind the hub): the spoke forwards the packet to the hub in VPN 0, but the hub does not act as a transit router to forward VPN 0 traffic out toward a service-side network (or even between VPN 0 interfaces for transit). The packet dies at the hub.
- From the controller side, you can ping 68.1.1.1 (hub), but not 68.1.1.2 (spoke), for the same reason in the reverse direction: the hub won’t forward that controller-originated traffic across its VPN 0 toward the spoke.
All WAN edges initiate DTLS/TLS control connections to vBond, vSmart, and vManage from VPN 0 directly across the transport (Internet/MPLS/etc). The controllers must be reachable in the underlay. Do not place controllers behind a WAN Edge and expect other WAN Edges to reach them through the overlay or by transiting another WAN Edge’s VPN 0.
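A quick way to confirm this from the spoke is to check the control plane state directly (output omitted; on an IOS XE based edge the same commands are prefixed with show sdwan):
show control local-properties
show control connections
The first shows, among other things, the vBond address the device is trying to reach; the second shows which controller sessions are actually up and over which transport. In your case I would expect the spoke to show no controller sessions at all, which matches the reachability problem described above.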
So what should you do? There are a couple of options, depending on whether you want to simply lab it and get it working, or apply this to a production network:
Option A (best for production networks):
- Put the controllers on the transport so they are reachable from every WAN edge over VPN 0. Either:
  - Give controllers public/transport-reachable IPs, or
  - Place them behind an underlay L3 device (not a WAN Edge) that provides routing/NAT so any WAN edge can reach them from VPN 0.
Option B (lab quick fix):
- Keep the hub WAN Edge as-is, but add a separate L3 router or a cloud segment at the hub site that routes between 68.1.1.0/24 and 214.15.10.0/24. Point the spoke’s default route to the underlay/cloud, not to the hub WAN Edge (see the sketch below). The hub should not be the transit for controller reachability.
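To illustrate Option B, here is a minimal VPN 0 sketch for the spoke in vEdge-style CLI. The interface name, the /24 mask, the color, and the 68.1.1.254 next hop (the underlay router/cloud at the hub site) are assumptions for the sake of the example; the point is simply that the default route points at the underlay device rather than at the hub WAN Edge:
vpn 0
 interface ge0/0
  ip address 68.1.1.2/24
  tunnel-interface
   encapsulation ipsec
   color mpls
  !
  no shutdown
 !
 ip route 0.0.0.0/0 68.1.1.254
!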
A couple of things to keep in mind:
- Don’t use a WAN Edge to forward transit traffic in VPN 0. VPN 0 is reserved for transport/TLOCs and control connections, not for general underlay routing through the edge.
- Do not place controllers in a service VPN (e.g., VPN 10/214) and expect edges to reach them via the overlay before control is up. It’s a classic chicken-and-egg problem. Control must come up first, via VPN 0.
Does that make sense? This should get you thinking about your topology and the possible changes you can make. Let us know how you get along, so we can help you out further if you need it.
I hope this has been helpful!
Laz
Thanks very much Laz, all clear now! However, I would like to point out that:
Place them behind an underlay L3 device (not a WAN Edge) that provides routing/NAT so any WAN edge can reach them from VPN 0.
Careful with placing them behind a NAT device, especially with all three elements sitting behind the same NAT, because problems arise: the vBond communicates with the other control elements using their “inside” IP addresses, and that is the IP address it returns to each WAN Edge attempting to join the overlay from the outside! The WAN Edge gets an “inside” IP address for the control elements and therefore cannot reach them!
But regarding the rest of the answer, thanks a lot, it saved a lot of troubleshooting hours!
Hello Jose
I’m glad it was helpful!! And yes indeed, you’re right, and I appreciate you clarifying that. The idea was to place a device there that would perform the routing necessary to allow any WAN edge to reach the controllers from VPN 0. The possible use of NAT on that device was just an off-the-cuff mention. In any case, you’re right: NAT is best avoided at that point!
Thanks again!
Laz
Hello again,
I am labbing a topology with multiple hubs for redundancy. I tuned the TLOC preference via a centralized policy to have the SPOKES prefer a specific hub based on their site-id. I also filtered all the SPOKE TLOCs so that the only IPsec tunnels are from SPOKE to HUB; there are no SPOKE-to-SPOKE tunnels. All good. But now I have a doubt:
I want to configure end-to-end path tracking, because if an indirect failure occurs, the spokes keep sending traffic to their preferred hub. I would like to use a TLOC action of type primary where I can set up multiple ultimate TLOCs, so that if the vSmart detects that a HUB_1-SPOKE_X connection goes down, it “tells” the remote spokes not to use HUB_1 anymore to reach SPOKE_X. I know this can be done with one ultimate TLOC (for example, the HUB_2 TLOC). But what if I add 2 more HUBs? Can I have 3 ultimate TLOCs? If so, how can I tune the order of failover? In my organization, for example, we have 6 HUB routers and hundreds of spokes, so I would like to have the possibility of 5 failover HUBs (each with a different preference) for every spoke in an indirect failure scenario.
Thanks,
Jose
Hello Jose
Oh this is getting good!
This is really getting into the heart of multiple hubs and redundancy! Let’s take a closer look…
To answer your question directly, yes, it is possible to have more than two hubs and to have multiple backup hubs for each spoke. This can be done by defining a TLOC list that contains the TLOCs of all of your hubs and referencing it under the tloc-action in the data policy, along these lines:
policy
 lists
  tloc-list HUB_TLOCS
   tloc DEST_HUB_1_IP color mpls encap ipsec preference 600
   tloc DEST_HUB_2_IP color mpls encap ipsec preference 500
   tloc DEST_HUB_3_IP color mpls encap ipsec preference 400
   tloc DEST_HUB_4_IP color mpls encap ipsec preference 300
   tloc DEST_HUB_5_IP color mpls encap ipsec preference 200
   tloc DEST_HUB_6_IP color mpls encap ipsec preference 100
  !
 !
 data-policy DATA_POLICY_NAME
  vpn-list VPN_LIST
   sequence 10
    match
     source-ip SPOKE_X_PREFIX
    !
    action accept
     set
      tloc-action primary
      tloc-list HUB_TLOCS
     !
    !
   !
   default-action accept
  !
 !
!
The preference value on each TLOC in the list (a higher number is preferred) is what lets you choose the order in which the fallback process will select the hubs.
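Also remember that the data policy only takes effect once it is applied to the relevant sites on the vSmart; a minimal sketch would be something like this, where the site-list name SPOKE_SITES is a placeholder and the direction (from-service, from-tunnel, or all) depends on where the matched traffic enters the WAN Edge:
apply-policy
 site-list SPOKE_SITES
  data-policy DATA_POLICY_NAME from-service
 !
!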
Now you must keep the following in mind:
- You can have a maximum of 8 TLOCs per policy
- The more TLOCs you have the longer the convergence time
Let us know how you get along in your configuration!
I hope this has been helpful!
Laz
Thanks Laz, all clear now!
