When configuring PIM sparse mode, you only need to enable it on the interfaces that connect to other routers participating in PIM sparse mode. By enabling this command on the pairs of interfaces connected to each other, such as Fa0/0 on both R1 and R2, Fa0/1 on both R2 and R3, and Fa0/0 on both R3 and R4, we cause the routers to become neighbors and exchange PIM hello packets. The loopbacks don't need this configuration; they are only used as a way to reach each RP. For more info about PIM sparse mode, take a look at this lesson:
Hi Lazaros,
Have a nice day. Do we need to activate sparse mode on the loopback used for MSDP, or only on the loopback whose IP address is used as the RP address in our example, 23.23.23.23? Thanks in advance.
Sparse mode need only be enabled on the interfaces that connect to other multicast routers. This is because PIM hello packets only need to be sent via those interfaces, not via the loopback interfaces.
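As a minimal sketch of what this looks like on a non-RP router such as R1 (using the Fa0/0 interface and the 23.23.23.23 RP address from the lesson; treat the exact names as placeholders), sparse mode only goes on the router-facing interface:

R1(config)#ip multicast-routing
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#ip pim rp-address 23.23.23.23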
The lesson enables the debug on a different router, but then you look on R3 for the debug output. I think the debug was supposed to be run on R3.
Also, on the very last command:
R3#show ip mroute 239.1.1.1
I think the two interfaces are reversed for the (192.168.12.1, 239.1.1.1) route entry. My router showed them the other way around, which seems consistent with the diagram.
I’m a longtime fan of The GNS3Vault and followed it over here to find you.
I’m currently building a mirror of this lab, and instead of using a static RP configuration on routers R2 and R3, I’m using Auto-RP for both the announcements (224.0.1.39) and the mapping agent (224.0.1.40). Prior to joining this site today, I was able to learn the two MSDP statements simply by reading my fellow forum members’ exchanges with you.
My lab is R1 (Receiver) - R2 (ip pim send-rp-announce Loopback0 scope 20) - R3 (ip pim send-rp-discovery Loopback0 scope 20) - R4 (ip pim send-rp-announce Loopback0 scope 20) - R5 (Source - the only interface config is ip igmp join-group 224.1.1.1 and ip igmp join-group 225.2.2.2)
All routers use your OSPF configuration and all are globally configured for ip multicast-routing and every interface is configured with ip pim sparse-mode.
The challenge that I’m faced with is that while the Mapping Agent sees both RPs and selects my R4 for the RP, R1 still can’t reach R5.
Is there a reason for this when they both should be talking but are not?
Many Thanks in advance,
As a result, pings from R1 to the two multicast IP addresses joined on R5 never succeed.
R1: G0/0 192.168.12.1/24
R2: G0/0 192.168.12.2/24
R2: G1/0 192.168.23.1/24
**R2: Lo0 34.34.34.34/32**
R2: Lo1 22.22.22.22/32
R2#sh run | i msdp
ip msdp peer 44.44.44.44 connect-source Loopback0
ip msdp cache-sa-state
ip msdp originator-id Loopback1
R2#
R3: G0/0 192.168.23.2/24
R3: G1/0 192.168.34.1/24
R4: G0/0 192.168.34.2/24
R4: G1/0 192.168.45.1/24
**R4: Lo0 34.34.34.34/32**
R4: Lo1 44.44.44.44/32
R4#sh run | i msdp
ip msdp peer 22.22.22.22 connect-source Loopback0
ip msdp cache-sa-state
ip msdp originator-id Loopback1
R4#
R5: 192.168.45.2/24
My answer was in the last line of my post above!
I had the connect-source loopback confused with the anycast RP loopback!
The shared IP address on Lo0 is only meant to be used as the RP address, while the unique address on Lo1 is what the MSDP peering (connect-source) and the originator-id should use!!!
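For reference, and assuming the addressing above, the corrected MSDP statements would look something like this:

R2:
ip msdp peer 44.44.44.44 connect-source Loopback1
ip msdp originator-id Loopback1

R4:
ip msdp peer 22.22.22.22 connect-source Loopback1
ip msdp originator-id Loopback1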
Thank you for teaching me the usage of MSDP!!
Anyone and everyone is welcome to use my lab above; it’s different from Rene’s in that R3 is only the mapping agent (224.0.1.40) and does not announce itself as an RP.
Thanks so much for sharing the solution that you found on your own (Way to Go!!). That helps a lot and that’s part of the reason why the forum is so useful for everyone. If you have any other questions you always know where to find us.
If you have more than two RPs, then the configuration would be similar. All of the RPs would have to be configured with a loopback with the same IP address, and all RPs would have to be MSDP peers of each other in a full-mesh. This means that each RP would have to have an ip msdp peer command for each other RP on the network.
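As a rough sketch with three RPs, assuming each RP carries the shared anycast address on a separate loopback and its unique address on Loopback0 (1.1.1.1, 2.2.2.2, and 3.3.3.3 here are just placeholders), the full mesh on RP1 would look something like this, with the equivalent peer statements mirrored on RP2 and RP3:

RP1:
ip msdp peer 2.2.2.2 connect-source Loopback0
ip msdp peer 3.3.3.3 connect-source Loopback0
ip msdp originator-id Loopback0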
Anycast, in both IPv4 and IPv6, simply allows you to configure the same IP address on two different devices on the network. Routing is achieved in such a way that the “closest” destination with that address is reached. In the lesson, when routing to 23.23.23.23, which is our anycast address, R4 reached R3, which was closest, and R1 reached R2, which was closest. There is no specialized anycast address, nor do we configure an address to be anycast in any way. We simply configure two (or more) devices/hosts on a network with the same address.
Now once you do that, you can then advertise that network using any routing protocol you like, including BGP. You simply advertise it in the same way that you would any other prefix.
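As a quick sketch using the lesson’s 23.23.23.23 address, R2 and R3 would each get the same loopback and advertise it into whatever routing protocol is in use, for example OSPF (the interface and process numbers are just placeholders; a BGP network statement would work the same way):

R2 (and identically on R3):
interface Loopback1
 ip address 23.23.23.23 255.255.255.255
router ospf 1
 network 23.23.23.23 0.0.0.0 area 0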
But you must take care. Anycast must be used only in particular applications. Otherwise, you could have unpredictable results.
For example, DNS uses anycast. Google uses the 8.8.8.8 IPv4 address for its DNS service. They have DNS servers spread out around the world, but they all use this same address. When your computer in France, for example, sends out a request for a DNS resolution, it is routed to the closest DNS server, in a data center in Paris. If your friend’s PC in New Zealand sends out a request, it will go to that same IP address, but to a server in Auckland, which is the closest server.
Anycast is useful whenever you have geographically remote servers that provide the same exact service to multiple regions. In order to maintain a consistency of service, however, you must make sure that those servers sharing the Anycast address deliver the same service.
So Anycast is not something you would typically configure within an enterprise network but is used more often in services delivered over a geographically dispersed region.
Yes, it is possible to implement Auto-RP in conjunction with Anycast RP in multicast routing. However, it requires some additional configuration to make them work together seamlessly.
Initially, you must configure the Anycast RP address on all RPs (the same IP address on each), as well as MSDP between all Anycast RPs so they can share multicast source information, just like in the lesson.
Then, you can configure an Auto-RP mapping agent functionality on one or more designated routers. The mapping agent listens for RP-announce messages and selects the best RP for each multicast group. It then advertises this information using RP-discovery messages. Make sure to configure the Anycast RP IP address as the RP in the Auto-RP configuration. This way, the routers in the PIM domain will learn about the Anycast RP through Auto-RP messages.
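A minimal sketch, assuming the anycast RP address sits on Loopback1 of each RP, the unique address on Loopback0, and a separate router acting as the mapping agent (the interface names and scope values are placeholders):

On each Anycast RP:
ip pim send-rp-announce Loopback1 scope 16

On the mapping agent:
ip pim send-rp-discovery Loopback0 scope 16

On all routers, so the Auto-RP groups are flooded in sparse mode:
ip pim autorp listener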
For BSR configurations, the idea is similar. Configure Anycast RP and MSDP as in this lesson. You can then configure PIM-SM on all routers within the PIM domain and enable BSR on one or more routers. This will allow the routers to automatically discover and select the best RP for each multicast group based on the BSR messages. Configure the Anycast RP IP address in the BSR candidate-RP configuration and a priority value. This will ensure that the routers in the PIM domain will learn about the Anycast RP through BSR messages.
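Again as a rough sketch, with the anycast address on Loopback1 and the unique address on Loopback0 (the priority value is just a placeholder):

On each Anycast RP:
ip pim rp-candidate Loopback1 priority 10

On the BSR router(s):
ip pim bsr-candidate Loopback0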
So MSDP is needed to interconnect 2 multicast domains. In which situation would you like to join these domains?
Do you have a configuration example please?
I came across a question in a guide:
When designing interdomain multicast, which two protocols are deployed to achieve communication between multicast sources and receivers? (Choose two.)
A. IGMPv2
B. BIDIR-PIM
C. MP-BGP
D. MSDP
E. MLD
Within a single site/network, we route multicast using PIM sparse/dense mode:
MSDP is used to connect different multicast routing domains, so it allows inter-domain multicast routing. MSDP allows RPs (Rendezvous Points) in different sites to share information about active sources. This is typically used between two sites or data centers.
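Since a configuration example was asked for above, here is a minimal sketch: the RP in each domain simply peers with the RP in the other domain (the 192.0.2.x addresses and interface names are made up). In a real interdomain design, the MSDP peering normally follows the (MP-)BGP peering so that the SA RPF check can succeed.

RP in domain A:
ip msdp peer 192.0.2.2 connect-source Loopback0

RP in domain B:
ip msdp peer 192.0.2.1 connect-source Loopback0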
Currently we have Aruba VSX L3 switches that act as core devices, two in each data center. It is a really big project, and one core is already learning 21,000 multicast routes even though there are only 8,000 cameras. With VSX, each switch replicates the routing table of its peer. Now my problem is that hundreds of cameras are not recording (multicast is not working) and only work using unicast. After reading this section I think this is the answer. I just need your advice: is it safe to configure MSDP alongside the current setup of PIM sparse mode and BSR? I’m worried that if the limit of 32k multicast routes per core is reached, it might crash or something.
From a network engineering perspective, it seems like you’re dealing with a high volume of multicast traffic, which is causing some issues with your Aruba VSX L3 switches. You’re right in considering MSDP as a potential solution to your problem.
MSDP is typically used in conjunction with PIM-SM to share information about multicast sources between different multicast domains. This could potentially help you manage the high volume of multicast traffic in your network more effectively.
However, it’s important to note that implementing MSDP should be done with caution. It’s a complex protocol and it could potentially cause more issues if not configured correctly. First, you should thoroughly understand how MSDP works and how it will interact with your existing PIM and BSR setup. Also, consider the potential impact on your network’s performance and stability, especially given your concern about reaching the limit of 32k multicast routes per core.
I would recommend testing this setup in a controlled environment before deploying it in your live network. You might also want to consider reaching out to your vendor’s technical support team to help you through this process.
Remember, network changes of this magnitude should always be approached with a detailed plan, including a rollback strategy in case things don’t go as expected.