Multicast PIM Auto RP

Hello Michael

A response of (*, 239.0.78.141), 1w3s/00:02:54, RP 0.0.0.0 tells us that no RP has been defined for this group. Either enable Auto-RP or manually define the RP in order to have that information added.
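
For example, a minimal sketch of a static RP definition (the address 1.1.1.1 is just a placeholder for whatever the RP's address is in your network):

ip pim rp-address 1.1.1.1

With either method in place, the mroute entries will show the actual RP address instead of 0.0.0.0.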

I hope this has been helpful!

Laz

Thanks Laz that makes sense, many thanks.

1 Like

Dear Rene,

In the configuration example, R3 is getting the RP mappings before we configure the ip pim autorp listener command. That means R3 knows the RP address, so R3 should be able to serve R4 if R4 wants to join a feed that is coming from R1 (as the RP). But it does not work until we configure the ip pim autorp listener command on R3. Why is that?

Hello Roshan

If R4 is a host that wants to participate in a multicast group, then yes you are right, it can receive all the necessary information for such a case. However, we don't want R4 to function as a multicast participant, but as a multicast router that will allow hosts on networks connected to it to participate in multicast. In this case, R4 will not "know" anything because it is behind R3 and is not receiving any RP mapping packets. This is because we are using PIM sparse mode, which means that this traffic would only be forwarded when requested by a downstream router.

The solution is to use the ip pim autorp listener command, which ensures that traffic sent to the Auto-RP groups 224.0.1.39 and 224.0.1.40 is flooded using dense mode. Once enabled, the RP mapping packets will be flooded to R4.
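
As a minimal sketch, the command is entered in global configuration mode on R3 (and on any other router along the path toward R4):

R3(config)# ip pim autorp listener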

I hope this has been helpful!

Laz

Thank you Laz,
I think I have come across a bug or something. Or maybe I had to clear the mroute cache at that time. It worked somehow after that anyway. Tx

1 Like

Hi Rene and team,
I tried to replicate a similar lab to the one in this lesson, but I'm getting some unexpected results.

Here's the topology:

All RTR# devices have their interfaces configured with ip pim sparse-mode, and all RTR# devices have ip multicast-routing enabled. Also, OSPF is enabled on all RTR# interfaces in area 0. RTR# devices are OSPF neighbors on their directly connected interfaces.

RTR3's Loopback0 interface (3.3.3.3) is configured to be the RP; RTR3 is also the configured mapping agent and an RP candidate:

interface Loopback0
ip address 3.3.3.3 255.255.255.255
ip pim sparse-mode
!

ip pim rp-address 3.3.3.3
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16

I did a show ip pim rp mapping on RTR1 and expected to see it empty (because all interfaces are configured in sparse mode). RTR2 is getting the RP discovery messages sent to 224.0.1.40 from RTR3 since it's directly connected to RTR3, but RTR1 is somehow still learning the RP to be 3.3.3.3 from RTR2. I did a debug ip pim auto-rp on RTR1 to confirm this.

RTR1#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
RP 3.3.3.3 (?), v2v1
Info source: 3.3.3.3 (?), elected via Auto-RP
Uptime: 00:17:49, expires: 00:01:58
RTR1#

 

RTR1#
*Feb 14 16:03:41.565: Auto-RP(0): Received RP-discovery packet of length 48, from 3.3.3.3, RP_cnt 1, ht 181
*Feb 14 16:03:41.565: (0): pim_add_prm:: 224.0.0.0/240.0.0.0, rp=3.3.3.3, repl = 1, ver =3, is_neg =0, bidir = 0, crp = 0
*Feb 14 16:03:41.565: Auto-RP(0): Update
*Feb 14 16:03:41.565: prm_rp->bidir_mode = 0 vs bidir = 0 (224.0.0.0/4, RP:3.3.3.3), PIMv2 v1
RTR1#

RTR2's show ip mroute output shows that it is sending the Auto-RP discovery messages over an SPT to RTR1 via its Ethernet0/1 interface. I'm not sure why that is, though.

RTR2#show ip mroute

<output omitted>

(3.3.3.3, 224.0.1.40), 00:21:35/00:02:12, flags: LT
Incoming interface: Ethernet0/2, RPF nbr 192.168.2.2
Outgoing interface list:
**Ethernet0/1**, Forward/Sparse, 00:21:35/00:02:31

Meanwhile at the other end of the topology, RTR4 does not appear to be forwarding the RP-discovery messages it receives on Et0/3 over to RTR5 on its Et0/0 interface:

RTR4#show ip mroute

<output omitted>

(3.3.3.3, 224.0.1.40), 00:49:08/00:02:25, flags: PLT
Incoming interface: Ethernet0/3, RPF nbr 192.168.3.1
**Outgoing interface list: Null**

RTR5 doesn't seem to know about the RP:

RTR5#sh ip pim rp mapping
PIM Group-to-RP Mappings

RTR5#

Hello Sam

This is an interesting question that you bring up, and it is not readily clear why you see this behaviour. After spending some time labbing this up with Rene, the reason for this behaviour has become clear, and we have all learned something in the process!

It is true that if you have R3 become the mapping agent as in your topology, it will send its RP mapping advertisements to the 224.0.1.40 address, which means these advertisements should only reach R2 and R4 since we're using sparse mode. However, you will find that if R2 is the DR for the link between R2 and R1, then it too will forward the mapping advertisement to R1; R2 will operate in dense mode for the 224.0.1.40 group. If it is not the DR for that link, then it will not forward it.

You can see this clearly in the following Cisco documentation:

Specifically it states:

All the PIM routers join the multicast group 224.0.1.40 when the first PIM-enabled interface comes up. This interface is seen in the outgoing interface list for this group if it is the Designated Router (DR) on that PIM Segment.

For your specific topology, R2 is indeed the DR for the R1-R2 segment (I assume it has the higher IP address), so it will forward the mapping messages. Conversely, R5 is probably the DR for the R4-R5 segment since it probably has the higher IP address, so R4 will not forward the advertisement. Try changing the DR priorities on the R4-R5 segment and see if that changes the behaviour.
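
As a sketch, assuming Ethernet0/0 is RTR4's interface toward RTR5 (as in your output above), raising RTR4's DR priority on that segment would look like this; the higher priority wins the DR election:

RTR4(config)# interface Ethernet0/0
RTR4(config-if)# ip pim dr-priority 10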

In any case, it's not a good idea to simply depend on the DR's forwarding of mapping advertisements like this. This is why two solutions have been provided to allow the RP to be learned by all multicast routers in the topology: PIM sparse-dense mode and the PIM Auto-RP listener, sketched below.
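
For reference, a minimal sketch of the two options (the interface name is a placeholder):

! Option 1: sparse-dense mode on each PIM interface
interface Ethernet0/0
 ip pim sparse-dense-mode
!
! Option 2: keep sparse mode everywhere and enable the Auto-RP listener globally
ip pim autorp listener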

I hope this has been helpful!

Laz

1 Like

Hi Rene/Laz,

Once the MA has been configured, why does the multicast routing table on R2 only show GigabitEthernet0/1 as the outgoing interface for the (2.2.2.2, 224.0.1.40) entry (the RP mapping packets)? Should it not also have GigabitEthernet0/2?

Hello Bhawandeep

When looking at the multicast routing table, the outgoing interface list shows the interfaces on which either a PIM join or an IGMP membership report has been received. It doesn't signify the interfaces from which the RP mapping packets are being sent; RP mapping packets are sent out of all multicast-enabled interfaces.

Similarly, you will see that when R1 is configured to advertise itself as an RP, the SPT entry for 224.0.1.39 actually has Null as the outgoing interface list. Again, this does not mean that it is not sending out the RP announcements; RP announcements are sent out of all multicast-enabled interfaces.

I hope this has been helpful!

Laz

Hi Laz. That makes sense. I was thinking the OIL determines the interfaces on which RP mapping packets are sent. Thank you for clarifying.

1 Like

Can the candidate RP and the mapping agent be on the same device? What does the election of the RP depend on? What is the best practice for choosing the RP? I guess that an RP should be a device that is located at a central point of the network, but I'm not 100% sure.

Regards
Rodrigo

Hello Rodrigo

It is possible to configure a router to be both a mapping agent as well as a candidate RP. Simply configure both the ip pim send-rp-discovery and the ip pim send-rp-announce commands on the same device.
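
Using the same addressing as the example earlier in this thread, the combined configuration would look like this (Loopback0 is assumed to be a PIM-enabled loopback):

ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16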

The mapping agent will select one RP per group. As stated in the lesson:

When two candidate RPs want to become RP for the same group(s) then the mapping agent will prefer the RP with the highest IP address.
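
As a hypothetical illustration, if R1 (Loopback0 = 1.1.1.1) and R3 (Loopback0 = 3.3.3.3) both announce themselves for the same groups:

R1(config)# ip pim send-rp-announce Loopback0 scope 16
R3(config)# ip pim send-rp-announce Loopback0 scope 16

The mapping agent would elect 3.3.3.3 as the RP, since it is the higher RP address.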

In general, it doesn't matter where the RP is. The RP is needed only to start new sessions, so the RP experiences little overhead from traffic flow or processing. However, the mapping agent should be placed centrally in order to facilitate RP elections, especially in a hub-and-spoke topology.

I hope this has been helpful!

Laz

2 Likes

Hello Laz/Rene,
In a production multicast network where you want to use sparse mode and Auto-RP, would you configure ip pim autorp listener on all the multicast routers, or just on the strictly needed ones, as in this lesson? I guess the first option, but I am not sure.
Thanks

Hello Giacomo

Ideally, you should apply the command only where it is needed. This way you can avoid any unnecessary dense mode flooding of the Auto-RP groups 224.0.1.39 and 224.0.1.40. However, keep in mind that even if you apply the command across your whole topology, only these two groups are treated with dense mode flooding. In most topologies, this would not cause an issue with bandwidth, CPU, or memory overhead.

I hope this has been helpful!

Laz

I skimmed through the comments, but maybe I missed this information. Can we have multiple mapping agents? If so, do all of them get used at the same time? If not, how is the active mapping agent selected?

In the case of Bootstrap, a similar role is performed by the BSR router, and in the case of multiple BSRs, first the priority is compared, and if it turns out to be the same, then the candidate BSR with the highest IP address is selected (correct me if I am wrong).

Hello Muhammad

Within a single PIM domain, you can have more than one router configured as a mapping agent. The following command enables a router as a mapping agent:

ip pim send-rp-discovery loopback 0 scope 10

This specifies Loopback 0 as the source of the mapping agent's discovery messages, with a TTL of 10. The scope essentially determines the maximum distance, in number of hops, that the discovery messages from this mapping agent can travel.

Now if there is more than one router configured as a mapping agent within a PIM domain, then each mapping agent makes this announcement independently of the decisions of the other mapping agents.

Concerning BSR, yes you are correct. As Rene states in his Multicast PIM Bootstrap (BSR) lesson:

There can be only one active BSR in the network. BSR routers will listen for messages from other BSR routers, the BSR router with the highest priority will become the active BSR. The highest BSR priority is 255 (the higher the better). When the priority is the same then the router with the highest IP address will become the BSR.
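
For comparison, a minimal sketch of setting the priority on a candidate BSR (the interface supplies the BSR address, 0 is the hash mask length, and 200 is the priority; all values here are placeholders):

ip pim bsr-candidate Loopback0 0 200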

I hope this has been helpful!

Laz

I got the general concepts but have more specific questions. It seems it can be said (for simplicity) that IGMP is used by receivers to tell first-hop routers that they need a specific multicast feed; PIM is needed for routers to let the source or RP know that they want to receive the particular multicast that their clients requested; and multicast routing needs to be enabled so routers can use routing tables to direct multicast traffic when they start receiving it. The questions are as follows:

  1. So IGMP traffic never passes beyond the first-hop routers; there is no reason for it to, right? It should be limited to the local segment. Correct?
  2. Aside from running sparse-dense mode or enabling the Auto-RP listener to allow the Auto-RP traffic to 224.0.1.39/224.0.1.40 to pass through, it seems this traffic should also pass through a router without either of these two features if it is the DR. Since IGMP is always present when multicast routing is enabled, there is always one router that is the DR on any segment between two multicast-enabled routers. So as long as the RP is 'east' of the DR, there is no need to do anything else; the RP mapping traffic will always pass through. And if it is not, then manually increasing the DR priority of the routers closest to the RP will do the trick. Better or worse than the Auto-RP listener, it looks like it should work: by default in about 50 percent of cases, and with DR priority manipulation, 100 percent. Is there a fault in this view?
  3. If an interested router can't get any info about the RP, it seems it just sits and does nothing. But what if it is connected and running multicast, and then the RP crashes: will it just sit and wait until the RP announces itself again, or will it still try to reach it? I ask because I've seen a static RP configured on a router pointing to itself while it is using Auto-RP. The idea seems to be that if Auto-RP is gone, the router will revert to the static RP (which has no groups assigned to it) and will not go looking for an RP. But would it actually go looking for one? If not, then such a configuration seems to be just clutter.
  4. Finally, are there cases where a router would start sending IGMP traffic to the RP? Or anywhere? That's kind of a reiteration of the first question. The reason I ask is that this is what I saw on my network: one router, which was supposed to be configured as the RP, for one reason or another was not. It was also collecting some dropped packets on an interface. Running a debug showed, to my surprise, that the dropped packets were IGMP packets for 224.0.1.40, from several remote routers, several hops away. I am kind of dumbfounded now; I never expected that routers would send an IGMP join for 224.0.1.40. The router in question is not the RP and is only connected to other routers. It seems I have a critical flaw in my understanding. Also, none of the routers involved are configured with 'igmp join-group'. Maybe these packets were dropped by the router because it is not really configured as the RP, or even for PIM (though I would think they would be discarded by the CPU, not dropped at the interface), but why would it receive them in the first place, particularly from routers that are far away?

Hello Vadim

Yes that is correct. IGMP operates between a multicast host and a multicast router on the local network segment. Initial IGMP packets are sent to the 224.0.0.1 all hosts multicast address to find the local multicast router. This multicast address is not routable, which means it can never be used to reach some device outside of the local subnet.

I'm not sure what you are referring to here. What is "this traffic" that you are referring to, the IGMP join message? If so, then yes, what you describe seems to be correct, but as you say, it isn't considered best practice as it could be difficult to administer.

Remember that the RP is there just to initiate the multicast traffic. Once that communication is established, the host may find a better route to the multicast source, bypassing the RP completely. So if the RP fails, it will not affect any of the already established multicast sessions. Take a look at the detailed explanation in the Multicast PIM Sparse Mode lesson that describes this process.
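
Regarding the static RP you saw configured alongside Auto-RP: by default, an Auto-RP-learned mapping takes precedence over a statically configured one, so a static entry like the sketch below (10.0.0.1 is a placeholder) acts only as a fallback if the Auto-RP information expires. Adding the override keyword would reverse that preference:

ip pim rp-address 10.0.0.1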

Hmm, that's interesting. Can you share your packet capture with us so we can take a look as well? And can you let us know a little bit more about your topology? That way we can respond more accurately.

I hope this has been helpful!

Laz

Thanks for the explanation. For item 2, I meant the PIM Auto-RP mapping traffic. That is, it looks like if a router is the DR on a segment between two routers, then traffic its neighbor sends to 224.0.1.40 will pass through even if the DR is not explicitly running in dense mode and does not have the Auto-RP listener enabled.
On a side note, the multicast topic seems more complicated than it looks, and it already looks rather complicated. Too many 'if-then' conditions and 'gotchas'. Which is a 'case in point' when discussing the IGMP issue with the routers (I call it an issue until I truly understand what's going on). The capture of the debug on the router that drops the IGMP packets from other routers is below:

debug mfib netio drop location 0/0/cpu0:
LC/0/0/CPU0:Nov 23 10:12:08.417 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.10, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:08.437 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.10, 224.0.1.39) ttl-1 prot-2 matched (*, 224.0.1.39/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.267 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.33.95, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.884 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:10.884 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.557 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.926 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.9, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.956 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.587 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.646 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.204, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]

The simplified diagram for the routers is as follows (all routers behind the MPLS cloud are several hops away, based on traceroutes sent from them):


The router in the center is the one where the IGMP drops are observed; it's an ASR 9000. The two routers on top are directly connected to it and are Nexus 7000s. The other routers sending IGMP are a mix of IOS and IOS XE devices.
What it sounds like, based on additional googling, etc., is that when a router gets PIM enabled, it also gets IGMP enabled, and when it does, it sends an IGMP report for group 224.0.1.40. It does not send a PIM join for this group. And even if it does not, it seems it will send IGMP when the Auto-RP listener is enabled. It then looks like it keeps doing this periodically. So the top two routers on the diagram 'legitimately' send IGMP reports for 224.0.1.40 and 224.0.1.39. Maybe they just do it regardless, or maybe they are responding to queries from a neighboring router that was configured as an Auto-RP listener (?). And then there is still the question of the DR: one of the two routers on the segment should be the DR, so the guess would be that the DR does not send an IGMP report to its neighbor? Or what do they do? Send IGMP reports to themselves? Complicated.
OK, the remote routers though: how do they manage to send IGMP reports to the router in question? That smells of something to do with MPLS, like they consider access to the central router as one hop, even if in reality it goes over two or three other routers. All the IPs that are sources of the IGMP packets in the debug (except the two top routers) are IPs of the interfaces used for MBGP peering. So, something like that. Hopefully, figuring it out will allow me to put together a detailed schematic of all the multicast steps the routers are taking: IGMP, PIM, and the RPs.

Thanks