Multicast PIM Auto RP

Hello Sam

This is an interesting question, and it is not immediately clear why you see this behaviour. After spending some time labbing this up with Rene, the reason for this behaviour has become clear, and we have all learned something in the process!

It is true that if R3 becomes the mapping agent, as in your topology, it will send its RP mapping advertisements to the 224.0.1.40 address, which means these advertisements should only reach R2 and R4, since we’re using sparse mode. However, you will find that if R2 is the DR for the link between R2 and R1, it too will forward the mapping advertisement to R1; R2 will operate in dense mode for the 224.0.1.40 group. If it is not the DR for that link, it will not forward it.

You can see this clearly in the following Cisco documentation:

Specifically it states:

All the PIM routers join the multicast group 224.0.1.40 when the first PIM-enabled interface comes up. This interface is seen in the outgoing interface list for this group if it is the Designated Router (DR) on that PIM Segment.

For your specific topology, R2 is indeed the DR for the R1-R2 segment, since I assume it has the higher IP address, so it will forward the mapping messages. Conversely, R5 is probably the DR for the R4-R5 segment, since R5 probably has the higher IP address, so R4 will not forward the advertisement. Try changing the DR priorities on the R4-R5 segment and see whether the behaviour changes.
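If you want to experiment with this, a sketch of how you could raise the DR priority on R4’s interface toward R5 so that R4 becomes the DR and forwards the mapping advertisements (the interface name here is an assumption, so adjust it to your topology):

```
! On R4, on the interface facing R5 (GigabitEthernet0/2 is an example)
interface GigabitEthernet0/2
 ip pim dr-priority 100
!
! Verify which router is now the DR on the segment
show ip pim neighbor
```

A higher DR priority wins the election; the IP address is only used as a tiebreaker when priorities are equal.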

In any case, it’s not a good idea to simply depend on this behaviour of the forwarding of mapping advertisements by the DR. This is why two solutions have been provided to allow the RP to be learned by all multicast routers in the topology: PIM Sparse-Dense mode and PIM Auto-RP listener.
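For reference, these two solutions look roughly like this on IOS (the interface name is an example, not taken from the topology in the lesson):

```
! Option 1: sparse-dense mode, configured per interface
interface GigabitEthernet0/1
 ip pim sparse-dense-mode
!
! Option 2: keep sparse mode on all interfaces and add
! the Auto-RP listener globally, so that only 224.0.1.39
! and 224.0.1.40 are flooded in dense mode
ip pim autorp listener
```

The Auto-RP listener is generally preferred because it limits dense mode behavior to the two Auto-RP groups, whereas sparse-dense mode would fall back to dense mode for any group without an RP.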

I hope this has been helpful!

Laz

1 Like

Hi Rene/Laz,

Once the MA has been configured, why does the mcast routing table on R2 only show GigabitEthernet0/1 as the outgoing interface for the (2.2.2.2, 224.0.1.40) entry (RP Mapping packets)? Should it not also have GigabitEthernet0/2?

Hello Bhawandeep

When looking at the multicast routing table, the outgoing interface list shows the interfaces on which either a PIM join or an IGMP membership report has been received. It does not indicate the interfaces from which the RP mapping packets are being sent. RP mapping packets are sent out of all multicast-enabled interfaces.

Similarly, you will see that when R1 is configured to advertise itself as an RP, the (S, G) entry for 224.0.1.39 actually has Null as the outgoing interface list. Again, this does not mean that it is not sending out the RP announcements; RP announcements are sent out of all multicast-enabled interfaces.

I hope this has been helpful!

Laz

Hi Laz. That makes sense. I was thinking the OIL determines the interfaces on which RP mapping packets are sent. Thank you for clarifying.

1 Like

Can the candidate RP and the mapping agent be on the same device? What does the election of the RP depend on, and what is the best practice for choosing the RP? I would guess that an RP should be a device located at a central point of the network, but I’m not 100% sure.

Regards
Rodrigo

Hello Rodrigo

It is possible to configure a router to be both a mapping agent and a candidate RP. Simply configure both the ip pim send-rp-discovery and ip pim send-rp-announce commands on the same device.
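A minimal sketch of what that combined configuration could look like (the loopback interface and scope value are assumptions for illustration):

```
! Announce this router as a candidate RP, sourced from Loopback0
ip pim send-rp-announce Loopback0 scope 10
!
! Also act as the mapping agent, sourced from the same loopback
ip pim send-rp-discovery Loopback0 scope 10
```

Using a loopback as the source is common practice, since it stays up as long as the router does and gives the RP a stable, routable address.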

The mapping agent will select one RP per group. As stated in the lesson:

When two candidate RPs want to become RP for the same group(s) then the mapping agent will prefer the RP with the highest IP address.

In general, it doesn’t matter where the RP is. The RP is needed only to start new sessions, so the RP experiences little overhead from traffic flow or processing. However, the mapping agent should be placed centrally in order to facilitate RP elections, especially in a hub and spoke topology.

I hope this has been helpful!

Laz

2 Likes

hello Laz/Rene,
in a production multicast network where you want to use sparse mode and Auto-RP, would you configure ip pim autorp listener on all the multicast routers, or just on the strictly needed ones, as in this lesson? I would guess the first option, but I am not sure.
Thanks

Hello Giacomo

Ideally, you should apply the command only where it is needed. This way you can avoid any unnecessary dense mode flooding of the Auto-RP groups of 224.0.1.39 and 224.0.1.40. However, keep in mind that even if you do apply the command in your whole topology, it is only these two groups that are treated using the dense mode flooding. In most topologies, this would not cause an issue with overhead in bandwidth, CPU, and memory.

I hope this has been helpful!

Laz

I skimmed through the comments, but maybe I missed this information. Can we have multiple mapping agents? If yes, are all of them used at the same time? If not, how is the active mapping agent selected?

In the case of Bootstrap, a similar role is performed by the BSR router. In the case of multiple BSRs, the priority is compared first, and if it turns out to be the same, the candidate BSR with the highest IP address is selected (correct me if I am wrong).

Hello Muhammad

Within a single PIM domain, you can have more than one router configured as a mapping agent. Unlike BSR, there is no election among mapping agents; each one remains active and advertises independently. The following command enables a router as a mapping agent:

ip pim send-rp-discovery loopback 0 scope 10

This specifies loopback 0 as the source of the mapping agent’s discovery messages, with a TTL of 10. The scope essentially determines the maximum number of hops that the RP discovery messages will travel.
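To check the Auto-RP state after configuring this, a couple of IOS show commands are useful (a sketch; the exact output format varies by platform and release, so none is shown here):

```
! Auto-RP message counters: discovery/announce messages sent and received
show ip pim autorp
!
! The group-to-RP mappings this router has learned (or, on the
! mapping agent itself, the elected mappings it is advertising)
show ip pim rp mapping
```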

Now, if there is more than one router configured as a mapping agent within a PIM domain, each mapping agent makes its announcements independently of the decisions of the other mapping agents.

Concerning BSR, yes you are correct. As Rene states in his Multicast PIM Bootstrap (BSR) lesson:

There can be only one active BSR in the network. BSR routers will listen for messages from other BSR routers, the BSR router with the highest priority will become the active BSR. The highest BSR priority is 255 (the higher the better). When the priority is the same then the router with the highest IP address will become the BSR.

I hope this has been helpful!

Laz

I got the general concepts but have more specific questions. It seems it can be said (for simplicity) that IGMP is used by receivers to tell first-hop routers that they need a specific multicast group; PIM is needed for routers to let the source or RP know that they want to receive the particular multicast group their clients requested; and multicast routing needs to be enabled so routers can use routing tables to direct multicast traffic when they start receiving it. The questions are as follows:

  1. So IGMP traffic never passes beyond the first-hop routers; there is no reason for that, right? It should be limited to the local segment. Correct?
  2. Aside from running sparse-dense mode or enabling the Auto-RP listener to allow the 224.0.1.39/40 traffic to pass, it seems this traffic should also pass through a router without either of these two features if it is a DR. Since IGMP is always present when multicast routing is enabled, there is always one router that is the DR on any segment between two multicast-enabled routers. So as long as the RP is ‘east’ of the DR, there is no need to do anything else; the mapping agent traffic will always pass through. And if it is not, then manually increasing the DR priority of the routers closest to the RP will do the trick. Better or worse than the RP listener, it looks like it should work - by default in about 50 percent of cases, and with DR priority manipulation, 100%. Is there a fault in this view?
  3. In case an interested router can’t get any info about the RP, it seems it just sits and does nothing. But what if it is connected and running multicast, and then the RP crashes - will it just sit and wait until the RP announces itself again, or will it still try to reach it? I ask because I’ve seen a static RP configured on a router pointing to itself while it is using Auto-RP. The idea seems to be that if Auto-RP is gone, the router will revert to the static RP (which has no groups assigned to it) and will not go looking for an RP. But will it be looking for it, indeed? If not, then such a configuration seems to just clutter the interface.
  4. Finally, are there cases when a router would start sending IGMP traffic to the RP? Or anywhere? That’s kind of a reiteration of the first question. The reason I ask is what I saw on my network: one router that was supposed to be configured as the RP, for one reason or another, was not. It was also counting some dropped packets on an interface. Running a debug showed, to my surprise, that the dropped packets were IGMP packets for 224.0.1.40, from several remote routers, several hops away. I am kind of dumbfounded now - I never expected that routers would send an IGMP join for 224.0.1.40. The router in question is not the RP and is only connected to other routers. It seems I have a critical flaw in my understanding. Also, none of the routers involved are configured with ‘igmp join-group’. Maybe these packets were dropped by the router because it’s not really configured for RP or even PIM (though I would think they would be discarded by the CPU, not dropped at the interface), but why would it receive them in the first place, particularly from routers that are far away?

Hello Vadim

Yes, that is correct. IGMP operates between a multicast host and a multicast router on the local network segment. Initial IGMP packets are sent to 224.0.0.1, the all-hosts multicast address, to find the local multicast router. This multicast address is not routable, which means it can never be used to reach a device outside of the local subnet.

I’m not sure what you are referring to here. What is “this traffic” that you are referring to, the IGMP join message? If so, then yes, what you describe seems to be correct, but as you say, it isn’t considered best practice as it could be difficult to administrate.

Remember that the RP is there just to initiate the multicast traffic. Once that communication is established, the host may find a better route to the multicast source, bypassing the RP completely. So if the RP fails, it will not affect any of the already established multicast sessions. Take a look at the detailed explanation in the Multicast PIM Sparse Mode lesson that describes this process.

Hmm, that’s interesting. Can you share your packet capture with us so we can take a look as well? And let us know a little bit more about your topology? That way we can respond more accurately.

I hope this has been helpful!

Laz

Thanks for the explanation. For item ‘2’, I meant the PIM Auto-RP mapping traffic. That is, it looks like if a router is the DR on a segment between two routers, then when its neighbor sends traffic to 224.0.1.40, it will pass through even if the DR is not explicitly running in dense mode or has the Auto-RP listener enabled.
On a side note, the multicast topic seems more complicated than it looks, and it already looks rather complicated. Too many ‘if-then’ conditions and ‘gotchas’. Which is a case in point when discussing the IGMP issue with the routers (I call it an issue until I truly understand what’s going on). The capture of the debug on the router that drops the IGMP packets from other routers is below:

debug mfib netio drop location 0/0/cpu0:
LC/0/0/CPU0:Nov 23 10:12:08.417 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.10, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:08.437 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.10, 224.0.1.39) ttl-1 prot-2 matched (*, 224.0.1.39/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.267 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.33.95, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.884 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:10.884 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.557 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.926 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.9, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.956 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.587 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.646 UTC: netio[276]:  1MFIB-egress : Pkt (192.168.29.204, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]

The simplified diagram for routers is as follows (all routers behind the MPLS cloud are several hops away, based on the traceroute sent from them):


The router in the center is the one where the IGMP drops are observed; it’s an ASR9000. The two routers on top are directly connected to it and are NX7000s. The other routers sending IGMP are a mix of IOS and XE.
What it sounds like, based on additional googling, etc., is that when a router gets PIM enabled, it also gets IGMP enabled, and when it does, it sends an IGMP report for the group 224.0.1.40. It does not send a PIM join for this group. And even if it does not, it seems it will send IGMP when the Auto-RP listener is enabled. And it looks like it then keeps doing this periodically. So the top two routers on the diagram ‘legitimately’ send IGMP reports for 224.0.1.40 and 224.0.1.39. Maybe they just do it regardless, or maybe they are responding to queries from a neighboring router that was configured as an Auto-RP listener (?). And then there is still the question of the DR - one of the two routers on the segment should be the DR, so the guess would be that the DR does not send an IGMP report to its neighbor? Or what do they do? Send IGMP reports to themselves? Complicated.
OK, the remote routers though - how do they manage to send IGMP reports to the router in question? That smells of something to do with MPLS - like they consider access to the central router as one hop, even if in reality it goes over 2-3 other routers. All the IPs that are sources of the IGMP packets in the debug (except the two top routers) are the IPs of the interfaces used for MBGP peering. So something like that. Hopefully, figuring it out will allow me to put together a detailed schematic of all the multicast steps the routers are taking - IGMP, PIM, and RPs.

Thanks

Hello Vadim

The 224.0.1.40 address is used by the RP mapping agent to announce the elected group-to-RP mappings to all PIM routers. This is a multicast address that is routable. There is no specialized configuration (DR, auto-rp listener, or dense mode/sparse mode) that needs to be configured on a router to have it forward such packets. The only thing that is necessary is to have multicast routing enabled. So it is normal behavior to have these packets forwarded by a multicast-routing-enabled router.

When you enable PIM on an interface, IGMP is also enabled. By default, Cisco routers that have multicast routing enabled join the 224.0.1.40 group. This happens even if you only enable PIM sparse mode and configure a static RP; you’ll still see this Auto-RP traffic. That’s normal.
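You can verify this membership yourself on an IOS router with multicast routing enabled; a quick sketch (output omitted, since it varies by platform):

```
! The router itself should appear as a member of 224.0.1.40
show ip igmp groups 224.0.1.40
!
! The corresponding (*, G) / (S, G) state for the Auto-RP group
show ip mroute 224.0.1.40
```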

Now, concerning the IGMP packet drops: does the router in the middle have PIM/IGMP enabled, or not at all? If not, that could explain the drops.

That’s difficult to say without knowing more about what the MPLS infrastructure looks like.

The truth is that understanding IGMP, PIM, DR, AutoRP, RP, and so on can become complicated, and it needs a methodical, step-by-step process that adds complexity slowly and in an organized manner. I suggest you go through the following lessons to get a better feel for how these features interact and affect each other:

In addition, doing Wireshark captures and examining the IGMP and PIM packets really helps in understanding what’s going on behind the scenes. Also, it’s easier to begin with a simple topology without the added complexity of MPLS…

I hope this has been helpful!

Laz

Thank you, Laz, that further clarifies the matter.
At the same time, I’m still confused about item ‘2’. Sure, 224.0.1.40 is routable as soon as multicast routing is enabled, but it is also pruned in sparse mode. The way I understood your very lesson on Auto-RP, the Auto-RP listener is needed exactly because this traffic will not be sent out by the receiving router (to the next router) unless it is requested (by a report from another router) or PIM is configured to send it regardless (by switching to dense mode or using the Auto-RP listener). So your saying that nothing needs to be configured (aside from multicast routing) for the router to find the RP threw me off completely.
I did read all the lessons mentioned, but in the end I had to revert to the lab. It is somewhat limited (somehow debug is not running on the ASR image), but nevertheless I collected some data, and so far I was able to confirm the following:

  1. Enabling multicast routing on IOS does not do anything except allow the processor to compare multicast traffic (if there is any) against the existing routing table. There is no notion of RPs or of registering any multicast groups, etc. It seems to stand to logic that in this case no Auto-RP-related traffic would pass through this router (it would if it were ever sent to it, but it won’t be, since without PIM and IGMP the router will never request it, at least while the upstream routers are not in dense mode).
  2. Enabling PIM on IOS interfaces (not sure yet what the global ‘router pim’ does) automatically enables IGMP (v3 by default, which can be switched to IGMPv2 globally). That also enables Auto-RP membership. An IGMP report is then sent on PIM interfaces, requesting the Auto-RP group traffic.
  3. In the case of the ASR OS, enabling multicast routing automatically enables PIM, IGMP, and Auto-RP membership on all PIM interfaces. However, contrary to IOS, nothing else happens - that is, no IGMP report is sent out for the Auto-RP groups. It just sits there and silently waits.
  4. To be tested yet, but I assume XE and NX behave similarly to IOS.
  5. So the conclusion here seems to be (for sparse mode at least) that:
    a. If one has two (or more) neighboring IOS routers, they will join and request the Auto-RP groups as soon as they are configured with multicast routing AND PIM. If a router’s neighbor is the DR, then nothing else is needed - it will try to PIM-join the RP and let its downstream neighbor know where the RP is. If, however, the neighboring router is not the DR (maybe the requesting router itself is), then while it will get the IGMP report for the Auto-RP group, it may not join the RP itself (that would be a DR function) and so won’t help the receiving router get the RP traffic. So while it’s not always necessary, it may be prudent to enable IOS routers (except the receiver itself) with sparse-dense mode or the Auto-RP listener.
    b. If an ASR router is a receiver or an ‘in-path’ router for some other router, then enabling PIM on it will not cause it to request the RP announcements. If it is adjacent to the RP, it will receive IGMP requests from the RP as well as RP announcements, but it will not pass them downstream unless there is an IOS router downstream (which would request this traffic) and it is the DR for the downstream segment. And if it is at least one hop away from the RP, it will never get the RP announcements, since it never asks for them. This is why it has to have the Auto-RP listener configured (or sparse-dense mode).
    c. So enabling the ‘autorp listener’ is almost a must on all ASRs and a very good idea on all IOS routers. When it is enabled, the ASR will first send IGMP reports for the Auto-RP groups to PIM neighbors and then send the Auto-RP group traffic out of all PIM interfaces. When enabled on IOS, it only adds sending the Auto-RP group traffic out of PIM interfaces, regardless of whether it was requested. (The Auto-RP groups meant here are only 224.0.1.39/40, not any of the groups mapped to the RP.)

That’s my understanding, based on the lessons and what I observed in the lab. So I’m not sure how to understand your statement that nothing needs to be configured on the routers except multicast routing for the RP to be known.
As to the puzzle of how IGMP requests are getting to my router over some distance, it is still a puzzle, and it seems I will have to analyze every step on the way between these routers. It will take some time, but it seems you don’t find it ‘normal’ either, so there is something special that is allowing IGMP to flow between these routers. I feel it has got to be related to MPLS.

Hello Vadim

Yes, you are correct; let me clarify. As stated in the Multicast PIM Auto RP lesson, the problem with the 224.0.1.40 address is a “chicken and egg” scenario: how can routers learn the RP address if they need the RP address to become members of the 224.0.1.40 group? So yes, the solution is either the Auto-RP listener or sparse-dense mode.

  • Remember that sparse-dense mode allows our multicast routers to use dense mode for multicast groups that don’t have an RP configured, so 224.0.1.40 will be flooded.
  • Also remember that using AutoRP listener, the router will use dense mode only for the 224.0.1.39 and 224.0.1.40 addresses.

So if you are seeing packets destined to 224.0.1.40 that are coming from another subnet, then one of these must have been configured.

I hope this has been helpful!

Laz

Hello Bhawandeep

Actually, the link-local multicast scope is 224.0.0.0/24. This means it ranges from 224.0.0.0 to 224.0.0.255 as shown in this Cisco documentation:

The RP announcement multicast address of 224.0.1.39 and the RP mapping address of 224.0.1.40 are actually found in what is known as the Internetwork Control range, specifically 224.0.1.0/24. So the addresses used by these services are indeed routable. Addresses in this range are used for protocol control traffic, including things like NTP. More about the Internetwork Control range can be found in RFC 5771:

https://www.rfc-editor.org/rfc/rfc5771.html#page-5

This means that the scope which defines the TTL does have meaning in this case, since these addresses can be routed through multiple routers to get to their destinations.

I hope this has been helpful!

Laz

Thanks Laz. I forgot they are from a different reserved range!

1 Like

My understanding is that if there is more than one mapping agent on the network, the effect on the source or receiver routers would be cumulative - that is, the mapping table of groups to RPs will be built up on the source and receiver routers, so they will have a map of the different groups on the different RPs. (Though it seems in some cases, like NX-OS, this may not be correct.)

  1. But what would happen if somebody added a new mapping agent (and RP) to the network (by mistake or maliciously) pointing to RP ‘2’, which advertises the same group as RP ‘1’? The client or source router now has two records for the same group - one saying RP ‘1’ is authoritative for it, and the other that it’s RP ‘2’. Which RP will the source router register with, and which RP will the client router join? I see a potential problem here, like a DoS attack.
  2. In the worst case, if only one mapping agent will be listened to (again, it seems at least NX is like that), what are the ways to help mitigate the potential issue when a rogue mapping agent and RP are added to the network?

Hello Vadim

Yes, that is correct. The process of choosing the RP is something that is done independently for each group. Also, each individual mapping agent independently and individually makes these decisions regardless of what other mapping agents do. As stated in this Cisco documentation:

The mapping agent receives announcements of intention to become the RP from Candidate-RPs. The mapping agent then announces the winner of the RP election. This announcement is made independently of the decisions by the other mapping agents.

In addition, when there are multiple mapping agents, conflicts are resolved following these rules:

  • If there are two announcements with the same group range but different RPs, the mapping agent will select the announcement with the highest RP IP address.
  • If there are two announcements where one group is a subset of another but the RPs are different, both will be sent.
  • All other announcements are grouped together without any conflict resolution.

Now, mapping agents send out their conclusions based on these rules to the 224.0.1.40 multicast group, which all regular routers join. Based on these announcements, each router is responsible for populating its own Auto-RP cache with the group-to-RP mappings. The cache contains both negative and positive entries.

When looking for an RP for a particular group, the Auto-RP algorithm will first look through the negative entries. If there is a match to a negative entry, no RP is used, and the group is considered to operate in dense mode. Any RP information in the negative entries is ignored, since RPs are not used in dense mode.

If the group does not match a negative entry, the algorithm will begin to search the positive list. In this list, because each group corresponds to a particular RP, there may be conflicts where multiple RPs try to serve overlapping group ranges. The receiving router uses the longest match rule to resolve all conflicts: if there are multiple matches, only the one with the longest prefix length is selected.
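You can inspect the result of this resolution on any router in the domain; a quick sketch (the group address 239.1.1.1 is just an example):

```
! The full Auto-RP cache of group-to-RP mappings this router has learned
show ip pim rp mapping
!
! Which RP this router has actually selected for a specific group
show ip pim rp 239.1.1.1
```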

So you see, any conflicts, even due to malicious misinformation given by a rogue mapping agent, are ultimately resolved within each multicast router with very specific criteria.

I hope this has been helpful!

Laz