Multicast PIM Auto RP

Hello Vadim

The 224.0.1.40 address is used by the RP mapping agent to send RP discovery (group-to-RP mapping) messages to all other PIM routers. This is a routable multicast address. There is no specialized configuration (DR, auto-rp listener, or dense/sparse mode) needed on a router for it to forward such packets. The only thing that is necessary is to have multicast routing enabled. So it is normal behavior for a multicast-routing-enabled router to forward these packets.

When you enable PIM on an interface, IGMP is also enabled on that interface. By default, Cisco routers that have multicast routing enabled automatically join the 224.0.1.40 group. This happens even if you only enable PIM sparse mode and configure a static RP; you’ll still see this Auto-RP traffic. That’s normal.
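
If you want to see this for yourself, here is a minimal IOS sketch (the interface name and addresses are just placeholders). After enabling multicast routing and PIM sparse mode, the router joins the 224.0.1.40 group on its own, and the output of the show command will look roughly like this:

ip multicast-routing
!
interface GigabitEthernet0/1
 ip pim sparse-mode

R1#show ip igmp groups 224.0.1.40
Group Address    Interface            Uptime    Expires   Last Reporter
224.0.1.40       GigabitEthernet0/1   00:00:31  00:02:28  192.168.12.1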

Now, concerning the IGMP packet drops: does the router in the middle have PIM/IGMP enabled, or not at all? If it doesn’t, that could be what is causing the drops.

That’s difficult to say without knowing more about what the MPLS infrastructure looks like.

The truth is that understanding IGMP, PIM, DR, Auto-RP, RP, and so on can become complicated, and it needs a methodical, step-by-step process that adds complexity slowly and in an organized manner. I suggest you go through the following lessons to get a better feel for how these features interact and affect each other:

In addition, doing Wireshark captures and examining the IGMP and PIM packets really helps you understand what’s going on behind the scenes. Also, it’s easier to begin with a simple topology without the added complexity of MPLS…

I hope this has been helpful!

Laz

Thank you, Laz, that further clarifies the matter.
At the same time, I’m still confused about item ‘2’. Sure, 224.0.1.40 is routable as soon as multicast routing is enabled, but it is also pruned in sparse mode. The way I understood your lesson on Auto-RP, the auto-rp listener is needed exactly because this traffic will not be sent from the receiving router on to the next router unless it is requested (by a report from that other router) or PIM is configured to send it regardless (by switching to dense mode or using the autorp listener). So your saying that nothing needs to be configured (aside from multicast routing) for the router to find the RP threw me off completely.
I did read all the lessons mentioned, but in the end I had to go back to the lab. It is somewhat limited (for some reason debug does not run on the ASR image), but I nevertheless collected some data, and so far I have been able to confirm the following:

  1. Enabling multicast-routing on IOS does not do anything by itself, except allowing the router to compare multicast traffic (if there is any) against the existing routing table. There is no notion of RPs, no registering of any multicast groups, etc. It stands to reason that in this case no Auto-RP-related traffic would pass through this router (it would be forwarded if it were ever sent to it, but it won’t be, since without PIM and IGMP the router never requests it, at least while the upstream routers are not in dense mode).
  2. Enabling PIM on IOS interfaces (I’m not sure yet what the global ‘router pim’ does) automatically enables IGMP (v3 by default, which can be switched to IGMPv2 globally). That also enables Auto-RP membership. An IGMP report is then sent on the PIM interfaces, requesting the Auto-RP group traffic.
  3. In the case of the ASR OS, enabling multicast-routing automatically enables PIM, IGMP, and Auto-RP membership on all PIM interfaces. However, contrary to IOS, nothing else happens: no IGMP report is sent out for the Auto-RP group. It just sits there and silently waits.
  4. This is yet to be tested, but I assume XE and NX-OS behave similarly to IOS.
  5. So it seems the conclusion here (for sparse mode at least) is that:
    a. If one has two (or more) neighboring IOS routers, they will join and request the Auto-RP groups as soon as they are configured with multicast routing AND PIM. If a router’s neighbor is the DR, then nothing else is needed: it will attempt a PIM join toward the RP and let its downstream neighbor know where the RP is. If, however, the neighboring router is not the DR (maybe the requesting router itself is), then while it will get the IGMP report for the Auto-RP group, it may not join toward the RP itself (that is a DR function) and so won’t help the receiving router get the RP traffic. So while it is not always necessary, it may be prudent to enable sparse-dense mode or the autorp listener on IOS routers (except the receiver itself). A quick way to check which neighbor is the DR is sketched after this list.
    b. If an ASR router is a receiver or an ‘in-path’ router for some other router, then enabling PIM on it does not cause it to request the RP announcements. If it is adjacent to the RP, it will receive IGMP requests from the RP as well as the RP announcements, but it will not pass them downstream unless there is an IOS router downstream (which would request this traffic) and it is the DR for the downstream segment. And if it is at least one hop away from the RP, it will never get the RP announcements, since it never asks for them. Which is why it has to have the autorp listener configured (or sparse-dense mode).
    c. So enabling the ‘autorp listener’ is almost a must on all ASRs and a very good idea on all IOS routers. When it is enabled, the ASR will first send IGMP reports for the Auto-RP groups to its PIM neighbors and will then send the Auto-RP group traffic out of all PIM interfaces. When enabled on IOS, it only adds sending the Auto-RP group traffic out of the PIM interfaces, regardless of whether it was requested. (The Auto-RP groups meant here are only 224.0.1.39/40, not any of the groups mapped to the RP.)
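
For point (a) above: on IOS, a quick way to check which neighbor is the DR on a segment is show ip pim neighbor; the DR flag in the Mode column marks it. The addresses and interface below are just placeholders, and the output will look roughly like this:

R1#show ip pim neighbor
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.12.2      GigabitEthernet0/1       00:10:21/00:01:33 v2    1 / DR S P G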

That’s my understanding based on the lessons and what I observed in the lab. So I’m not sure how to understand your statement that nothing needs to be configured on the routers except multicast routing for the RP to be known.
As for how the IGMP requests are getting to my router over some distance, it is still a puzzle, and it seems I will have to analyze every hop on the way between these routers. It will take some time, but it seems you don’t find it ‘normal’ either, so there is something special allowing IGMP to flow between these routers. I feel it has to be related to MPLS.

Hello Vadim…

Yes, you are correct, let me clarify. As stated in the Multicast PIM Auto RP lesson, the problem with the 224.0.1.40 address is a “chicken and egg” scenario. How can routers learn the RP address if they need an RP in order to receive traffic sent to the 224.0.1.40 group? So yes, the solution is either the Auto-RP listener or sparse-dense mode.

  • Remember that sparse-dense mode allows our multicast routers to use dense mode for multicast groups that don’t have an RP configured, so 224.0.1.40 will be flooded.
  • Also remember that with the AutoRP listener, the router will use dense mode only for the 224.0.1.39 and 224.0.1.40 addresses.

So if you are seeing packets destined to 224.0.1.40 that are coming from another subnet, then one of these must have been configured.
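
On IOS, the two options look roughly like this (the interface name is a placeholder); you would pick one approach or the other:

! Option 1: keep sparse mode everywhere, flood only the Auto-RP groups in dense mode
ip multicast-routing
ip pim autorp listener
!
interface GigabitEthernet0/1
 ip pim sparse-mode

! Option 2: sparse-dense mode, any group without an RP falls back to dense mode
interface GigabitEthernet0/1
 ip pim sparse-dense-mode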

I hope this has been helpful!

Laz

Hello Bhawandeep

Actually, the link-local multicast scope is 224.0.0.0/24. This means it ranges from 224.0.0.0 to 224.0.0.255 as shown in this Cisco documentation:

The RP announcement address of 224.0.1.39 and the RP mapping address of 224.0.1.40 are actually found in what is known as the Internetwork Control range, specifically 224.0.1.0/24. So the addresses used by these services are indeed routable. Addresses in this range are used for protocol control traffic, including things like NTP and others. More about the Internetwork Control range can be found in RFC 5771:

https://www.rfc-editor.org/rfc/rfc5771.html#page-5

This means that the scope which defines the TTL does have meaning in this case, since these addresses can be routed through multiple routers to get to their destinations.
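
For example, on the routers that originate these messages, the scope keyword sets the TTL of the Auto-RP packets, which is what lets them cross multiple hops (the interface and scope value below are only illustrative):

! On a candidate RP: announce itself to the mapping agents on 224.0.1.39
ip pim send-rp-announce Loopback0 scope 10
!
! On a mapping agent: send group-to-RP mappings to 224.0.1.40
ip pim send-rp-discovery Loopback0 scope 10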

I hope this has been helpful!

Laz

Thanks Laz. Forgot they are from a different reserved range!

My understanding is that if there is more than one mapping agent on the network, the effect on the source or recipient routers will be cumulative - that is, the mapping table of groups to RPs will be added to on the source and recipient routers, so they will have a map of the different groups on the different RPs. (Though it seems in some cases, like NX-OS, this may not be correct.)

  1. But what would happen if somebody adds a new mapping agent (and RP) to the network (by mistake or maliciously) pointing to RP ‘2’, which advertises the same group as is already served by RP ‘1’? The client or source router now has two records for the same group - one saying RP ‘1’ is authoritative for it and the other saying it is RP ‘2’. Which RP will the source router register with, and which RP will the client router join? I see a potential problem here, like a DoS attack.
  2. In a worse case, where only one mapping agent is listened to (again, it seems at least NX-OS behaves like that), what are the ways to help mitigate the potential issue when a rogue mapping agent and RP are added to the network?

Hello Vadim

Yes, that is correct. The process of choosing the RP is something that is done independently for each group. Also, each mapping agent makes these decisions independently, regardless of what other mapping agents do. As stated in this Cisco documentation:

The mapping agent receives announcements of intention to become the RP from Candidate-RPs. The mapping agent then announces the winner of the RP election. This announcement is made independently of the decisions by the other mapping agents.

In addition, when there are multiple mapping agents, conflicts are resolved following these rules:

  • If there are two announcements with the same group range but different RPs, the mapping agent will select the announcement with the highest RP IP address.
  • If there are two announcements where one group is a subset of another but the RPs are different, both will be sent.
  • All other announcements are grouped together without any conflict resolution.

Now, mapping agents send out their conclusions based on these rules to the 224.0.1.40 multicast group, which all regular routers join. Based on these messages, each router is responsible for populating its own Auto-RP cache with the group-to-RP mappings. The cache contains both negative and positive entries.

When looking for an RP for a particular group, the Auto-RP algorithm will look through the negative entries. If there is a match to a negative entry, no RP is used, and the group is considered to operate in dense mode. Any RP information in the negative entries will be ignored since RPs are not used in dense mode.

If the group does not match a negative entry, the algorithm will begin to search in the positive list.
Now in this list, because each group corresponds to a particular RP, there may be conflicts where multiple RPs try to serve overlapping group ranges. The receiving router uses the longest match rule to resolve all conflicts. If there are multiple matches, only the one with the longest prefix length is selected.
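
As a made-up illustration of the longest match rule: if one RP announces 239.0.0.0/8 and another announces 239.1.0.0/16, a router looking up group 239.1.1.1 selects the RP serving 239.1.0.0/16, since /16 is the longer prefix. On IOS you can inspect the resulting cache with show ip pim rp mapping; the output looks roughly like this (all addresses are hypothetical):

R1#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 239.0.0.0/8
  RP 10.1.1.1 (?), v2v1
    Info source: 10.0.0.2 (?), elected via Auto-RP
         Uptime: 00:12:04, expires: 00:02:41
Group(s) 239.1.0.0/16
  RP 10.2.2.2 (?), v2v1
    Info source: 10.0.0.2 (?), elected via Auto-RP
         Uptime: 00:08:31, expires: 00:02:47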

So you see, any conflicts, even due to malicious misinformation given by a rogue mapping agent, are ultimately resolved within each multicast router with very specific criteria.

I hope this has been helpful!

Laz

Thank you, that helps clarify the issue. It seems that there is potential for somebody to configure an RP on the network advertising some overlapping groups and give it the highest IP of all the RPs. But that should not be a problem - the new RP simply becomes the RP, which is not a big deal, since the traffic to the RP is very small (clients switch to source trees as soon as they get the first packet from the RP), and so the location of the RP is not hugely important. One way or the other, traffic will be established between source and recipient.

Hello Vadim

Yes, that is correct, that is a very clear and understandable description. I’m glad that was helpful!

Laz

Hi team,

First, thank you for the quality of these lessons. I’ve been reading Cisco Press books so far, and I’m learning a lot with you.

I have a question about R3’s multicast routing table. Why do we have:

(*, 224.0.1.39), 00:05:28/00:01:57, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet0/2, Forward/Sparse, 00:05:28/00:01:57

Shouldn’t this entry only be created when an RP announcement is received?

Hello David

That’s a great observation. Indeed it seems that R3 should not have an mroute to 224.0.1.39 since mapping agents use this address to listen for candidate announcements. It makes sense for R2 to have it, but what about R3?

Well, it turns out that the mapping agent (R2) will send out IGMP membership reports for the 224.0.1.39 group as soon as the send-rp-discovery command is applied.
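
For reference, the mapping agent role is enabled with a single command on R2, along these lines (the source interface and scope value here are just an example):

R2(config)#ip pim send-rp-discovery Loopback0 scope 10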

This is confirmed with the following command on R3:

R3#show ip igmp membership 224.0.1.39

 Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface 
 *,224.0.1.39                   192.168.23.2    00:01:45 02:08 2A     Gi0/2

If you enable debugs you will see the relevant membership report as well. This is why the group appears in the mroute table of R3, not because R3 received an RP announcement, but because R2 sent an IGMP membership report.

I hope this has been helpful!

Laz