Hello Michael
A response of (*, 239.0.78.141), 1w3s/00:02:54, RP 0.0.0.0
tells us that an RP has not been defined. Either enable auto RP or manually define the RP in order to have that information added.
I hope this has been helpful!
Laz
Thanks Laz that makes sense, many thanks.
Dear Rene,
In the configuration example, R3 is getting the RP mappings before we configure the ip pim autorp listener command. That means R3 knows the RP address, so R3 should be able to serve R4 if R4 wants to join a feed that is coming from R1 (as RP). But it does not until we configure the ip pim autorp listener command on R3. Why is that?
Hello Roshan
If R4 is a host that wants to participate in a multicast group, then yes, you are right, it can receive all the necessary information for such a case. However, we don't want R4 to function as a multicast participant, but as a multicast router that will allow hosts on networks connected to it to participate in multicast. In this case, R4 will not "know" anything because it is behind R3 and is not receiving any RP mapping packets. This is because we are using PIM sparse mode, which means that this traffic would only be forwarded when requested by a downstream router.
The solution is to use the ip pim autorp listener command, which ensures that traffic sent to the Auto-RP groups 224.0.1.39 and 224.0.1.40 gets flooded with dense mode. Once enabled, the RP mapping packets will be flooded to R4.
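As a minimal sketch (the command is global; in this topology it would go on R3 and R4, but adjust for your own lab):

```
! Flood only the two Auto-RP groups, 224.0.1.39 and
! 224.0.1.40, using dense mode; all other groups
! continue to operate in sparse mode.
ip pim autorp listener
```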
I hope this has been helpful!
Laz
Thank you Laz,
I think I have come across a bug or something, or maybe I had to clear the mroute cache at that time. It worked somehow after that anyway. Tx
Hi Rene and team,
I tried to replicate a lab similar to the one in this lesson, but I'm getting some unexpected results.
Here's the topology:
All RTR# devices have their interfaces configured with ip pim sparse-mode
, and all RTR# devices have ip multicast-routing
enabled. Also, OSPF is enabled on all RTR# interfaces in area 0. RTR# devices are OSPF neighbors on their directly connected interfaces.
RTR3's loopback0 interface (3.3.3.3) is configured to be the RP. RTR3 is also the configured mapping agent and an RP candidate:
interface Loopback0
ip address 3.3.3.3 255.255.255.255
ip pim sparse-mode
!
ip pim rp-address 3.3.3.3
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
I did a show ip pim rp mapping
on RTR1 and expected to see it empty (because all interfaces are configured to be in sparse mode). RTR2 is getting the RP discovery messages sent to 224.0.1.40 from RTR3 since it's directly connected to RTR3, but RTR1 is somehow still learning the RP to be 3.3.3.3 from RTR2. I did a debug ip pim auto-rp
on RTR1 to confirm this.
RTR1#sh ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 3.3.3.3 (?), v2v1
Info source: 3.3.3.3 (?), elected via Auto-RP
Uptime: 00:17:49, expires: 00:01:58
RTR1#
RTR1#
*Feb 14 16:03:41.565: Auto-RP(0): Received RP-discovery packet of length 48, from 3.3.3.3, RP_cnt 1, ht 181
*Feb 14 16:03:41.565: (0): pim_add_prm:: 224.0.0.0/240.0.0.0, rp=3.3.3.3, repl = 1, ver =3, is_neg =0, bidir = 0, crp = 0
*Feb 14 16:03:41.565: Auto-RP(0): Update
*Feb 14 16:03:41.565: prm_rp->bidir_mode = 0 vs bidir = 0 (224.0.0.0/4, RP:3.3.3.3), PIMv2 v1
RTR1#
RTR2's show ip mroute
shows that it is sending the auto-RP discovery messages over an SPT to RTR1 via its Ethernet0/1 interface. I'm not sure why that is, though.
RTR2#show ip mroute
<output omitted>
(3.3.3.3, 224.0.1.40), 00:21:35/00:02:12, flags: LT
Incoming interface: Ethernet0/2, RPF nbr 192.168.2.2
Outgoing interface list:
**Ethernet0/1**, Forward/Sparse, 00:21:35/00:02:31
Meanwhile at the other end of the topology, RTR4 does not appear to be forwarding the RP-discovery messages it receives on Et0/3 over to RTR5 on its Et0/0 interface:
RTR4#show ip mroute
<output omitted>
(3.3.3.3, 224.0.1.40), 00:49:08/00:02:25, flags: PLT
Incoming interface: Ethernet0/3, RPF nbr 192.168.3.1
**Outgoing interface list: Null**
RTR5 doesn't seem to know about the RP:
RTR5#sh ip pim rp mapping
PIM Group-to-RP Mappings
RTR5#
Hello Sam
This is an interesting question that you bring up, and it is not readily clear as to why you see this behaviour. After spending some time with Rene labbing this up, the reason for this behaviour has been made clear, and we have all learned something in the process!
It is true that if you have R3 become the mapping agent as in your topology, it will send out its RP mapping advertisements to the 224.0.1.40 address, which means these advertisements should only reach R2 and R4 since we're using sparse mode. However, you will find that if R2 is the DR for the link between R2 and R1, then it will forward the mapping advertisement to R1 anyway; R2 will operate in dense mode for the 224.0.1.40 group. If it is not the DR for that link, then it will not forward it.
You can see this clearly in the following Cisco documentation:
Specifically it states:
All the PIM routers join the multicast group 224.0.1.40 when the first PIM-enabled interface comes up. This interface is seen in the outgoing interface list for this group if it is the Designated Router (DR) on that PIM Segment.
For your specific topology, R2 is indeed the DR for the R1-R2 segment since I assume it has the higher IP address, so it will forward the mapping messages. Conversely, R5 is probably the DR for the R4-R5 segment since R5 probably has the higher IP address, so R4 will not forward the advertisement. Try to change the DR priorities on the R4-R5 segment and see if that will cause the behaviour to change.
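As a hedged sketch, assuming R4's interface toward R5 is Ethernet0/0 as in your topology (the priority value 10 is arbitrary; the default is 1, and the highest priority wins the DR election):

```
RTR4(config)# interface Ethernet0/0
RTR4(config-if)# ip pim dr-priority 10
```

You can then confirm which router is the DR on the segment with show ip pim neighbor or show ip pim interface.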
In any case, it's not a good idea to simply depend on this behaviour of the forwarding of mapping advertisements by the DR. This is why two solutions have been provided to allow the RP to be learned by all multicast routers in the topology: PIM sparse-dense mode and the PIM Auto-RP listener.
I hope this has been helpful!
Laz
Hi Rene/Laz,
Once the MA has been configured, why does the mcast routing table on R2 only show GigabitEthernet0/1 as the outgoing interface for the (2.2.2.2, 224.0.1.40) entry (RP Mapping packets)? Should it not also have GigabitEthernet0/2?
Hello Bhawandeep
When looking at the multicast routing table, the outgoing interface list shows the interfaces on which either a PIM join or an IGMP membership report has been received. It doesn't signify the interfaces from which the RP mapping packets are being sent; RP mapping packets are sent out of all multicast-enabled interfaces.
Similarly, you will see that when R1 is configured to advertise itself as an RP, the SPT entry for 224.0.1.39 actually has Null as the outgoing interface list. Again, this does not mean that it is not sending out the RP announcements; RP announcements are sent out of all multicast-enabled interfaces.
I hope this has been helpful!
Laz
Hi Laz. That makes sense. I was thinking the OIL determines the interfaces on which RP mapping packets are sent. Thank you for clarifying.
Can the candidate RP and the mapping agent be on the same device? What does the election of the RP depend on? What is the best practice for choosing the RP? I guess that an RP should be a device located at a central point of the network, but I'm not 100% sure.
Regards
Rodrigo
Hello Rodrigo
It is possible to configure a router to be both a mapping agent as well as a candidate RP. Simply configure both the ip pim send-rp-discovery
command as well as the ip pim send-rp-announce
command on the same device.
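For example, here is a minimal sketch of both roles on one device (Loopback0 and scope 16 are assumptions carried over from the earlier example; adjust for your own network):

```
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode
!
! Candidate RP: announce Loopback0 as an RP
ip pim send-rp-announce Loopback0 scope 16
!
! Mapping agent: advertise the elected group-to-RP mappings
ip pim send-rp-discovery Loopback0 scope 16
```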
The mapping agent will select one RP per group. As stated in the lesson:
When two candidate RPs want to become RP for the same group(s) then the mapping agent will prefer the RP with the highest IP address.
In general, it doesn't matter where the RP is. The RP is needed only to start new sessions, so the RP experiences little overhead from traffic flow or processing. However, the mapping agent should be placed centrally in order to facilitate RP elections, especially in a hub and spoke topology.
I hope this has been helpful!
Laz
hello Laz/Rene,
in a production multicast network, where you want to use Sparse-Mode and Auto-RP, would you configure ip pim autorp listener on all the multicast Routers, or just on the strictly needed ones, like in this lesson? I guess the first option, but I am not sure.
Thanks
Hello Giacomo
Ideally, you should apply the command only where it is needed. This way you can avoid any unnecessary dense mode flooding of the Auto-RP groups of 224.0.1.39 and 224.0.1.40. However, keep in mind that even if you do apply the command in your whole topology, it is only these two groups that are treated using the dense mode flooding. In most topologies, this would not cause an issue with overhead in bandwidth, CPU, and memory.
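Either way, you can verify how the two Auto-RP groups are being handled on a given router; with the listener enabled you would expect to see those groups operating in dense mode (exact output varies by platform):

```
show ip pim rp mapping
show ip mroute 224.0.1.39
show ip mroute 224.0.1.40
```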
I hope this has been helpful!
Laz
I skimmed through the comments but maybe I missed this information. Can we have multiple mapping agents? If yes, do all of them get used at the same time? If No, how is the active mapping agent selected?
In the case of Bootstrap, a similar role is performed by the BSR router. In the case of multiple BSRs, first the priority is compared, and if it turns out to be the same, then the candidate BSR with the highest IP address is selected (correct me if I am wrong).
Hello Muhammad
Within a single PIM domain, you can have more than one router configured as a mapping agent. Unlike BSR, there is no election among mapping agents; every configured mapping agent operates at the same time. The following command enables a router as a mapping agent:
ip pim send-rp-discovery loopback 0 scope 10
This specifies Loopback0 as the source of the mapping agent's discovery messages, with a TTL of 10. The scope essentially determines the maximum distance, in number of hops, that the RP discovery messages can travel.
Now if there is more than one router configured as a mapping agent within a PIM domain, then each mapping agent makes this announcement independently of the decisions of the other mapping agents.
Concerning BSR, yes you are correct. As Rene states in his Multicast PIM Bootstrap (BSR) lesson:
There can be only one active BSR in the network. BSR routers will listen for messages from other BSR routers, the BSR router with the highest priority will become the active BSR. The highest BSR priority is 255 (the higher the better). When the priority is the same then the router with the highest IP address will become the BSR.
I hope this has been helpful!
Laz
I got the general concepts but have more specific questions. It seems it can be said (for simplicity) that IGMP is used by receivers to tell first-hop routers they need a specific multicast group; PIM is needed for routers to let the source or RP know that they want to receive the particular multicast that their clients requested; and multicast routing needs to be enabled so routers can use routing tables to direct multicast traffic when they start receiving it. The questions are as follows:
Hello Vadim
Yes that is correct. IGMP operates between a multicast host and a multicast router on the local network segment. Initial IGMP packets are sent to the 224.0.0.1 all hosts multicast address to find the local multicast router. This multicast address is not routable, which means it can never be used to reach some device outside of the local subnet.
I'm not sure what you are referring to here. What is "this traffic" that you are referring to, the IGMP join message? If so, then yes, what you describe seems to be correct, but as you say, it isn't considered best practice as it could be difficult to administer.
Remember that the RP is there just to initiate the multicast traffic. Once that communication is established, the host may find a better route to the multicast source, bypassing the RP completely. So if the RP fails, it will not affect any of the already established multicast sessions. Take a look at the detailed explanation in the Multicast PIM Sparse Mode lesson that describes this process.
Hmm, that's interesting. Can you share your packet capture with us so we can take a look as well? And let us know a little bit more about your topology? That way we can respond more accurately.
I hope this has been helpful!
Laz
Thanks for the explanation. For item "2", I meant the PIM Auto-RP registration traffic. It looks like if a router is the DR on a segment between two routers, then when its neighbor sends traffic to 224.0.1.40, the DR will pass it through even if it is not explicitly running in dense mode and does not have the autorp listener enabled.
On a side note, the multicast topic seems more complicated than it looks, and it already looks rather complicated. Too many "if-then" conditions and "gotchas". Which is a case in point when discussing the IGMP issue with routers (I call it an issue until I truly understand what's going on). The capture of the debug on the router that drops the IGMP packets from other routers is below:
debug mfib netio drop location 0/0/cpu0:
LC/0/0/CPU0:Nov 23 10:12:08.417 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.10, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:08.437 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.10, 224.0.1.39) ttl-1 prot-2 matched (*, 224.0.1.39/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.267 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.33.95, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:09.884 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:10.884 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.557 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.926 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.9, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:11.956 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.34.138, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.587 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.43, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
LC/0/0/CPU0:Nov 23 10:12:12.646 UTC: netio[276]: 1MFIB-egress : Pkt (192.168.29.204, 224.0.1.40) ttl-1 prot-2 matched (*, 224.0.1.40/32) dropped [IGMP packet]
The simplified diagram for routers is as follows (all routers behind the MPLS cloud are several hops away, based on the traceroute sent from them):
Thanks