Re: Multicast over NBMA makes my brain hurt

From: MSFC-EO60 <"Baldwin,>
Date: Mon, 12 Nov 2012 06:20:02 -0600

You need ip pim sparse-mode on Loopback0 of R1.
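
For reference, a minimal sketch of that fix against the R1 config quoted
below (the RP address 1.1.1.1 sits on Loopback0, and R1 needs PIM enabled
on the interface that owns the RP address before it will behave as the RP):

interface Loopback0
 ip address 1.1.1.1 255.255.255.0
 ip pim sparse-mode

Afterward, "show ip pim interface" on R1 should list Loopback0 alongside
Serial1/0.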

Sent from my iPad

On Nov 12, 2012, at 5:31 AM, "Keller Giacomarro" <keller.g_at_gmail.com> wrote:

> For your full review, if you like...
> R1: http://pastebin.com/raw.php?i=cHFzBDEh
> R2: http://pastebin.com/raw.php?i=Dn6ATSPM
> R3: http://pastebin.com/raw.php?i=yEwnnc8y
> GNS3: http://pastebin.com/raw.php?i=A4MnALTq
>
> Configs and GNS3 are written for IOS 15.1. I have also tried this in 12.4
> with identical results.
>
> Note that I did this in GNS3 after I experienced the problem with my real
> equipment, so I do not believe this is a GNS3 problem.
>
> Thoughts appreciated!
>
> Keller Giacomarro
> keller.g_at_gmail.com
>
>
> On Mon, Nov 12, 2012 at 4:47 AM, Keller Giacomarro <keller.g_at_gmail.com> wrote:
>
>> Yep, set to dr-priority 1000 in my configs.
>>
>> R1#show ip pim int
>>
>> Address          Interface        Ver/   Nbr    Query  DR     DR
>>                                   Mode   Count  Intvl  Prior
>> 1.1.1.1          Loopback0        v2/S   0      30     1      1.1.1.1
>> 10.0.0.1         Serial1/0        v2/S   0      30     1000   10.0.0.1
>> R1#
>>
>>
>> Keller Giacomarro
>> keller.g_at_gmail.com
>>
>>
>>
>> On Mon, Nov 12, 2012 at 4:38 AM, oo IPX <oispxl_at_gmail.com> wrote:
>>
>>> Have you tried making R1 the DR?
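>>>
>>> (For reference, DR election on the segment goes to the highest
>>> ip pim dr-priority, so something like this on the hub interface:
>>>
>>> interface Serial0/0
>>>  ip pim dr-priority 90
>>>
>>> is enough as long as the spokes stay at the default priority of 1.
>>> My config below uses exactly that.)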
>>>
>>> R3(config)#do ping 224.1.1.1 rep 100
>>>
>>> Type escape sequence to abort.
>>> Sending 100, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
>>> ......
>>> Reply to request 6 from 10.1.123.2, 84 ms
>>> Reply to request 7 from 10.1.123.2, 104 ms
>>> Reply to request 8 from 10.1.123.2, 76 ms
>>> Reply to request 9 from 10.1.123.2, 56 ms
>>> Reply to request 10 from 10.1.123.2, 76 ms
>>> Reply to request 11 from 10.1.123.2, 56 ms
>>>
>>>
>>>
>>> Those lost pings at the start are because I cleared the mroute table.
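>>>
>>> (That was something like "clear ip mroute *", for reference; it flushes
>>> the multicast routing table, so the tree has to be rebuilt and a few
>>> echoes time out while that happens.)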
>>>
>>>
>>> R3(config)#do sh ip pim inter s0/0 de | I DR
>>> PIM DR: 10.1.123.1
>>> R3(config)#
>>>
>>> R2(config)#do sh ip pim inter s0/0 de | I DR
>>> PIM DR: 10.1.123.1
>>> R2(config)#
>>>
>>> R1(config-if)#do sh run inter s0/0
>>> Building configuration...
>>>
>>> Current configuration : 374 bytes
>>> !
>>> interface Serial0/0
>>>  ip address 10.1.123.1 255.255.255.0
>>>  ip pim dr-priority 90
>>>  ip pim nbma-mode
>>>  ip pim sparse-mode
>>>  encapsulation frame-relay
>>>  no ip split-horizon eigrp 1
>>>  no ip mroute-cache
>>>  clock rate 2000000
>>>  frame-relay map ip 10.1.123.1 111
>>>  frame-relay map ip 10.1.123.2 111 broadcast
>>>  frame-relay map ip 10.1.123.3 101 broadcast
>>>  no frame-relay inverse-arp
>>> end
>>>
>>>
>>> On Mon, Nov 12, 2012 at 2:32 PM, Keller Giacomarro <keller.g_at_gmail.com> wrote:
>>>
>>>> Okay, I must be totally missing the boat here, but I can't get Multicast
>>>> over NBMA to work AT ALL.
>>>>
>>>> R2-----\
>>>>         +------ R1
>>>> R3-----/
>>>>
>>>> All interfaces are physical interfaces with static IPv4 mappings. R1 has
>>>> DLCIs to both spoke routers, and the spoke routers only have DLCIs to R1.
>>>> This is as simple as I know how to get it.
>>>>
>>>> *** R1 ***
>>>> interface Serial1/0
>>>>  ip address 10.0.0.1 255.255.255.0
>>>>  ip pim dr-priority 1000
>>>>  ip pim nbma-mode
>>>>  ip pim sparse-mode
>>>>  encapsulation frame-relay
>>>>  frame-relay map ip 10.0.0.3 103 broadcast
>>>>  frame-relay map ip 10.0.0.2 102 broadcast
>>>>  no frame-relay inverse-arp
>>>> !
>>>> interface Loopback0
>>>>  ip address 1.1.1.1 255.255.255.0
>>>> !
>>>> ip pim rp-address 1.1.1.1
>>>>
>>>> *** R2 ***
>>>> interface Serial1/0
>>>>  ip address 10.0.0.2 255.255.255.0
>>>>  ip pim sparse-mode
>>>>  encapsulation frame-relay
>>>>  frame-relay map ip 10.0.0.3 201
>>>>  frame-relay map ip 10.0.0.1 201 broadcast
>>>> !
>>>> interface Loopback0
>>>>  ip address 2.2.2.2 255.255.255.255
>>>>  ip pim sparse-mode
>>>>  ip igmp join-group 229.0.0.2
>>>> !
>>>> ip route 1.1.1.1 255.255.255.255 10.0.0.1
>>>> ip pim rp-address 1.1.1.1
>>>>
>>>> *** R3 ***
>>>> interface Serial1/0
>>>>  ip address 10.0.0.3 255.255.255.0
>>>>  ip pim sparse-mode
>>>>  encapsulation frame-relay
>>>>  frame-relay map ip 10.0.0.2 301
>>>>  frame-relay map ip 10.0.0.1 301 broadcast
>>>> !
>>>> ip route 1.1.1.1 255.255.255.255 10.0.0.1
>>>> ip pim rp-address 1.1.1.1
>>>>
>>>> *** Testing ***
>>>> Ping is from R3 to 229.0.0.2, which is joined on R2. The first ping goes
>>>> through fine, all others drop until the mroute times out on R1.
>>>>
>>>> ---
>>>> R3(config)#do ping 229.0.0.2 re 10
>>>> Type escape sequence to abort.
>>>> Sending 10, 100-byte ICMP Echos to 229.0.0.2, timeout is 2 seconds:
>>>>
>>>> Reply to request 0 from 2.2.2.2, 48 ms.........
>>>> R3(config)#
>>>> ---
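>>>>
>>>> (To see where the drops happen, "show ip mroute 229.0.0.2" on R1 is the
>>>> place to look; the (S,G) entry there loses its outgoing interface right
>>>> after the first packet, as described below.)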
>>>>
>>>> Debugs indicate that R2 (subscriber router) is sending a PIM Prune to R1
>>>> (the hub/RP) as soon as the first packet is received. R2 retains the (S,G)
>>>> mapping with an incoming interface of s1/0, but the prune message causes R1
>>>> to remove S1/0 from the OIL. Any packets after the first are dropped on R1
>>>> due to the olist being null.
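>>>>
>>>> (The prune itself is visible with "debug ip pim" on R1 and R2, for
>>>> anyone who wants to reproduce this; the R1 (S,G) entry then reads
>>>> "Outgoing interface list: Null".)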
>>>>
>>>> I don't understand why the PIM Prune is being generated on R2 for R1 --
>>>> isn't that the router that's sending the stream? Most of all, I don't
>>>> understand why something that seems so simple isn't working!
>>>>
>>>> In conclusion, I hate multicast!
>>>>
>>>> Appreciate any help you might be able to provide. =)
>>>>
>>>> Keller Giacomarro
>>>> keller.g_at_gmail.com
>>>>
>>>>
>
>

Blogs and organic groups at http://www.ccie.net
Received on Mon Nov 12 2012 - 06:20:02 ART

This archive was generated by hypermail 2.2.0 : Sat Dec 01 2012 - 07:27:50 ART