From: Dale Kling (dalek77@gmail.com)
Date: Thu Sep 04 2008 - 18:42:49 ART
Thanks Rado. I accidentally deleted "ip pim ssm default" off the Ps when I
scrubbed the configs, but it was there initially. I ran "sh ip pim vrf
Event_A neigh" and nothing comes up; there are no PIM neighbors between the
PEs in that VRF. Interestingly enough, when I sent multicast traffic on the
vrf I noticed (S,G)s being built on the P routers for the multicast group
inside the vrf, not for the mdt group. It doesn't look like the MDT
functionality is working at all. If MDT were working, I should only see the
mdt group on the Ps, right? I'll take a look at it again tomorrow.
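For reference, here is roughly what I plan to check on the Ps tomorrow to see
whether the default MDT tree is actually being built (232.1.1.1 is the mdt
default group from the VRF, and 150.1.2.2 / 150.1.5.5 are the PE loopbacks):

P1# show ip mroute 232.1.1.1
P1# show ip rpf 150.1.2.2
P1# show ip rpf 150.1.5.5
P1# show ip pim neighbor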
thanks,
Dale
On Thu, Sep 4, 2008 at 5:24 PM, Rado Vasilev <decklandv@gmail.com> wrote:
> Hi,
>
> I don't see ``ip pim ssm default`` configured on P1 & P2...
> The rest looks fine. If it still doesn't work, configure ``ip igmp
> join-group`` on PE1 and PE2 with two different groups and ping each of them
> from the remote PE, in order to prove your SSM setup first (without worrying
> about the VPN multicast support for a moment).
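>
> A minimal sketch of that test (the groups are just example addresses from
> the default SSM range 232/8, and on some IOS versions the join needs the
> explicit source keyword since these are SSM groups):
>
> PE1:
>  interface Loopback0
>   ip igmp join-group 232.10.10.10 source 150.1.5.5
>
> PE2:
>  interface Loopback0
>   ip igmp join-group 232.20.20.20 source 150.1.2.2
>
> Then ping 232.10.10.10 source Loopback0 from PE2, and ping 232.20.20.20
> source Loopback0 from PE1; if the pings are answered, SSM in the core is
> fine.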
>
> show ip pim vrf XXX neighbor is really helpful, as you'll be able to see
> right away whether you have PIM adjacencies between the PE routers over the
> Tunnel interface (created dynamically by the MDT configuration), and hence
> an operational SSM setup in the SP core.
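>
> Once the MDT does come up, something like this on the PEs should confirm it
> (Event_A and 232.1.1.1 taken from your configs):
>
> PE1# show ip pim vrf Event_A neighbor
> PE1# show ip pim vrf Event_A interface
> PE1# show ip mroute 232.1.1.1
>
> You'd expect the remote PE listed as a PIM neighbor over the dynamically
> created Tunnel interface, and a (PE-loopback, 232.1.1.1) entry in the global
> mroute table on the PEs and the Ps.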
>
> Regards,
> Rado
>
>
> Dale Kling wrote:
>
>> Hey Rado, here are my configs from the devices. There is no output from the
>> "sh ip pim vrf Event_A neighbor" command. I have multicast generators on
>> the outside of the PEs to generate and receive multicast traffic. I had
>> some CE devices on there before, so you might see some extraneous network
>> commands. If I take out the PIM SSM commands and configure a static RP on
>> any of the devices for the provider PIM instance, it works fine. Just using
>> SSM is a no-go.
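>>
>> (For reference, the static-RP variant that works is roughly this on all
>> four routers, in place of the SSM line, using P1's loopback as the RP:
>>
>>  no ip pim ssm default
>>  ip pim rp-address 150.1.3.3
>>
>> With that in place, the vrf multicast flows fine.)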
>> Mcast Generator <----> (g4/0/2)PE1(g4/0/0) <----> (g9/2)P1(g9/1) <------>
>> (g4/0/0)P2(g4/0/1) <------> (g4/0/1)PE2(g4/0/2) <----> Mcast Generator
>> Here comes the long paste:
>>
>> PE1
>> !
>> ip vrf Event_A
>> rd 25000:100
>> route-target export 25000:100
>> route-target import 25000:100
>> mdt default 232.1.1.1
>> !
>> ip multicast-routing
>> ip multicast-routing vrf Event_A
>>
>> !
>> interface Loopback0
>> ip address 150.1.2.2 255.255.255.255
>> ip pim sparse-mode
>> !
>> interface GigabitEthernet4/0/0
>> description Connection to P1 (g4/0/0)
>> mtu 9216
>> ip address 10.0.200.2 255.255.255.248
>> ip pim sparse-dense-mode
>> ip igmp version 3
>> no negotiation auto
>> mpls ip
>> !
>> interface GigabitEthernet4/0/2
>> description Multicast Generator
>> ip vrf forwarding Event_A
>> ip address 214.46.10.1 255.255.255.0
>> ip pim sparse-mode
>> negotiation auto
>> !
>> router ospf 100
>> router-id 150.1.2.2
>> log-adjacency-changes
>> network 10.0.200.2 0.0.0.0 area 0
>> network 150.1.2.2 0.0.0.0 area 0
>> network 172.16.10.1 0.0.0.0 area 0
>> network 192.168.100.1 0.0.0.0 area 0
>> !
>> router bgp 25000
>> no synchronization
>> bgp log-neighbor-changes
>> network 10.0.175.0 mask 255.255.255.248
>> network 150.1.2.0 mask 255.255.255.0
>> neighbor 10.0.175.2 remote-as 65000
>> neighbor 10.0.200.1 remote-as 25000
>> neighbor 10.0.200.1 next-hop-self
>> neighbor 10.0.200.1 send-community both
>> neighbor 150.1.5.5 remote-as 25000
>> neighbor 150.1.5.5 update-source Loopback0
>> no auto-summary
>> !
>> address-family vpnv4
>> neighbor 150.1.5.5 activate
>> neighbor 150.1.5.5 send-community both
>> exit-address-family
>> !
>> address-family ipv4 vrf Event_A
>> no synchronization
>> network 192.168.100.0
>> network 214.46.10.0
>> exit-address-family
>> !
>> !
>> ip pim ssm default
>> ip pim vrf Event_A rp-address 214.46.10.1
>>
>>
>> P1
>> !
>> ip multicast-routing
>> !
>> interface Loopback0
>> ip address 150.1.3.3 255.255.255.0
>> ip pim sparse-mode
>> !
>> interface GigabitEthernet9/1
>> description Connection to P2 (g4/0/0)
>> mtu 9216
>> ip address 10.0.100.1 255.255.255.252
>> ip pim sparse-dense-mode
>> ip igmp version 3
>> speed nonegotiate
>> tag-switching ip
>> !
>> interface GigabitEthernet9/2
>> description Connection to PE1 (g4/0/0)
>> mtu 9216
>> ip address 10.0.200.1 255.255.255.248
>> ip pim sparse-mode
>> ip igmp version 3
>> no ip mroute-cache
>> speed nonegotiate
>> tag-switching ip
>> !
>> interface Vlan157
>> description Management VLAN
>> ip address 10.0.157.1 255.255.255.0
>> !
>> router ospf 100
>> router-id 150.1.3.3
>> log-adjacency-changes
>> network 10.0.100.1 0.0.0.0 area 0
>> network 10.0.200.1 0.0.0.0 area 0
>> network 150.1.3.3 0.0.0.0 area 0
>> !
>> router bgp 25000
>> no synchronization
>> bgp log-neighbor-changes
>> network 150.1.3.0 mask 255.255.255.0
>> neighbor 10.0.100.2 remote-as 25000
>> neighbor 10.0.100.2 route-reflector-client
>> neighbor 10.0.100.2 send-community both
>> neighbor 10.0.200.2 remote-as 25000
>> neighbor 10.0.200.2 send-community both
>> no auto-summary
>> !
>>
>> P2
>> !
>> ip multicast-routing
>>
>> !
>> interface GigabitEthernet4/0/0
>> description Connection to P1 (g9/1)
>> mtu 9216
>> ip address 10.0.100.2 255.255.255.252
>> ip pim sparse-dense-mode
>> ip igmp version 3
>> no negotiation auto
>> mpls ip
>> cdp enable
>> !
>> interface GigabitEthernet4/0/1
>> description Connection to PE2 (g4/0/1)
>> mtu 9216
>> ip address 10.0.200.9 255.255.255.248
>> ip pim sparse-dense-mode
>> ip igmp version 3
>> no negotiation auto
>> mpls ip
>> cdp enable
>>
>> !
>> router ospf 100
>> router-id 150.1.4.4
>> log-adjacency-changes
>> network 10.0.100.2 0.0.0.0 area 0
>> network 10.0.200.9 0.0.0.0 area 0
>> network 150.1.4.4 0.0.0.0 area 0
>> !
>> router bgp 25000
>> no synchronization
>> bgp log-neighbor-changes
>> network 150.1.4.0 mask 255.255.255.0
>> neighbor 10.0.100.1 remote-as 25000
>> neighbor 10.0.100.1 route-reflector-client
>> neighbor 10.0.100.1 send-community both
>> neighbor 10.0.200.10 remote-as 25000
>> neighbor 10.0.200.10 send-community both
>> no auto-summary
>> !
>>
>>
>>
>> PE2
>>
>> !
>> ip vrf Event_A
>> rd 25000:100
>> route-target export 25000:100
>> route-target import 25000:100
>> mdt default 232.1.1.1
>>
>>
>> ip multicast-routing
>> ip multicast-routing vrf Event_A
>>
>> !
>> interface Loopback0
>> ip address 150.1.5.5 255.255.255.255
>> ip pim sparse-mode
>>
>> !
>> interface GigabitEthernet4/0/1
>> description Connection to P2 (g4/0/1)
>> mtu 9216
>> ip address 10.0.200.10 255.255.255.248
>> ip pim sparse-dense-mode
>> no negotiation auto
>> mpls ip
>> cdp enable
>> !
>> interface GigabitEthernet4/0/2
>> description Multicast Generator
>> ip vrf forwarding Event_A
>> ip address 214.46.20.1 255.255.255.0
>> ip pim sparse-mode
>> negotiation auto
>> !
>> router ospf 100
>> router-id 150.1.5.5
>> log-adjacency-changes
>> network 10.0.200.10 0.0.0.0 area 0
>> network 150.1.5.5 0.0.0.0 area 0
>> network 172.16.20.1 0.0.0.0 area 0
>> network 192.168.200.1 0.0.0.0 area 0
>> !
>> router bgp 25000
>> no synchronization
>> bgp log-neighbor-changes
>> network 10.0.175.8 mask 255.255.255.248
>> network 150.1.5.0 mask 255.255.255.0
>> neighbor 10.0.175.9 remote-as 65001
>> neighbor 10.0.200.9 remote-as 25000
>> neighbor 10.0.200.9 next-hop-self
>> neighbor 10.0.200.9 send-community both
>> neighbor 150.1.2.2 remote-as 25000
>> neighbor 150.1.2.2 update-source Loopback0
>> no auto-summary
>> !
>> address-family vpnv4
>> neighbor 150.1.2.2 activate
>> neighbor 150.1.2.2 send-community both
>> exit-address-family
>> !
>> address-family ipv4 vrf Event_A
>> no synchronization
>> network 214.46.20.0
>> exit-address-family
>> !
>> ip pim ssm default
>> ip pim vrf Event_A rp-address 214.46.10.1
>>
>> thanks,
>>
>> Dale
>>
>>
>> On Thu, Sep 4, 2008 at 10:41 AM, Rado Vasilev <decklandv@gmail.com> wrote:
>>
>> Hi Dale,
>>
>> Could you send the VRF/MDT and multicast configurations on the PE
>> routers and the multicast configuration on the P?
>> Also it'd be nice to have the output of ``show ip pim vrf XXX neighbor``
>> from both PE routers.
>>
>> Rado
>>
>> Here we go again; why is SSM eluding me? I can't seem to get SSM working
>> across my MPLS core for building the MDT. Can someone give me some ideas
>> of what to check?
>>
>> PE1(7600)<--->P1(6500)<--->P2(7600)<--->PE2(7600)
>>
>> I have "IP PIM SSM Default" configured on all four routers and
>> ip pim sparse
>> on all the interfaces, including the loopback on the PEs.
>>
>> I have a VRF configured on PE1 and PE2 that has unicast reachability
>> through the VRF and is configured with mdt default 232.1.1.1.
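>>
>> (To summarize the relevant multicast/MDT pieces, roughly:
>>
>> PE1 / PE2:
>>  ip multicast-routing
>>  ip multicast-routing vrf Event_A
>>  ip pim ssm default
>>  ip vrf Event_A
>>   mdt default 232.1.1.1
>>  interface Loopback0
>>   ip pim sparse-mode
>>
>> P1 / P2:
>>  ip multicast-routing
>>  ip pim ssm default
>>  plus ip pim sparse/sparse-dense mode on the core links.)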
>>
>> Here are the main mroute tables from all the routers. It's as if the MDT
>> default groups aren't being communicated across the core, so I can't get
>> multicast working through the VRF.
>>
>> When I configure a static RP on P1 and point all the routers to it,
>> everything works fine. Is there some nuance with SSM that I'm missing? I
>> don't see what I'm doing wrong compared to the examples I see everywhere.
>>
>> PE1#
>> Sep 4 14:04:58.438: IP(0): MAC sa=9601.0202.0000 (Loopback0)
>> Sep 4 14:04:58.438: IP(0): IP tos=0xC0, len=52, id=24929, ttl=255, prot=47
>> Sep 4 14:04:58.438: IP(0): s=150.1.2.2 (Loopback0) d=232.1.1.1 id=24929,
>> ttl=255, prot=47, len=52(52), mroute olist null
>> PE1#sh ip mroute
>> IP Multicast Routing Table
>> Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
>> L - Local, P - Pruned, R - RP-bit set, F - Register flag,
>> T - SPT-bit set, J - Join SPT, M - MSDP created entry,
>> X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
>> U - URD, I - Received Source Specific Host Report,
>> Z - Multicast Tunnel, z - MDT-data group sender,
>> Y - Joined MDT-data group, y - Sending to MDT-data group
>> V - RD & Vector, v - Vector
>> Outgoing interface flags: H - Hardware switched, A - Assert winner
>> Timers: Uptime/Expires
>> Interface state: Interface, Next-Hop or VCD, State/Mode
>>
>> (150.1.2.2, 232.1.1.1), 00:31:42/00:02:54, flags: sPT
>> Incoming interface: Loopback0, RPF nbr 0.0.0.0, RPF-MFD
>> Outgoing interface list: Null
>>
>> (*, 224.0.1.40), 00:32:25/00:02:48, RP 0.0.0.0, flags: DCL
>> Incoming interface: Null, RPF nbr 0.0.0.0
>> Outgoing interface list:
>> Loopback0, Forward/Sparse, 00:32:25/00:02:48
>>
>>
>> P1#sh ip mroute
>> IP Multicast Routing Table
>> Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
>> L - Local, P - Pruned, R - RP-bit set, F - Register flag,
>> T - SPT-bit set, J - Join SPT, M - MSDP created entry,
>> X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
>> U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
>> Y - Joined MDT-data group, y - Sending to MDT-data group
>> Outgoing interface flags: H - Hardware switched, A - Assert winner
>> Timers: Uptime/Expires
>> Interface state: Interface, Next-Hop or VCD, State/Mode
>>
>> (*, 224.0.1.40), 00:35:00/00:02:35, RP 0.0.0.0, flags: DCL
>> Incoming interface: Null, RPF nbr 0.0.0.0
>> Outgoing interface list:
>> Loopback0, Forward/Sparse, 00:35:00/00:02:35
>> P2#sh ip mroute
>> IP Multicast Routing Table
>> Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
>> L - Local, P - Pruned, R - RP-bit set, F - Register flag,
>> T - SPT-bit set, J - Join SPT, M - MSDP created entry,
>> X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
>> U - URD, I - Received Source Specific Host Report,
>> Z - Multicast Tunnel, z - MDT-data group sender,
>> Y - Joined MDT-data group, y - Sending to MDT-data group
>> V - RD & Vector, v - Vector
>> Outgoing interface flags: H - Hardware switched, A - Assert winner
>> Timers: Uptime/Expires
>> Interface state: Interface, Next-Hop or VCD, State/Mode
>>
>> (*, 224.0.1.40), 1d18h/00:02:03, RP 0.0.0.0, flags: DCL
>> Incoming interface: Null, RPF nbr 0.0.0.0
>> Outgoing interface list:
>> GigabitEthernet4/0/1, Forward/Sparse-Dense, 00:07:07/00:00:00
>>
>> *Sep 4 14:02:44.007: IP(0): s=150.1.5.5 (Loopback0) d=232.1.1.1 id=5434,
>> ttl=255, prot=47, len=52(52), mroute olist null
>> PE2#sh ip mroute
>> IP Multicast Routing Table
>> Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
>> L - Local, P - Pruned, R - RP-bit set, F - Register flag,
>> T - SPT-bit set, J - Join SPT, M - MSDP created entry,
>> X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
>> U - URD, I - Received Source Specific Host Report,
>> Z - Multicast Tunnel, z - MDT-data group sender,
>> Y - Joined MDT-data group, y - Sending to MDT-data group
>> V - RD & Vector, v - Vector
>> Outgoing interface flags: H - Hardware switched, A - Assert winner
>> Timers: Uptime/Expires
>> Interface state: Interface, Next-Hop or VCD, State/Mode
>>
>> (150.1.5.5, 232.1.1.1), 00:08:46/00:02:46, flags: sPT
>> Incoming interface: Loopback0, RPF nbr 0.0.0.0, RPF-MFD
>> Outgoing interface list: Null
>>
>> (*, 224.0.1.40), 00:08:48/00:02:48, RP 0.0.0.0, flags: DCL
>> Incoming interface: Null, RPF nbr 0.0.0.0
>> Outgoing interface list:
>> GigabitEthernet4/0/1, Forward/Sparse-Dense, 00:08:48/00:00:00
>>
>> Let me know if I can post more information,
>>
>> thanks,
>>
>> Dale
>>
>>
Blogs and organic groups at http://www.ccie.net