RE: Finding mcast rpf failures - Best Practices

From: ccie2be (ccie2be@nyc.rr.com)
Date: Tue Mar 29 2005 - 09:46:59 GMT-3


Hi Max,

Thanks, I think that's a great command to have in one's mcast
troubleshooting arsenal.

Can you explain what this line means?

RPF neighbor: ? (0.0.0.0) <-- Does 0.0.0.0 indicate a problem?

But, I'm still very troubled by this scenario.

I've thought about it quite a bit and can't figure out how there could even
be an RPF failure issue with this topology.

Joins start at the RP itself (due to the ip igmp join-group command
configured on the RP). The RP sends those joins towards the source to create
an SPT.
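
For clarity, the config I have in mind on R4 is along these lines (the
interface name and group address are just made-up examples):

interface FastEthernet0/0
 ip pim sparse-dense-mode
 ip igmp join-group 239.1.1.1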

There is no possible SPT cutover issue since there are no receivers
downstream from the RP.

And, as the joins are received by each router between the RP and the source,
the interface towards the RP should be added to the OIL and the interface
facing the source should become the incoming interface.
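
For instance, on R3 I'd expect the (S,G) entry to look roughly like this
(all addresses and interface names are made up):

(10.1.12.1, 239.1.1.1), 00:02:10/00:03:22, flags: T
  Incoming interface: Serial0/0, RPF nbr 10.1.23.2
  Outgoing interface list:
    Serial0/1, Forward/Sparse-Dense, 00:02:10/00:02:50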

The only reasons I can imagine that would cause the pings from R1 to the
mcast groups on R4 to fail don't include RPF failure.

Do you agree? If not, where can you see a potential RPF failure?

Thanks again, Tim

-----Original Message-----
From: my-ccie-test [mailto:my-ccie-test@libero.it]
Sent: Tuesday, March 29, 2005 6:45 AM
To: ccie2be
Subject: RE: Finding mcast rpf failures - Best Practices

Hi Tim,
The first thing I would do is run the command "show ip rpf x.x.x.x" on R3,
where x.x.x.x is the IP address of the source of the multicast stream (R2).
For example:

router#sh ip rpf 10.10.10.10
RPF information for ? (10.10.10.10)
  RPF interface: Serial0/0.1
  RPF neighbor: ? (0.0.0.0)
  RPF route/mask: 10.10.10.10/32
  RPF type: static
  RPF recursion count: 1
  Doing distance-preferred lookups across tables

As you can see in this example, the RPF interface for the source 10.10.10.10
is s0/0.1, and the RPF was determined by a static mroute.
This command shows you the RPF check for the source you have indicated, so
if your problem is due to the RPF check, this command may help you.
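
For reference, output like that would come from a static mroute configured
more or less like this (the values just match my example above):

ip mroute 10.10.10.10 255.255.255.255 Serial0/0.1
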
You can check the RPF with the same command on R4, where you would see the
RPF pointing through R3.
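
On R4 I would expect something like this (addresses and interface are
invented, only to give the idea):

R4#sh ip rpf 10.10.10.10
RPF information for ? (10.10.10.10)
  RPF interface: Serial0/1
  RPF neighbor: ? (10.1.34.3)
  RPF route/mask: 10.10.10.0/24
  RPF type: unicast (ospf 1)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

Note that here the RPF neighbor is a real PIM neighbor (R3) rather than
0.0.0.0, because the lookup comes from the unicast routing table instead of
an interface-only static mroute.
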
After that, check the mroute table on R4 (where you have the RP), looking
for the multicast source; I think you'd see the registration of that
multicast source on the RP.
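
Something like this (again, invented addresses), with the (S,G) entry
created by the registration:

R4#sh ip mroute 239.1.1.1
(output omitted)
(10.10.10.10, 239.1.1.1), 00:01:15/00:02:59, flags: T
  Incoming interface: Serial0/1, RPF nbr 10.1.34.3
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse-Dense, 00:01:15/00:02:44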

Let me know if these commands help you.

bye
Max

---------- Initial Header -----------

From : "ccie2be" ccie2be@nyc.rr.com
To : my-ccie-test@libero.it
Cc :
Date : Sat, 26 Mar 2005 14:36:21 -0500
Subject : RE: Finding mcast rpf failures - Best Practices

> Hi Max,
>
> Thanks for your response.
>
> Let's say this is your mcast scenario:
>
> R1 -- R2 -- R3 -- R4-|
>
> R4 is the RP and MA for all mcast groups and it has several mcast receivers
> off its lan interface (ip igmp join-group x.x.x.x).
>
> In addition, all of the above interfaces are ip pim sparse-dense-mode and
> your IGP is working fine. The output of show ip pim neighbor shows all
> neighbors as expected.
>
> A ping from R3 to the mcast groups on R4 works.
>
> However, pings from R2 and R1 to the same mcast groups don't work.
>
> How would you go about troubleshooting this scenario?
>
> If the first thing you would do is use the mtrace command, on which router
> would you use it first and why? What would you look for in the output of
> the command?
>
> Also, in this scenario, is there anything you would do before using the
> mtrace command?
>
> Thanks again, this mcast stuff is driving me nuts. Every time I think I
> understand it, I find out how wrong I am.
>
> Tim
>
> -----Original Message-----
> From: my-ccie-test@libero.it [mailto:my-ccie-test@libero.it]
> Sent: Saturday, March 26, 2005 1:54 PM
> To: ccie2be@nyc.rr.com
> Subject: Re: Finding mcast rpf failures - Best Practices
>
>

> Hi Tim,
> I often check the multicast connectivity with the mtrace command toward the
> source. When you run mtrace, it shows you the path to the multicast source
> and, as a normal traceroute does, also each link it passes through.
> So, if the mtrace fails at a particular link, you have a good probability of
> identifying the router that is misbehaving.
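>
> A sketch of what I mean (all addresses here are invented):
>
> router#mtrace 10.10.10.10
> Type escape sequence to abort.
> Mtrace from 10.10.10.10 to 10.1.34.4 via RPF
> From source (?) to destination (?)
> Querying full reverse path...
>  0  10.1.34.4
> -1  10.1.34.4 PIM  thresh^ 0  0 ms
> -2  10.1.34.3 PIM  thresh^ 0  2 ms
> -3  10.1.23.2 PIM  thresh^ 0  4 ms
>
> If the trace stops or shows an error at a hop, that is the link to check.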
>
> Hope this is useful for you
> bye
> Max
>
>



This archive was generated by hypermail 2.1.4 : Sun Apr 03 2005 - 17:56:54 GMT-3