Re: Mcast RP redundancy Question

From: Pavel Bykov <slidersv_at_gmail.com>
Date: Thu, 16 Apr 2009 12:29:45 +0200

Dale, what I was talking about is the longest-match rule.
Here is a quick example:
Requirements:
R1 needs to be primary for GROUP-SET 1 and backup for GROUP-SET 2
R2 needs to be primary for GROUP-SET 2 and backup for GROUP-SET 1

Then configure
R1:
ip pim send-rp-announce lo0 scope 255 group-list GROUP-SET-1
ip pim send-rp-announce lo1 scope 255

R2:
ip pim send-rp-announce lo0 scope 255 group-list GROUP-SET-2
ip pim send-rp-announce lo1 scope 255

Therefore, R1 will advertise SET1 and 224/4, and R2 will advertise SET2
and 224/4.

The MA will select R1 for SET1, R2 for SET2, and the highest IP for
224/4, which is advertised by both.

When any other router needs an RP for a group from SET1, it will select
R1, because SET1 is a longer (better) match than 224/4.
If R1 fails, R2's 224/4 announcement will take over for SET1 groups.

Same goes for R2 and SET2.
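
For completeness, here is a fuller sketch of R1 (the loopback addresses
and the SET1 range are placeholders of mine; R2 mirrors this with
GROUP-SET-2):

R1:
interface lo0
 ip address 10.0.1.1 255.255.255.255
interface lo1
 ip address 10.0.2.1 255.255.255.255
!
! The specific range is announced from lo0, the catch-all from lo1.
! Without a group-list, send-rp-announce covers the default 224.0.0.0/4.
ip pim send-rp-announce lo0 scope 255 group-list GROUP-SET-1
ip pim send-rp-announce lo1 scope 255
!
ip access-list standard GROUP-SET-1
 permit 239.1.0.0 0.0.255.255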


On Tue, Apr 14, 2009 at 9:28 AM, Dale Shaw <dale.shaw_at_gmail.com> wrote:

> Hi,
>
> On Tue, Apr 14, 2009 at 5:03 PM, Naveen <navin.ms_at_gmail.com> wrote:
> >
> > *Task*: R1 and R2 should be the RPs for their respective groups, but each
> > should serve as a backup for the other.
> >
> > *Question is* - which protocol should I use to provide redundancy ? BSR,
> > Auto-RP or Anycast RP ?
> >
> > *1) In Auto-RP: *
> > Let R1 and R2 announce themselves as RPs for the entire 224.0.0.0/4, but
> have
> > the MA select R1 for first set and R2 for second set.
> > If either R1 or R2 is dead, the MA will select the other as the RP for
> all
> > groups.
>
> Using Auto-RP, how would you configure the MA to give preference to
> one RP over another? You can't filter announcements on the MA, 'cause
> then you don't have any redundancy. The MA is going to choose the RP
> with the highest IP address. I think this rules out Auto-RP. One
> workaround might be to configure multiple 'ip pim send-rp-announce'
> commands on the RPs, each referencing different loopback interfaces.
>
> Example:
>
> R1:
> !
> interface lo0
> ip address 10.10.10.1 255.255.255.255
> interface lo1
> ip address 11.11.11.1 255.255.255.255
> !
> ip pim send-rp-announce lo0 scope 255 group-list GROUP-SET-2
> ip pim send-rp-announce lo1 scope 255 group-list GROUP-SET-1
> !
> ip access-list standard GROUP-SET-1
> permit 224.0.0.0 7.255.255.255
> ip access-list standard GROUP-SET-2
> permit 232.0.0.0 7.255.255.255
>
> R2:
> !
> interface lo0
> ip address 11.11.11.2 255.255.255.255
> interface lo1
> ip address 10.10.10.2 255.255.255.255
> !
> ip pim send-rp-announce lo0 scope 255 group-list GROUP-SET-2
> ip pim send-rp-announce lo1 scope 255 group-list GROUP-SET-1
> !
> ip access-list standard GROUP-SET-1
> permit 224.0.0.0 7.255.255.255
> ip access-list standard GROUP-SET-2
> permit 232.0.0.0 7.255.255.255
>
> This essentially uses the "highest IP address wins" rule to indirectly
> influence the decision the mapping agent is going to make.
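>
> If that works, 'show ip pim rp mapping' on the MA should come back with
> something like this (sketched from memory; treat the exact output
> format as approximate):
>
> Group(s) 224.0.0.0/5
>   RP 11.11.11.1 (?), elected via Auto-RP
> Group(s) 232.0.0.0/5
>   RP 11.11.11.2 (?), elected via Auto-RP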
>
> Using Anycast RP, which essentially involves a static RP configuration
> on the multicast routers pointing at a single IP address, with R1 and
> R2 configured with an MSDP peering and the same IP address on their
> respective loopbacks, how are you going to ensure that R1 serves as
> the RP for group set 1 and R2 handles group set 2?
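>
> For reference, the bare-bones Anycast RP setup I mean looks something
> like this on R1 (all addresses are placeholders; R2 is the mirror
> image with its own unique lo1 address):
>
> interface lo0
>  ip address 10.255.255.1 255.255.255.255
> ! lo0 carries the shared anycast RP address (identical on R2)
> interface lo1
>  ip address 10.1.1.1 255.255.255.255
> ! lo1 is unique per router and anchors the MSDP session
> ip msdp peer 10.1.1.2 connect-source lo1
> ip msdp originator-id lo1
> ip pim rp-address 10.255.255.1
>
> Both RPs answer to the same address, so which one a given router uses
> falls out of unicast routing, not group ranges.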
>
> Despite being able to meet the requirements using Auto-RP as above
> (disclaimer: I haven't tested that), I think using BSR's built-in
> priority function is the most elegant solution. Happy to stand
> corrected; I'm working through a lot of this stuff at the moment too.
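>
> Something like the following is what I have in mind for BSR (untested;
> note that with BSR a lower candidate-RP priority value is preferred,
> so 0 beats 10):
>
> R1:
> ip pim bsr-candidate lo0 0
> ip pim rp-candidate lo0 group-list GROUP-SET-1 priority 0
> ip pim rp-candidate lo1 group-list GROUP-SET-2 priority 10
>
> R2:
> ip pim bsr-candidate lo0 0
> ip pim rp-candidate lo0 group-list GROUP-SET-2 priority 0
> ip pim rp-candidate lo1 group-list GROUP-SET-1 priority 10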
>
> cheers,
> Dale

-- 
Pavel Bykov
----------------
Don't forget to help stop the braindumps, the use of which reduces the
value of your certifications. Sign the petition at http://www.stopbraindumps.com/