Re: frame relay hub and spoke multicast

From: Petr Lapukhov (petrsoft@gmail.com)
Date: Sun Mar 26 2006 - 10:33:16 GMT-3


Not quite that.

You see, PIM NBMA mode really helps solve that OLIST problem.

E.g., we have R1, R2, R3.

R1 is the hub, R2 and R3 are spokes, and the RP is on R1.

R1, R2, and R3 join their loopbacks to 239.1.1.1.

Now if we enable NBMA mode on R1's multipoint interface,
R1 will track every joined neighbor on the NBMA interface separately,
as if they were on individual p2p links.
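
For example, a minimal hub-side sketch (the subinterface name and subnet
follow the output below; R1's exact IP and the DLCI numbers are placeholders):

interface Serial0/0.123 multipoint
 ip address 172.16.123.1 255.255.255.0
 ip pim sparse-mode
 ! track joins per neighbor instead of per interface
 ip pim nbma-mode
 frame-relay map ip 172.16.123.2 102 broadcast
 frame-relay map ip 172.16.123.3 103 broadcast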

And if R2 pings 239.1.1.1, the ping will reach R1 and R3, because of NBMA mode.

In this case, the mroute table on R1 would look like this (pinging 239.1.1.1
from R2, with 172.16.123.0/24 as the common subnet on the NBMA interface and
R1 as the RP):

(*, 239.1.1.1), 00:04:02/00:03:23, RP 172.16.101.1, flags: SJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0.123, 172.16.123.3, Forward/Sparse, 00:01:06/00:03:23
    Serial0/0.123, 172.16.123.2, Forward/Sparse, 00:01:37/00:02:54
    Loopback101, Forward/Sparse, 00:04:02/00:02:03

(141.34.102.1, 239.1.1.1), 00:01:25/00:02:51, flags: LT
  Incoming interface: Serial0/0.123, RPF nbr 172.16.123.2
  Outgoing interface list:
    Serial0/0.123, 172.16.123.3, Forward/Sparse, 00:01:07/00:03:22
    Loopback101, Forward/Sparse, 00:01:25/00:02:02
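
(This is "show ip mroute" output on R1. Note that Serial0/0.123 appears in
the OLIST once per spoke neighbor, which is what lets the hub forward a
packet received on that interface back out toward the other spoke.)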

HTH
Petr

2006/3/26, david robin <robindavi@gmail.com>:
>
> Petr,
> thanks a lot for your reply, it really helped me, but there is one point: if
> I run sparse mode and PIM NBMA mode, all the routers will be aware of the RP,
> but if you ping multicast from one spoke router to the other, the problem
> will still happen.
> Am I right?
> So what we will need in both cases is a tunnel interface, or PIM bidir, in
> order to solve the problem.
>
>
>
> On 3/26/06, Petr Lapukhov <petrsoft@gmail.com> wrote:
> >
> > Hello David,
> >
> > If you run PIM Dense mode over an FR H&S topology, you basically
> > face an OLIST problem when you try to reach a group joined on one spoke
> > from another spoke.
> >
> > The hub won't forward an mpacket out the same interface it was received on.
> > This problem can be solved with a single tunnel (R1-R2, R2-R3, or R1-R3)
> > and a proper static mroute on the _receiving_ side of the tunnel (to make
> > sure the tunnel is preferred over the physical interface for RPF).
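> >
> > For example, a rough sketch of the R2-R3 tunnel option (the tunnel
> > addressing is a placeholder; R3 is the receiving side here):
> >
> > ! R2
> > interface Tunnel0
> >  ip address 10.0.23.2 255.255.255.0
> >  ip pim dense-mode
> >  tunnel source Serial0/0.123
> >  tunnel destination 172.16.123.3
> >
> > ! R3 (receiving side)
> > interface Tunnel0
> >  ip address 10.0.23.3 255.255.255.0
> >  ip pim dense-mode
> >  tunnel source Serial0/0.123
> >  tunnel destination 172.16.123.2
> > !
> > ! make RPF prefer the tunnel over the physical interface
> > ip mroute 0.0.0.0 0.0.0.0 Tunnel0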
> >
> > Now if you run PIM Sparse over H&S, you may face the same problem,
> > and more.
> >
> > 1) You may miss BSR/AutoRP announcements due to the OLIST problem.
> > 2) You may be unable to join the shared tree if the RP is placed
> > improperly (on a spoke).
> > 3) You may face the OLIST problem with mpacket delivery (same as above).
> >
> > Leaving 1 and 2 aside, let's look at 3.
> >
> > For that mpacket OLIST problem, you may basically:
> >
> > 1) use a tunnel/mroute, as in PIM-DM (see the sketch above),
> > 2) use "pim nbma-mode" on the hub multipoint interface, or
> > 3) enable PIM bidir mode to use a bidirectional shared tree and place
> > the RP on the hub (see the sketch below).
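> >
> > For option 3, a minimal sketch (172.16.101.1 is R1's loopback from this
> > thread; bidir must be enabled on every router):
> >
> > ip pim bidir-enable
> > ip pim rp-address 172.16.101.1 bidir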
> >
> > HTH
> > Petr
> >
2006/3/26, david robin <robindavi@gmail.com>:
> > >
> > > dear all,
> > > I want to verify something with IP multicast. If we have a frame-relay
> > > connection where R1 is the hub, connected with a multipoint interface
> > > to r2 and r3, the spokes, the question is: if we had a multicast group
> > > on r2's fast ethernet (using ip igmp join) and I can't ping this
> > > multicast group from r3, do I have to configure tunnels (1 tunnel
> > > between r1 and r2 and 1 tunnel between r1 and r3) in order to make the
> > > multicast packets flow?
> > >
> > > Also, there is another question: in case I run pim sparse-dense with r1
> > > as the RP, will the same problem occur, and do I have to configure a
> > > tunnel too?


