From: Ravi (s_ravichandran@xxxxxxxxxxx)
Date: Sun Jan 13 2002 - 13:15:31 GMT-3
Do you consider the middle router a hop? It connects the other two routers
on two different physical interfaces.
I tried configuring the middle router (R1) as the mapping agent, but it did
not work. I also made R4 the mapping agent; that did not work either.
I have not put nbma-mode on the Frame Relay interfaces, since it is not
recommended for sparse-dense mode.
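
For reference, this is roughly what I tried on R1. Just a sketch: Loopback0
as the Auto-RP source and Serial0 are stand-ins for my actual interface
names:

   ip multicast-routing
   !
   interface Serial0
    ip pim sparse-dense-mode
   !
   ! R1 (the hub) as mapping agent; "scope 16" is the TTL on the
   ! RP-discovery messages it sends to 224.0.1.40
   ip pim send-rp-discovery Loopback0 scope 16
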
Regards,
Ravi
----- Original Message -----
From: "tom cheung" <tkc9789@hotmail.com>
To: <Kivas.Waters@Honeywell.com>; <s_ravichandran@hotmail.com>;
<ccielab@groupstudy.com>; <bhescock@cisco.com>
Sent: Sunday, January 13, 2002 11:01 AM
Subject: RE: Subject: multicast tip and Multicast Sparse-Dense
> The reason why the mapping agent has to be configured at or behind the hub
> is well explained in Chapter 15 of Beau Williamson's "Developing IP
> Multicast Networks".
>
>
> >From: "Waters, Kivas (UK72)" <Kivas.Waters@Honeywell.com>
> >Reply-To: "Waters, Kivas (UK72)" <Kivas.Waters@Honeywell.com>
> >To: Ravi <s_ravichandran@hotmail.com>, ccielab@groupstudy.com,
> >bhescock@cisco.com
> >Subject: RE: Subject: multicast tip and Multicast Sparse-Dense
> >Date: Sun, 13 Jan 2002 11:43:47 +0100
> >
> >Ravi, I read somewhere that you should put the RP mapping agent on the FR
> >hub. I'm not sure why that is and it does not make much sense to me, but
> >if you try this and it works, please let me know.
> >
> >regards
> >
> >Ki
> >
> >-----Original Message-----
> >From: Ravi [mailto:s_ravichandran@hotmail.com]
> >Sent: 13 January 2002 04:42
> >To: ccielab@groupstudy.com; bhescock@cisco.com
> >Subject: Subject: multicast tip and Multicast Sparse-Dense
> >
> >
> >Hi,
> >
> >I am doing some multicast labs and finding them very hard to understand.
> >I need some help, please.
> >
> >I took an example discussed here a month ago (attached below).
> >
> >I get the same result as what Sijbren Beukenkamp experienced. After
> >spending hours and hours on the lab and the Cisco site, I could not get it
> >working. I would like to bring this topic back and get a solution to this
> >problem.
> >
> >Regards,
> >Ravi
> >
> >
> >
> >Consider the following.
> >
> >Three routers, R2, R1 and R4, all interconnected using FR. R1 is in the
> >centre and will have to route multicast between its serial interfaces to
> >R2 and R4. R2 and R4 have tokenring interfaces on which sparse-dense mode
> >is configured. The same goes for all the serial interfaces.
> >
> >LAN--------R2--------FR--------R1--------FR--------R4--------LAN
> >
> >  R2                     R1                     R4
> >  ip multicast-routing   ip multicast-routing   ip multicast-routing
> >  PIM sparse-dense       PIM sparse-dense       PIM sparse-dense
> >  igmp-join 224.1.1.1
> >  (224.1.1.1 announced by RP)
> >  igmp-join 224.2.2.2
> >
> >R2 is RP and mapping agent ONLY for 224.1.1.1. So basically 224.1.1.1 is
> >using sparse mode and 224.2.2.2 is using dense!
> >PIM sparse-dense is configured on all the Frame Relay interfaces and the
> >tokenring interfaces.
> >The two igmp joins are configured on the tokenring interface of R2.
> >
> >R1 and R4 are showing the RP (sh ip pim rp mapping).
> >R1 can ping 224.1.1.1 and 224.2.2.2.
> >R4 can only ping 224.2.2.2.
> >
> >What's wrong with this picture!
> >
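> >A minimal sketch of the R2 side as described above; TokenRing0 and
> >Loopback0 are assumed names, since the post doesn't give interfaces:
> >
> >   ip multicast-routing
> >   !
> >   interface TokenRing0
> >    ip pim sparse-dense-mode
> >    ip igmp join-group 224.1.1.1
> >    ip igmp join-group 224.2.2.2
> >   !
> >   ! R2 announces itself as RP for 224.1.1.1 only and is the mapping agent
> >   ip pim send-rp-announce Loopback0 scope 16 group-list 1
> >   ip pim send-rp-discovery Loopback0 scope 16
> >   access-list 1 permit 224.1.1.1
> >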
> >==========================================================
> >
> >
> > > >>The previous discussion about multicast made me think about a problem
> > > >>I see occasionally and I thought I'd pass it along. Don't use anything
> > > >>in the 224.0.0.x range for a multicast address. It will work fine if
> > > >>the source and destination are in the same vlan (unless you're using
> > > >>one of the reserved addresses, such as 224.0.0.10 for eigrp, which
> > > >>probably wouldn't be a good thing to do... ;-). The reason it doesn't
> > > >>work when routing multicast is that 224.0.0.x is a "link-local" range:
> > > >>it never gets forwarded off the local segment, so you will never get
> > > >>ip multicast for 224.0.0.x to work across a router unless you bridged
> > > >>it (haven't tried it, but it should work).
> > > >>
> > > >>Most people wouldn't use 224.0.0.x, but I see it happen occasionally
> > > >>and wanted to help save some people the grief of troubleshooting the
> > > >>problem if you used that range of addresses by mistake. Another common
> > > >>problem in production networks is that many multicast servers have a
> > > >>default ttl of 1 and, since one of the first things a router does is
> > > >>decrement the ttl by one, the packets get dropped at the router. The
> > > >>solution is to increase the ttl on the multicast server to at least
> > > >>one higher than the number of hops to the furthest multicast receiver.
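> > > >>
> > > >>A quick illustration of the link-local point; the group numbers and
> > > >>the interface name are just examples:
> > > >>
> > > >>   interface TokenRing0
> > > >>    ! fine: administratively-scoped group, routed normally
> > > >>    ip igmp join-group 239.1.1.1
> > > >>    ! bad idea: anything in 224.0.0.x stays on the local segment
> > > >>    ! (e.g. 224.0.0.10 is the reserved eigrp group)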
> > > >>
> > > >>Brian
> >============================================================
> >Brian Hescock <bhescock@cisco.com>
> > > Sent by: nobody@groupstudy.com
> > > 31-10-2001 14:26
> > > Please respond to Brian Hescock
> > >
> > >
> > > To: Sijbren Beukenkamp/Netherlands/IBM@IBMNL
> > > cc: ccielab@groupstudy.com
> > > Subject: Re: Multicast Sparse-Dense Mode
> > >
> > >
> > >
> > > Pinging from R1 means the packets will be process-switched. When
> > > pinging from R4, the packets on R1 will be fast-switched. Configure
> > > "no ip mroute-cache" on the outgoing interface on R1 and then see if
> > > you can ping from R4. If not, put an ip igmp join-group on the outgoing
> > > interface on R1 and see if you can ping from R4 and get a response
> > > from R1.
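> > >
> > > In config terms, assuming Serial0 is the outgoing interface on R1
> > > (a sketch only):
> > >
> > >    interface Serial0
> > >     ! force multicast process switching on this interface
> > >     no ip mroute-cache
> > >     ! optional: lets R1 itself answer multicast pings
> > >     ip igmp join-group 224.1.1.1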
> > >
> > > There have been several multicast fast-switching bugs where sometimes
> > > you don't even get an interface in the OIL when you have a join-group
> > > command on that interface. As a workaround, what will often work is
> > > removing the join-group and removing pim, letting pim time out, then
> > > putting the commands back on. If that doesn't work, do the same thing
> > > again, but once the commands are off the interface do a "shut", wait a
> > > few seconds, do a "no shut" and put the commands back on the interface.
> > > You should then see the outbound interface in the OIL. This isn't
> > > something you should have to do, just a workaround due to a multicast
> > > fast-switching bug.
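> > >
> > > As a command sequence, again assuming Serial0 (a sketch of the
> > > workaround, not an exact recipe):
> > >
> > >    interface Serial0
> > >     no ip igmp join-group 224.1.1.1
> > >     no ip pim sparse-dense-mode
> > >    ! wait here for the pim neighbor to time out; if that alone
> > >    ! doesn't fix it, also do:
> > >     shutdown
> > >    ! wait a few seconds
> > >     no shutdown
> > >     ip pim sparse-dense-mode
> > >     ip igmp join-group 224.1.1.1
> > >    ! then check for the interface in the OIL: show ip mroute 224.1.1.1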
> > >
> > > Brian
> > >
> > > Sijbren Beukenkamp wrote:
> > >
> > > > Consider the following.
> > > >
> > > > Three routers, R2, R1 and R4, all interconnected using FR. R1 is in
> > > > the centre and will have to route multicast between its serial
> > > > interfaces to R2 and R4. R2 and R4 have tokenring interfaces on which
> > > > sparse-dense mode is configured. The same goes for all the serial
> > > > interfaces.
> > > >
> > > > LAN--------R2--------FR--------R1--------FR--------R4--------LAN
> > > >
> > > >   R2                     R1                     R4
> > > >   ip multicast-routing   ip multicast-routing   ip multicast-routing
> > > >   PIM sparse-dense       PIM sparse-dense       PIM sparse-dense
> > > >   igmp-join 224.1.1.1
> > > >   (224.1.1.1 announced by RP)
> > > >   igmp-join 224.2.2.2
> > > >
> > > > R2 is RP and mapping agent ONLY for 224.1.1.1. So basically 224.1.1.1
> > > > is using sparse mode and 224.2.2.2 is using dense!
> > > > PIM sparse-dense is configured on all the Frame Relay interfaces and
> > > > the tokenring interfaces.
> > > > The two igmp joins are configured on the tokenring interface of R2.
> > > >
> > > > R1 and R4 are showing the RP (sh ip pim rp mapping).
> > > > R1 can ping 224.1.1.1 and 224.2.2.2.
> > > > R4 can only ping 224.2.2.2.
> > > >
> > > > What's wrong with this picture!
> > > >
> > > > Met vriendelijke groet/ Kind regards,
> > > > Sijbren
> > > >
> > > ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > > Ing. Sijbren Beukenkamp      IBM Global Services
> > > > Internetworking Specialist   Integrated Technology Services
> > > >                              Networking & Connectivity Services
> > > >                              http://www.ibm.com/services/nc/
> > > > GSM +31 (0)6 53761703        The Netherlands
> > > > Office: +31 (0)30 2850 666
> > >