Thanks, Joe. I think you may be right. I couldn't think of a single thing I
did that would cause that sort of behavior. It's kind of a bummer because
I'd really like to get this lab up and running. I may try again with a
smaller number of routers, or a different mix of router models. Maybe I'll
hit on a combination that doesn't do this.
John
On Thu, Jan 10, 2013 at 3:54 PM, Joe Sanchez <marco207p_at_gmail.com> wrote:
> John, the problem is more than likely with GNS3. I don't use it much
> because I used to always have dropped packets with ICMP, multicasting,
> and other protocols...
>
> Regards,
> Joe Sanchez
>
> ( please excuse the brevity of this email as it was sent via a mobile
> device. Please excuse misspelled words or sentence structure.)
>
> On Jan 10, 2013, at 11:08 AM, John Neiberger <jneiberger_at_gmail.com> wrote:
>
> > I'm going to cross-post this post that I made to the Cisco Learning
> > Network last night since I haven't been getting any hits on it there.
> > Here is a link to that discussion if you want to see the topology or
> > get the GNS3 project file with configs.
> >
> > I'm working on an mVPN lab in GNS3 and am running into a really bizarre
> > problem. I've attached my topology. The gist of it is that R13 and R14
> > are customer routers that can ping each other just fine when I only have
> > the basic L3VPN configuration in place, but things get weird quickly. For
> > background, I have OSPF running in my customer areas and BGP is my PE-CE
> > protocol. R1 and R7 are my vpnv4 peers.
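
For reference, the vpnv4 peering described above generally looks something
like the following on each PE; the AS number, neighbor loopback address,
and use of Loopback0 as the update source are assumptions here, not taken
from the actual configs:

  router bgp 100
   neighbor 7.7.7.7 remote-as 100
   neighbor 7.7.7.7 update-source Loopback0
   !
   address-family vpnv4
    neighbor 7.7.7.7 activate
    neighbor 7.7.7.7 send-community extended
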
> >
> > So, to configure mVPN, I started out by turning on PIM-SM on my P
> > routers and making R3 the RP via BSR. Next, I configured PIM-SM in my
> > customer areas. So far, no problem.
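
For reference, a BSR-based RP setup of that sort is typically just global
multicast routing, sparse mode on the core links, and two candidate
commands on the RP; the interface names and Loopback0 are assumptions:

  ip multicast-routing
  !
  interface FastEthernet0/0
   ip pim sparse-mode
  !
  ! On R3 only: advertise itself as candidate BSR and candidate RP
  ip pim bsr-candidate Loopback0
  ip pim rp-candidate Loopback0
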
> >
> > Next, I enabled PIM on the customer-facing interfaces on my PE routers.
> > Still no problem. Then I configured the mdt default address in the vrf
> > config and BLAMMO...broken unicast connectivity between R13 and R14. In
> > the IOS image I'm running at the moment, my PE routers will immediately
> > begin exchanging MDT information as soon as I configure the address in
> > the vrf. They use the regular vpnv4 AF for this, unlike newer releases
> > that use the ipv4 mdt AF.
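
For reference, the mdt step described above amounts to roughly the
following on each PE; the vrf name and MDT group address are assumptions,
and on newer code the MDT information rides in the ipv4 mdt
address-family instead of vpnv4:

  ip multicast-routing
  ip multicast-routing vrf CUST
  !
  ip vrf CUST
   mdt default 239.1.1.1
  !
  ! The multicast tunnel sources from the BGP update-source loopback,
  ! so PIM needs to be enabled there as well
  interface Loopback0
   ip pim sparse-mode
  !
  ! Newer releases carry the MDT info in a separate AF under BGP:
  router bgp 100
   address-family ipv4 mdt
    neighbor 7.7.7.7 activate
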
> >
> > What in the world could cause something like this? I'm completely at a
> > loss. I'm not even sure how to troubleshoot it since it's so bizarre. But
> > once those MDT tunnels come up, things go bad fast.
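
A few show commands that might help narrow down where things break once
the tunnels come up; the vrf name and MDT group used here are assumptions:

  show ip pim mdt
  show ip pim vrf CUST neighbor
  show ip mroute 239.1.1.1
  show ip route vrf CUST
  show ip cef vrf CUST 14.14.14.14
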
> >
> > This is about the best it looks when the MDT tunnels are up:
> >
> > R13#ping 14.14.14.14 rep 50 time 5
> >
> > Type escape sequence to abort.
> > Sending 50, 100-byte ICMP Echos to 14.14.14.14, timeout is 5 seconds:
> > !!!!!!!!!!!!!!!...!!!!!.!!..!!!!!!!!!!!!!..!!..!.!
> >
> > Sometimes I get nearly no responses at all.
> >
> > Even stranger, removing the mdt config and bouncing the PE BGP peers
> > doesn't seem to resolve it reliably. So far, I've found nothing that
> > fixes my unicast connectivity once it starts breaking. I'm going to try
> > saving my configs and topology and then restarting all of the routers.
> >
> > Have you ever seen anything like this? I'm totally stumped.
> >
> > Thanks,
> > John
> >
> >
Blogs and organic groups at http://www.ccie.net