Re: Strange unicast problem when configuring mVPN

From: John Neiberger <jneiberger_at_gmail.com>
Date: Thu, 10 Jan 2013 10:09:42 -0700

Here is what it looks like after restarting all the routers. I still get
occasional packet loss that I can't explain, but it's not too bad:

R13#ping 14.14.14.14 rep 50
Type escape sequence to abort.
Sending 50, 100-byte ICMP Echos to 14.14.14.14, timeout is 2 seconds:
!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 98 percent (49/50), round-trip min/avg/max = 164/205/248 ms
R13#ping 14.14.14.14 rep 50
Type escape sequence to abort.
Sending 50, 100-byte ICMP Echos to 14.14.14.14, timeout is 2 seconds:
!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!
Success rate is 96 percent (48/50), round-trip min/avg/max = 164/216/276 ms
R13#

I'll configure mdt again and then post my ping results...BRB :-)

Here are the changes on R1, one of my PE routers:

R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#int fa0/0
R1(config-if)#ip pim sparse
R1(config-if)#
*Mar 1 00:04:52.307: %PIM-5-NBRCHG: VRF A: neighbor 10.1.6.6 UP on interface FastEthernet0/0
*Mar 1 00:04:52.347: %PIM-5-DRCHG: VRF A: DR change from neighbor 0.0.0.0 to 10.1.6.6 on interface FastEthernet0/0
R1(config-if)#exit
R1(config)#ip vrf A
R1(config-vrf)#mdt default 239.1.1.1
R1(config-vrf)#
*Mar 1 00:05:08.963: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up
*Mar 1 00:05:10.699: %PIM-5-DRCHG: VRF A: DR change from neighbor 0.0.0.0 to 1.1.1.1 on interface Tunnel0
R1(config-vrf)#end
R1#
*Mar 1 00:05:16.255: %SYS-5-CONFIG_I: Configured from console by console
*Mar 1 00:06:15.315: %PIM-5-NBRCHG: VRF A: neighbor 7.7.7.7 UP on interface Tunnel0
*Mar 1 00:06:15.423: %PIM-5-DRCHG: VRF A: DR change from neighbor 1.1.1.1 to 7.7.7.7 on interface Tunnel0
R1#
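
To confirm the PE actually joined the default MDT group, a couple of show commands should tell the story (the VRF name and group are from my lab, of course):

R1#show ip pim vrf A neighbor
R1#show ip mroute 239.1.1.1
R1#show ip pim mdt bgp

The first should list 7.7.7.7 as a neighbor on Tunnel0, the second should show state for 239.1.1.1 in the global mroute table, and the third shows the MDT information the PEs have exchanged over BGP.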

And here is R7, my other PE router:

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip vrf A
R7(config-vrf)#mdt default 239.1.1.1
R7(config-vrf)#end
R7#
*Mar 1 00:06:17.343: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up
*Mar 1 00:06:18.323: %SYS-5-CONFIG_I: Configured from console by console
*Mar 1 00:06:19.183: %PIM-5-DRCHG: VRF A: DR change from neighbor 0.0.0.0 to 7.7.7.7 on interface Tunnel0
R7#
*Mar 1 00:06:45.799: %PIM-5-NBRCHG: VRF A: neighbor 1.1.1.1 UP on interface Tunnel0
R7#

As soon as that last message shows that the tunnel between R7 and R1 is up,
look what happens to my unicast pings between R13 and R14:

R13#ping 14.14.14.14 rep 50
Type escape sequence to abort.
Sending 50, 100-byte ICMP Echos to 14.14.14.14, timeout is 2 seconds:
!..!..!!!....!!!...!..!...!!!...!!!..!!...!.!!!!!.
Success rate is 48 percent (24/50), round-trip min/avg/max = 184/287/1260 ms
R13#

So bizarre. I really have no idea what's happening to my little network.
lol
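
Next time it breaks, I suppose the sensible first step is to walk the unicast forwarding path from the PE. Something along these lines should do it (addresses are from my lab):

R1#show ip route vrf A 14.14.14.14
R1#show ip cef vrf A 14.14.14.14 detail
R1#show ip bgp vpnv4 vrf A 14.14.14.14

If the route, the CEF entry, and the VPN label all look sane while the pings are dropping, that at least points at the forwarding plane rather than the control plane.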

On Thu, Jan 10, 2013 at 10:08 AM, John Neiberger <jneiberger_at_gmail.com> wrote:

> I'm going to cross-post a question I posted to the Cisco Learning
> Network last night, since I haven't gotten any responses there. Here is
> a link to that discussion if you want to see the topology or grab the
> GNS3 project file with the configs.
>
> I'm working on an mVPN lab in GNS3 and am running into a really bizarre
> problem. I've attached my topology. The gist of it is that R13 and R14 are
> customer routers that can ping each other just fine when I only have the
> basic L3VPN configuration in place, but things get weird quickly. For
> background, I have OSPF running in my customer areas and BGP is my PE-CE
> protocol. R1 and R7 are my vpnv4 peers.
>
> So, to configure mVPN, I started out by turning on PIM-SM on my P routers
> and making R3 the RP via BSR. Next, I configured PIM-SM in my customer
> areas. So far, no problem.
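>
> For reference, the RP/BSR piece on R3 was roughly this (typed from memory,
> so treat it as a sketch; the RP address is R3's Loopback0):
>
> ip multicast-routing
> !
> interface Loopback0
>  ip pim sparse-mode
> !
> ip pim bsr-candidate Loopback0
> ip pim rp-candidate Loopback0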
>
> Next I enabled PIM on the customer-facing interfaces on my PE routers.
> Still no problem. Then I configured the mdt default address in the vrf
> config and BLAMMO... broken unicast connectivity between R13 and R14. In the
> IOS image I'm running at the moment, my PE routers will immediately begin
> exchanging MDT information as soon as I configure the address in the vrf.
> They use the regular vpnv4 AF for this, unlike newer releases that use the
> ipv4 mdt AF.
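>
> On newer code that exchange would live under its own address family, with
> config along these lines (the AS number is just a placeholder for my lab):
>
> router bgp 65000
>  address-family ipv4 mdt
>   neighbor 7.7.7.7 activate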
>
> What in the world could cause something like this? I'm completely at a
> loss. I'm not even sure how to troubleshoot it since it's so bizarre. But
> once those MDT tunnels come up, things go bad fast.
>
> This is about the best it looks when the MDT tunnels are up:
>
> R13#ping 14.14.14.14 rep 50 time 5
> Type escape sequence to abort.
> Sending 50, 100-byte ICMP Echos to 14.14.14.14, timeout is 5 seconds:
> !!!!!!!!!!!!!!!...!!!!!.!!..!!!!!!!!!!!!!..!!..!.!
>
> Sometimes I get nearly no responses at all.
>
> Even stranger, removing the mdt config and bouncing the PE BGP peers
> doesn't reliably resolve it. So far, I've found
> nothing that fixes my unicast connectivity once it starts breaking. I'm
> going to try saving my configs and topology and then restart all of the
> routers.
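>
> For what it's worth, the teardown I tried was just this on both PEs,
> followed by a hard clear of the BGP sessions with "clear ip bgp *":
>
> ip vrf A
>  no mdt default 239.1.1.1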
>
> Have you ever seen anything like this? I'm totally stumped.
>
> Thanks,
> John
