Zack,
I have customers doing similar designs, but L2 extension between data centers should be done on an as-needed basis. Oftentimes it becomes convenient to have and then gets abused by the server team. Shortly after the abuse starts, you have a meltdown not in just one but in both DCs. Then all hell breaks loose over why the same broadcast domain spanned two DCs for everything, and depending on the company, heads start rolling.
One of my customers had a 12-hour partial brownout of their DCs due to the convenience factor that the server team took advantage of...
Your job as a network engineer is to keep the server team in check and protect them from themselves.
-----Original Message-----
From: nobody_at_groupstudy.com [mailto:nobody_at_groupstudy.com] On Behalf Of Zack Tennant
Sent: Sunday, March 30, 2014 8:59 PM
To: Chris Rae
Cc: Tauseef Khan; Cisco certification
Subject: Re: vpc peer keep alive link Nexus 7K
Well... We are doing L2 between our data centers, but not VPC. It's just an L2 extension to the other building. They are only 15 miles apart, so we're able to keep low latency and have a continuous L2 between them.
On Sun, Mar 30, 2014 at 8:06 PM, Chris Rae <chris.rae07_at_me.com> wrote:
> Yeah have to agree with the guys on this one. Generally DC to DC
> interconnects are L3, unless there is a design requirement to extend
> L2 networks between the Data Centres.
>
> You wouldn't be dual homing switches between DCs.
>
> What was the requirement that made you think VPC would be required
> between DCs?
>
> Chris
>
> > On 31 Mar 2014, at 7:49 am, Zack Tennant <ccie_at_tnan.net> wrote:
> >
> > Patrick has a point. Why would you make a VPC between them? Do you
> > have a downlink (or uplink) switch that you need to VPC to? You only
> > need to VPC the 7Ks together if you're going to have a switch
> > connected to both.
> >
> > That said... you can make a VLAN that exists between them, which
> > travels on a non-VPC/non-peer-link path; give it Layer 3 interfaces,
> > and use that for the keepalive.
> >
> >
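A rough sketch of that SVI-based keepalive approach on each 7K VDC might look like the following; the VLAN, VRF, interface and addressing here are only placeholders, not taken from the thread:

  feature vpc
  feature interface-vlan
  !
  vrf context VPC-KEEPALIVE          ! keep keepalive traffic out of the default VRF
  !
  vlan 900
    name vpc-keepalive               ! carried only on the non-peer-link path between DCs
  !
  interface Ethernet3/1              ! trunk toward the other DC, not part of the peer-link
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 900
    no shutdown
  !
  interface Vlan900
    vrf member VPC-KEEPALIVE
    ip address 10.99.0.1/30          ! use .2 on the peer switch
    no shutdown
  !
  vpc domain 10
    peer-keepalive destination 10.99.0.2 source 10.99.0.1 vrf VPC-KEEPALIVE

The same idea works per VDC, as long as the keepalive VLAN never rides the vPC peer-link itself.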
> > On Sun, Mar 30, 2014 at 2:33 PM, Tauseef Khan
> > <tasneemjan_at_googlemail.com> wrote:
> >
> >> I have 2 N7Ks running version 6.2 with F248XP-25 cards. I need to have
> >> 4 VDCs. The N7Ks are installed in 2 geographically separated data
> >> centres, and there is no other routed network between the DCs. In this
> >> scenario, what are the options for VPC peer keepalive links?
> >> I can't use the mgmt0 interface, as the connections between the DCs
> >> are all single-mode fibres. Do I need separate interfaces allocated to
> >> each VDC, and use an SVI for the peer keepalive link within each VDC?
> >>
> >> Kind regards
> >>
> >> Tauseef
> >>
> >>