Yeah, I have to agree with the guys on this one. Generally DC-to-DC interconnects are L3, unless there is a design requirement to extend L2 networks between the data centres.
You wouldn't normally be dual-homing switches between DCs.
What was the requirement that made you think vPC would be needed between the DCs?
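If you do end up needing the keepalive across the DCI, the SVI approach Zack describes below would look roughly like this on one of the 7Ks. This is only a sketch; VLAN 99, the "keepalive" VRF name and the addressing are made-up examples, not anything from your environment:

    ! sketch only - substitute your own VLAN, VRF name and addressing
    feature vpc
    feature interface-vlan

    ! dedicated VRF so keepalive traffic stays off the global routing table
    vrf context keepalive

    ! VLAN carried between the peers on a path that is NOT the vPC peer-link
    vlan 99
      name vpc-keepalive

    interface Vlan99
      vrf member keepalive
      ip address 10.99.99.1/30
      no shutdown

    vpc domain 10
      peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf keepalive

The other 7K would mirror this with the .1/.2 addresses swapped. As I understand it, each VDC runs its own vPC domain, so each VDC that uses vPC would need its own VLAN/SVI (or dedicated routed port) for its keepalive, and in all cases the keepalive should not ride the vPC peer-link itself.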
Chris
> On 31 Mar 2014, at 7:49 am, Zack Tennant <ccie@tnan.net> wrote:
>
> Patrick has a point. Why would you make a vPC between them? Do you have a
> downlink (or uplink) switch that you need to vPC to? You only need to vPC the
> 7Ks together if you're going to have a switch connected to both.
>
> That said... you can create a VLAN that exists between them, carried over
> a non-vPC/non-peer-link path; give it layer 3 interfaces and use that
> for the keepalive.
>
>
> On Sun, Mar 30, 2014 at 2:33 PM, Tauseef Khan <tasneemjan@googlemail.com> wrote:
>
>> I have 2 N7Ks running 6.2 with F2 (F248XP-25) cards, and I need to have 4 VDCs.
>> The N7Ks are installed in 2 geographically separated data centres, and there
>> is no other routed network between the DCs. In this scenario, what are the
>> options for the vPC peer keepalive links?
>> I can't use the mgmt0 interface, as the connections between the DCs are all
>> single-mode fibres. Do I need separate interfaces allocated to each VDC and
>> use an SVI for the peer keepalive link within each VDC?
>>
>> Kind regards
>>
>> Tauseef