I tell you, this whole Nexus / UCS line is half-baked, and there are far more
questions than answers. I understand this is new technology - but I've seen
more issues, and more marketing versus actual engineering-supported scenarios
and numbers, than I would like to see for this. Lots of GREAT things are
happening within the DC space from Cisco, but there is sooo much SMOKE - more
than I've seen with any other product line thus far. And I love Cisco's
products.
> From: istong_at_stong.org
> To: adrianlazar_at_gmail.com; rmur_at_ipexpert.com
> CC: pbhatkoti_at_gmail.com; chaudri_at_gmail.com; ccielab_at_groupstudy.com
> Subject: RE: OT: Nexus deployment with 802.3ad LACP team
> Date: Sun, 28 Feb 2010 20:55:13 -0500
>
> For what it's worth, the 2K's are fabric extenders and not full-blown
> switches. You can't connect a switch to them and they don't do vPC. If you
> want redundancy then one option is to dual-connect your server to two
> different 2K's (with each 2K going to a different 5K) and then configure the
> server for NIC teaming.
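>
> Since the two 2K's can't form one port-channel across chassis in that
> option, the 2K ports stay plain access ports and the redundancy lives in
> the server's teaming (active/backup). A rough sketch - FEX numbers and
> VLAN are just examples:
>
> ! on 5K-A (first 2K)
> interface Ethernet100/1/1
>  switchport access vlan 100
>  spanning-tree port type edge
>
> ! on 5K-B (second 2K)
> interface Ethernet101/1/1
>  switchport access vlan 100
>  spanning-tree port type edge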
>
> Alternatively you can single-home your server to one 2K that connects to two
> different 5K's. Then configure the 5K's with a vPC. Note: the 5K currently
> has a 12 hardware etherchannel limit (but a much higher software
> etherchannel limit).
>
> Yet another option is to connect your servers to two different 5K's (versus
> 2K's) and configure the 5K's with a vPC so they look like one switch to the
> downstream server, switch, or blade chassis switch. I like the 3120X 10Gig
> switches for uplinks to the 5K for performance, as well as having the
> ability to stack two 3120X switches together as one.
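>
> A minimal sketch of that last option, with made-up port/channel numbers
> (the vPC peer-link config is omitted) - the same member config goes on
> both 5K's:
>
> interface Ethernet1/20
>  switchport mode trunk
>  channel-group 20 mode active
>
> interface port-channel20
>  switchport mode trunk
>  vpc 20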
>
> There are a few other options as well but the above should give you an idea.
>
>
> Thanks,
>
> Ian Stong
> www.CCIE4u.com
> Rack Rentals and discounted Lab Scenarios
>
>
>
>
> -----Original Message-----
> From: nobody_at_groupstudy.com [mailto:nobody_at_groupstudy.com] On Behalf Of
> Lazar Adrian
> Sent: Sunday, February 28, 2010 10:56 AM
> To: Rick Mur
> Cc: Radioactive Frog; Usama Pervaiz; Cisco certification
> Subject: Re: OT: Nexus deployment with 802.3ad LACP team
>
> Hi,
>
> The N2K is like a linecard for the Nexus 5K, so it should support anything
> that the N5K supports (or most of it). vPC is a Nexus 5K feature and I am
> pretty sure that it can be used through the 2K (when the server is connected
> to the 2K via an LACP port-channel).
> Most of the materials from Cisco regarding vPC describe the exact topology
> that Usama wants to use, so if it's not working this is really a big problem
> for Cisco :). One of the materials is here:
> http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html
> Coincidentally, I am working on a similar setup using N5Ks and N2Ks so I
> will check if this is actually working or not.
>
> Regards,
>
> Adrian
>
> On Sun, Feb 28, 2010 at 4:30 PM, Rick Mur <rmur_at_ipexpert.com> wrote:
>
> > I indeed can't imagine that the N2K supports the vPC feature yet. You can
> > do teaming on the server though. Teaming on servers usually just uses one
> > interface with the second as a backup. The server really has to support
> > balancing, like ESX (and even then not by default).
> >
> > Then again, AFAIK it's not possible to configure a vPC on 2 N2K's.
> >
> >
> > --
> > Regards,
> >
> > Rick Mur
> > CCIE² #21946 (R&S / Service Provider)
> > Sr. Support Engineer IPexpert, Inc.
> > URL: http://www.IPexpert.com
> >
> > On 28 feb 2010, at 13:11, Radioactive Frog wrote:
> >
> > > thanks, let us know how you went.
> > >
> > > On Sun, Feb 28, 2010 at 5:47 AM, Usama Pervaiz <chaudri_at_gmail.com> wrote:
> > >
> > >> We are using the management vrf for the keepalives; otherwise it's the
> > >> default vrf. We didn't want to get too fancy with the config as we
> > >> don't need it. Another member on the list replied and said that the
> > >> 2K's currently do not support this yet. I will have to call Cisco and
> > >> get more info.
> > >>
> > >> Thank you all for your help!!
> > >>
> > >> Usama
> > >>
> > >> On Sat, Feb 27, 2010 at 7:33 AM, Radioactive Frog <pbhatkoti_at_gmail.com> wrote:
> > >>> Hi,
> > >>>
> > >>> The config looks okay. I'd add this, just to make sure the right vrf
> > >>> is used (if you're using any).
> > >>>
> > >>> vpc domain 1
> > >>> peer-keepalive destination 10.0.0.1 source 10.10.0.2 vrf vpc-keepalive
> > >>>
> > >>> Are you issuing the show vpc status command from the base system (i.e.
> > >>> the mgmt vdc) or from one of the vdc's which has peering with another
> > >>> one on a different box?
> > >>>
> > >>>
> > >>> Use the switchto vdc <vdcName> command and then check the vpc status.
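> > >>>
> > >>> For example (the vdc name here is just a placeholder):
> > >>>
> > >>> switchto vdc AGG1
> > >>> show vpc brief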
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On Sat, Feb 27, 2010 at 4:27 PM, Usama Pervaiz <chaudri_at_gmail.com> wrote:
> > >>>>
> > >>>> Following is how I configured the vPC:
> > >>>>
> > >>>> 5K-A
> > >>>>
> > >>>> vpc domain 1
> > >>>> peer-keepalive destination 10.0.0.1
> > >>>>
> > >>>> interface port-channel1
> > >>>> switchport mode trunk
> > >>>> vpc peer-link
> > >>>> spanning-tree port type network
> > >>>> speed 10000
> > >>>>
> > >>>> interface port-channel10
> > >>>> switchport access vlan 100
> > >>>> vpc 10
> > >>>> speed 1000
> > >>>>
> > >>>> interface Ethernet1/17
> > >>>> switchport mode trunk
> > >>>> channel-group 1 mode active
> > >>>>
> > >>>> interface Ethernet1/18
> > >>>> switchport mode trunk
> > >>>> channel-group 1 mode active
> > >>>>
> > >>>> interface Ethernet100/1/10
> > >>>> switchport access vlan 100
> > >>>> spanning-tree port type edge
> > >>>> channel-group 10 mode active
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> This is the config on 5K-B:
> > >>>>
> > >>>> vpc domain 1
> > >>>> peer-keepalive destination 10.0.0.2
> > >>>>
> > >>>> interface port-channel1
> > >>>> switchport mode trunk
> > >>>> vpc peer-link
> > >>>> spanning-tree port type network
> > >>>>
> > >>>> interface port-channel10
> > >>>> switchport access vlan 100
> > >>>> vpc 10
> > >>>> speed 1000
> > >>>>
> > >>>> interface Ethernet1/17
> > >>>> switchport mode trunk
> > >>>> channel-group 1 mode active
> > >>>>
> > >>>>
> > >>>> interface Ethernet1/18
> > >>>> switchport mode trunk
> > >>>> channel-group 1 mode active
> > >>>>
> > >>>> interface Ethernet100/1/10
> > >>>> switchport access vlan 100
> > >>>> spanning-tree port type edge
> > >>>> channel-group 10 mode active
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> sh vpc brief
> > >>>> Legend:
> > >>>> (*) - local vPC is down, forwarding via vPC peer-link
> > >>>>
> > >>>> vPC domain id                    : 1
> > >>>> Peer status                      : peer adjacency formed ok
> > >>>> vPC keep-alive status            : peer is alive
> > >>>> Configuration consistency status : success
> > >>>> vPC role                         : secondary
> > >>>>
> > >>>> vPC Peer-link status
> > >>>> ---------------------------------------------------------------------
> > >>>> id   Port   Status Active vlans
> > >>>> --   ----   ------ --------------------------------------------------
> > >>>> 1    Po1    up     1,14-15,100
> > >>>>
> > >>>> vPC status
> > >>>> ----------------------------------------------------------------------------
> > >>>> id     Port        Status Consistency Reason                     Active vlans
> > >>>> ------ ----------- ------ ----------- -------------------------- ------------
> > >>>> 100    Po100       up     success     success                    100
> > >>>> 114    Po114       down*  failed      Consistency Check Not      -
> > >>>>                                       Performed
> > >>>>
> > >>>>
> > >>>> I don't understand why the status says down and the consistency check
> > >>>> says failed. Please let me know if I configured something wrong or if
> > >>>> I need additional steps.
> > >>>>
> > >>>> Thanks,
> > >>>> Usama
> > >>>>
> > >>>>
> > >>>> On Fri, Feb 26, 2010 at 10:47 PM, Radioactive Frog <pbhatkoti_at_gmail.com> wrote:
> > >>>>> My question is do I even need to configure anything if the servers
> > >>>>> are doing NIC teaming?
> > >>>>> ----------------------------------------------------------------
> > >>>>> ah...
> > >>>>> the answer is simply "yes", you need to configure 802.3ad on your
> > >>>>> switch as well, where those 2 teamed NIC's are connected. If this is
> > >>>>> not done then the incoming traffic to the server will not be load
> > >>>>> balanced.
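> > >>>>>
> > >>>>> e.g. a minimal sketch on the switch side - port numbers, channel
> > >>>>> number and vlan are just examples:
> > >>>>>
> > >>>>> interface Ethernet1/1
> > >>>>>  switchport access vlan 100
> > >>>>>  channel-group 30 mode active
> > >>>>>
> > >>>>> interface Ethernet1/2
> > >>>>>  switchport access vlan 100
> > >>>>>  channel-group 30 mode active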
> > >>>>>
> > >>>>> Server to Switch traffic flow:
> > >>>>> --------------------------------------
> > >>>>> Outgoing from server = teaming the server's ports; if the servers
> > >>>>> are Linux, the OS creates a "BOND0" interface, which is a logical
> > >>>>> interface created when teaming is enabled. The OS sends any traffic
> > >>>>> originating from the OS (in a virtualization environment, from the
> > >>>>> VM's) through the BOND0 interface out to a switch.
> > >>>>>
> > >>>>> The same concept applies for incoming traffic.
> > >>>>>
> > >>>>> HTH
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>> On Sat, Feb 27, 2010 at 2:24 PM, Usama Pervaiz <chaudri_at_gmail.com> wrote:
> > >>>>>>
> > >>>>>> The uplink vPC to the 6509's is working. The VSS consists of 2
> > >>>>>> 6509's which are connected via a multi-chassis port-channel to the
> > >>>>>> 2 5K's (which are using vPC). My concern is at the host level. If I
> > >>>>>> configure the port-channel to the servers using LACP, the
> > >>>>>> port-channel says it's down and the port itself is in Individual
> > >>>>>> mode (I). If I configure it with just this command (as stated in
> > >>>>>> the quick start guide for vPC):
> > >>>>>>
> > >>>>>> channel-group 10
> > >>>>>>
> > >>>>>> the port-channel shows up but I do not see any increase in
> > >>>>>> throughput from the server.
> > >>>>>>
> > >>>>>> My question is: do I even need to configure anything if the servers
> > >>>>>> are doing NIC teaming?
> > >>>>>>
> > >>>>>> Thanks for your response.
> > >>>>>>
> > >>>>>> Usama
> > >>>>>>
> > >>>>>> On Fri, Feb 26, 2010 at 10:11 PM, Radioactive Frog <pbhatkoti_at_gmail.com> wrote:
> > >>>>>>> vPC is not supported within the box (even if it has 2 VDC's/VSS).
> > >>>>>>> You must have 2 physical boxes, i.e. 2x 6509.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> On Sat, Feb 27, 2010 at 11:08 AM, Usama Pervaiz <chaudri_at_gmail.com> wrote:
> > >>>>>>>>
> > >>>>>>>> Hello all,
> > >>>>>>>>
> > >>>>>>>> We are rolling out a deployment of Nexus 5K's and 2K's with VSS.
> > >>>>>>>> Following is the design that we have chosen:
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>                                         ___________
> > >>>>>>>>         N2K1------>N5K1 -------------- |   6509    |
> > >>>>>>>> Server<             ||   -----vPC----  |    VSS    |
> > >>>>>>>>  LACP   N2K2------>N5K2 -------------- |___________|
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> The N5K1 and N5K2 are using vPC to connect to the 6509's. This
> > >>>>>>>> link is up and functioning properly.
> > >>>>>>>> My question is: if we are using active/active LACP from the
> > >>>>>>>> server-side NIC's, do we have to configure a port-channel on the
> > >>>>>>>> N2K1 and N2K2? I have tried it with and without, and in both
> > >>>>>>>> cases my maximum throughput is just below 1Gbps. I would have
> > >>>>>>>> thought that teaming up the NIC's on the server would give me at
> > >>>>>>>> least something over a gig of throughput, or am I wrong in my
> > >>>>>>>> assumption?
> > >>>>>>>>
> > >>>>>>>> Also, when I specify the mode for the channel-group as active
> > >>>>>>>> (channel-group 10 mode active), my port-channel status shows as
> > >>>>>>>> down and my port status is (I). Is LACP not supported on host
> > >>>>>>>> ports for the N2K's?
> > >>>>>>>>
> > >>>>>>>> If I have confused anyone then I apologize!!
> > >>>>>>>>
> > >>>>>>>> Thanks in advance!
> > >>>>>>>> Usama
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> Blogs and organic groups at http://www.ccie.net
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>