Re: OT: Cisco NEXUS 7k vs Catalyst 6509E

From: --Hammer-- <bhmccie_at_gmail.com>
Date: Thu, 12 Aug 2010 13:10:26 -0500

Good stuff Michael. Thanks a lot. You didn't go with the ACE 4710s for
load balancing. Was the feature set not there? Just an observation that
you trended away from Cisco for that component.

--Hammer--

On 8/12/2010 1:01 PM, Michael Marvel wrote:
> We are running a virtualized environment, with both VMware and
> Hyper-V accounting for approximately 250-300 virtual servers. We also
> have another 50 or so dedicated servers. All servers have dual-port
> CNAs, one attached to each 5K of a pair. The NICs are teamed using
> LACP. We do not currently have VDCs turned on because we haven't had
> a need for them in the new DC, but the option is always there if we
> need it. I did play with it at a CPOC and it seemed to work as
> advertised.
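The LACP teaming described above would map to host-facing vPC
port-channels on the N5K pair, roughly along these lines. This is a
minimal sketch only; the interface and channel numbers are
hypothetical, not from the original post:

```
! On each N5K of the pair -- same vPC/channel number on both switches
interface Ethernet1/10
  description Server CNA port (one CNA port per N5K)
  switchport mode trunk
  channel-group 10 mode active   ! "active" = LACP negotiation
!
interface port-channel10
  vpc 10                         ! ties the bundle to the peer's Po10
```

With `vpc 10` on both peers, the server sees a single LACP bundle even
though its two CNA ports land on different physical switches.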
> Basically, the design is as follows:
> 2x N7Ks peered together using vPC
> 6x N5Ks dual-homed to the core N7Ks and peered using vPC
> 14x N2Ks dual-homed into an N5K pair. They are split into pods: two
> pods consisting of 4x 2248s and a pair of N5Ks, and a single pod of
> 6x 2248s and a pair of N5Ks. We are also running some native FC to
> the modules in the N5Ks as well as FCoE to the CNAs. We have a
> mixture of EMC SAN and HP iSCSI traffic.
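The dual-homed topology above implies a vPC domain, peer-keepalive,
and peer link on each switch pair. A sketch of what that would look
like on one N7K (mirrored on its peer); domain ID, interfaces, and
keepalive addresses are hypothetical:

```
feature vpc
!
vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
!
! Dedicated inter-switch links bundled as the vPC peer link
interface port-channel1
  switchport mode trunk
  vpc peer-link
```

The same pattern repeats on each N5K pair, with the 5Ks' uplinks to
the 7Ks themselves being vPCs, giving a fully dual-homed, loop-free
Layer 2 fabric without blocking spanning-tree ports.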
> We have appliances for most of the services we need to provide: F5s
> for load balancing, ASA 5500s for FW, etc. Everything in the 7Ks is
> routed; the 5Ks are strictly Layer 2. The FEXs work extremely well.
> We have not run into any feature we needed that isn't in NX-OS, FOR
> OUR ENVIRONMENT.
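With the 7Ks as the routed boundary and the 5Ks purely Layer 2, each
server VLAN's default gateway would live as an SVI on the N7K pair,
typically fronted by HSRP. A sketch for one N7K (the peer would carry
the same VLAN with its own real IP and the shared virtual IP); the
VLAN, addresses, and group number are hypothetical:

```
feature interface-vlan
feature hsrp
!
interface Vlan100
  ip address 10.1.100.2/24   ! peer N7K would use 10.1.100.3/24
  hsrp 100
    ip 10.1.100.1            ! shared virtual gateway for the VLAN
  no shutdown
```

On a vPC pair, both HSRP peers forward traffic for the virtual MAC,
so either N7K can route frames arriving on its vPC member links.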
> Most of the issues we've run into were with making things play nice
> with components outside of Nexus. For example, if you use HP NIC
> teaming in a Hyper-V environment, it does Tx/Rx-Tx with the ports.
> If there is an F5 LTM in the data path, that will break things, so
> you have to use LACP. (Long story) We had a problem finding a CNA
> that would work with Hyper-V, NIC teaming, and VLAN trunking.
> Everybody had a card that worked great with VMware, but they were
> all behind on drivers for the above-mentioned environment. (QLogic,
> Emulex and Brocade) We ended up finding that the HP version of the
> Emulex card works as ordered even though the Emulex-branded card
> doesn't.
> On Thu, Aug 12, 2010 at 12:42 PM, Ryan West <rwest_at_zyedge.com
> <mailto:rwest_at_zyedge.com>> wrote:
>
> Michael,
>
> > -----Original Message-----
> > From: nobody_at_groupstudy.com <mailto:nobody_at_groupstudy.com>
> [mailto:nobody_at_groupstudy.com <mailto:nobody_at_groupstudy.com>] On
> > Behalf Of Michael Marvel
> > Sent: Thursday, August 12, 2010 1:30 PM
> > To: --Hammer--
> > Cc: ccielab_at_groupstudy.com <mailto:ccielab_at_groupstudy.com>
> > Subject: Re: OT: Cisco NEXUS 7k vs Catalyst 6509E
> >
> > We are currently in the final stages of a greenfield datacenter
> > buildout. The entire infrastructure is Nexus gear: 2x N7010s as
> > the core, 6x N5020s (paired into 3 pairs), and 14x N2248 FEXs. We
> > looked at both the 6500 VSS solution and the Nexus 7K before
> > making our decision. The final decision was based on a combination
> > of things, including hands-on experience at a CPOC lab and our
> > environment.
> >
> > I'll tell you this: from a pure throughput standpoint, it
> > definitely lives up to expectations. There is a very limited
> > number of line cards available. Pretty much a 32-port 10Gb card,
> > a 32-port 10Gb DCE card, a 48-port RJ-45 blade, and a 48-port SFP
> > blade for 10/100/1000 connectivity. Cisco is definitely not
> > stocking these things right now, so if you have an RMA issue,
> > especially on the 7K, you might be in for a bit of a wait.
> >
> > Now, that being said, I do know of other major projects in my
> > area using Nexus, and specifically the 7K. The State of Tennessee
> > is rolling a bunch into their new DC, and a major hospital put
> > some in as the core of the hospital network. They are definitely
> > becoming more widely utilized.
> >
> > If you have any specific questions you can PM me so we don't fill
> > up the CCIE study group with off-topic stuff. Unless, of course,
> > anybody else is interested.
> >
>
> I am interested in the overall design. In particular, are you
> using a VDC sandwich architecture? Do you have services switches
> or appliances? Transparent or routed mode? Is it for a
> virtualized environment?
>
> Thanks!
>
> -ryan

