LOL! That's why I asked for clarification. :)
The products he mentioned in his post aren't oversubbed. I don't have ESP
unfortunately. :)
From: Scott Morris [mailto:smorris_at_ine.com]
Sent: Sunday, August 30, 2009 6:39 PM
To: Tony Varriale
Cc: ccielab_at_groupstudy.com
Subject: Re: Nexus in DC?
Ummm... LOTS. :)
Scott Morris, CCIEx4 (R&S/ISP-Dial/Security/Service Provider) #4713,
JNCIE-M #153, JNCIS-ER, CISSP, et al.
JNCI-M, JNCI-ER
evil_at_ine.com
Internetwork Expert, Inc.
http://www.InternetworkExpert.com
Toll Free: 877-224-8987
Outside US: 775-826-4344
Knowledge is power.
Power corrupts.
Study hard and be Eeeeviiiil......
Tony Varriale wrote:
Which Cisco product are you referring to that is oversubbed?
Sent from my iPhone
On Aug 30, 2009, at 5:09 PM, Rick Mur <rmur_at_ipexpert.com> wrote:
I will be installing the first UCS chassis in the Netherlands within the
next month, of course with the UCS 6k (a tweaked Nexus 5k). I also implemented
one of the first Nexus 5k's in NL last March :-)
So yes, cool stuff! It still needs some time to mature in terms of hardware
and software.
Juniper actually has a better DC switch with their EX8200 (totally
non-blocking), much cheaper for the same amount of bandwidth (not port
density, but the Juniper is NOT oversubscribed).
--
Regards,
Rick Mur
CCIEx2 #21946 (R&S / Service Provider)
Sr. Support Engineer
IPexpert, Inc.
URL: http://www.IPexpert.com

On Sun, Aug 30, 2009 at 9:07 PM, Ronald Johns <rj686b_at_att.com> wrote:
We're doing Nexus as well. In fact, we're likely ordering everything
tomorrow. Keep in mind, there's a limit of 12 FEXs to a 5000 cluster, or at
least that's what they tell me. They screwed up our first quote because our
SE didn't look into this...

-----Original Message-----
From: nobody_at_groupstudy.com [mailto:nobody_at_groupstudy.com] On Behalf Of
Omkar Tambalkar
Sent: Thursday, August 27, 2009 5:12 PM
To: Cisco certification
Subject: Re: Nexus in DC?

We are also rolling out the NX 7010 and 5020 in the next couple of months as
part of our datacenter network upgrade. I am planning to use vPCs from the
5020 to the 7010, but as we are using 2 VDCs, there will be a vPC for each
VDC, translating to 4 10G links coming out of each 5020. I am sure we will
have to get an extra 32-port 10G card with 8 ports dedicated for line rate
for these cross-connects. Not to mention you need 1 keepalive and 1
cross-connect for each VDC.

It seems that the 7010 can do a lot of service aggregation using the ACE and
WAAS modules, but we just want to consolidate our aggregation switches into
a single set of chassis. On top of that, in a few months' time we have to
provide active-active datacenter functionality with a set of NX 7010s at
each datacenter so that traffic can be load balanced.

And the training was a 4-day Firefly course... which was fairly basic in
terms of depth and breadth of material covered.

Fun times.......

-Omkar Tambalkar
CCIE #24892

On Wed, Aug 26, 2009 at 7:23 AM, Tony Varriale
<tvarriale_at_flamboyaninc.com> wrote:
Yup. If this is your first experience with any of those platforms, I
guarantee you are going to run into something not so pleasant.
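The design Omkar describes (a vPC per VDC, each with its own keepalive and
peer-link cross-connect, plus dedicated line-rate ports on the 32-port 10G
card) looks roughly like the NX-OS sketch below. This is a hedged sketch
only: the vPC domain number, addresses, and interface numbers are
hypothetical, not taken from the thread.

```
! Hedged sketch (hypothetical numbers/addresses): vPC peer setup in one
! Nexus 7010 VDC, plus a dedicated-rate port on the 32-port 10G module.
feature vpc
feature lacp

vpc domain 10
  ! one keepalive per VDC, typically over the management VRF
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! the per-VDC cross-connect (vPC peer-link)
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link

! on the 32-port 10G card, only the first port of each 4-port group can
! run at full line rate ("8 ports dedicated for line speed")
interface ethernet 1/1
  rate-mode dedicated

! downstream vPC member port-channel toward a 5020
interface port-channel 20
  switchport
  switchport mode trunk
  vpc 20
```

The same pair of commands (`vpc domain` plus a `vpc peer-link`) would be
repeated in the second VDC, which is why the design needs two keepalives and
two cross-connects in total.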
tv

-----Original Message-----
From: nobody_at_groupstudy.com [mailto:nobody_at_groupstudy.com] On Behalf Of
Marc La Porte
Sent: Wednesday, August 26, 2009 3:27 AM
To: Cisco certification
Subject: Nexus in DC?

Hey guys,

Do any of you already have practical experience with big Nexus roll-outs in
data centers? Running into any "problems"?

For a customer we are planning to deploy 5020 end-of-row switches with 2148T
top-of-rack switches, aggregated in 7018s fully loaded with 32-port 10-Gbps
blades. Just to give you an idea, the complete data center would be around
256 rows (30,000 ports)...

Cheers,
Marc

Blogs and organic groups at http://www.ccie.net
Received on Sun Aug 30 2009 - 20:48:58 ART
This archive was generated by hypermail 2.2.0 : Tue Sep 01 2009 - 05:43:57 ART