From: Larry Letterman (lletterm@cisco.com)
Date: Tue Dec 17 2002 - 04:52:08 GMT-3
We usually run anywhere from 10 to 100 servers per 65XX...
I don't consider the 35XX platform scalable or redundant...not for something
as critical as a DC that runs a company's business....
Just my .02....not a pissing match at all...
Ronald Fugate wrote:
> I agree with the power redundancy.
>
> Obviously all DCs in production mode would have redundant power
> (utility, PDU, etc., down to the server).
>
>
>
> We run dual power for all of our 6509's and 6's.
>
> And dual power, by way of DC power, for all 35xx's.
>
>
>
> You didn't say how many servers per 6509 you would use.
>
>
>
> No one can argue the power of a 65xx versus a 35xx.
>
>
>
> but I did not get the impression this was a spitting match.
>
>
>
> I was simply sharing my experience and giving this guy an example.
>
> If you feel our environment is not configured as efficiently as it
> should be, feel free to pass on the free 65xx's.
>
>
>
> I will gladly swap out the 35xx's :)
>
> -----Original Message-----
> From: Larry Letterman [mailto:lletterm@cisco.com]
> Sent: Tuesday, December 17, 2002 1:34 AM
> To: Ronald Fugate
> Cc: 'Chuck Church'; Bob Sinclair; ccielab@groupstudy.com
> Subject: Re: Gigastack - What is the point?
>
> Our data centers are run by 6509's for each row of servers at layer 2...
> they uplink to a pair of 6509's running MSFC-2 Sup-2's with layer 3/HSRP.
> We feel that since the data centers are housing P1-type servers, the
> network infrastructure should have redundant power, redundant gateways
> and redundant supervisors in the L2 switches....No single piece in this
> scenario will cause a server to die....and the design is scalable by
> using L3 at the gateway end......
>
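> A minimal sketch of the gateway piece, assuming IOS on the two MSFCs
> (the VLAN number, addresses and priorities below are invented purely
> for illustration, not taken from the design described above):
>
>    ! Primary MSFC: active HSRP gateway for one server VLAN
>    interface Vlan110
>     ip address 10.1.110.2 255.255.255.0
>     standby 110 ip 10.1.110.1
>     standby 110 priority 110
>     standby 110 preempt
>
>    ! Standby MSFC: takes over 10.1.110.1 if the primary chassis fails
>    interface Vlan110
>     ip address 10.1.110.3 255.255.255.0
>     standby 110 ip 10.1.110.1
>     standby 110 priority 90
>
> The servers use the shared 10.1.110.1 address as their default gateway,
> so losing either chassis requires no change on the servers themselves.
>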
> Larry
>
> Ronald Fugate wrote:
>
>> We run 65xx's in an HSRP config for layer 2/3; all DB and core
>> servers connect to the core L2.
>>
>>
>>
>> 35xx's are perfect as Citrix and web access switches.
>>
>>
>>
>> Our DCs have about 500 servers each, in a fully redundant config.
>>
>>
>>
>>
>>
>> This is, btw, also the environment that Cisco designed.
>>
>>
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Larry Letterman [mailto:lletterm@cisco.com]
>> Sent: Tuesday, December 17, 2002 1:15 AM
>> To: Ronald Fugate
>> Cc: 'Chuck Church'; Bob Sinclair; ccielab@groupstudy.com
>> Subject: Re: Gigastack - What is the point?
>>
>> We have a large data center for Cisco engineering at the main campus,
>> where we have close to 1000 servers, and we won't even entertain the
>> thought of stackables in the DC...L2 does not scale anywhere near the
>> range of L3, and the port density of 35XX's does not come close to a
>> chassis-based system....
>>
>> Ronald Fugate wrote:
>>
>>>In addition to that:
>>>
>>>In a datacenter where hundreds of servers (blade servers, usually a web
>>>environment) are required, the 3548's (or 3550 SMI) in a redundant layer 2
>>>design, with teamed NICs on the end nodes, work great with GigaStacks;
>>>these switches are usually within a few feet of each other. The GigaStacks
>>>offer more flexibility than fiber stacks: you can stack the switches over
>>>the GigaStack port and leave the other gig slot open for other uses
>>>(trunks, gig access ports, whatever); see the sketch below.
>>>
>>>In our datacenters the GigaStacks were also a lot more resilient (taking
>>>those unmentioned bumps from engineers running cables).
>>>
>>>And scalability is a big reason.
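>>>
>>>A minimal sketch of one stacked member, assuming a 3548 running IOS with
>>>the GigaStack GBIC in gi0/1 and an SX GBIC in gi0/2 (the interface roles
>>>and trunking details are illustrative only, not an actual config):
>>>
>>>   ! gi0/1: GigaStack GBIC, daisy-chained to the next switch in the stack
>>>   interface GigabitEthernet0/1
>>>    switchport mode trunk
>>>
>>>   ! gi0/2: left free for an SX uplink trunk or a gig access port
>>>   interface GigabitEthernet0/2
>>>    switchport mode trunk
>>>
>>>Carrying the stack on one GBIC slot is what leaves the second slot open
>>>for the uplink, which is the flexibility described above.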
>>>
>>>
>>>-----Original Message-----
>>>From: Chuck Church [mailto:ccie8776@rochester.rr.com]
>>>Sent: Monday, December 16, 2002 8:59 PM
>>>To: Bob Sinclair; ccielab@groupstudy.com
>>>Subject: Re: Gigastack - What is the point?
>>>
>>>
>>>Bob,
>>>
>>> Price is probably a major reason. Last time I checked, the GigaStacks
>>>are cheaper than SX GBICs. Also, a lot of companies stick with 2900s and
>>>3500s for closets. 4000s and up are considered distribution- and core-level
>>>switches, with a price to match. Price per port is much cheaper for 2900s
>>>and 3500s than for a 4006 with a Sup 2 and line cards. Since most networks
>>>tend to grow rather than shrink, upgradability is also a factor. Once you've
>>>maxed out a 4003 or 4006, you've got a big cost to add another chassis.
>>>With stackables, it's much cheaper. Of course there are networks out there
>>>that justify a 4000 or higher at the access layer, but those are special
>>>circumstances.
>>>
>>>Chuck Church
>>>CCIE #8776, MCNE, MCSE
>>>
>>>
>>>----- Original Message -----
>>>From: "Bob Sinclair" <bsin@cox.net>
>>>To: <ccielab@groupstudy.com>
>>>Sent: Monday, December 16, 2002 8:53 PM
>>>Subject: OT: Gigastack - What is the point?
>>>
>>>
>>>>Switch gods:
>>>>
>>>>Any of you folks installed gigastack 35xx or 29xx? I really don't see
>>>>much of an advantage to this technology, so I wonder what I am missing.
>>>>Sure, you can manage a bunch of switches with one IP address through a
>>>>graphical interface. BFD.
>>>>
>>>>The fast failover and minimal uplinks would be cool if you could stack
>>>>multiple switches on different floors, but as I read the specs, the
>>>>switches must be within 1 meter of each other. If you need multiples of
>>>>48 ports in one closet, why not just use a modular switch?
>>>>
>>>>I have read the docs on CCO, but I don't really see what this technology
>>>>really buys us, beyond a few corner cases. Any feedback or links
>>>>appreciated.
>>>>
>>>>Bob Sinclair
>>>>CCIE #10427
This archive was generated by hypermail 2.1.4 : Fri Jan 17 2003 - 17:21:47 GMT-3