From: Mark Lasarko (mlasarko@co.ba.md.us)
Date: Tue Jul 25 2006 - 21:27:45 ART
I really like the 4500's (4506 and 4507R's w/ SupIV's, specifically) with the NetFlow Services Cards installed. This seems to work well to collapse access/distribution in high-density areas and lets us maintain visibility into the network while providing for redundancy where required (even for the NetFlow cards). As a bonus, we get to re-use our existing blades from the last-generation 4006's. In the core we use a mix of these along with some cross-connected 6500's to handle redundancy, others loaded with WS-X6748-GE-TX blades (1.3 MB of buffer per port) and other "server-farm" blades for the heavier traffic. The mixed-chassis deployment works for us today, and we expect to have ample room to scale tomorrow. Beyond that, 7600's connect to our core PE 6500's w/ FWSM's installed as yet another demarcation point between our core and our MPLS cloud.
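For anyone curious, getting flow export going on the Sup IV's with the services card is only a few lines. The collector address and interface below are just placeholders, not our real setup:

  ip flow-export version 5
  ip flow-export destination 192.0.2.10 9996
  !
  interface Vlan100
   ip route-cache flow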
I have not seen enough traffic to justify any 10G; Gigabit EtherChannel is more than enough for us and keeps us from putting out the $'s for 10G links.
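A two-port Gig bundle between chassis is only a few lines per side; the port numbers are just an example, and whether you run LACP or hard-code the channel is a matter of taste:

  interface range GigabitEthernet1/1 - 2
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 1 mode active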
I don't "do" 3750's any more - melted down too many and gave up on that product line (especially the Metro). Combine that with the stacking technology being a single point of failure and I would rather dual-home 3560's. YMMV - as they say, opinions are like...
Looking forward, I want to replace the 7600 PE devices feeding customer sites with ME6500's, which *should* be available within the next quarter or so at a respectable price, paired with appropriate access switches. Following this blueprint we not only maximize performance, but we also retain as much of our existing investment as possible by re-using previously procured blades, and we are able to secure/monitor the network and make every port VoIP-ready.
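As an aside, making a port VoIP-ready is mostly a template exercise; something along these lines, where the interface and VLAN numbers are only placeholders:

  interface GigabitEthernet2/1
   switchport mode access
   switchport access vlan 10
   switchport voice vlan 110
   spanning-tree portfast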
My $.02
~M
>>> WorkerBee <ciscobee@gmail.com> 07/25/06 2:15 PM >>>
By connecting the servers directly to the core/distribution, you have to administer
a lot of Layer 2 issues such as HSRP, VTP domains, etc., which makes your Layer 3
core/distribution less scalable. I would rather offload that Layer 2 work to a
dedicated pair of server-access switches, such as 6500's or 4500's with 10GE
uplinks, and run Layer 3 routing between my server-farm switches and the backbone.
That gives me a Layer 3 demarcation point between the server farms and the
backbone, which is useful for creating security policy.
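For example, the uplinks from the server-access pair toward the backbone can be
plain routed ports, keeping HSRP and VTP local to the server VLANs. The addresses
and OSPF process number below are only illustrative:

  interface TenGigabitEthernet1/1
   description Routed uplink to backbone
   no switchport
   ip address 10.1.1.1 255.255.255.252
  !
  router ospf 1
   network 10.1.1.0 0.0.0.3 area 0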
On 7/26/06, Guyler, Rik <rguyler@shp-dayton.org> wrote:
> If you connect the servers into a single device of any sort, it becomes a
> single point of failure. In our case, our servers are connected to two
> separate switches using a failover NIC team. But that's somewhat beyond
> the scope of network design as such and should be a standard adopted by the
> server team, provided the network design supports such initiatives.
>
> Rik
>
> -----Original Message-----
> From: James Ventre [mailto:messageboard@ventrefamily.com]
> Sent: Tuesday, July 25, 2006 1:32 PM
> To: Guyler, Rik
> Cc: 'ccielab@groupstudy.com'
> Subject: Re: What's your View about these
>
> I'd consider your 3750 "stack" a single point of failure, if you're
> using the stacking feature. I recently came across a scenario where
> the stacking software between the 3750's wasn't functioning and no traffic
> passed - in or out.
>
> James
>
>
>
> Guyler, Rik wrote:
> > Our server farm connects into the network at the distribution layer,
> > where we typically have better equipment and higher-bandwidth
> > backplanes. In our case, we use 4500 switches with Sup4s, which has
> > been an excellent combination supporting 300+ servers, mainframes,
> > minis, AS400s, etc.
> >
> > The 3750 series switches should also be a pretty good solution in this
> > situation, but the backplane will be much smaller than that of a more
> > robust chassis switch. Be conservative on the number of switches in a
> > single stack, since I seem to recall the stack backplane runs at 32 Gb.
> >
> > I would not connect anything directly into the core except for
> > distribution and other core switches. Sometimes the demarcation
> > point is not clearly defined, so if your core and distribution layers
> > are collapsed into a single device or layer then, from an
> > architectural perspective, the 3750 stacks would be considered access
> > layer. The reality, though, is that they are still only a single hop
> > away from the core, so don't get too wrapped up in the terminology.
> >
> > Rik
>
> _______________________________________________________________________
> Subscription information may be found at:
> http://www.groupstudy.com/list/CCIELab.html