FW: What's your View about these

From: Guyler, Rik (rguyler@shp-dayton.org)
Date: Wed Jul 26 2006 - 09:54:21 ART


I very much tend to agree with not preferring 3750s here but the original
poster stated that's what they are going with and I am not the one to try to
change that decision. Personally, I have had good success with the 3750
line but I have not tried to get "fancy" with them either. Very small
network core/dist layer responsibilities or large L3 concentration for
the access layer is about all I would throw at these.

With 60-something 4000/4500s in my network (Sup4), I can say that I like
them pretty well. In some of our dist layers, we throw a lot (I mean a
lot...like radiology images) of traffic at them and they just keep chugging
along. We also use them in our data centers for the server farms. I would
have preferred 6500s for this use but the 4500s were here before me. Our
core is exclusively 6500s with the 4500s comprising our distribution layers,
all fully redundant (ever swap a Sup in the middle of the day and not get
yelled at?).

We too are looking at replacing all of our 7200VXRs (and one old 7500) with
6500s for our Metro WAN stuff. We're finally getting affordable Gig metro
fiber in our area so we'll be rolling that out next year most likely
(hopefully).

Rik

-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of Mark
Lasarko
Sent: Tuesday, July 25, 2006 8:28 PM
To: ciscobee@gmail.com; rguyler@shp-dayton.org
Cc: ccielab@groupstudy.com
Subject: Re: FW: What's your View about these

I really like the 4500's (4506 and 4507R's w/ SupIV's, specifically) with
the NetFlow Services Cards installed. This seems to work well to collapse
access/distribution in high-density areas and allows us to maintain
visibility into the network, while providing for redundancy where required
(even for the NFCards). As a bonus we get to utilize our existing blades
from the last-generation 4006's. We use a mix of these in the core, along
with some cross-connected 6500's to handle redundancy; others are loaded
with WS-X6748-GE-TX (1.3 MB per port) and other "server-farm" blades for the
heavier traffic... The mixed-chassis deployment works for us today, and we
expect to have ample room to scale tomorrow. Beyond that, 7600's connect to
our core PE 6500's w/ FWSM's installed as yet another demarcation point
between our core and our MPLS cloud.
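
If anyone wants a concrete picture of the NetFlow piece, here's a minimal
sketch in generic IOS syntax - the collector address/port and the SVI are
made-up examples, and the exact interface-level command varies a bit by
platform and IOS release:

! Export flows to a (hypothetical) collector
ip flow-export version 5
ip flow-export destination 192.0.2.50 9996
!
! Enable flow accounting on the L3 interfaces you want visibility into
! (some platforms/releases use "ip flow ingress" instead)
interface Vlan100
 ip route-cache flow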

I have not seen enough traffic to justify any 10G; Gigabit EtherChannel is
more than enough for us and keeps us from putting out the $'s for 10G links.
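
For reference, bundling a pair of gig uplinks is only a few lines - a rough
sketch in generic IOS syntax with made-up port numbers, using LACP (PAgP or
a static "on" bundle work just as well):

interface range GigabitEthernet1/1 - 2
 ! Put both physical uplinks into the same bundle via LACP
 channel-group 10 mode active
!
interface Port-channel10
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk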

I don't"do" 3750's any more - melted down too many, gave up on that product
line (especially the Metro). Combine that with the "lacking stacking-SPoF"
technology and I would rather dual-home 3560's - YMMV, As they say; Opinions
are like...
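
For what it's worth, dual-homing is about as simple as it gets - a rough
sketch with made-up 3560-style port numbers: one trunk uplink to each
distribution box and rapid spanning tree blocking the redundant path:

spanning-tree mode rapid-pvst
!
interface GigabitEthernet0/25
 description Uplink to distribution switch A
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet0/26
 description Uplink to distribution switch B
 switchport trunk encapsulation dot1q
 switchport mode trunk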

Looking forward, I want to replace the 7600 PE devices feeding customer
sites with ME6500's, which *should* be available within the next quarter or
so at a respectable price, along with appropriate access switches. Following
this blueprint we not only maximize performance, but we also retain as much
of our existing investment as possible by re-using previously procured
blades, and we are able to secure/monitor the network and make every port
VoIP ready.
My $.02
~M

>>> WorkerBee 07/25/06 2:15 PM >>>
By connecting the servers directly to the core/distribution, you need to
administer a lot of Layer 2 issues such as HSRP, VTP domains, etc., which
makes your Layer 3 core/distribution less scalable.

Hence I prefer to offload that Layer 2 to a dedicated pair of server access
switches such as 6500s or 4500s with 10GE uplinks, and run Layer 3 routing
between my server-farm switches and the backbone.

That gives me a Layer 3 demarcation point between the server farms and the
backbone, which is useful for applying security policy.
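
In config terms, that demarcation is just a routed uplink on the server
access switch instead of a trunk plus HSRP toward the backbone. A generic
sketch with made-up addressing, using OSPF as the example routing protocol:

! Routed point-to-point uplink from the server access switch to the backbone
interface TenGigabitEthernet1/1
 no switchport
 ip address 192.0.2.1 255.255.255.252
!
router ospf 1
 network 192.0.2.0 0.0.0.3 area 0
!
! The server VLANs stay local to the server access pair; only their SVI
! subnets are advertised toward the backbone.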

On 7/26/06, Guyler, Rik wrote:
> If you connect the servers into a single device of any sort it becomes
> a single point of failure. In our case, our servers are connected to
> two separate switches using a failover NIC team. But, that's somewhat
> beyond the scope of network design as such and should be a standard
> adopted by the server team provided the network design supports such
> initiatives.
>
> Rik
>
> -----Original Message-----
> From: James Ventre [mailto:messageboard@ventrefamily.com]
> Sent: Tuesday, July 25, 2006 1:32 PM
> To: Guyler, Rik
> Cc: 'ccielab@groupstudy.com'
> Subject: Re: What's your View about these
>
> I'd consider your 3750 "stack" a single point of failure, if you're
> using the stacking feature. I recently came across a scenario where
> the stacking software between the 3750's wasn't functioning and no
> traffic passed - in or out.
>
> James
>
>
>
> Guyler, Rik wrote:
> > Our server farm connects into the network at the distribution layer,
> > where we typically have better equipment and higher bandwidth
> > backplanes. In our case, we use 4500 switches with Sup4s, which has
> > been an excellent combination supporting 300+ servers, mainframes,
> > minis, AS400s, etc.
> >
> > The 3750 series switches should also be a pretty good solution in
> > this situation but the backplane will be much less than a more
> > robust chassis switch. Be conservative on the number of switches in
> > a single stack since I seem to recall the backplane in a stack runs at
> > 32 Gb/s.
> >
> > I would not connect anything directly into the core except for
> > distribution and other core switches. Sometimes the demarcation point
> > is not clearly defined, so if your core and distribution layers are
> > collapsed into a single device or layer then, from an architectural
> > perspective, the 3750 stacks would be considered access layer - but the
> > reality is that they are still only a single hop away from the core, so
> > don't get too wrapped up in the terminology.
> >
> > Rik
>


