From: Moffat, Ed (EMoffat@FSCI.com)
Date: Thu Jul 22 2004 - 13:01:49 GMT-3
This is a pretty standard implementation. The reason you see blocking
between the 3550 switches is that the path to the root from the middle
switch is better through the top switch, while from the third switch the
path to the root is better through the redundant core. If you think
about it in terms of hops (which is an oversimplification of STP), the
middle switch has two hops to the primary core (the STP root) through
switch-1 and three hops to the root through switch-3. The link to
switch-3 is therefore inferior and blocks.
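To make the comparison concrete, here is a sketch of the root path cost
arithmetic STP actually performs, assuming all the inter-switch links are
Gigabit with the default 802.1D cost of 4 (the topology labels are from
the description above; the numbers are illustrative):

```
! Root path cost as seen from the middle switch (switch-2):
!
!   via switch-1:  4 (sw2->sw1) + 4 (sw1->root core)                = 8
!   via switch-3:  4 (sw2->sw3) + 4 (sw3->core2) + 4 (core2->root)  = 12
!
! 8 < 12, so the port toward switch-1 becomes the root port and the
! port toward switch-3 goes into blocking as the inferior path.
```

You can confirm the costs each switch has calculated with
"show spanning-tree vlan <n>" on the 3550s.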
Whether to route or switch between the core switches depends on a couple
of things:
1) Do the VLANs on the connected switch stacks exist on any other CORE
ports? If the VLANs only belong to the single switch stack, then routing
between the CORE switches is a good way to go, as there is then no
spanning-tree loop. By having CORE1 be the HSRP primary for half the
VLANs on this stack and CORE2 the primary for the other half, you help
balance traffic and make better use of your bandwidth.
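A minimal sketch of that HSRP split might look like the following, where
the VLAN numbers and addresses are purely illustrative:

```
! CORE1 -- HSRP primary for VLAN 10, standby for VLAN 20:
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 20 ip 10.1.20.1
 standby 20 priority 90
 standby 20 preempt
```

CORE2 would mirror this with the priorities reversed (90 for VLAN 10,
110 for VLAN 20), so each core is the active gateway for half the VLANs.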
There is risk in doing this unless you have also solved the following:
2) Is there redundancy within the switch stack? The key question here is
whether a failure of one of the links between the switches in the stack
could cause the network to partition. In that case, both CORE1 and CORE2
think they can deliver to the VLANs, but because of the partition each
would only be able to reach a portion of the workstations attached to
those VLANs, so connectivity would fail.
There are a couple of ways to solve this problem with the 3550 (which
only has two Gigabit ports). The first is to add an FE or FEC connection
between the switches to serve as a redundant path. If you do this,
however, you will create a spanning-tree loop within the VLAN, and STP
will force one of the FEC connections into blocking mode. This solves
the loop issue, but if your goal was to get rid of STP blocking, it
won't accomplish that.
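For reference, a bare-bones FEC bundle between two of the stack switches
could be configured along these lines (port numbers and the choice of
static "on" mode are illustrative):

```
! On each 3550, bundle two FastEthernet ports into one EtherChannel:
interface range FastEthernet0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on
```

STP treats the bundle as a single logical link, so losing one member
does not trigger reconvergence -- but, as noted above, the bundle as a
whole will still be blocked if it closes a loop.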
The second method will only work if you are using the Gigastack
connectors, and even then you run the risk of partitioning the network
depending on the type of failure and where it occurs. Using the
Gigastack GBICs, you can create a physical loop using the cascade method
with a redundant link. This essentially creates a dual-attached bus that
all switches in the stack connect to. One of the GBIC ports will go into
blocking mode to prevent a loop. (This is not an STP block but a
physical-layer block using a Cisco algorithm specific to the Gigastack
GBICs.)
With this configuration, if one of the Gigastack cables fails, any of
the switches fails, either of the connections to the CORE fails, or one
of the Gigastack GBICs in the middle switch fails, you will still have
connectivity to the CORE through the redundant link, and only the
devices on the switch with the failure will be affected. However, if the
Gigastack GBIC on either the top or bottom switch (which also provide
the connections to the CORE) fails, you will still partition your
network into unreachable segments.
So, for my part, I prefer to keep the links between my CORE switches
L3-only if possible and to use FEC connections between my stack switches
to guard against link failures that could partition the network.
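On a 6509 running native IOS, converting the inter-core EtherChannel to
a routed link removes it from spanning tree entirely. A minimal sketch
(the addressing and the choice of a /30 are illustrative):

```
! CORE1 side of the routed core interconnect:
interface Port-channel1
 no switchport
 ip address 10.0.0.1 255.255.255.252
```

CORE2 would use 10.0.0.2/30 on its side, with an IGP such as OSPF
advertising the stack VLAN subnets across the link.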
Here is a good article that raises some of the other issues when load
balancing with HSRP across a L3 core. Pay particular attention to Case
Study 8.
http://www.cisco.com/en/US/customer/tech/tk648/tk362/technologies_tech_note09186a0080094afd.shtml
Hope this makes sense and is helpful.
-Ed Moffat-
CCIE #13196
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
Ted McDermott
Sent: Thursday, July 22, 2004 6:43 AM
To: ccielab@groupstudy.com
Subject: Spanning Tree Blocking Mechanism on Cat 3550
We have a cluster of (3) 3550 workgroup switches. The first 3550 is
connected to the root bridge, a 6509 with spantree priority of 8192. The
third stacked 3550 is connected to the redundant core 6509 with a
spantree priority of 16384. The two 6509s are connected with a gig
etherchannel set to trunk all VLANs. Each time we have deployed this
configuration, the 2nd (middle) stacked 3550 switch has automatically
blocked one of its gig ports to avoid a spanning tree loop. How does
this happen, and doesn't this solve the STP loop issue without having
to turn off trunking between the cores? Is it recommended to route
between cores rather than trunk?
Thanks!
This archive was generated by hypermail 2.1.4 : Sun Aug 01 2004 - 10:12:00 GMT-3