I was actually just drawing this up for a similar situation.
You're correct, traffic would traverse that link. Diversifying the
VLANs across both cores simply puts both cores to work. You could, in
essence, put all the SVIs, HSRP primaries, and STP roots on one box
and leave the other for failover, but then you'd beat up the backplane
on one switch while the other sat sleeping.
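
As a rough sketch of the split-load approach (the VLAN numbers and
addressing here are just placeholders), odd VLANs active on Core1 and
even on Core2, with preempt so the primary takes back over after a
failure:

    ! Core1 - HSRP active for the odd VLAN, standby for the even
    interface Vlan11
     ip address 10.0.11.2 255.255.255.0
     standby 11 ip 10.0.11.1
     standby 11 priority 110
     standby 11 preempt
    interface Vlan12
     ip address 10.0.12.2 255.255.255.0
     standby 12 ip 10.0.12.1
     standby 12 priority 90
     standby 12 preempt

    ! Core2 - the mirror image, active for the even VLAN
    interface Vlan11
     ip address 10.0.11.3 255.255.255.0
     standby 11 ip 10.0.11.1
     standby 11 priority 90
     standby 11 preempt
    interface Vlan12
     ip address 10.0.12.3 255.255.255.0
     standby 12 ip 10.0.12.1
     standby 12 priority 110
     standby 12 preempt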
Now in the next example (10G uplinks to each core) one port will be
forwarding and one will be blocking per VLAN, *provided you're not
using VSS*. So VLAN 1 would traverse link 1 (STP would block link 2)
and VLAN 2 would traverse link 2 (STP would block link 1).
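
To steer that per-VLAN forwarding, you'd align the STP root for each
VLAN with its HSRP active core. A sketch, using the same placeholder
VLANs as above:

    ! Core1 - root for the odd VLAN, backup root for the even
    spanning-tree vlan 11 root primary
    spanning-tree vlan 12 root secondary

    ! Core2 - the mirror image
    spanning-tree vlan 11 root secondary
    spanning-tree vlan 12 root primary

That way the server switch's uplink to Core1 forwards for VLAN 11 and
blocks for VLAN 12, and vice versa, so each 10G uplink carries roughly
half the VLANs.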
If you're worried about saturating the 10G between the cores, double
them up in a port channel and you'll have 20G between the two boxes.
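
A minimal LACP bundle sketch (interface numbers are placeholders; some
platforms also want "switchport trunk encapsulation dot1q" before the
trunk command):

    ! On each core - bundle the two 10G inter-core links into one trunk
    interface range TenGigabitEthernet1/1 - 2
     channel-group 1 mode active
    interface Port-channel1
     switchport mode trunk

Keep in mind the channel hashes per flow, so any single flow still
tops out at 10G; the 20G is aggregate.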
Does that make sense, or am I off the mark?
HTH,
JB
On Mon, May 2, 2011 at 12:27 PM, Ahmed <ahmedsalim_at_gmail.com> wrote:
> In a collapsed core, what would be the ideal place for servers?
>
> 1. Cost wise
> 2. Design wise
>
> I'm assuming odd SVIs on Core1 and even on Core2.
>
> The problem:
>
> If servers connect directly to the core, assuming the default gateway
> is on Core1, traffic for the even VLANs (whose HSRP primary group, and
> hence STP root, would be on Core2) traverses the CORE-CORE link to
> reach the servers ...
>
> If servers connect through a switch with a 10 gig uplink to each core,
> how would traffic for each (odd and even) VLAN flow to the servers ...
>
> Ahmed