From: Howard C. Berkowitz (hcb@gettcomm.com)
Date: Fri Jul 02 2004 - 20:21:49 GMT-3
At 6:31 PM -0400 7/2/04, James wrote:
>Hi Howard,
>
>By "backbones" I meant transit ISP backbones. Not enterprises..
>
>>
>> In the real world, I find confederations most useful to build a
>> complex backbone-of-backbones for enterprises. They let you split
>> IGP domains, and also let you implement complex enterprise (or
>> extranet) interdomain policies.
>
>I would agree to some extent in this.
>
>>
>> Splitting IGP domains can be useful in several ways. The most obvious
>> is scalability by restricting the number of routes and routers in
>> each domain.
>
>IGP domains are usually not a scalability issue on most transit backbones
I agree, especially when they are well-tuned ISIS (although Cisco is
improving OSPF scalability). I was speaking of enterprises, where
there may be lower-powered routers and also much more policy,
filtering, encapsulation, etc. Let me cite two examples of
intercontinental networks that I designed, with different backbone
assumptions. These were a few years back, so take the speeds with
many grains of salt.
One enterprise was a manufacturing company, with very little
intercommunication between the continental regions, but much traffic
between each region and the corporate data center in the northeast US. I
used a collapsed backbone at headquarters, with two main routers on an
FDDI ring. Each region, and the pseudo-region of the corporate data
center, had two links into headquarters, one terminating on each router.
The other was an international transportation company, with extensive
interactions among the regions, as well as physically diverse data
centers that could back up one another. Complex policies had to be
written to be sure that failover went to the nearest center that had
been capacity-engineered to handle the additional load, as opposed to
the center to which there was a low metric or short AS path. There
were other policy considerations imposed by national and regional
transborder data flow privacy requirements. In this case, each
region, which varied from a continent to a part of a continent [1],
was an IGP domain and a confederation AS. The corporate backbone ran
BGP among the confederations as well as to the various multinational
ISP POPs.
[1] A region might be defined simply in terms of traffic patterns, or in
bandwidth costs, or sometimes for policy reasons (e.g., western Europe,
at the time, had the most stringent transborder data privacy rules). We
didn't want to send transit traffic through there if the endpoints didn't
have the same privacy and security rules.
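To make the confederation arrangement concrete, a regional border
router might be configured roughly as follows. All AS numbers and
addresses here are hypothetical illustrations, not the actual design:

```
! Region is member-AS 65001 inside a confederation whose externally
! visible AS is 100 (both numbers assumed for illustration).
router bgp 65001
 bgp confederation identifier 100
 bgp confederation peers 65002 65003
 neighbor 10.1.1.2 remote-as 65002    ! confederation-eBGP to another region
 neighbor 192.0.2.1 remote-as 200     ! true eBGP to a multinational ISP POP
```

Inside each member AS, the region runs its own IGP and its own IBGP
mesh or reflectors; only the confederation identifier is visible to
the outside world.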
>(e.g. AS209, AS7018, AS1668) as they carry only infrastructure next-hops
>required by BGP to make routing decisions, in IGP. Almost everything else is
>carried in IBGP with appropriate community (i.e. no-export or their
>own version
>of no-export) to prevent leak to outside. Most of their scalability
>issues come
>from edge router peerings into the core, which is alleviated easily by
>route-reflectors going down from core->edge.
Hierarchical route reflection, again with due regard for the
persistent oscillation problem, can be very useful here.
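A minimal sketch of one tier of such a hierarchy, with hypothetical
addresses and AS number: a POP-level reflector serving edge routers
as clients, while itself being a client of a core-level reflector:

```
! POP-level route reflector (AS 100 assumed for illustration)
router bgp 100
 neighbor 10.0.0.1 remote-as 100            ! core-level RR; this box is
                                            ! configured as a client there
 neighbor 10.1.0.10 remote-as 100
 neighbor 10.1.0.10 route-reflector-client  ! POP edge router
 neighbor 10.1.0.11 remote-as 100
 neighbor 10.1.0.11 route-reflector-client  ! POP edge router
```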
>
>AS209 and 1668 use exclusively ISIS on their entire international network,
>carrying only TLVs necessary to carry needed next-hops and perform BGP
>traffic engineering by modifying IGP cost of a peering circuit.
>
>> Next, sometimes a topology that would be quite awkward
>> with OSPF or ISIS backbone rules becomes much easier when you have a
>> limited number of interdomain connections, which don't need to follow
>> strict hierarchy.
>
>It's not always about hierarchy these days with these backbones. They want
>something simple for the overall peering topology, and to be able to TE the
>peering routes by simple modification of IGP cost in multi-location peering sessions.
>A flat-IGP network is the best for this type of scenario.
Again, you are talking transit backbone while I've been talking
enterprise backbone. Yes, flat IGP is reasonable there, with the
caveat that you may want to have local optimization in large POPs.
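The IGP-cost knob James describes might look like this on an ISIS
backbone; the interface and metric values are made up for illustration:

```
! Raising the level-2 ISIS metric on one peering circuit steers the
! BGP IGP-cost tie-break (hot-potato exit selection) toward the
! other, now-cheaper exit point.
interface POS1/0
 description peering circuit, exchange A
 ip router isis
 isis metric 30 level-2
```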
>
>> If you are a multinational enterprise, the
>> bandwidths available in continents or regions often tend to differ.
>
>That's totally dependent on your IGP engineering.
Not necessarily an engineering choice, but sometimes an economic one.
Depending on the region, some bandwidths may be prohibitively
expensive. While I will admit it was a few years back, it was
cheaper to interconnect Philippine cities a few hundred miles apart
by backhauling both circuits to the US West Coast and interconnecting there.
I've designed metro-area networks in Scandinavia, where we could
easily run metro gigabit or faster Ethernet, but certainly wouldn't
do that outside the local area. Even for these high-performance
networks, we still had transcontinental links limited to OC-3 or less.
>
>> Routing performance tends to be most predictable when there isn't a
>> huge range of bandwidths in the same IGP domain.
>> ISP backbone links are apt to be traffic-engineered, using at least RSVP and
>> probably MPLS. ISP backbones tend to be fairly flat. POPs feed into
>> the edges of these backbones, and the POPs usually involve route
>> reflectors, often fully meshed within the cluster to avoid some of
>> the oscillation conditions described in RFC 3345. I suspect the CCIE
>> lab does not consider 3345 issues.
>
>I fully agree with this.
>
>
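For completeness, the full-mesh-within-cluster mitigation mentioned
above might look like this on one of a POP's redundant reflectors;
the addresses, AS number, and cluster-id are hypothetical:

```
router bgp 100
 bgp cluster-id 10.1.0.1                    ! assumed shared by the POP's RRs
 neighbor 10.1.0.10 remote-as 100
 neighbor 10.1.0.10 route-reflector-client
 neighbor 10.1.0.11 remote-as 100
 neighbor 10.1.0.11 route-reflector-client
! The clients 10.1.0.10 and 10.1.0.11 also peer IBGP directly with
! each other, supplementing the reflected sessions.
```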
>> >--
>> >James Jun              TowardEX Technologies, Inc.
>> >Technical Lead         Network Design, Consulting, IT Outsourcing
>> >james@towardex.com     Boston-based Colocation & Bandwidth Services
>> >cell: 1(978)-394-2867  web: http://www.towardex.com , noc: www.twdx.net
This archive was generated by hypermail 2.1.4 : Sun Aug 01 2004 - 10:11:46 GMT-3