I have not been there ... but a few thoughts popped into my head as I was
reading your email. Perhaps kick these around in a lab; I hope they are
not a wild goose chase ...
1) Use a track object / rtr (IP SLA) / EEM to check reachability and insert a
secondary route with an AD of 89 when the link goes down or when the remote
peer is not found. Not sure how long this would take to fail over ... there is
a rough sketch after the EEM links below.
NetPro might be your best bet for assistance with an EEM script or some rtr
stuff ... this group is awesome, but I do not see these questions come up
here as much.
Some cool links around EEM:
http://en.wikipedia.org/wiki/Embedded_event_manager#Cisco_EEM_.28Embedded_Event_Manager.29
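To make (1) a bit more concrete, here is a rough sketch of the IP SLA / track
/ EEM combination. The addresses, interface names, route and timer values are
placeholders, and the exact SLA/track syntax varies by IOS release (older code
uses "ip sla monitor" / "track ... rtr"):

ip sla 10
 icmp-echo 10.0.0.1 source-interface Tunnel0
 frequency 3
ip sla schedule 10 life forever start-time now
!
track 10 ip sla 10 reachability
 delay down 9 up 9
!
! when the probe to the hub fails, push in a backup route with AD 89
event manager applet HUB-DOWN
 event track 10 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "ip route 10.10.0.0 255.255.0.0 Tunnel1 89"
 action 4.0 syslog msg "Tunnel0 peer unreachable - backup route installed"
!
event manager applet HUB-UP
 event track 10 state up
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "no ip route 10.10.0.0 255.255.0.0 Tunnel1 89"
 action 4.0 syslog msg "Tunnel0 peer back - backup route removed"

You could also skip EEM entirely and just hang the backup static route off the
track object directly; the applet version just gives you more knobs (syslog,
extra CLI, etc.).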
2) Use BFD for EIGRP ... not sure if this is supported with the IOS and
platforms you have. You can run it on only the interfaces you want ... and it
can get detection down to around 50 msec. BFD beats protocol timers any day ...
If you do want to change the timers, also look at stub configs on the spokes.
Changing the timers might help some, but you also do not want queries and the
like to cause even more CPU hits ... Along with this, use the summary-address
command everywhere you can ...
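Roughly, the BFD-for-EIGRP part (if your IOS/platform supports BFD on the mGRE
tunnel at all) plus spoke stub and a hub summary would look something like
this -- AS number, interface names, prefixes and BFD timers are placeholders:

interface Tunnel0
 bfd interval 250 min_rx 250 multiplier 3
!
router eigrp 100
 bfd interface Tunnel0
!
! on the spokes - keeps them out of the query domain
router eigrp 100
 eigrp stub connected summary
!
! on the hub tunnel - one summary toward the spokes
interface Tunnel0
 ip summary-address eigrp 100 10.0.0.0 255.0.0.0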
3) Lastly Dale, is there any way for you to run multiple connections
configured as bundles? As you know, when a member of a bundle goes down, the
failover often appears seamless. I do not think this fits your
environment ... and I am sure I am not telling you anything you do not
already know.
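For what it is worth, the sort of bundle I have in mind is plain multilink
PPP; a minimal sketch (interface names and addressing are made up, and your
WAN access types may not lend themselves to this at all):

interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1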
Not sure if this helped ya much ... HTH,
Andrew
On Fri, Sep 18, 2009 at 6:58 PM, <sheherezada_at_gmail.com> wrote:
> Hi,
>
> One of my customers ran EIGRP with about 250 spokes and
> complained of the hub receiving goodbyes from tens of random spokes at
> random times. The more spokes they added, the more goodbyes showed
> up. EIGRP was tuned slower than the default, to allow for more missed
> hellos, but they wanted more aggressive timings. My guess is that the
> "goodbye" behavior came from EIGRP trying to send hellos to all the
> spokes at the same time, which made the hub run out of interface buffer
> resources.
>
> What I did was switch to RIP in passive mode with an IP SLA monitor on
> the spoke side. This was primarily because they also had to jump from
> 250 spokes to 400 spokes on a 3845 hub (not too much aggregate
> traffic, just too many spokes). The big advantage here is that RIP is
> stateless and the hub no longer has to send updates. For the
> SLA part, you can tune the tracking object to wait for a couple of
> missed packets with the "delay down" statement (keep it the same as
> the RIP flush interval). You might lose some packets when switching
> over, but we feel comfortable with "timers basic 6 18 18 20". I did
> not try anything more aggressive, but it might work.
>
> You may want to search through the Networkers presentations on the
> subject - they are pretty useful, on this topic in particular and in
> general.
>
> HTH,
>
> Mihai
>
> CCIE2 #16616
>
>
> On Fri, Sep 18, 2009 at 5:45 AM, Dale Shaw <dale.shaw_at_gmail.com> wrote:
> > Hi all,
> >
> > Does anyone have any production experience with aggressively tuned
> > EIGRP timers in a DMVPN environment?
> >
> > We have a requirement to detect loss of end-to-end connectivity over
> > DMVPN (multipoint GRE protected by IPSec) tunnels and re-route traffic
> > very quickly. This is due to a centralised VoIP implementation. We
> > need to detect loss of connectivity and converge inside 12 seconds.
> >
> > Example using 3 second hellos and 9 second hold-time:
> >
> > interface Tunnel0
> > ip hello-interval eigrp 100 3
> > ip hold-time eigrp 100 9
> >
> > The DMVPN design guide doesn't delve into this very much, and uses a
> > 35 second hold-time timer in all examples. GRE keepalives aren't
> > supported with DMVPN.
> >
> > We have a dual cloud, single hub (per cloud) DMVPN design, and each
> > hub maintains ~35 EIGRP adjacencies. We have a mix of WAN access types
> > and speeds, from 4Mbps EoSHDSL to 200Mbps EoSDH. We provide end-to-end
> > QoS for control plane protocols so forwarding of EIGRP packets from CE
> > to CE should be handled appropriately through the provider MPLS core.
> >
> > I have lab tested the above configs (as well as 2 sec/6 sec) and it
> > all works nicely, but I wasn't able to scale it up to accurately
> > represent the production network.
> >
> > If you have 'been there, done that' and have some war stories, or,
> > better yet, success stories, please let me know.
> >
> > Thanks,
> > Dale
> >
--
Andrew Lee Lissitz
all.from.nj_at_gmail.com