Re: Choosing WRED Min and Max Thresholds

From: Tom Kacprzynski <tom.kac_at_gmail.com>
Date: Tue, 24 Jan 2012 22:12:36 -0600

Petr,
Sorry to bring up this old email, but I just came across it and found it
very interesting, especially your calculation of the min and max threshold.

"Pipe-Size = RTT_Seconds*Bandwidth_in_Bps/(
MTU_Bytes*8)
Min_Threshold = 15%*PipeSize
Max_Threshold = 100%*PipeSize
Marking_Probability_Denominator = 1
Exponential_Weighting_Constant = log2(Bandwidth_Bps/(10*MTU_Bytes*8))"

So if I understand this correctly, you are using the bandwidth-delay product
to calculate the class-map's queue-limit in terms of packets?
Is the 15% an arbitrary number you picked for Min_Threshold?
Additionally, are you suggesting that the class-maps should have their
queue-limit set to the bandwidth-delay product expressed in packets?
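
Just to check my reading of the math, here is a rough Python sketch of those
formulas worked out for a DS3; the 44.736 Mbps rate, 1500-byte MTU, and 70 ms
RTT are example numbers of my own, not values from your post:
---
# Sketch of the quoted formulas for a DS3 (example inputs only:
# 44.736 Mbps line rate, 1500-byte MTU, 70 ms RTT -- my assumptions).
import math

bandwidth_bps = 44_736_000
mtu_bytes = 1500
rtt_seconds = 0.070

pipe_size = rtt_seconds * bandwidth_bps / (mtu_bytes * 8)  # ~261 packets in flight
min_threshold = 0.15 * pipe_size                           # ~39 packets
max_threshold = 1.00 * pipe_size                           # ~261 packets
ewc = math.log2(bandwidth_bps / (10 * mtu_bytes * 8))      # ~8.5

print(round(pipe_size), round(min_threshold), round(max_threshold), round(ewc))
---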

Thank you,

Tom Kacprzynski

On Mon, Nov 1, 2010 at 10:44 PM, Petr Lapukhov
<petr_at_internetworkexpert.com> wrote:

> Gregory,
>
> You are confusing different concepts here. First, congestion. Congestion is
> defined as a condition where the "input request rate" exceeds the system's
> "service rate". In our case, the "service rate" is the DS3 serialization rate
> (accounting for framing, hardware driver processing time, etc.). The "request
> rate" is the number of bits per second switched egress toward the DS3 circuit.
> As soon as the "request rate" exceeds the "service rate", the egress queue
> starts growing without bound. In other words, congestion can be defined as the
> condition where the "service queue" is non-empty, signaling that the output
> interface was not able to serialize a packet without delaying it.
>
> In fact, if you define interface "utilization" as the fraction of a time
> interval that the egress interface is busy sending packets (from 0 to 100%),
> then the average queue depth can be approximated, to some extent, as:
>
> Depth = Utilization/(1 - Utilization) [based on basic queueing
> (mass-service) theory]
>
> Keep in mind, though, that this formula quickly becomes inaccurate as
> utilization approaches 100%: queueing theory near full link utilization is
> still not well developed. However, you can quickly see that with 50%
> interface utilization, the *average* queue depth will be:
>
> Depth_50% = 0.5/(1 - 0.5) = 0.5/0.5 = 1 packet.
>
> That is, if during any random time interval the interface spends 50% of the
> time serializing packets and the remaining 50% idle, then the queue depth
> will most likely be one packet. Notice that the queue depth grows rapidly as
> utilization approaches 90%, illustrating that keeping an interface utilized
> above 50% may have significant performance drawbacks.
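>
> As a quick illustration (a rough Python sketch; the utilization values are
> just examples), plugging a few numbers into the same formula:
>
> ---
> # Average queue depth predicted by Depth = U/(1 - U)
> for u in (0.50, 0.75, 0.90, 0.95, 0.99):
>     print(f"utilization {u:.0%}: ~{u / (1.0 - u):.0f} packets on average")
> # 50% -> 1, 75% -> 3, 90% -> 9, 95% -> 19, 99% -> 99
> ---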
>
> Now for RED/WRED. At this point, you should clearly understand that the
> interface queue, whatever it is, simply holds the *excess* packets that the
> interface was not able to schedule due to overutilization. These packets will
> be serviced at the interface's physical rate as soon as the interface is
> available for sending. WRED is just an active *queue management* technique
> that accounts for TCP behavior. By dropping packets early, it slows down some
> endpoints and alleviates the excessive congestion on the interface that may
> otherwise result from TCP flow synchronization. The term "congestion
> avoidance" is misleading; it should more correctly be "congestion
> alleviation": WRED works only when the interface is congested, i.e. has a
> non-empty output queue. As for the optimum RED/WRED parameter values, those
> are determined empirically, and below are some "starting" values for your
> consideration.
>
> RED parameters (assuming MTU-sized packets). These are just starting values;
> adjust them empirically to achieve the best performance in your particular
> scenario:
> ---
> Pipe-Size = RTT_Seconds*Bandwidth_in_Bps/(MTU_Bytes*8)
> Min_Threshold = 15%*PipeSize
> Max_Threshold = 100%*PipeSize
> Marking_Probability_Denominator = 1
> Exponential_Weighting_Constant = log2(Bandwidth_Bps/(10*MTU_Bytes*8))
>
> As you can see, those are based on some TCP performance metrics, notably the
> "pipe size".
>
> HTH
> --
> Petr Lapukhov, petr_at_INE.com
> CCIE #16379 (R&S/Security/SP/Voice)
> CCDE #20100007
>
> Internetwork Expert, Inc.
> http://www.INE.com
> Toll Free: 877-224-8987
> Outside US: 775-826-4344
>
> 2010/11/1 Gregory Gombas <ggombas_at_gmail.com>
>
> > Thanks for responding, Matt.
> >
> > Your explanation is consistent with what I recall about the software
> > queueing mechanism - which is that packets only end up in the software
> > queue when there is congestion on the link.
> >
> > But here comes the paradox:
> >
> > If WRED is a congestion avoidance mechanism, how can it avoid congestion
> > when it only acts on packets in the software queue (which only appear
> > during times of congestion)?
> >
> > On Mon, Nov 1, 2010 at 10:48 AM, Matt Eason <matt.d.eason_at_gmail.com>
> > wrote:
> >
> > > Hi Greg,
> > >
> > > I think the important point in this discussion is that the WRED values
> > > only apply to packets that are sitting in the software queue waiting to
> > > be processed. For example, if your link is running at 10% of its rated
> > > speed, i.e. 4.4 Mbps on a DS3, then the packets are going to be sent out
> > > on the wire without any delay and are not subject to software queueing,
> > > as the circuit speed is fast enough to serialise the packets straight
> > > onto the wire.
> > >
> > > On the other hand, if you were sending enough data to clog your software
> > > queue, WRED would kick in (depending on the queue depth). This is where
> > > the min & max thresholds come into play.
> > >
> > > Most networks I have worked on run with the default values; however,
> > > certain scenarios may benefit from custom values. It just depends :)
> > > Cheers,
> > >
> > > Matt
> > >
> > > On Mon, Nov 1, 2010 at 9:34 PM, Gregory Gombas <ggombas_at_gmail.com> wrote:
> > >
> > >> Hi Gang,
> > >>
> > >> I am trying to choose the best WRED Min and Max Thresholds for a DS3.
> > >>
> > >> So far the best guide I could find was here:
> > >>
> > >> http://www.cisco.com/en/US/docs/ios/11_2/feature/guide/wred_gs.html
> > >>
> > >> However, neither the Cisco default values nor the recommended values on
> > >> that page make any sense.
> > >>
> > >> For example, the default value for the WRED maximum threshold on a DS3
> > >> interface is 40 packets.
> > >>
> > >> Assuming a maximum packet size of 1500 bytes, that would mean the most
> > >> throughput you would get on a DS3 interface with WRED would be 40 packets
> > >> * (1500 bytes per packet) * (8 bits per byte) = 480,000 bps, or 480 kbps.
> > >>
> > >> Which means you would be at full drop at only 480 kbps?!?
> > >>
> > >> Even if you took the recommended setting of 367 packets for maximum
> > >> threshold:
> > >>
> > >> 367 packets * (1500 bytes per packet) * (8 bits per byte) = 4,404,000 bps,
> > >> or 4,404 kbps (about 1/10 the bandwidth of a DS3)
> > >>
> > >> I know the goal of WRED is to drop packets before you reach congestion,
> > >> but dropping all packets when reaching 1/10th of the DS3 bandwidth seems
> > >> a little ridiculous.
> > >>
> > >> Am I missing something here?
> > >>
> > >> Thanks,
> > >> Gregory Gombas
> > >> CCIE #19649
