One reference to it is here:
http://slaptijack.com/networking/inbound-rate-limiting-on-cisco-catalyst-switches/
"Because TCP window scaling
<http://slaptijack.com/system-administration/what-is-tcp-window-scaling/>
halves the window size for each dropped packet, it's important to set the
burst size at a level that doesn't impact performance. The rule of thumb is
that the burst size should be double the amount of traffic sent at the
maximum rate at a given round-trip time. In this example, I assumed a
round-trip time of 50 ms which results in a burst size of 100 KB."
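To make the arithmetic behind that rule of thumb concrete, here is a minimal sketch. The 8 Mb/s shaped rate is an assumption on my part, chosen because it is the rate that makes a 50 ms RTT work out to the 100 KB burst the article quotes; the function name is just for illustration.

```python
def burst_size_bytes(rate_bps, rtt_seconds):
    """Rule of thumb from the article: burst = 2 * (rate in bytes/s) * RTT."""
    return 2 * (rate_bps / 8) * rtt_seconds

# Assumed 8 Mb/s rate (1 MB/s) with a 50 ms RTT:
# 2 * 1,000,000 bytes/s * 0.05 s = 100,000 bytes (the 100 KB in the quote)
print(burst_size_bytes(8_000_000, 0.050))  # 100000.0
```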
I have also seen this in NetApp documentation for SnapMirror
implementation. There seems to be a relationship between the burst
size, the round-trip delay, and the TCP windowing function. I have not
seen a detailed explanation of the interaction, though.
For lab purposes, I'm not sure what the implied delay would be. I suppose
in a vacuum, I'd just go with the default burst size. Does anyone else
have a different/better way to approach this issue?
Chris
On 3/5/2011 11:14 AM, Carlos G Mendioroz wrote:
> I was referring to the Be = Bc or such simple lab rules.
>
> If you have the source of the one you are mentioning, it would be nice
> to know the basis of such rule. I can not figure out why the round trip
> time would have any relation to this, but may be in a good design
> the numbers happen to match ?
>
> I would expect it to be more sensible to the ratio between AR and CIR.
> As I said, this is in the end compensating jitter. Comonsense applies,
> so to say. Also, may be the app is bursty to begin with! Too many ifs.
>
> -Carlos
Blogs and organic groups at http://www.ccie.net
Received on Sat Mar 05 2011 - 17:05:14 ART
This archive was generated by hypermail 2.2.0 : Fri Apr 01 2011 - 06:35:41 ART