Pavel,
Is it not the case that the absence of a real-time OS can be modeled
just like any other resource being multiplexed? So in the end, you will
have bursts that happen when the CPU was not there in time for the first
packet in the train?
-Carlos
Pavel Bykov @ 16/09/2011 02:28 -0300 dixit:
> In practice it's not as easy or straightforward, but once you know how to
> approach it, you can estimate the burst really quickly.
> With the UDP streams that were mentioned, the burst depends primarily on
> consistent performance of the head end, which usually is not consistent,
> because it typically does not run a real-time OS.
> With TCP it's all about window size and statistical aggregation.
> There are well defined general rules within the ITU-T.
> This document I found to be gold:
> http://www.itu.int/rec/T-REC-Y.1541/en
>
>
>
> On Fri, Sep 16, 2011 at 12:31 AM, Joe Astorino <joeastorino1982_at_gmail.com>wrote:
>
>> I am having a real hard time finding good information on this topic for use
>> in the real world. In the lab, we would usually just configure the burst
>> size we are told on a Cat 3560. I have done a LOT of reading on it, and
>> there are a lot of conflicting stories with regards to this.
>>
>> Basically, I am trying to find out how to calculate an optimal burst value
>> on a 3560 QoS policy doing policing. As you probably know the syntax looks
>> like this:
>>
>> police [rate in bits/s] [burst size in bytes]. Remember, this is policing
>> not shaping so the classic shaping formula of tc = bc/cir has no relevance
>> here mainly because the token refresh rate is not based on a static set
>> amount of time. The burst size is actually the size of the token bucket
>> itself in bytes, not a rate of any kind and it is filled as a function of
>> the policed rate and the packet arrival rate. The refill rate of the bucket
>> is not based on a static amount of time like in FRTS for example. It
>> basically says "how long was it since the last packet...multiply that times
>> the policed rate, and divide by 8 to give me bytes". In other words it
>> pro-rates the tokens. Makes sense.
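[The pro-rated refill Joe describes can be sketched as a small simulation. This is a minimal illustration with hypothetical names (`Policer`, `conform`), not the 3560's actual implementation: on each packet arrival, tokens accrue as (time since the last packet) x policed rate / 8, capped at the bucket size, which is the configured burst in bytes.]

```python
class Policer:
    """Minimal single-rate token-bucket policer sketch (hypothetical)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate_bps = rate_bps      # policed rate, bits per second
        self.burst = burst_bytes      # bucket size = configured burst, bytes
        self.tokens = burst_bytes     # bucket starts full
        self.last_arrival = 0.0       # arrival time of previous packet, seconds

    def conform(self, now, pkt_bytes):
        """Return True if the packet conforms (would be forwarded)."""
        elapsed = now - self.last_arrival
        self.last_arrival = now
        # Pro-rated refill: elapsed time * rate, converted bits -> bytes,
        # never exceeding the bucket (burst) size.
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate_bps / 8)
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False
```

[With an 8000-byte bucket and 1500-byte packets, only the first five back-to-back packets conform; the rest are dropped until enough time passes for the pro-rated refill to catch up, which is consistent with the frame drops described below.]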
>>
>> Anyways...I have found two sort-of "methods" for calculating this, but
>> they are so far off from one another that I am not quite sure which one
>> to use in the real world.
>>
>> Method 1: The classic CAR formula we see on routers: (rate * 1.5) / 8.
>> This basically gives you 1.5x the policed rate, and converts it to bytes.
>> Makes sense.
>> Method 2: 2x the amount of traffic sent during a single RTT.
>>
>> In my case, I am trying to police a video conferencing endpoint to 3 Mbps,
>> so by method 1 that gives me a burst size of 562,500 bytes. Using method 2,
>> let's just say I have an average RTT of 100 ms. That method would yield a
>> burst size of 75,000 bytes. That is a HUGE difference.
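[The two numbers above follow directly from the formulas; a quick check of the arithmetic, using the rate and RTT stated in the message:]

```python
rate_bps = 3_000_000  # policed rate from the message, 3 Mbps

# Method 1: classic CAR recommendation, 1.5x the rate, converted to bytes
burst_car = rate_bps * 1.5 / 8        # = 562,500 bytes

# Method 2: twice the traffic sent during a single RTT (100 ms assumed)
rtt_s = 0.100
burst_rtt = 2 * rate_bps * rtt_s / 8  # = 75,000 bytes

print(burst_car, burst_rtt)           # 562500.0 75000.0
print(burst_car / burst_rtt)          # 7.5 -- the "HUGE difference"
```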
>>
>> This came about because the video endpoint was dropping frames. I noticed
>> the policed rate in the policy was 3,000,000 but the burst size was 8000
>> bytes (the lowest possible value). When I changed the burst based on a
>> 100ms RTT and the above formula the problem went away, but now I am having
>> doubts on the proper value to use here.
>>
>> Does anybody have any insight on how to actually calculate this properly?
>>
>> --
>> Regards,
>>
>> Joe Astorino
>> CCIE #24347
>> Blog: http://astorinonetworks.com
>>
>> "He not busy being born is busy dying" - Dylan
>>
>>
>> Blogs and organic groups at http://www.ccie.net
>>
>> _______________________________________________________________________
>> Subscription information may be found at:
>> http://www.groupstudy.com/list/CCIELab.html
>>
>
>
-- 
Carlos G Mendioroz <tron_at_huapi.ba.ar> LW7 EQI Argentina

Received on Fri Sep 16 2011 - 07:45:14 ART
This archive was generated by hypermail 2.2.0 : Sat Oct 01 2011 - 07:26:25 ART