Custom Queueing - ONCE AND FOR ALL

From: Jason Cash (cash2001@swbell.net)
Date: Wed Aug 06 2003 - 10:55:37 GMT-3


I am nearing my lab date, and a certain few items still nag at me. One is
custom queueing and determining the byte count based on bandwidth
requirements. Now before you go, "not another one!", I have read the
archives here, as well as on Cisco, IPExpert, etc., and they all seem to
contradict one another.

At times, I have felt comfortable with my level of knowledge, but then in
doing some scenarios, I see that my answers do not match theirs, and I
don't want to be penalized in the lab. The formula I have come to agree
with is the one listed on Cisco's website at:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart2/qcconman.htm#xtocid1151317

It states:

1. Divide the percentage of bandwidth you want to allocate to the queue by
the packet size, in bytes.

2. Normalize the numbers by dividing each one by the lowest number.

3. Round each result up to the nearest whole number to get the packet
count for each queue.

4. Convert the packet number ratio into byte counts by multiplying each
packet count by the corresponding packet size.

5. To determine the bandwidth distribution this ratio represents, first
determine the total number of bytes sent after all the queues are
serviced.

6. Then determine the percentage of the total number of bytes sent from
each queue.

7. If the actual bandwidth is not close enough to the desired bandwidth,
multiply the original ratio by the best value, trying to get as close to
three integer values as possible.
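
To make sure I am reading those steps correctly, I coded them up; here is
a rough Python sketch (the function and the names in it are mine, not
anything from the Cisco doc):

from fractions import Fraction
from math import ceil

def cq_byte_counts(targets):
    # targets: list of (bandwidth_percent, packet_size_in_bytes) per queue.
    # Step 1: divide each percentage by its packet size (exact fractions,
    # so the round-up in step 3 isn't thrown off by float noise).
    ratios = [Fraction(pct, size) for pct, size in targets]
    # Step 2: normalize by the lowest ratio.
    low = min(ratios)
    # Step 3: round up to a whole number of packets.
    packets = [ceil(r / low) for r in ratios]
    # Step 4: packets * packet size = byte count per queue.
    counts = [p * size for p, (_pct, size) in zip(packets, targets)]
    # Steps 5 and 6: the split those counts actually deliver.
    total = sum(counts)
    achieved = [round(100 * c / total, 1) for c in counts]
    return counts, achieved

(Step 7, the multiplier, I come back to below.)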

Then there is this note, which only confuses things more:

Note: CQ was modified in Cisco IOS Release 12.1. When the queue is depleted
early, or the last packet from the queue does not exactly match the
configured byte count, the amount of deficit is remembered and accounted
for the next time the queue is serviced. Beginning with Cisco IOS Release
12.1, you need not be as accurate in specifying byte counts as you did when
using earlier Cisco IOS releases that did not take deficit into account.

What does that final note mean to me? Does it eliminate the need for step
7?
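
My best guess at the mechanics (and this is only my reading, with made-up
numbers): post-12.1 CQ behaves like a deficit round robin, where the bytes
a queue sends past its configured count are charged against its next turn,
so over many cycles each queue converges on byte-count/total no matter how
unevenly the packets divide. A throwaway Python simulation of that idea:

def simulate_cq(byte_counts, pkt_sizes, rounds):
    # Toy model: every queue always has packets waiting, all of size
    # pkt_sizes[i]. 'credit' tracks the deficit the 12.1 note describes:
    # overshoot on this visit is remembered and charged to the next visit.
    sent = [0] * len(byte_counts)
    credit = [0] * len(byte_counts)
    for _ in range(rounds):
        for i, bc in enumerate(byte_counts):
            credit[i] += bc
            while credit[i] > 0:           # last packet may overshoot
                sent[i] += pkt_sizes[i]
                credit[i] -= pkt_sizes[i]  # negative leftover = deficit
    return sent

sent = simulate_cq([3000, 1500], [1100, 400], 1000)
total = sum(sent)
print([round(100 * s / total, 1) for s in sent])   # ~[66.7, 33.3]

Pre-12.1, as I understand it, that leftover credit was simply thrown away
at each visit, which is why the old advice was to pick byte counts that
were near-exact multiples of your packet sizes.

With that in mind, I pose the following example: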

DLSW - Telnet = 50% (all based on the default 1500-byte packet size)
IPX - ICMP = 25%
UDP = 15%
default = 10%

50/1500 = .0333; .0333/.0067 = 5 packets; 5 * 1500 = 7500
25/1500 = .0167; .0167/.0067 = 2.5, round up to 3; 3 * 1500 = 4500 (neither
3 nor 4500 is half of the queue above!)
15/1500 = .0100; .0100/.0067 = 1.5, round up to 2; 2 * 1500 = 3000
10/1500 = .0067; .0067/.0067 = 1; 1 * 1500 = 1500

7500 + 4500 + 3000 + 1500 = 16500

7500/16500 = .455 (45.5%, wanted 50%)
4500/16500 = .273 (27.3%, wanted 25%)
3000/16500 = .182 (18.2%, wanted 15%)
1500/16500 = .091 (9.1%, wanted 10%)

With these numbers, every queue misses its target (the second queue gets
27% instead of 25%) and, more importantly, I would probably lose points in
the lab. So you can see my conundrum.

Furthermore, how is link speed factored into this? If I use a multiplier
(I have tried both 2 and 3 and get the same percentage results),
eventually the total byte count will exceed 64k.
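
Running my example through the sketch above, and then applying step 7 the
way I now read it, the multiplier goes on the normalized ratio BEFORE
rounding, not on the finished byte counts (multiplying the finished counts
just scales everything and leaves the percentages alone, which would
explain why 2 and 3 gave me identical results):

from fractions import Fraction
from math import ceil

# Normalized ratios from my example: 50:25:15:10 over a uniform 1500-byte
# packet size works out to 5, 2.5, 1.5, 1.
norm = [Fraction(5), Fraction(5, 2), Fraction(3, 2), Fraction(1)]

for mult in (1, 2, 3):
    packets = [ceil(n * mult) for n in norm]   # step 7, then step 3
    counts = [p * 1500 for p in packets]       # every queue is 1500 bytes
    total = sum(counts)
    share = [round(100 * c / total, 1) for c in counts]
    print(mult, counts, share)

# mult=1: [7500, 4500, 3000, 1500]   -> [45.5, 27.3, 18.2, 9.1]
# mult=2: [15000, 7500, 4500, 3000]  -> [50.0, 25.0, 15.0, 10.0]
# mult=3: [22500, 12000, 7500, 4500] -> [48.4, 25.8, 16.1, 9.7]

So a multiplier of 2 lands exactly on target here while keeping every
count well under 64k, but I still don't know whether that is what the lab
expects.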

The solution provided the following:

DLSW - Telnet = 5000 byte count
IPX - ICMP = 2500 byte count
UDP = 1500 byte count
default = 1000 byte count

Their math adds up, but the solution doesn't show how they arrived at
those numbers. I assume they used a packet size of 1000 (which I thought
contradicts what Cisco states).
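
A quick check confirms their numbers do hit the targets dead on (each byte
count is just the target percentage times 100, whatever packet size they
assumed):

counts = [5000, 2500, 1500, 1000]
total = sum(counts)                         # 10000
print([100 * c / total for c in counts])    # [50.0, 25.0, 15.0, 10.0]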

I thank anyone who has taken the time to read this and hope I can get
clarification before my lab attempt.


