I have set up SW1 with five routers (R1-R5) attached.
      R2  R3  R4
        \  |  /
R1 ----- SW1 ----- R5
          |
       10Mb/s
          |
         SW2
          |
         R6
R1 assigned CoS 1
R2 assigned CoS 2
R3 assigned CoS 3
R4 assigned CoS 4
R5 assigned CoS 5
R6 is attached to switch 2 via a dot1q trunk and has a policy-map configured
to match based on CoS and display the bandwidth per CoS.
SW1 is connected to SW2 via a dot1q trunk and I set the speed on the trunk
to 10Mb/s.
mls qos is enabled on SW1.
mls qos is disabled on SW2.
On routers 1-5 I start a continuous ping:
ping 10.10.10.6 size 1400 repeat 1000000000 timeout 0
This generates about 10 Mb/s per router toward R6.
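To put numbers on why queuing kicks in at all: five senders at roughly 10 Mb/s each into a 10 Mb/s trunk is about 5:1 oversubscription, so most traffic must be dropped and the SRR weights decide who gets through. A quick back-of-the-envelope check (the per-router rate is the figure quoted above, not a new measurement):

```python
# Rough oversubscription math for the test setup described above.
# Assumption: each of R1-R5 offers ~10 Mb/s, per the text.
per_router_bps = 10_000_000
senders = 5                      # R1 through R5
link_bps = 10_000_000            # trunk forced to 10 Mb/s

offered = per_router_bps * senders
oversub = offered / link_bps
print(f"offered: {offered/1e6:.0f} Mb/s, oversubscription: {oversub:.0f}:1")
# ~50 Mb/s offered into 10 Mb/s: roughly 80% of the traffic has to be
# dropped, so the egress scheduler, not the sources, sets the split.
```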
I am studying the effects of queuing between switch 1 and 2 which is
interface fa1/0/19.
I have this configuration of SW1
interface fa1/0/19
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 load-interval 30
 speed 10
 srr-queue bandwidth share 30 10 10 10
 srr-queue bandwidth shape 0 0 0 0
CoS 5 - Queue 1
CoS 1 - Queue 2
CoS 2 - Queue 3
CoS 3 - Queue 3
CoS 4 - Queue 4
SW1#sh mls qos interface  fa1/0/19 que
FastEthernet1/0/19
Egress Priority Queue : disabled
Shaped queue weights (absolute) :  0 0 0 0
Shared queue weights  :  30 10 10 10
The port bandwidth limit : 100  (Operational Bandwidth:100.0)
The port is mapped to qset : 1
SW1#sh mls qos map cos-output-q
   Cos-outputq-threshold map:
     cos:              0    1    2    3    4    5    6    7
                      ----------------------------------------
     queue-threshold: 2-1  2-1  3-1  3-1  4-1  1-1  4-1  4-1
What I would expect is that queue 1 would get 50% of the bandwidth (30/60)
and queues 2 through 4 would each get one sixth (10/60) of the interface
bandwidth.
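That expectation is just the SRR shared-weight arithmetic: in shared mode each queue's guarantee is its weight divided by the sum of the weights, assuming all four queues stay congested. A quick check:

```python
# Expected per-queue share for "srr-queue bandwidth share 30 10 10 10",
# assuming every queue always has packets waiting (all queues congested).
weights = [30, 10, 10, 10]       # Q1..Q4 shared weights from the config
total = sum(weights)
shares = [w / total for w in weights]
for q, s in enumerate(shares, start=1):
    print(f"Q{q}: {s:.1%}")
# Q1 should get 30/60 = 50%; Q2-Q4 should get 10/60 = ~16.7% each.
```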
But instead, it appears something different is happening.
R6#sh policy-map interface | i rate
      30 second offered rate 3698000 bps     CoS 5 Q1
      30 second offered rate 1684000 bps     CoS 1 Q2
      30 second offered rate 1057000 bps     CoS 2 Q3
      30 second offered rate 0 bps          CoS 3 Q3 (CoS 3 turned off in this example)
      30 second offered rate 3049000 bps     CoS 4 Q4
      30 second offered rate 0 bps, drop rate 0 bps  (CoS 0)
Q4 is getting almost as much bandwidth as Q1.  I expected roughly 50/16/16/16,
but the split (as a share of the 10 Mb/s link) is:
Q1 - 37%
Q2 - 16%
Q3 - 10%
Q4 - 30%
What is interesting is that if I enable the priority queue, I get this result:
R6#sh policy-map interface | i rate
      30 second offered rate 6605000 bps  CoS 5 Q1
      30 second offered rate 934000 bps    CoS 1 Q2
      30 second offered rate 477000 bps    CoS 2 Q3
      30 second offered rate 0 bps             CoS 3 (off)
      30 second offered rate 1563000 bps   CoS 4  Q4
      30 second offered rate 0 bps, drop rate 0 bps
Q4 is decreased but is still getting a significant amount of bandwidth.
Q1 - 66%
Q2 - 10%
Q3 - 5%
Q4 - 15%
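For reference, the percentages above are the offered rates taken against the 10 Mb/s link speed. Recomputing them from the raw bps figures copied out of the two show outputs above:

```python
# Observed offered rates (bps) from the two runs above, ordered Q1..Q4
# (CoS 3 is disabled, so Q3 carries only CoS 2 traffic).
link_bps = 10_000_000
no_pq   = [3_698_000, 1_684_000, 1_057_000, 3_049_000]  # priority queue disabled
with_pq = [6_605_000,   934_000,   477_000, 1_563_000]  # priority queue enabled

for label, rates in (("no PQ:  ", no_pq), ("with PQ:", with_pq)):
    pcts = " ".join(f"Q{q}:{r / link_bps:.0%}" for q, r in enumerate(rates, 1))
    print(label, pcts)
# Without PQ the split is ~37/17/11/30 -- nowhere near the expected
# 50/17/17/17; with PQ, Q1 jumps to ~66% but Q4 still outruns Q2 and Q3.
```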
The points of interest to me are:
1) Different CoS values appear to be serviced differently in spite of the
shared bandwidth command.
2) Q4 is special somehow :)
3) Even though the link speed is set to 10 Mb/s, the queue show command still
reports the port bandwidth limit as 100.
SW1#sh mls qos interface  fa1/0/19 que
FastEthernet1/0/19
Egress Priority Queue : disabled
Shaped queue weights (absolute) :  0 0 0 0
Shared queue weights  :  30 10 10 10
The port bandwidth limit : 100  (Operational Bandwidth:100.0)   <---- still 100
The port is mapped to qset : 1
The point of this study is that if I were to receive a question that said
something like:
Configure interface Fa1/0/19 so that Queue1 receives 50% of the interface
bandwidth and Queue 2-4 share the remaining bandwidth equally.
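On paper, the configuration already shown should satisfy that requirement, since SRR shared weights are relative: only the ratio matters, so 30/10/10/10 and 3/1/1/1 are the same split. A quick sanity check of that claim (just the arithmetic, not a claim about what the switch actually does under load):

```python
# SRR shared weights are relative: only the ratio between them matters.
def shares(weights):
    """Per-queue bandwidth fraction implied by a set of shared weights."""
    total = sum(weights)
    return [w / total for w in weights]

# 30/10/10/10 and 3/1/1/1 imply the identical 50% / 16.7% x3 split.
print(shares([30, 10, 10, 10]))
print(shares([3, 1, 1, 1]))
```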
Now...I have no idea what to do....
Thanks in advance for any insight provided,
Chris Grammer
Received on Fri Aug 13 2010 - 14:25:38 ART