Re: 3560 - Why is output queue 1 always limited to 400k with

From: Hobbs (deadheadblues@gmail.com)
Date: Tue Sep 30 2008 - 16:47:39 ART


Thanks Petr. Here is my SW1 config. Below are two examples (bandwidth limit 20
and limit 40). Also, I think PQ may not be the issue, but rather queue 1
itself: when I disabled PQ, I was still limited to 400K. I'm running Version
12.2(25)SEE4, and I wonder if this code has an internal limit on the queue.
When I change R5 to cos 3 (and thus queue 3), it can send faster (I notice the
latency on pings is a lot lower too).
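
A side note in case it helps: the per-queue counters should show where the
drops are happening. Assuming the usual 12.2(25)SE syntax, something like:

SW1#show mls qos interface fastEthernet 0/13 queueing
SW1#show mls qos interface fastEthernet 0/13 statistics

The statistics output breaks out enqueued/dropped packets per output queue and
threshold, which should confirm whether queue 1 is tail-dropping right around
the 400K mark.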

1) bw limit of 20, PQ enabled
2) bw limit of 40, PQ enabled

1) bw limit of 20, PQ enabled

SW1#show run
Building configuration...

Current configuration : 1658 bytes
!
version 12.2
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname SW1
!
no aaa new-model
ip subnet-zero
no ip domain-lookup
!
mls qos
!
!
no file verify auto
spanning-tree mode pvst
spanning-tree extend system-id
!
vlan internal allocation policy ascending
!
!
interface FastEthernet0/1
 mls qos cos 1
 mls qos trust cos
!
interface FastEthernet0/2
!
interface FastEthernet0/3
 mls qos cos 3
 mls qos trust cos
!
interface FastEthernet0/4
!
interface FastEthernet0/5
 mls qos cos 5
 mls qos trust cos
!
interface FastEthernet0/6
!
interface FastEthernet0/7
!
interface FastEthernet0/8
!
interface FastEthernet0/9
!
interface FastEthernet0/10
!
interface FastEthernet0/11
!
interface FastEthernet0/12
!
interface FastEthernet0/13
 load-interval 30
 speed 10
 srr-queue bandwidth share 33 33 33 1
 srr-queue bandwidth shape 50 0 80 0
 srr-queue bandwidth limit 20
 priority-queue out
!
interface FastEthernet0/14
 shutdown
!
interface FastEthernet0/15
 shutdown
!
interface FastEthernet0/16
 shutdown
!
interface FastEthernet0/17
 shutdown
!
interface FastEthernet0/18
 shutdown
!
interface FastEthernet0/19
 shutdown
!
interface FastEthernet0/20
 shutdown
!
interface FastEthernet0/21
 shutdown
!
interface FastEthernet0/22
!
interface FastEthernet0/23
!
interface FastEthernet0/24
 shutdown
!
interface GigabitEthernet0/1
!
interface GigabitEthernet0/2
!
interface Vlan1
 no ip address
 shutdown
!
ip classless
ip http server
ip http secure-server
!
!
!
control-plane
!
!
line con 0
 exec-timeout 0 0
 logging synchronous
line vty 0 4
 no login
line vty 5 15
 no login
!
end

Here is the meter on R2. PREC1 will eventually approach almost 1M; PREC3 is
125K because I shaped it with a weight of 80. No matter what bandwidth limit I
use, R5 always tops out at 400K (it moves between 396 and 400K).

R2#show policy-map interface
 Ethernet0/0

  Service-policy input: TRACK

    Class-map: PREC1 (match-all)
      53704 packets, 81307856 bytes
      5 minute offered rate 919000 bps
      Match: ip precedence 1

    Class-map: PREC3 (match-all)
      6925 packets, 10484450 bytes
      5 minute offered rate 124000 bps
      Match: ip precedence 3

    Class-map: PREC5 (match-all)
      22281 packets, 33733434 bytes
      5 minute offered rate 398000 bps
      Match: ip precedence 5

    Class-map: class-default (match-any)
      139 packets, 85902 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
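
By the way, the PREC3 number lines up with the shape math, if I understand it
correctly: a shaped weight is an absolute cap of 1/weight of the port speed,
so with the port at 10M:

queue 3: 10,000,000 bps / 80 = 125,000 bps = 125K

which is just about what the meter shows (124K here) - and note it is based on
the 10M port speed, not the limited speed. Queue 1's shape weight of 50 would
work out to 200K, but that weight is supposed to be ignored with PQ on, so the
400K doesn't fall out of this math either.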

2) bw limit of 40, PQ enabled

SW1(config)#int f0/13
SW1(config-if)#srr-queue bandwidth limit 40

I cleared the stats on R2, and after a new set of pings, queue 1 is chewing up
the rest of the bandwidth:

R2#show policy-map interface
 Ethernet0/0

  Service-policy input: TRACK

    Class-map: PREC1 (match-all)
      75556 packets, 114391784 bytes
      5 minute offered rate 1050000 bps
      Match: ip precedence 1

    Class-map: PREC3 (match-all)
      8864 packets, 13420096 bytes
      5 minute offered rate 125000 bps
      Match: ip precedence 3

    Class-map: PREC5 (match-all)
      28540 packets, 43209560 bytes
      5 minute offered rate 399000 bps
      Match: ip precedence 5

    Class-map: class-default (match-any)
      150 packets, 92700 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
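
For reference, my understanding is that "srr-queue bandwidth limit" caps the
whole port at roughly that percentage of the configured port speed:

limit 20: 10M x 20% = ~2M total egress
limit 40: 10M x 40% = ~4M total egress

That matches PREC1 climbing in the output above, while PREC3 stays pinned at
its 125K shape and PREC5 stays stuck at ~400K either way.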

I am using all the default mls qos settings... could the buffers on queue 1 be
holding me back? I noticed that when I took off PQ, it was still at 400K. Here
is my cos-output-q map, by the way:

SW1#show mls qos maps cos-output-q
   Cos-outputq-threshold map:
              cos: 0 1 2 3 4 5 6 7
              ------------------------------------
  queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1

SW1#
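
Coming back to the buffer question: if buffers are the suspect, I believe the
queue-set allocations can be checked and bumped up for queue 1. The syntax is
per the 3560 12.2SE QoS guide, and the numbers below are just for
illustration:

SW1#show mls qos interface fastEthernet 0/13 buffers
SW1(config)#mls qos queue-set output 1 buffers 40 20 20 20
SW1(config)#mls qos queue-set output 1 threshold 1 100 100 100 400

That would give queue 1 of queue-set 1 40% of the buffer pool and let it
borrow from the common pool up to 400% of its allocation.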

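One more test that might isolate it: instead of re-marking R5, remap cos 5
into queue 3 on the switch and see whether the 400K cap follows the cos value
or the queue (assuming I have the cos-map syntax right):

SW1(config)#mls qos srr-queue output cos-map queue 3 threshold 1 5

If R5 can then send faster, the cap is on queue 1 itself rather than on cos 5.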

On Tue, Sep 30, 2008 at 12:46 PM, Petr Lapukhov <petr@internetworkexpert.com> wrote:

> Could you post your full configuration? I just ran a quick simulation and
> it works just as it always has for me: PQ steals all the bandwidth, e.g.
> between Prec 0 and Prec 5 traffic:
> interface FastEthernet0/1
>  speed 10
>  srr-queue bandwidth limit 20
>  priority-queue out
>
> Rack1R1#show policy-map interface fastEthernet 0/0
>  FastEthernet0/0
>
>   Service-policy input: METER
>
>     Class-map: PREC0 (match-all)
>       23 packets, 34822 bytes
>       30 second offered rate 1000 bps
>       Match: ip precedence 0
>
>     Class-map: PREC5 (match-all)
>       22739 packets, 34426846 bytes
>       30 second offered rate 1956000 bps
>       Match: ip precedence 5
>
> If you change the bandwidth limit to 30%, the situation is as follows:
>
> Rack1R1#show policy-map interface fastEthernet 0/0
>  FastEthernet0/0
>
>   Service-policy input: METER
>
>     Class-map: PREC0 (match-all)
>       136 packets, 205904 bytes
>       30 second offered rate 1000 bps
>       Match: ip precedence 0
>
>     Class-map: PREC5 (match-all)
>       96488 packets, 146082832 bytes
>       30 second offered rate 2716000 bps
>       Match: ip precedence 5
>
> That is, the PQ claims all bandwidth again.
>
> HTH
> --
> Petr Lapukhov, CCIE #16379 (R&S/Security/SP/Voice)
> petr@internetworkexpert.com
>
> Internetwork Expert, Inc.
> http://www.InternetworkExpert.com
> Toll Free: 877-224-8987
> Outside US: 775-826-4344
>
> 2008/9/30 Hobbs <deadheadblues@gmail.com>
>
>> Hello,
>>
>> I know this is lengthy, but I am completely stumped. I have been reading
>> and labbing some of the examples of 3560/3550 QoS on IE's blog, and I
>> have run into some interesting issues in my lab.
>>
>> R1----\
>> R3----[SW1]----[SW2]-----R2
>> R5----/
>>
>> All ports set to 10M. R1=cos1, R3=cos3, R5=cos5, default cos-output-q map.
>>
>> R2 has a policy with classes that match precedence (1,3,5) applied to its
>> interface to meter the rate of each class. On each router I run this
>> command: "ping 192.168.0.2 rep 1000000 size 1500" to generate a bunch of
>> traffic. This works great.
>>
>> Whenever I have priority-queue out on f0/13, cos 5 is always limited to
>> 400K, whether I have an "srr-queue bandwidth limit" configured or not.
>> In addition, the other queues eat up the rest of the bandwidth (unless I
>> shape them, of course). In other words, priority queuing is NOT starving
>> the other queues.
>>
>> Any other settings I need to check? From what I understand, share/shape
>> parameters on queue 1 don't matter when the priority queue is on, and in
>> fact they don't affect it - 400K is always the limit!
>>
>> thanks,
>>
>> the blog is here for reference:
>>
>> http://blog.internetworkexpert.com/2008/06/26/quick-notes-on-the-3560-egress-queuing/
>>
>>
