From: Hobbs (deadheadblues@gmail.com)
Date: Wed Oct 01 2008 - 01:39:21 ART
That was it! I had "mls qos" enabled on SW2. I disabled it, and now traffic is
flowing smoothly. I still can't get the PQ to starve the other queues, but I'll
tackle that tomorrow.
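
In case it helps anyone searching the archives later, the check and fix on SW2
were roughly this (the show output line is paraphrased from memory):

SW2#show mls qos
QoS is enabled
SW2#conf t
SW2(config)#no mls qos
! with "mls qos" on and everything else at defaults, CoS 5 lands in the
! shaped egress queue; with it off, the switch does no QoS queueing at all
! and forwards at line rate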
thanks for the tips! :)
On Tue, Sep 30, 2008 at 10:25 PM, Pavel Bykov <slidersv@gmail.com> wrote:
> Yeah, check that.
> Because if the default is CoS 5 -> queue 1, the default SRR shape is 1/25, and
> the priority queue is disabled - then that could be your problem.
> Problems are the best way to learn - you'll remember this for the next 10
> years :)
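>
> A quick way to check (real commands; the output is trimmed from memory, and
> the port number is just an example):
>
> SW2#show mls qos interface fastEthernet 0/13 queueing
> Egress Priority Queue : disabled
> Shaped queue weights (absolute) :  25 0 0 0
> Shared queue weights  :  25 25 25 25
> The port bandwidth limit : 100 (Operational Bandwidth:100.0)
>
> If the priority queue shows disabled while queue 1 keeps its default 1/25
> shape, CoS 5 is capped at port-speed/25 - 400K on a 10M port.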
>
> Regards,
> Pavel
>
>
>
> On Wed, Oct 1, 2008 at 5:34 AM, Hobbs <deadheadblues@gmail.com> wrote:
>
>> Thanks Pavel. I will check the other devices; they are pretty much default.
>> But check this out: if I change the CoS value of the port that R5 is
>> connected to (to CoS 3 or 4, etc.), then R5 can burst above 400K! It's weird.
>> CoS 5 is the only CoS that is mapped to queue 1 by default, so I think queue
>> 1 is screwy. I think I'm gonna reverse things, enable srr-queue on SW2,
>> and ping the other way. SW2 is running newer code... I think 12.2(40)...
>>
>> I tell you, I have learned so much about 3560 QoS just because of this one
>> problem. I don't know what I am gonna do if I resolve it :)
>>
>> Hmmm... I wonder if SW2 is doing any limiting on its input queue for CoS
>> 5...
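>>
>> If I get to it, "show mls qos input-queue" on SW2 should show that - the
>> values below are the defaults as far as I remember, so don't hold me to them:
>>
>> SW2#show mls qos input-queue
>> Queue     :       1       2
>> ----------------------------
>> buffers   :      90      10
>> bandwidth :       4       4
>> priority  :       0      10
>> ...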
>>
>>
>> On Tue, Sep 30, 2008 at 9:23 PM, Pavel Bykov <slidersv@gmail.com> wrote:
>>
>>> Sorry, I got carried away with the "show policy-map"... Just a couple of
>>> days ago I was complaining that there is no way to see which packets fell
>>> into which queue on the output of a switch, and then I wrote this.
>>> The limiting of the expedite queue surprised me, though. I should have
>>> read the documentation - I definitely will.
>>>
>>> Anyway, 400K still seems awfully suspicious. Since the default shape of
>>> queue 1 is 1/25 and QoS is end to end, I would check all PHBs on the other
>>> equipment (e.g. traffic generator's output interface - input interface -
>>> output interface - input interface ... - all the way to the destination).
>>>
>>> Regards,
>>>
>>>
>>>
>>> On Wed, Oct 1, 2008 at 3:27 AM, Hobbs <deadheadblues@gmail.com> wrote:
>>>
>>>> Pavel, thank you for the reply:
>>>>
>>>> 1) According to the DocCD, the PQ overrides the shaping parameter. And if
>>>> you look at Petr's example, he also has 50 on queue 1. I am following
>>>> Petr's example. Here is the quote from the DocCD:
>>>>
>>>> "If the egress expedite queue is enabled, it overrides the SRR shaped
>>>> and shared weights for queue 1."
>>>>
>>>>
>>>> http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_44_se/command/reference/cli1.html#wp3281502
>>>>
>>>> In addition, when I use the command "srr-queue bandwidth shape 0 0 0 0",
>>>> bandwidth is still limited to 400K. That's why I thought something else was
>>>> holding up queue 1. But I already showed you my SW1 config - all global
>>>> mls qos settings are at their defaults (maps, buffers, etc.).
>>>>
>>>> 2) I let the ping run for well over 5 minutes for accurate stats.
>>>> 30-second intervals (which I have used since) make no difference.
>>>>
>>>> 3) There is no policy-map on SW1 f0/13 - it's a switch port with
>>>> srr-queue enabled.
>>>>
>>>> I'm just looking for some suggestions; that is why I posted the question.
>>>> From what I understand, the PQ overrides the shaping parameter. But even in
>>>> case it doesn't, I set shaping to 0 to disable it. This is the latest
>>>> example:
>>>>
>>>> SW1:
>>>> interface FastEthernet0/13
>>>> load-interval 30
>>>> speed 10
>>>> srr-queue bandwidth share 33 33 33 1
>>>> srr-queue bandwidth shape 0 0 0 0
>>>> priority-queue out
>>>>
>>>> R2:
>>>> R2#show policy-map interface
>>>> Ethernet0/0
>>>>
>>>> Service-policy input: TRACK
>>>>
>>>> Class-map: PREC1 (match-all)
>>>> 67878 packets, 102767292 bytes
>>>> 30 second offered rate 1070000 bps
>>>> Match: ip precedence 1
>>>>
>>>> Class-map: PREC3 (match-all)
>>>> 67800 packets, 102649200 bytes
>>>> 30 second offered rate 1070000 bps
>>>> Match: ip precedence 3
>>>>
>>>> Class-map: PREC5 (match-all)
>>>> 25238 packets, 38210332 bytes
>>>> 30 second offered rate 398000 bps
>>>> Match: ip precedence 5
>>>>
>>>> Class-map: class-default (match-any)
>>>> 184 packets, 113712 bytes
>>>> 30 second offered rate 0 bps, drop rate 0 bps
>>>> Match: any
>>>> R2#
>>>>
>>>> I'll be the first to admit it if I am doing something inconsistent;
>>>> believe me, I want to find the solution. Right now I just don't see what
>>>> I am doing wrong.
>>>>
>>>> thanks
>>>>
>>>>
>>>> On Tue, Sep 30, 2008 at 6:55 PM, Pavel Bykov <slidersv@gmail.com> wrote:
>>>>
>>>>> Hobbs... I see a bit of inconsistency in your question.
>>>>> 1. You set limits to 20 and 40 percent of bandwidth, which is 2M and
>>>>> 4M, yet the shaping rate of the 1st queue is 1/50, which is 200K. May I
>>>>> remind you that the DEFAULT settings, as you mentioned, are "srr-queue
>>>>> bandwidth shape 25 0 0 0", meaning that the PQ (or Queue 1-1) is
>>>>> automatically shaped to 400K by default. So you can alter the bandwidth
>>>>> limit all you want, but the shaper is the limiting factor here.
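>>>>>
>>>>> Quick arithmetic, just as a sketch (the rule: shaped rate = port speed /
>>>>> shape weight, and your port is forced to 10M):
>>>>>
>>>>> srr-queue bandwidth shape 25 0 0 0   ! default: queue 1 -> 10M/25 = 400K
>>>>> srr-queue bandwidth shape 50 0 80 0  ! yours:   queue 1 -> 10M/50 = 200K
>>>>>                                      !          queue 3 -> 10M/80 = 125K
>>>>>
>>>>> Notice your PREC3 meter reads ~125K, which matches the 1/80 shaper
>>>>> exactly, and the 400K you keep seeing matches the default 1/25.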
>>>>>
>>>>> 2. Petr sent much more data - 143M - and his counters are 30s. Your
>>>>> counters are 5 minutes. Please reduce the counter timing (load-interval)
>>>>> on the interface.
>>>>>
>>>>> 3. You haven't posted the show policy-map output for interface Fa0/13,
>>>>> where the queues are located - only for the other side, the router. We
>>>>> should look only at the switch at this point.
>>>>>
>>>>> So instead of overloading PREC5, try overloading PREC1 (queue 2), since
>>>>> that is where the bandwidth limit will be felt. At that point you would
>>>>> need to change the PQ shaping limit in order to see any difference. See
>>>>> the sketch below for watching the queues directly.
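>>>>>
>>>>> To see on the switch itself what each egress queue enqueues and drops,
>>>>> you can use this (real command; the numbers are only illustrative, and
>>>>> note that this output numbers the queues 0-3 while the configuration
>>>>> numbers them 1-4):
>>>>>
>>>>> SW1#show mls qos interface fastEthernet 0/13 statistics
>>>>>   output queues enqueued:
>>>>>  queue:    threshold1   threshold2   threshold3
>>>>>  -----------------------------------------------
>>>>>  queue 0:      25238            0            0
>>>>>  queue 1:      67878            0            0
>>>>>  queue 2:      67800            0            0
>>>>>  queue 3:        184            0            0
>>>>>
>>>>>   output queues dropped:
>>>>>  queue:    threshold1   threshold2   threshold3
>>>>>  -----------------------------------------------
>>>>>  queue 0:       1829            0            0
>>>>>  ...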
>>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> --
>>>>> Pavel Bykov
>>>>> -------------------------------------------------
>>>>> Stop the braindumps!
>>>>> http://www.stopbraindumps.com/
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Sep 30, 2008 at 9:47 PM, Hobbs <deadheadblues@gmail.com> wrote:
>>>>>
>>>>>> Thanks Petr. Here is my SW1 config. Below I have 2 examples (limit 20
>>>>>> and 40). Also, I think the PQ may not be the issue, but rather queue 1
>>>>>> itself. When I disabled the PQ, I was still limited to 400K... I'm
>>>>>> running Version 12.2(25)SEE4... I wonder if this code has an internal
>>>>>> limit on the queue. When I change R5 to be CoS 3 (and thus queue 3), it
>>>>>> can send faster (I notice the latency on pings is a lot lower too).
>>>>>>
>>>>>> 1) bw limit of 20, PQ enabled
>>>>>> 2) bw limit of 40, PQ enabled
>>>>>>
>>>>>> 1) bw limit of 20, PQ enabled
>>>>>>
>>>>>> SW1#show run
>>>>>> Building configuration...
>>>>>>
>>>>>> Current configuration : 1658 bytes
>>>>>> !
>>>>>> version 12.2
>>>>>> no service pad
>>>>>> service timestamps debug uptime
>>>>>> service timestamps log uptime
>>>>>> no service password-encryption
>>>>>> !
>>>>>> hostname SW1
>>>>>> !
>>>>>> no aaa new-model
>>>>>> ip subnet-zero
>>>>>> no ip domain-lookup
>>>>>> !
>>>>>> mls qos
>>>>>> !
>>>>>> !
>>>>>> no file verify auto
>>>>>> spanning-tree mode pvst
>>>>>> spanning-tree extend system-id
>>>>>> !
>>>>>> vlan internal allocation policy ascending
>>>>>> !
>>>>>> !
>>>>>> interface FastEthernet0/1
>>>>>> mls qos cos 1
>>>>>> mls qos trust cos
>>>>>> !
>>>>>> interface FastEthernet0/2
>>>>>> !
>>>>>> interface FastEthernet0/3
>>>>>> mls qos cos 3
>>>>>> mls qos trust cos
>>>>>> !
>>>>>> interface FastEthernet0/4
>>>>>> !
>>>>>> interface FastEthernet0/5
>>>>>> mls qos cos 5
>>>>>> mls qos trust cos
>>>>>> !
>>>>>> interface FastEthernet0/6
>>>>>> !
>>>>>> interface FastEthernet0/7
>>>>>> !
>>>>>> interface FastEthernet0/8
>>>>>> !
>>>>>> interface FastEthernet0/9
>>>>>> !
>>>>>> interface FastEthernet0/10
>>>>>> !
>>>>>> interface FastEthernet0/11
>>>>>> !
>>>>>> interface FastEthernet0/12
>>>>>> !
>>>>>> interface FastEthernet0/13
>>>>>> load-interval 30
>>>>>> speed 10
>>>>>> srr-queue bandwidth share 33 33 33 1
>>>>>> srr-queue bandwidth shape 50 0 80 0
>>>>>> srr-queue bandwidth limit 20
>>>>>> priority-queue out
>>>>>> !
>>>>>> interface FastEthernet0/14
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/15
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/16
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/17
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/18
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/19
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/20
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/21
>>>>>> shutdown
>>>>>> !
>>>>>> interface FastEthernet0/22
>>>>>> !
>>>>>> interface FastEthernet0/23
>>>>>> !
>>>>>> interface FastEthernet0/24
>>>>>> shutdown
>>>>>> !
>>>>>> interface GigabitEthernet0/1
>>>>>> !
>>>>>> interface GigabitEthernet0/2
>>>>>> !
>>>>>> interface Vlan1
>>>>>> no ip address
>>>>>> shutdown
>>>>>> !
>>>>>> ip classless
>>>>>> ip http server
>>>>>> ip http secure-server
>>>>>> !
>>>>>> !
>>>>>> !
>>>>>> control-plane
>>>>>> !
>>>>>> !
>>>>>> line con 0
>>>>>> exec-timeout 0 0
>>>>>> logging synchronous
>>>>>> line vty 0 4
>>>>>> no login
>>>>>> line vty 5 15
>>>>>> no login
>>>>>> !
>>>>>> end
>>>>>>
>>>>>> Here is R2, the meter. PREC1 will eventually approach almost 1M; PREC3
>>>>>> is 125K, because I shaped it with the value 80. No matter what bw limit
>>>>>> I use, R5 is always topped out at 400K (it moves between 396-400K).
>>>>>>
>>>>>> R2#show policy-map interface
>>>>>> Ethernet0/0
>>>>>>
>>>>>> Service-policy input: TRACK
>>>>>>
>>>>>> Class-map: PREC1 (match-all)
>>>>>> 53704 packets, 81307856 bytes
>>>>>> 5 minute offered rate 919000 bps
>>>>>> Match: ip precedence 1
>>>>>>
>>>>>> Class-map: PREC3 (match-all)
>>>>>> 6925 packets, 10484450 bytes
>>>>>> 5 minute offered rate 124000 bps
>>>>>> Match: ip precedence 3
>>>>>>
>>>>>> Class-map: PREC5 (match-all)
>>>>>> 22281 packets, 33733434 bytes
>>>>>> 5 minute offered rate 398000 bps
>>>>>> Match: ip precedence 5
>>>>>>
>>>>>> Class-map: class-default (match-any)
>>>>>> 139 packets, 85902 bytes
>>>>>> 5 minute offered rate 0 bps, drop rate 0 bps
>>>>>> Match: any
>>>>>>
>>>>>> 2) bw limit of 40, PQ enabled
>>>>>>
>>>>>> SW1(config)#int f0/13
>>>>>> SW1(config-if)#srr-queue bandwidth limit 40
>>>>>>
>>>>>> Clear stats on R2, and then after a new set of pings, Q1 is chewing up
>>>>>> the rest of the bw:
>>>>>>
>>>>>> R2#show policy-map interface
>>>>>> Ethernet0/0
>>>>>>
>>>>>> Service-policy input: TRACK
>>>>>>
>>>>>> Class-map: PREC1 (match-all)
>>>>>> 75556 packets, 114391784 bytes
>>>>>> 5 minute offered rate 1050000 bps
>>>>>> Match: ip precedence 1
>>>>>>
>>>>>> Class-map: PREC3 (match-all)
>>>>>> 8864 packets, 13420096 bytes
>>>>>> 5 minute offered rate 125000 bps
>>>>>> Match: ip precedence 3
>>>>>>
>>>>>> Class-map: PREC5 (match-all)
>>>>>> 28540 packets, 43209560 bytes
>>>>>> 5 minute offered rate 399000 bps
>>>>>> Match: ip precedence 5
>>>>>>
>>>>>> Class-map: class-default (match-any)
>>>>>> 150 packets, 92700 bytes
>>>>>> 5 minute offered rate 0 bps, drop rate 0 bps
>>>>>> Match: any
>>>>>>
>>>>>>
>>>>>> I am using all the default mls qos settings... could it be my buffers
>>>>>> in queue 1 holding me back? I noticed that when I took off the PQ, it
>>>>>> was still at 400K... Here is my map, btw:
>>>>>>
>>>>>> SW1#show mls qos maps cos-output-q
>>>>>> Cos-outputq-threshold map:
>>>>>> cos: 0 1 2 3 4 5 6 7
>>>>>> ------------------------------------
>>>>>> queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1
>>>>>>
>>>>>>
>>>>>> SW1#
>>>>>>
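>>>>>> Since changing the port CoS moves R5's traffic out of queue 1 and it
>>>>>> can then burst higher, one more test I may try (real command syntax) is
>>>>>> to remap CoS 5 to queue 3 directly and leave the port at cos 5:
>>>>>>
>>>>>> SW1(config)#mls qos srr-queue output cos-map queue 3 threshold 1 5
>>>>>>
>>>>>> If R5 bursts above 400K after that, then it's definitely queue 1 (or
>>>>>> whatever shaper is attached to it) holding me back.
>>>>>>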
>>>>>> On Tue, Sep 30, 2008 at 12:46 PM, Petr Lapukhov
>>>>>> <petr@internetworkexpert.com> wrote:
>>>>>>
>>>>>> > Could you post your full configuration? I just ran a quick simulation,
>>>>>> > and it works just as it has always worked for me: the PQ steals all the
>>>>>> > bandwidth. E.g., between Prec 0 and Prec 5 traffic:
>>>>>> > interface FastEthernet0/1
>>>>>> > speed 10
>>>>>> > srr-queue bandwidth limit 20
>>>>>> > priority-queue out
>>>>>> >
>>>>>> > Rack1R1#show policy-map interface fastEthernet 0/0
>>>>>> > FastEthernet0/0
>>>>>> >
>>>>>> > Service-policy input: METER
>>>>>> >
>>>>>> > Class-map: PREC0 (match-all)
>>>>>> > 23 packets, 34822 bytes
>>>>>> > 30 second offered rate 1000 bps
>>>>>> > Match: ip precedence 0
>>>>>> >
>>>>>> > Class-map: PREC5 (match-all)
>>>>>> > 22739 packets, 34426846 bytes
>>>>>> > 30 second offered rate 1956000 bps
>>>>>> > Match: ip precedence 5
>>>>>> >
>>>>>> > If you change the bandwidth limit to 30%, the situation becomes the
>>>>>> > following:
>>>>>> >
>>>>>> > Rack1R1#show policy-map interface fastEthernet 0/0
>>>>>> > FastEthernet0/0
>>>>>> >
>>>>>> > Service-policy input: METER
>>>>>> >
>>>>>> > Class-map: PREC0 (match-all)
>>>>>> > 136 packets, 205904 bytes
>>>>>> > 30 second offered rate 1000 bps
>>>>>> > Match: ip precedence 0
>>>>>> >
>>>>>> > Class-map: PREC5 (match-all)
>>>>>> > 96488 packets, 146082832 bytes
>>>>>> > 30 second offered rate 2716000 bps
>>>>>> > Match: ip precedence 5
>>>>>> >
>>>>>> > That is, the PQ claims all bandwidth again.
>>>>>> >
>>>>>> > HTH
>>>>>> > --
>>>>>> > Petr Lapukhov, CCIE #16379 (R&S/Security/SP/Voice)
>>>>>> > petr@internetworkexpert.com
>>>>>> >
>>>>>> > Internetwork Expert, Inc.
>>>>>> > http://www.InternetworkExpert.com
>>>>>> > Toll Free: 877-224-8987
>>>>>> > Outside US: 775-826-4344
>>>>>> >
>>>>>> > 2008/9/30 Hobbs <deadheadblues@gmail.com>
>>>>>> >
>>>>>> >> Hello,
>>>>>> >>
>>>>>> >> I know this is lengthy, but I am completely stumped. I have been
>>>>>> >> reading and labbing some of the 3560/3550 QoS examples on IE's blog,
>>>>>> >> and I have run into some interesting issues in my lab.
>>>>>> >>
>>>>>> >> R1----\
>>>>>> >> R3----[SW1]----[SW2]-----R2
>>>>>> >> R5----/
>>>>>> >>
>>>>>> >> All ports are set to 10M. R1=cos1, R3=cos3, R5=cos5, default
>>>>>> >> cos-output-q map.
>>>>>> >>
>>>>>> >> R2 has a policy with classes that match precedence (1, 3, 5) applied
>>>>>> >> to its interface to meter the rate of each class (sketched below).
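>>>>>> >>
>>>>>> >> For reference, TRACK is plain MQC classification with no actions -
>>>>>> >> this is a sketch from memory with one class shown; PREC3 and PREC5
>>>>>> >> are identical except for the precedence value:
>>>>>> >>
>>>>>> >> class-map match-all PREC1
>>>>>> >>  match ip precedence 1
>>>>>> >> !
>>>>>> >> policy-map TRACK
>>>>>> >>  class PREC1
>>>>>> >> !
>>>>>> >> interface Ethernet0/0
>>>>>> >>  service-policy input TRACK
>>>>>> >>
>>>>>> >> A class with no action still counts matches, which is all I need for
>>>>>> >> metering.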
>>>>>> >> On each router I run this command: "ping 192.168.0.2 rep 1000000
>>>>>> >> size 1500" to generate a bunch of traffic. This works great.
>>>>>> >>
>>>>>> >> Whenever I have "priority-queue out" on f0/13, CoS 5 is always
>>>>>> >> limited to 400K (400,000 bps), whether I have an "srr-queue bandwidth
>>>>>> >> limit" or not. In addition, the other queues eat up the rest of the
>>>>>> >> bandwidth (unless I shape them, of course). In other words, priority
>>>>>> >> queuing is NOT starving the other queues.
>>>>>> >>
>>>>>> >> Any other settings I need to check? From what I understand, the
>>>>>> >> share/shape parameters on queue 1 don't matter when the priority
>>>>>> >> queue is on, and in fact they don't affect it - 400K is always the
>>>>>> >> limit!
>>>>>> >>
>>>>>> >> thanks,
>>>>>> >>
>>>>>> >> The blog is here for reference:
>>>>>> >> http://blog.internetworkexpert.com/2008/06/26/quick-notes-on-the-3560-egress-queuing/
>>>>>> >>
>>>>>> >>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Pavel Bykov
>>> -------------------------------------------------
>>> Stop the braindumps!
>>> http://www.stopbraindumps.com/
>>>
>>>
>>
>
>
> --
> Pavel Bykov
> -------------------------------------------------
> Stop the braindumps!
> http://www.stopbraindumps.com/