Re: Hierarchical QoS !

From: Doan Dung Chi (dungchid@gmail.com)
Date: Fri Jul 18 2008 - 13:08:22 ART


Thanks for your explanation.
I understand now how the router processes its queues. :-)

Rgds,
  ----- Original Message -----
  From: Petr Lapukhov
  To: huan@huanlan.com
  Cc: Cisco certification ; Doan Dung Chi
  Sent: 18 July, 2008 22:05
  Subject: Re: Hierarchical QoS !

  There is one significant difference, which is rooted in the fact that the
IOS generic traffic-shaper implementation uses WFQ by default to service
delayed/queued packets. This is best demonstrated with a simple example.

  Consider R3 shaping traffic towards R2 using GTS per the following
configuration (R3 connects to R2 via Serial 1/3):

  R3:
  policy-map QUEUE
   class class-default
    bandwidth 64
  !
  policy-map SHAPE
   class class-default
    shape average 80000
    service-policy QUEUE
  !
  interface Serial1/3
   description == To R2
   ip address 155.1.23.3 255.255.255.0
   no fair-queue

   service-policy output SHAPE
   clockrate 128000
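  As an aside, the "shape average 80000" line above meters traffic with a
token bucket. Here is a minimal sketch of the generic token-bucket idea
(purely illustrative, not IOS internals; the Bc of 8000 bits and the
once-per-Tc refill are assumptions chosen for demonstration):

```python
# Illustrative token-bucket shaper, loosely modeling "shape average 80000".
# Assumptions (not IOS internals): Bc = 8000 bits, one refill per Tc, so
# Tc = Bc / CIR = 8000 / 80000 = 100 ms.

def shape(packets_bits, cir=80000, bc=8000):
    """Return the send time (ms) of each packet; packets must be <= Bc."""
    tc_ms = bc * 1000 // cir            # interval length in ms (100 here)
    tokens, now_ms, send_times = bc, 0, []
    for size in packets_bits:
        while tokens < size:            # not enough credit: wait for refill
            now_ms = (now_ms // tc_ms + 1) * tc_ms
            tokens = min(bc, tokens + bc)
        tokens -= size
        send_times.append(now_ms)
    return send_times
```

Under these assumptions, back-to-back 1000-byte (8000-bit) packets leave
100 ms apart, i.e. at the 80 kbps CIR, which is the long-run rate the
shaper enforces regardless of the 128 kbps clock rate.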

  Next we generate traffic from R4, SW1 and R1 towards R2 (it does not matter
how those routers connect to R3) in such a way that all flows converge at R3's
Serial 1/3 interface. If the traffic rates are high enough to engage the
shaper (e.g. fast unconstrained ICMP flows), the following picture can be
observed:

  Rack1R3#show traffic-shape queue
  Traffic queued in shaping queue on Serial1/3
   Traffic shape class: class-default
    Queueing strategy: weighted fair
    Queueing Stats: 64/1000/64/15626 (size/max total/threshold/drops)
       Conversations 1/2/16 (active/max active/max total)
       Reserved Conversations 1/1 (allocated/max allocated)
       Available Bandwidth 16 kilobits/sec

    (depth/weight/total drops/no-buffer drops/interleaves) 64/80/15627/0/0
    Conversation 25, linktype: ip, length: 204
    source: 155.1.146.4, destination: 150.1.2.2, id: 0x035A, ttl: 253, prot: 1

  Even though we have 3 active flows, only one "conversation" appears in the
shaper's queue. This is because the embedded child policy ("QUEUE") specifies
"bandwidth 64", which turns the shaper's queue into a basic FIFO with just one
fixed conversation. All flows effectively map to this conversation. Also note
that "available bandwidth" is 16, which is naturally 80 - 64.

  Let's change the configuration to the following (remove the "bandwidth"
keyword):

  R3:
  policy-map QUEUE
   class class-default
     no bandwidth 64
  !
  policy-map SHAPE
   class class-default
    shape average 80000
    service-policy QUEUE

  This effectively turns the shaper's queue into WFQ, which can be observed in
the following command output (the three flows are still active):

  Rack1R3#show traffic-shape queue
  Traffic queued in shaping queue on Serial1/3
   Traffic shape class: class-default
    Queueing strategy: weighted fair
    Queueing Stats: 70/1000/64/24762 (size/max total/threshold/drops)
       Conversations 3/3/16 (active/max active/max total)
       Reserved Conversations 0/1 (allocated/max allocated)
       Available Bandwidth 80 kilobits/sec

    (depth/weight/total drops/no-buffer drops/interleaves) 14/32384/0/0/0
    Conversation 14, linktype: ip, length: 204
    source: 155.1.146.4, destination: 150.1.2.2, id: 0x7134, ttl: 253, prot: 1

    (depth/weight/total drops/no-buffer drops/interleaves) 3/32384/0/0/0
    Conversation 6, linktype: ip, length: 1404
    source: 155.1.13.1, destination: 150.1.2.2, id: 0x6D30, ttl: 254, prot: 1

    (depth/weight/total drops/no-buffer drops/interleaves) 53/32384/1233/0/0
    Conversation 4, linktype: ip, length: 1404
    source: 155.1.37.7, destination: 150.1.2.2, id: 0xEC49, ttl: 254, prot: 1

  Now we see 3 conversations out of the 16 maximum permitted in the shaper's
queue. Therefore, the shaper's default queue is WFQ, with the maximum number
of flow queues based on the configured shaping rate.
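  Conceptually, WFQ's per-flow sorting can be pictured as hashing each flow's
identifiers into one of the shaper's conversation queues. A rough sketch (the
real IOS hash function differs; the 16 queues here match the "max total" value
in the output above):

```python
# Rough sketch of WFQ per-flow classification: hash flow identifiers into
# one of N conversation queues. The real IOS hash differs; n_queues=16
# matches the "max total" figure from "show traffic-shape queue".

def conversation(src, dst, proto, n_queues=16):
    """Map a flow to a conversation index in [0, n_queues)."""
    return hash((src, dst, proto)) % n_queues

flows = [("155.1.146.4", "150.1.2.2", 1),   # the three ICMP flows
         ("155.1.13.1",  "150.1.2.2", 1),   # from the example above
         ("155.1.37.7",  "150.1.2.2", 1)]
active = {conversation(*f) for f in flows}  # distinct flows usually land
                                            # in distinct conversations
```

Each flow consistently maps to the same conversation, which is why the show
output lists one conversation per active flow rather than one FIFO for all.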

  To summarize:

  1) GTS uses WFQ for the shaper's queue by default.
  2) The GTS queue can be changed by applying an embedded service-policy. This
effectively turns the queue into CBWFQ.
  3) If the "bandwidth" keyword is applied under "class-default", it turns the
queueing policy into FIFO.

  Now back to the examples mentioned at the beginning of the thread. It makes
perfect sense to use "Method 2", for this is how an "oversubscribed" link can
be divided between two classes. Specifically, looking at:

  class-map match-all PRE2
   match ip precedence 2
  class-map match-all PRE3
   match ip precedence 3
  !
  policy-map PARENT
   class PRE2
   shape average 512000
   bandwidth 256
   class PRE3
   shape average 1024000
   bandwidth 512

  We can say that class "PRE2" is limited to 512Kbps maximum (upper bound) but
is always guaranteed 256Kbps of interface bandwidth (lower bound) when the
interface is congested. The same goes for class "PRE3", with 1024Kbps and
512Kbps values respectively. Note that both shapers will use the WFQ queueing
method, and the bandwidth weights only apply to the software queue of the
physical interface.
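  Under that reading, the rate each class can achieve when both are backlogged
can be sketched as follows (assumptions: excess interface bandwidth is split
in proportion to the bandwidth weights, as CBWFQ does, capped by each class's
shape rate; this is a one-pass approximation that ignores redistribution from
capped classes):

```python
# Sketch of Method 2's lower/upper bounds per class. Assumption: spare
# interface bandwidth is shared in proportion to the configured bandwidth
# weights (CBWFQ-style), but a class never exceeds its shape rate. One-pass
# approximation; redistribution from shape-capped classes is ignored.

def class_rates(link_kbps, classes):
    """classes: {name: (guaranteed_kbps, shape_cap_kbps)} -> rate per class."""
    guaranteed = sum(g for g, _ in classes.values())
    spare = link_kbps - guaranteed
    return {name: min(cap, g + spare * g / guaranteed)
            for name, (g, cap) in classes.items()}

# On a fully congested 1024 kbps link, PRE2 gets its 256k guarantee plus a
# 1/3 share of the 256k spare, PRE3 its 512k guarantee plus a 2/3 share.
rates = class_rates(1024, {"PRE2": (256, 512), "PRE3": (512, 1024)})
```

Each class thus stays between its "bandwidth" floor and its "shape" ceiling,
and the two together consume the whole congested link.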

  As for "Method 1", it turns both shaper queues into FIFOs and applies an
"upper limit" to the respective classes' traffic rates. At the same time, the
interface queue will remain FIFO or WFQ (depending on interface type and
speed) for all traffic flows, since no bandwidth weights are specified under
the respective classes.

  (Note that there is special behavior when a "service-policy" with no
"bandwidth" or "fair-queue" settings in its classes is applied to a physical
interface with a FIFO queue ("no fair-queue" at the physical interface). As
soon as any class in the policy-map is configured with "bandwidth", or
"class-default" is configured with "fair-queue", the interface queue turns
into CBWFQ and "no fair-queue" disappears. Somewhat odd behavior.)

  HTH

  --
  Petr Lapukhov, CCIE #16379 (R&S/Security/SP/Voice)
  petr@internetworkexpert.com

  Internetwork Expert, Inc.
  http://www.InternetworkExpert.com
  Toll Free: 877-224-8987
  Outside US: 775-826-4344
  Online Community: http://www.IEOC.com
  CCIE Blog: http://blog.internetworkexpert.com

  2008/7/18 <huan@huanlan.com>:

    Hmmm,

    In your example, you use MAIN INTERFACE, and I think the two
configurations do exactly the same.

    The difference exists if you try to apply the config on a sub-interface.
The second one cannot be applied, because by default sub-interfaces do not
support queues.

    The first (hierarchical) configuration creates queues by "shaping", and
therefore you can apply bandwidth reservation in the child policy.

    The DOC CD has an example of this "workaround". I do not remember the
link, though; I am sure you can easily find it in the QoS section.

    --- On Fri, 7/18/08, Huan Pham <pnhuan@yahoo.com> wrote:
    From: Huan Pham <pnhuan@yahoo.com>
    Subject: Re: Hierarchical QoS !
    To: "Cisco certification" <ccielab@groupstudy.com>, "Doan Dung Chi"
<dungchid@gmail.com>
    Date: Friday, July 18, 2008, 9:14 PM

    Hi,

    I do not see any difference. Maybe I am color blind :-)

    Cheers,

    --- On Fri, 7/18/08, Doan Dung Chi <dungchid@gmail.com> wrote:
    From: Doan Dung Chi <dungchid@gmail.com>
    Subject: Hierarchical QoS !
    To: "Cisco certification" <ccielab@groupstudy.com>
    Date: Friday, July 18, 2008, 8:19 PM

    Hi GS !

    Please explain the difference between these 2 ways of QoS configuration:

    Method 1:

    class-map match-all PRE2
     match ip precedence 2
    class-map match-all PRE3
     match ip precedence 3
    !
    policy-map PARENT
     class PRE2
     shape average 512000
     service-policy CHILD-PRE2
     class PRE3
     shape average 1024000
     service-policy CHILD-PRE3
    !
    policy-map CHILD-PRE2
     class class-default
     bandwidth 256
    policy-map CHILD-PRE3
     class class-default
     bandwidth 512
    !
    interface serial 1/0
    service-policy output PARENT

    Method 2:

    class-map match-all PRE2
     match ip precedence 2
    class-map match-all PRE3
     match ip precedence 3
    !
    policy-map PARENT
     class PRE2
     shape average 512000
     bandwidth 256
     class PRE3
     shape average 1024000
     bandwidth 512
    !

    interface serial 1/0
    service-policy output PARENT

    In my understanding, in Method 1 the interface will allocate 256kbps of
bandwidth for class PRE2 and 512kbps for class PRE3 when congestion occurs.
However, in Method 2, I do not understand how the interface processes the
queues.
    Correct me if I am wrong!

    Thanks

    _______________________________________________________________________
    Subscription information may be found at:
    http://www.groupstudy.com/list/CCIELab.html




This archive was generated by hypermail 2.1.4 : Mon Aug 04 2008 - 06:11:55 ART