Re: LLQ- help

From: Carlos G Mendioroz <tron_at_huapi.ba.ar>
Date: Tue, 18 Dec 2012 18:06:11 -0300

Well, I guess it's not just configuring frame-relay encapsulation that does
the trick, but using some internal DLCI to carry the traffic.

The actual config at the router tested is:

interface Serial0/1/0
  description to PSTN
  no ip address
  encapsulation frame-relay
  no keepalive
  ip rsvp bandwidth
!
interface Serial0/1/0.121 point-to-point
  description to BR-1
  ip address 10.1.6.101 255.255.255.0
  frame-relay interface-dlci 121
  ip rsvp bandwidth 48

This is an HQ router from a CVOICE class I'm teaching right now, so I
don't know what part of it makes the LLQ priority class police when the
policy is attached to the main interface.
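
For anyone wanting to reproduce the comparison, the two attachment points
would look roughly like this (the subinterface variant is only a sketch;
the drops I reported were with the policy on the main interface):

interface Serial0/1/0
  service-policy output prioUDP
!
interface Serial0/1/0.121 point-to-point
  service-policy output prioUDP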

I hope we all learn from this :)

-Carlos

Marko Milivojevic @ 18/12/2012 17:53 -0300 dixit:
> Well, I didn't really want to let the good learning experience go to
> waste, so I made my own test bed. Carlos - my findings are different
> from yours. I'd really like to compare the configs.
>
> So here's a very simple test involving a couple of routers:
>
>
> R1--R2--R5
>
> R2 is running various IOSes (I tried 12.4(15)T, 12.4(24)T and
> 15.1(3)T4). Since the CBWFQ->HQF change came at 12.4(20)T and I saw no
> observable difference in behavior, I will assume that my earlier
> observation was indeed correct and HQF did not affect LLQ in any way.
>
> This is the configuration on R2's interface facing R5:
>
> ------------------------------8<------------------------------
> interface Serial0/2/0
> bandwidth 2000
> ip address 192.168.25.2 255.255.255.0
> load-interval 30
> clock rate 2000000
> service-policy output LLQ
> !
> ------------------------------8<------------------------------
>
> As you can see, the interface is configured with both a physical and a
> logical bandwidth of 2 Mb/s.
>
> This is the policy:
>
> ------------------------------8<------------------------------
> class-map match-all PRIORITY
> match protocol icmp
> !
> policy-map LLQ
> class PRIORITY
> priority 8 32
> !
> ------------------------------8<------------------------------
>
> It is configured with the minimum possible values for both the
> conditional policer and the now-infamous burst size.
>
> There is no other traffic apart from the occasional CDP and keepalive
> packets. There is no dynamic routing in place.
>
> Here's the "traffic generator" from R1:
>
> ping 192.168.25.5 df size 1500 repeat 100000 timeout 1
>
> Here's the status on R2's input interface, after the ping has been
> running for some time (approximately the time it took me to type out
> the above text):
>
> ------------------------------8<------------------------------
> R2#sh int gi0/0 | i 30 sec
> 30 second input rate 582000 bits/sec, 45 packets/sec
> 30 second output rate 581000 bits/sec, 45 packets/sec
> ------------------------------8<------------------------------
>
> Here's the policy-map output:
>
> ------------------------------8<------------------------------
> R2#show policy-map interface Serial0/2/0 output class PRIORITY
> Serial0/2/0
>
> Service-policy output: LLQ
>
> queue stats for all priority classes:
>
> queue limit 64 packets
> (queue depth/total drops/no-buffer drops) 0/0/0
> (pkts output/bytes output) 26487/39822448
>
> Class-map: PRIORITY (match-all)
> 26499 packets, 39832496 bytes
> 30 second offered rate 840000 bps, drop rate 0 bps
> Match: protocol icmp
> Priority: 8 kbps, burst bytes 32, b/w exceed drops: 12
> ------------------------------8<------------------------------
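>
> A rough sanity check on those numbers (my arithmetic, not another
> counter): with "priority 8", an unconditionally active policer would
> allow only about 8 kb/s, while the offered rate is around 840 kb/s, so
> well over 99% of the packets would have to be dropped. Seeing only 12
> b/w exceed drops out of ~26,500 packets only makes sense if the policer
> kicks in just for the brief moments the interface is actually congested.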
>
> Let's see if Frame Relay changes anything. R5 is set as FR DCE, R2 is
> set as DTE. No other configuration changed.
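>
> For reference, this is roughly what that change looks like (my
> reconstruction, since I didn't paste it; R5's interface name is assumed,
> and the PVC/DLCI mapping details are omitted here):
>
> ------------------------------8<------------------------------
> ! R2 (DTE side)
> interface Serial0/2/0
>  encapsulation frame-relay
> !
> ! R5 (DCE side)
> frame-relay switching
> interface Serial0/2/0
>  encapsulation frame-relay
>  frame-relay intf-type dce
> ------------------------------8<------------------------------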
>
> ------------------------------8<------------------------------
> R2#show interface s0/2/0 | i Encaps
> Encapsulation FRAME-RELAY, loopback not set
>
> R2#show policy-map interface Serial0/2/0 output class PRIORITY
>
> Serial0/2/0
>
> Service-policy output: LLQ
>
> queue stats for all priority classes:
>
> queue limit 64 packets
> (queue depth/total drops/no-buffer drops) 0/0/0
> (pkts output/bytes output) 2011/3024544
>
> Class-map: PRIORITY (match-all)
> 2042 packets, 3071044 bytes
> 30 second offered rate 536000 bps, drop rate 0 bps
> Match: protocol icmp
> Priority: 8 kbps, burst bytes 32, b/w exceed drops: 0
> ------------------------------8<------------------------------
>
> I'm really not sure what else I need to do to show the conditional
> nature of this policer...
>
> --
> Marko Milivojevic - CCIE #18427 (SP R&S)
> Senior CCIE Instructor - IPexpert
>
> On Tue, Dec 18, 2012 at 12:10 PM, Marko Milivojevic <markom_at_ipexpert.com> wrote:
>> Didn't you see his previous message, when he tested non-FR interface?
>> Go back and check it out...
>>
>> --
>> Marko Milivojevic - CCIE #18427 (SP R&S)
>> Senior CCIE Instructor - IPexpert
>>
>> On Tue, Dec 18, 2012 at 12:05 PM, Paul Negron <negron.paul_at_gmail.com> wrote:
>>> Actually, I'm not aware that he disproved it. In fact, he proved it with
>>> Frame Relay by showing you that he was NOT congested and still experienced
>>> drops. How does that support my being wrong?
>>>
>>> For now... let's agree to disagree.
>>>
>>> When I show you that the burst rate WILL INDEED affect the traffic even when
>>> the link is NOT congested, you may change your mind, or I will if it is
>>> proven otherwise.
>>>
>>> Agreed?
>>>
>>> Paul
>>>
>>> Paul Negron
>>> CCIE# 14856
>>> negron.paul_at_gmail.com
>>>
>>>
>>>
>>> On Dec 18, 2012, at 2:53 PM, Marko Milivojevic <markom_at_ipexpert.com> wrote:
>>>
>>> You are aware that Carlos disproved what you were saying all along,
>>> except for Frame Relay? Given how Frame Relay has its own QoS
>>> mechanisms (granted, probably not at play here), I don't see how this
>>> is supporting your (wrong) case :-)
>>>
>>> Anyway... Feel free to misunderstand how LLQ works. I'll keep
>>> understanding it well and we're all happy...
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>> On Tue, Dec 18, 2012 at 11:40 AM, Paul Negron <negron.paul_at_gmail.com> wrote:
>>>
>>> Very well done Carlos!!!
>>>
>>> That burst is pesky and a pain !!!!! This is similar to what I saw in my
>>> "INVALID TEST"
>>>
>>> This is the reason why they removed it in IOS-XR as a default.
>>>
>>> Now, I would think that this output should help others in putting this to
>>> bed. Any complaints against it would be pure pride and speculation.
>>>
>>>
>>> Paul Negron
>>> CCIE# 14856
>>> negron.paul_at_gmail.com
>>>
>>>
>>>
>>> On Dec 18, 2012, at 11:44 AM, Carlos G Mendioroz <tron_at_huapi.ba.ar> wrote:
>>>
>>> Yes, there are policy drops:
>>>
>>> HQ-1#sh policy-map interface
>>> Serial0/1/0
>>>
>>> Service-policy output: prioUDP
>>>
>>> queue stats for all priority classes:
>>>
>>> queue limit 64 packets
>>> (queue depth/total drops/no-buffer drops) 0/0/0
>>> (pkts output/bytes output) 297/292842
>>>
>>> Class-map: udp (match-all)
>>> 460 packets, 453560 bytes
>>> 5 minute offered rate 10000 bps, drop rate 6000 bps
>>> Match: access-group name udp
>>> Priority: 100 kbps, burst bytes 2500, b/w exceed drops: 163
>>>
>>> Without the service policy, all traffic flies...
>>>
>>> -Carlos
>>>
>>> Marko Milivojevic @ 18/12/2012 13:33 -0300 dixit:
>>>
>>> Can you post the output of "show frame pvc" from when you tested the FR?
>>> I would be very careful about jumping to any conclusions (which you did
>>> not do), as something other than the policer could be dropping those
>>> packets. Did you see the drop counter in the class increase?
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>> On Tue, Dec 18, 2012 at 7:26 AM, Carlos G Mendioroz <tron_at_huapi.ba.ar>
>>> wrote:
>>>
>>> Just tested this under 15.1.1T @ 2811.
>>>
>>> Incoming interface FastEthernet, outgoing interface serial.
>>> Monitoring TX on the serial via SNMP, generating UDP traffic at a
>>> constant rate with a script.
>>>
>>> Baseline: 100K and 200K both are seen at TX on serial.
>>> Check 1:
>>>
>>> class-map match-all udp
>>>  match access-group name udp
>>> !
>>> policy-map prioUDP
>>>  class udp
>>>   priority 100
>>> !
>>> interface Serial0/1/1
>>>  service-policy output prioUDP
>>> !
>>> ip access-list extended udp
>>>  permit udp any any
>>>
>>> Both 100K and 200K are seen at TX on the serial.
>>>
>>> That matches my understanding (no congestion, no policing).
>>>
>>> But... the same code and the same config on an interface running frame
>>> relay does drop packets even when not congested.
>>>
>>>
>>> To play with this, all you need is one router (a real one please, no
>>> dynamips for testing QoS :) and some time.
>>> I can provide a perl script that generates UDP traffic, and a copy of a
>>> small SNMP interface traffic graphing tool which is handy
>>> (Interface Traffic Indicator, InfTraf.exe, version 1.1.0, April 2004,
>>> software by Carsten Schmidt).
>>>
>>>
>>> -Carlos
>>>
>>>
>>>
>>>
>>> Carlos G Mendioroz @ 18/12/2012 06:56 -0300 dixit:
>>>
>>> May I ? :)
>>>
>>> It might be that the whole issue is that:
>>> -the behaviour changed at some point in time
>>> -the behaviour is different on some architectures
>>> -some test was done with a flaw that gave someone a false idea
>>>
>>> I have not tested this lately, but it used to be the case that the
>>> policer would not be there when not congested. Fact, tested by many.
>>>
>>> I will retest this ASAP to (again?) be assertive about it. I respect
>>> Paul, and it may be that with some code (and some architectures) this
>>> has changed. After all, it would make sense for Cisco to always impose
>>> a policer on a priority queue, because that's how most people believe
>>> it would behave.
>>>
>>> The burst size may just be a measurement parameter. After all, the
>>> instantaneous rate is always the input interface speed, right? For any
>>> throughput metering you need some time slots, which might not be
>>> aligned, and some bursting slack.
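>>>
>>> As a rough illustration with numbers from my own output earlier:
>>> "priority 100" got a default burst of 2500 bytes, and 100 kbps is
>>> 12500 bytes per second, so that default burst is about 200 ms worth of
>>> the priority rate, i.e. exactly the kind of metering slack I mean.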
>>>
>>> As to whether there is a queue or not, it would be very hard to be
>>> conclusive, because the TX ring will always behave as one. But what
>>> difference it would make, or whether it would be needed at all given
>>> that the traffic is priority and should stay below the output interface
>>> rate, I don't know nor care :)
>>>
>>> I would like this NOT to be taken offline. We can all learn. I would
>>> also like everyone to agree to a self-imposed rate limit, maybe
>>> exponential, to filter out any impulse-driven answers. It's an
>>> important subject, IMHO.
>>>
>>> -Carlos
>>>
>>>
>>>
>>> Marko Milivojevic @ 18/12/2012 02:14 -0300 dixit:
>>>
>>>
>>> Oh, I understand it very well... This has *nothing* to do with burst,
>>> as I said hours ago... :-) It has to do with when the strict scheduler
>>> is in effect. The scheduler is in effect when software queueing is
>>> engaged, and software queueing is engaged when the lower layers (for
>>> lack of a better term: the TX ring, or a parent shaper) signal that
>>> they are congested (TX ring) or that they exist (shaper).
>>>
>>> Now, the message I'm responding to clearly shows you really
>>> misunderstand how CBWFQ works. There is no policer there; the
>>> conditional policer exists only in the LLQ. Unfortunately, I'm off to
>>> watch The Hobbit now, so I'll have to explain better in a couple of hours.
>>>
>>> The PRIORITY keyword does not create a "PRIORITY QUEUE". It creates an
>>> LLQ, which I downright *refuse* to call by the term used in IOS for
>>> something else.
>>>
>>> If you're curious: create an LLQ with 2 Mb/s priority. Send 10 Mb/s of
>>> traffic that matches, but *no* other traffic. Make sure you're not
>>> oversubscribing the outgoing interface. What will happen?
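>>>
>>> If you want to lab it up, something along these lines should do (class
>>> match and names are just placeholders, pick whatever fits your traffic):
>>>
>>> ------------------------------8<------------------------------
>>> class-map match-all TEST
>>>  match ip dscp ef
>>> !
>>> policy-map LLQ-2M
>>>  class TEST
>>>   priority 2000
>>> !
>>> interface GigabitEthernet0/1
>>>  service-policy output LLQ-2M
>>> ------------------------------8<------------------------------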
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>>
>>> On Mon, Dec 17, 2012 at 9:03 PM, Paul Negron <negron.paul_at_gmail.com>
>>> wrote:
>>>
>>>
>>> Marko,
>>>
>>> There are 2 distinct things in play for LLQ.
>>>
>>> 1) The CBWFQ scheduler - this operates exactly the way you have been
>>> stating the entire time. Congestion must be present for this
>>> scheduler to operate effectively.
>>>
>>> 2) The priority class - I think you are very mistaken about this part
>>> of LLQ. The fact that you did not understand the "burst" proves this.
>>> Not that this is a bad thing. So what if you did not know; it does not
>>> mean I think less of you. ;-)
>>>
>>> You keep speaking about LLQ from only one of the above perspectives.
>>>
>>> I understand the multiple-input-interfaces deal. I was not testing
>>> the queuing; that is very straightforward.
>>>
>>> I was testing the policer in the priority class, you know, the part
>>> that makes LLQ different from CBWFQ. You are speaking as if they
>>> behave the same when they don't.
>>>
>>> I think I see where we MAY be speaking past each other, but let me
>>> clarify. I was making a point so EVERYONE would understand how the
>>> priority queue works, which is very different from what MOST people
>>> think. The statement in the post I was referencing was about the
>>> "PRIORITY" keyword, which means the class is participating as a
>>> priority queue.
>>>
>>> Paul
>>>
>>> Paul Negron
>>> CCIE# 14856
>>> negron.paul_at_gmail.com
>>> 303-725-8162
>>>
>>>
>>>
>>> On Dec 17, 2012, at 11:33 PM, Marko Milivojevic <markom_at_ipexpert.com>
>>> wrote:
>>>
>>> And mind you :-), I was not the one who talked about flows. I talked
>>> about different interfaces, or different classes in the same policy.
>>> Two flows in the same queue coming from the same input interface (be
>>> it 1 or 19 phones) is still 1 input and 1 output. To see the queueing,
>>> you need multiple input interfaces. Think of a Y.
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>> On Mon, Dec 17, 2012 at 8:31 PM, Marko Milivojevic
>>> <markom_at_ipexpert.com> wrote:
>>>
>>>
>>> Paul,
>>>
>>> If there was no congestion on the TX ring, there was no LLQ. TX ring
>>> congestion is what signals to IOS that software queueing needs to be
>>> engaged. Your test was flawed, sorry to say.
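>>>
>>> (If you want to see the software queue engage without loading the
>>> link, one trick, on platforms that support it, is to shrink the TX
>>> ring on the outgoing interface, for example:
>>>
>>> interface Serial0/1/0
>>>  tx-ring-limit 2
>>>
>>> Just a suggestion, not something used in the tests above.)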
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>> On Mon, Dec 17, 2012 at 8:25 PM, Paul Negron
>>> <negron.paul_at_gmail.com> wrote:
>>>
>>>
>>> I have tested it precisely!
>>>
>>> I put voice traffic into the priority class and left the burst at its
>>> default.
>>>
>>> I placed enough voice calls to equal the amount of traffic I used with
>>> the "priority" command (4 calls at 32K each, NO VAD enabled). ALL
>>> traffic passed and was not rejected. I placed a 5th call and it also
>>> went through with no problem, because it did not exceed the burst
>>> parameter (voice is not bursty). The second I placed another call, ALL
>>> of the voice flows were negatively impacted. The priority class began
>>> dropping traffic! It reacted as if it were receiving burst traffic that
>>> exceeded what it would allow.
>>>
>>> When I extended the burst parameter, ALL of the voice call issues
>>> cleared up.
>>>
>>> There was NO congestion on the transmit ring at ANY time during this
>>> test.
>>>
>>>
>>> I also performed the same test with live video, but the results were
>>> devastating due to the extremely bursty nature of the traffic I was
>>> using. I needed to extend the "BURST" parameter substantially due to
>>> its extremely restrictive default.
>>>
>>> This is why some people misspeak and say that the priority class is a
>>> maximum value. It's true in that it bounds the high-end bandwidth, but
>>> it DOES allow you to burst and squeeze a little bit more by default.
>>> It's just REALLY restrictive. It does not enforce the 1 to 2 second
>>> recommendation.
>>>
>>> I still disagree with your example of where you "MAY SEE" queueing of
>>> packets, since I have NOT been able to prove it to this point. I did
>>> not ask you to show me the packets to be confrontational or
>>> argumentative. I actually thought I was going to learn something in
>>> this conversation about how the priority queue actually buffers
>>> packets. I don't know what command you used to verify this.
>>>
>>> This is why I am NOT confused about how LLQ works. I understood what
>>> the BURST parameter actually does. I am NOT guessing.
>>>
>>> Policing will impose its constraint whether you are congested on the
>>> TX ring or NOT. The same goes for shaping!
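>>>
>>> For comparison, an explicit MQC policer (just an illustration, class
>>> name made up) is unconditional by definition:
>>>
>>> policy-map HARD-POLICE
>>>  class VOICE
>>>   police 100000 conform-action transmit exceed-action drop
>>>
>>> That kind of policing acts regardless of what the TX ring is doing,
>>> which is the behavior I'm describing for the priority class as well.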
>>>
>>> Paul
>>>
>>> Paul Negron
>>> CCIE# 14856
>>> negron.paul_at_gmail.com
>>>
>>>
>>>
>>> On Dec 17, 2012, at 10:19 PM, Marko Milivojevic
>>> <markom_at_ipexpert.com> wrote:
>>>
>>> On Mon, Dec 17, 2012 at 7:11 PM, Marko Milivojevic
>>> <markom_at_ipexpert.com>
>>> wrote:
>>>
>>>
>>> Yeah, I've seen that in the command reference as well. It's not
>>> exactly well documented what it does.
>>>
>>>
>>> What I suspect, though (and this is purely speculation), is that it
>>> allows the traffic to burst by the specified amount when the LLQ is
>>> engaged, which means when the TX ring (or another choke point, e.g. a
>>> shaper in the parent class) signals congestion. Since there's no LLQ
>>> when there's no congestion, I don't see how this parameter is at all
>>> relevant when LLQ is not active. That's the part of your statement
>>> about 30 seconds that I mostly disagree with.
>>>
>>> --
>>> Marko Milivojevic - CCIE #18427 (SP R&S)
>>> Senior CCIE Instructor - IPexpert
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Carlos G Mendioroz <tron_at_huapi.ba.ar> LW7 EQI Argentina
>>>
>>>
>>> --
>>> Carlos G Mendioroz <tron_at_huapi.ba.ar> LW7 EQI Argentina
>>>
>>>
>>>

-- 
Carlos G Mendioroz  <tron_at_huapi.ba.ar>  LW7 EQI  Argentina