From: Petr Lapukhov (petrsoft@gmail.com)
Date: Tue May 30 2006 - 14:00:16 ART
Hello group,
The question I have may sound boring, but I think it would be really useful to investigate this matter :))) Not to mention that it touches on some deep QoS topics.
To start with, let's recall FRF.12 with legacy FRTS. The main idea is to enable fragmentation AND interleaving. Interleaving is performed by the Dual FIFO queue at the interface level, where "small" packets go to the high-priority queue and "large", fragmented packets are directed to the low-priority queue. That is how interleaving works in this case: small packets get squeezed BETWEEN the fragments of large ones.
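For reference, the classic setup I have in mind looks roughly like this (just a sketch; the DLCI, CIR and fragment size are made-up example values):

interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 100
  class FRTS-FRF12
!
map-class frame-relay FRTS-FRF12
 frame-relay cir 128000
 frame-relay bc 1280
 ! 160-byte fragments = roughly 10 ms of serialization delay at 128 kbps
 frame-relay fragment 160
 ! per-VC queue (WFQ), which as noted becomes the default once FRF.12 is on
 frame-relay fair-queue

With that in place, the interface queueing strategy should switch to the Dual FIFO described above.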
An important thing to remember is that packets are *first* dequeued from the *PVC-level* queue (which is WFQ by default when FRF.12 is turned on). Next, packets are compressed, and then fragmented. Therefore, fragmentation occurs AFTER per-VC dequeueing.
Note that the fragmentation decision is based solely on *packet size*, so your voice (small) packets may be fragmented as well if the fragment size is set too low :)
Now we have the new FRF.12 at interface level (12.2(13)T):
http://www.cisco.com/univercd/cc/td/doc/product/software/ios124/124cg/hwan_c/ch05/hfrfrint.htm
What's happening here? As far as I understand it, fragmentation should occur AFTER the interface-level queue is processed and packets are compressed (payload/RTP). The question is: how does INTERLEAVING happen in that case? There is NO Dual FIFO that could help here (at least I did not find one with show commands :))
The DocCD vaguely mentions that interleaving happens only when LLQ is configured at the interface level. But does that mean packets would have to be enqueued AFTER fragmentation? Is it even possible to classify fragmented data?
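For comparison, the interface-level variant I'm reading about would be configured roughly like this (again only a sketch; the class names, priority bandwidth and fragment size are mine):

class-map match-all VOICE
 match ip dscp ef
!
policy-map INT-LLQ
 class VOICE
  ! LLQ class - per the DocCD this is the traffic that should get interleaved
  priority 64
 class class-default
  fair-queue
!
interface Serial0/0
 encapsulation frame-relay
 ! interface-level FRF.12, no FRTS/map-class involved
 frame-relay fragment 160 end-to-end
 service-policy output INT-LLQ

If I read the DocCD right, traffic from the priority class is what gets interleaved between the fragments of the other classes, but that is exactly the mechanism I am unsure about.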
That is my doubt. Investigating a bit further, I found in W. Odom's "CCIE R&S Certification Guide 2006" that a Dual FIFO still exists "between" the software and hardware queues. But how could one verify that? :)
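The only checks I can come up with (guessing here) are along these lines:

show interfaces serial 0/0           <- does the "Queueing strategy" line say "dual fifo"?
show queueing interface serial 0/0
show frame-relay fragment
show frame-relay pvc 100

If the Dual FIFO really sits between the software queue and the tx-ring, maybe one of these would reveal it; I'd appreciate it if someone could confirm.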
Hope I don't bother you too much :)
Petr