From: Alexander Arsenyev (GU/ETL) (alexander.arsenyev@ericsson.com)
Date: Thu Jan 06 2005 - 07:17:25 GMT-3
Hello,
- Routing protocol traffic queueing depends on platform, configuration (7500 series), packet type (an OSPF HELLO is marked with pak_priority, other OSPF packets are not) and IOS version:
http://www.cisco.com/warp/public/105/rtgupdates.html
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120s/120s28/12sclocp.htm
- Have you enabled "frame-relay fragment 80" on both ends of the PVC?
If not, the remote router won't attempt to reassemble the FRF.12 fragments
because it doesn't have FRF.12 enabled! (A minimal sketch of the far-end config is below the link.)
http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fqos_c/fqcprt5/qcfrsvfr.htm#1005211
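E.g. something along these lines on the remote end of the PVC - just a sketch, the interface, DLCI and map-class name below are placeholders you'd match to whatever the far-end router really uses, and the fragment size must be the same 80 bytes as on the local map-class:

map-class frame-relay dave
 frame-relay fragment 80
!
interface Serial0/0.1 point-to-point
 frame-relay interface-dlci 100
  class dave

"show frame-relay fragment" on both routers should then list the PVC with matching fragment sizes.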
HTH,
Cheers
Alex
#13405
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com]
Sent: 05 January 2005 21:18
To: ccielab@groupstudy.com
Subject: MQC-based FRTS & dual FIFO
- When using dual FIFO as a result of FRF.12, control & voice packets get
placed in the high FIFO queue while the rest get fragmented & placed
in the second queue. If you set the fragment size too low, will routing
protocol traffic (OSPF hellos) be fragmented & placed in the secondary
queue?
- Does routing protocol traffic get put into the LLQ by default due to
pak_priority, or is this an L2 queueing process?
- From the priority debug I assume 0 is the high queue & 2 is the default? (In my
config I have an LLQ & class-default policy in a FRTS map-class.)
*Mar 1 03:29:58.515: PQ: Serial0/0 output (Pk size/Q 49/0)
*Mar 1 03:29:58.519: PQ: Serial0/0 output (Pk size/Q 46/2)
- I noticed a strange behavior where L3 traffic (OSPF hellos, telnet) was
being dropped when using a low (<100) fragment setting.
Can someone explain this? (A couple of show commands I had in mind are
sketched below the diagram.)
        s0/0.1 ____________________ s0/0.1
| R1 |                                     | R2 |
        s0/0.2 ____________________ s0/0.2
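Would these be the right commands to check where the drops happen? (DLCI numbers are just placeholders for whatever each end actually uses.)

R1#show frame-relay fragment
R1#show frame-relay pvc 100
R2#show frame-relay fragment
R2#show frame-relay pvc 100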
!
class-map match-all telnet
match access-group 100
!
!
policy-map qos
class telnet
priority percent 5
!
policy-map shape
class class-default
shape average 1544000
shape adaptive 768000
service-policy qos
!
interface Serial0/0.1 point-to-point
ip address 192.168.1.1 255.255.255.252
frame-relay interface-dlci 100
class dave
!
interface Serial0/0.2 point-to-point
ip address 192.168.7.1 255.255.255.252
frame-relay interface-dlci 400
!
map-class frame-relay dave
service-policy output shape
frame-relay fragment 80 (not a realistic # for a T1 - see the sizing note after the config)
!
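(Sizing note, assuming the usual rule of thumb of roughly 10 ms of serialization delay per fragment: fragment size in bytes ~= link rate in bps x 0.010 / 8, so 64,000 x 0.010 / 8 = 80 bytes fits a 64k link, 768,000 x 0.010 / 8 = 960 fits 768k, and 1,544,000 x 0.010 / 8 ~= 1,930 for a full T1 - which is why 80 only really makes sense on a very slow link and something around 1600 is the usual choice at T1 speeds.)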
After enabling the fragment in the map-class on s0/0.1:
R2
R2#sh ip ospf nei
Neighbor ID     Pri   State       Dead Time   Address       Interface
1.1.1.1           0   FULL/  -    00:00:37    192.168.7.2   Serial0/0.2
1.1.1.1           0   INIT/  -    00:00:37    192.168.1.2   Serial0/0.1
Serial0/0 is up, line protocol is up
Hardware is QUICC Serial
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation FRAME-RELAY, loopback not set
Keepalive set (10 sec)
LMI enq sent 252, LMI stat recvd 252, LMI upd recvd 0, DTE LMI up
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
LMI DLCI 1023 LMI type is CISCO frame relay DTE
Broadcast queue 0/64, broadcasts sent/dropped 618/0, interface broadcasts 534
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 00:42:07
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1
Queueing strategy: dual fifo
Output queue: high size/max/dropped 0/256/0
Output queue: 0/128 (size/max)
*** changed fragment from 80 to 1600
R2(config-map-class)#frame-relay fragment 1600
R2(config-map-class)#^Z
R2#
*Mar 1 00:46:49.223: %SYS-5-CONFIG_I: Configured from console by vty1 (192.168.7.2)
debug ip ospf
*Mar 1 00:46:51.267: %OSPF-5-ADJCHG: Process 100, Nbr 1.1.1.1 on Serial0/0.1 from LOADING to FULL, Loading Done
*Mar 1 00:47:02.203: %OSPF-5-ADJCHG: Process 100, Nbr 33.33.33.33 on Serial0/0.1 from LOADING to FULL, Loading Done
- Where do you see the most benefit from using the dual FIFO buffers
(given a BW > 768k)?
Thanks !
Regards,
Dave
______________________________________________
Architecture & Engineering
Work: (973) 682-4435
Cell: (973)907-4963