Thanks Marko. It is a 6724-SFP module; I believe it has 1.2:1
oversubscription, and we are only using 2 ports on this module.
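I have not pulled the fabric utilization and switching mode yet; if I have
it right, these should show what you asked about (assuming a fabric-enabled
chassis):

  show fabric utilization
  show fabric switching-mode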
Here is the output.
GigabitEthernet7/1 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 001a.e2fa.8200 (bia 001a.e2fa.8200)
Description: MCWD01 g7/1 to MCRC01 g7/1
Internet address is X.X.X.X/30
MTU 1560 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 8/255, rxload 6/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is LH
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 12:49:09
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 26131
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 25745000 bits/sec, 6297 packets/sec
30 second output rate 32664000 bits/sec, 8255 packets/sec
L2 Switched: ucast: 985662 pkt, 76200416 bytes - mcast: 19993 pkt, 1746141 bytes
L3 in Switched: ucast: 136681880 pkt, 55359230030 bytes - mcast: 0 pkt, 0 bytes mcast
L3 out Switched: ucast: 143660930 pkt, 61006436251 bytes mcast: 0 pkt, 0 bytes
137739519 packets input, 55462587036 bytes, 0 no buffer
Received 24701 broadcasts (2057 IP multicasts)
0 runts, 169733 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
144669115 packets output, 62231705511 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
GigabitEthernet7/5 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 001a.e2fa.8200 (bia 001a.e2fa.8200)
Description: MCWD01 g7/5 to MCRC02 g7/1
Internet address is X.X.X.X/30
MTU 1560 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 5/255, rxload 4/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is LH
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 12:49:51
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 30829
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 18466000 bits/sec, 4701 packets/sec
30 second output rate 19645000 bits/sec, 6727 packets/sec
L2 Switched: ucast: 842993 pkt, 61018930 bytes - mcast: 18770 pkt, 1660280 bytes
L3 in Switched: ucast: 126079051 pkt, 50881092114 bytes - mcast: 0 pkt, 0 bytes mcast
L3 out Switched: ucast: 134278036 pkt, 57388511678 bytes mcast: 0 pkt, 0 bytes
126982662 packets input, 50967406677 bytes, 0 no buffer
Received 22619 broadcasts (1044 IP multicasts)
0 runts, 12654 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
135088285 packets output, 58515937777 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Also, I would like to know whether WRR queues are considered hardware
queues. And if I sniff the packets, would I be able to capture the dropped
packets as well?
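This is roughly the SPAN session I had in mind for the sniffing (session
number and destination port are placeholders, not our real setup):

  monitor session 1 source interface GigabitEthernet7/1 both
  monitor session 1 destination interface GigabitEthernet7/48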
thanks
On 4 May 2010 12:37, Marko Milivojevic <markom_at_ipexpert.com> wrote:
> Could we see "show int" output for the relevant interface, please?
>
> What kind of LC is this? What is the fabric utilization? What is the
> fabric switching mode?
>
> --
> Marko Milivojevic - CCIE #18427
> Senior Technical Instructor - IPexpert
>
> YES! We include 400 hours of REAL rack
> time with our Blended Learning Solution!
>
> Mailto: markom_at_ipexpert.com
> Telephone: +1.810.326.1444
> Fax: +1.810.454.0130
> Web: http://www.ipexpert.com/
>
> On Tue, May 4, 2010 at 19:09, naman sharma <naman.prep_at_gmail.com> wrote:
> > Thanks all for your replies. Well, it is 1 Gig and full duplex on both
> > sides, and it is not hardcoded. Flow control is off on both sides for
> > input and output traffic.
> >
> > So these 2 routers are in an MPLS domain, with one being the PE and the
> > other being the P router, and I see output drops on the PE router towards
> > the P router. The PE router has mls qos enabled, and right now the
> > interface in the MPLS domain shows all the traffic in CoS 0 and hence in
> > queue 1, and that is where I see the drops.
> >
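> > (I believe the output below is from "show queueing interface Gi7/1".)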
> > Interface GigabitEthernet7/1 queueing strategy: Weighted Round-Robin
> > Port QoS is enabled
> > Trust boundary disabled
> >
> > Trust state: trust COS
> > Extend trust state: not trusted [COS = 0]
> > Default COS is 0
> > Queueing Mode In Tx direction: mode-cos
> > Transmit queues [type = 1p3q8t]:
> > Queue Id Scheduling Num of thresholds
> > -----------------------------------------
> > 01 WRR 08
> > 02 WRR 08
> > 03 WRR 08
> > 04 Priority 01
> >
> > WRR bandwidth ratios: 100[queue 1] 150[queue 2] 200[queue 3]
> > queue-limit ratios: 50[queue 1] 20[queue 2] 15[queue 3] 15[Pri Queue]
> >
> > queue tail-drop-thresholds
> > --------------------------
> > 1 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> > 2 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> > 3 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> >
> > queue random-detect-min-thresholds
> > ----------------------------------
> > 1 40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
> > 2 40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
> > 3 70[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
> >
> > queue random-detect-max-thresholds
> > ----------------------------------
> > 1 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> > 2 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> > 3 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> >
> > WRED disabled queues:
> >
> > queue thresh cos-map
> > ---------------------------------------
> > 1 1 0
> > 1 2 1
> > 1 3
> > 1 4
> > 1 5
> > 1 6
> > 1 7
> > 1 8
> > 2 1 2
> > 2 2 3 4
> > 2 3
> > 2 4
> > 2 5
> > 2 6
> > 2 7
> > 2 8
> > 3 1 6 7
> > 3 2
> > 3 3
> > 3 4
> > 3 5
> > 3 6
> > 3 7
> > 3 8
> > 4 1 5
> >
> > Queueing Mode In Rx direction: mode-cos
> > Receive queues [type = 1q8t]:
> > Queue Id Scheduling Num of thresholds
> > -----------------------------------------
> > 01 WRR 08
> >
> > WRR bandwidth ratios: 100[queue 1]
> > queue-limit ratios: 100[queue 1]
> >
> > queue tail-drop-thresholds
> > --------------------------
> > 1 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
> >
> > queue thresh cos-map
> > ---------------------------------------
> > 1 1 0 1 2 3 4 5 6 7
> > 1 2
> > 1 3
> > 1 4
> > 1 5
> > 1 6
> > 1 7
> > 1 8
> >
> >
> > Packets dropped on Transmit:
> > BPDU packets: 0
> >
> > queue dropped [cos-map]
> > ---------------------------------------------
> >
> > 1 295660 [0 1 ]
> > 2 0 [2 3 4 ]
> > 3 0 [6 7 ]
> > 4 0 [5 ]
> >
> > Packets dropped on Receive:
> > BPDU packets: 0
> >
> > queue dropped [cos-map]
> > ---------------------------------------------
> > 1 0 [0 1 2 3 4 5 6 7 ]
> >
> > Now I can increase the queue limit (e.g., the sketch below), but that
> > will add delay to the packets sitting in the queue and can lead to other
> > issues. Please suggest.
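> >
> > Something like this is what I have in mind, assuming the 1p3q8t
> > queue-limit syntax (the ratios are only an illustration, not a
> > recommendation; the priority queue takes the remainder):
> >
> > interface GigabitEthernet7/1
> >  wrr-queue queue-limit 60 20 10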
> >
> > thanks
> > naman
> >
> > On 4 May 2010 11:36, Marko Milivojevic <markom_at_ipexpert.com> wrote:
> >>
> >> You are absolutely right... if it's indeed GigE speed we're talking
> >> about here. However, we only have the information that the interface
> >> itself is GigE, and as we know, those "10/100/1000" interfaces are
> >> prone to this kind of thing.
> >>
> >> If it's GigE speed on the link, then I would personally look at QoS
> >> and especially flow control, as I have had quite a few issues with it
> >> on Cisco switches and routers.
> >>
> >> --
> >> Marko Milivojevic - CCIE #18427
> >> Senior Technical Instructor - IPexpert
> >>
> >> YES! We include 400 hours of REAL rack
> >> time with our Blended Learning Solution!
> >>
> >> Mailto: markom_at_ipexpert.com
> >> Telephone: +1.810.326.1444
> >> Fax: +1.810.454.0130
> >> Web: http://www.ipexpert.com/
> >>
> >> On Tue, May 4, 2010 at 18:32, Ryan West <rwest_at_zyedge.com> wrote:
> >> > Hey Marko,
> >> >
> >> >> -----Original Message-----
> >> >> Sent: Tuesday, May 04, 2010 2:16 PM
> >> >> To: Narbik Kocharians
> >> >> Cc: itguy.pro_at_gmail.com; Kambiz Agahian; naman sharma; Cisco
> >> >> certification
> >> >> Subject: Re: Output Drops on Gig Interface
> >> >>
> >> >> On Tue, May 4, 2010 at 17:39, Narbik Kocharians <narbikk_at_gmail.com> wrote:
> >> >> > That is true: the end that is in half-duplex mode should get "late
> >> >> > collisions" and the end that is in full-duplex mode should get "CRC
> >> >> > errors", whereas a mismatch in speed (which I don't think could be
> >> >> > the problem that you are experiencing) should show as "NOTCONNECTED".
> >> >>
> >> >> Quite right. However, if duplex is not hardcoded but speed is, it
> >> >> would not be negotiated in most cases. Cisco used to default to
> >> >> half-duplex in this case. I've seen quite a few issues caused by
> >> >> configuring only part of the speed/duplex pair.
> >> >>
> >> >> If any of them is set manually, negotiation is disabled. To negotiate
> >> >> speed and duplex, both need to be set to auto.
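> >> >>
> >> >> For example, on a 10/100/1000 copper port (interface name is just
> >> >> an illustration):
> >> >>
> >> >> interface GigabitEthernet1/1
> >> >>  speed auto
> >> >>  duplex auto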
> >> >>
> >> >
> >> > It was my understanding that, by default, all devices are supposed to
> >> > perform autonegotiation, as 802.3z does not specifically define a way
> >> > to turn it off. Also, Cisco devices do not support half-duplex Gig,
> >> > and the standard does not have support for it either. With link
> >> > negotiation turned off, the device with autonegotiation turned off
> >> > will report up and the other side will be down.
> >> >
> >> > I have not tested all of these scenarios in great detail, so in
> >> > practice it might differ slightly.
> >> >
> >> > -ryan