From: Scott Vermillion (scott_ccie_list@it-ag.com)
Date: Sat Mar 01 2008 - 21:12:38 ARST
So just a quick addendum to my earlier response on the clock thing...
I was rummaging around in my pile of WICs, etc, this afternoon when I came
across an old WIC-1DSU, which brought back a flood of memories from when
those were an everyday part of my professional life. Now again, in a
production environment, I'm going to tell my carrier to provide clock (how
they do that is another discussion altogether, but it typically involves a
DACS tied back to a BITS). But in a back-to-back situation with one of
these, I don't think you even set the clock rate at all. You either specify
"internal," in which case it's 1.544 Mbps (because the card itself is introducing
the framing overhead), or "line" (in which case it's still 1.544 Mbps, it's
just not coming from an oscillator on the card). The only speed you can set
is at the channel level (56 kbps or 64 kbps). But either way you wind up
with the same aggregate line rate with these serial boards that have
built-in CSU/DSUs.
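If anyone wants to actually lab that up, something roughly like this ought to
do it across a T1 crossover cable between a pair of WIC-1DSU-T1s (interface
numbers and addresses are just made up for illustration):

! Router A - supplies clock from its own oscillator
interface Serial0/0
 ip address 10.1.1.1 255.255.255.252
 service-module t1 clock source internal
 service-module t1 timeslots 1-24 speed 64
!
! Router B - recovers clock from the line (i.e. from Router A)
interface Serial0/1
 ip address 10.1.1.2 255.255.255.252
 service-module t1 clock source line
 service-module t1 timeslots 1-24 speed 64

Either way you're at 1.544 Mbps on the wire; the timeslots/speed knob is the
only place you really choose anything.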
With the non-DSU type of serial cards, though, which I'm guessing the
original question applied to (DUH - it said "hdlc" in the subject), what you
set it to in a back-to-back lab environment is really your call. IIRC,
most cards will either throw you an error or will automatically "round" to
the nearest rate supported by the given hardware if you choose something it
doesn't particularly care for. I did a handful of rack rentals and also
those IE mock labs towards the end of my lab prep. Seems to me some of the
serial stuff in some of those boxes was the older low-speed hardware and you
were fairly limited in what you could configure. But with the WIC-2Ts and
WIC-4Ts, etc, IIRC, you can even choose to set the clock to exactly 2 Mbps;
it doesn't necessarily have to be some 64 kbps multiple in that
case. So I guess the best answer is that it's HW-dependent, but it's not
all that important one way or the other for lab purposes. All you're really
carrying is IGP hellos, CDP, and so forth, so 56 kbps would actually work
just fine. The really important thing is to have a clock rate configured at
all on the DCE side, regardless of what it may be.
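In other words, the minimal back-to-back HDLC setup looks something like this
(again, interface numbers and addressing are invented; "show controllers
serial 0/0" will tell you which end got the DCE cable):

! DCE side - this is the end that must provide clock
interface Serial0/0
 ip address 10.1.2.1 255.255.255.252
 clock rate 56000
 no shutdown
!
! DTE side - no clock rate needed; it recovers clock from the DCE
interface Serial0/1
 ip address 10.1.2.2 255.255.255.252
 no shutdown

Encapsulation defaults to HDLC on both ends, so there's really nothing else
to it.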
Just thought I would attempt to clarify that, since in retrospect my
original response actually seems to have bounced between two different
possible scenarios, one of which doesn't seem to have been asked about (hey
- serial and T-Carrier went out with platform shoes and bellbottoms,
right?)...
-----Original Message-----
From: Scott Vermillion [mailto:scott_ccie_list@it-ag.com]
Sent: Friday, February 29, 2008 9:43 AM
To: 'Sadiq Yakasai'; 'Radioactive Frog'
Cc: 'Santi'; 'John'; 'ccielab@groupstudy.com'
Subject: RE: hdlc clock rate
In a typical production environment (in my experience), you're likely going
to derive clock from your service provider (in other words, you're going to
be the DTE on both ends of the link and the carrier gear is the DCE on both
ends). So you don't set a clock rate at all. If you're doing back-to-back
stuff, though, and you're setting up an E1, IIRC you set the clock rate to
the full 2048 kbps (likewise, you set your clock for a T-1 to 1544 kbps vs 1536 kbps).
This config statement dictates the rate at which ones and zeros are signaled
on the wire and is thus not concerned with how many might be overhead and
how many might be payload.
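So on the DCE end of a back-to-back "E1" you'd have something along these
lines (whether a given card accepts exactly 2048000 or rounds it to the
nearest rate it supports is HW-dependent):

interface Serial0/0
 ! full E1 line rate; for a back-to-back "T1" this would be 1544000, not 1536000
 clock rate 2048000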
As for the BW thing, no, not spillover per se - not directly, I don't think.
But you're going to screw things up like QoS and possibly metric
calculations. For example, imagine you were only signaling at 128 kbps
but you configured the BW to be 512 kbps and then assigned 256 kbps to a
priority class (and then offered that much load to the circuit). I think
you'd wind up with tail drop in your priority queue, which would be a lot
like not having a priority queue (then again, you'll always experience tail
drop if you offer twice the load that the circuit can handle!...hmmm).
Also, my guess in this case would be that you might end up with a PQ-like
behavior, with starvation of non-priority classes, as the router would
constantly be trying to service this over-subscribed priority queue. Not
really sure about that last one, though. I guess it comes down to the exact
mechanics of how the scheduler works. I just recently finished "Inside
Cisco IOS Software Architecture," which does cover QoS and so forth. But I
don't recall reading anything definitive as to what would happen in this
case. I guess most books don't cover what happens when you intentionally
hose something up pretty badly... ;~)
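For what it's worth, the kind of misconfiguration I'm describing would look
something like this (class names, match criteria, and numbers invented purely
to illustrate the mismatch):

class-map match-all VOICE
 match ip dscp ef
!
policy-map LAB-QOS
 class VOICE
  ! 256 kbps of "priority" promised out of a link that can only signal 128 kbps
  priority 256
 class class-default
  fair-queue
!
interface Serial0/0
 ! BW claims 512 kbps, but the DCE is only clocking the line at 128 kbps
 bandwidth 512
 clock rate 128000
 service-policy output LAB-QOS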
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
Sadiq Yakasai
Sent: Friday, February 29, 2008 8:30 AM
To: Radioactive Frog
Cc: Santi; John; ccielab@groupstudy.com
Subject: Re: hdlc clock rate
I like to think of clock rate as referring to the actual rate at which
the bits are transmitted physically on the wire. It's a physical layer
attribute, I would say.
Configured bandwidth, on the other hand, refers to the usable bandwidth
to be utilized by the protocol in question on the link.
But I keep thinking, what happens when you configure a BW statement on a
link to a value higher than the actual clocking rate on the
interface? Does that result in dropped traffic, spillage, or what?
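i.e. something like this (values made up):

interface Serial0/0
 ! physical layer: the rate at which bits are actually signaled (DCE end only)
 clock rate 128000
 ! logical value that routing protocol metrics and QoS calculations believe
 bandwidth 512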