Re: Bandwidth 1544 vs 1536

From: Howard C. Berkowitz (hcb@gettcomm.com)
Date: Sun Apr 11 2004 - 18:54:21 GMT-3


At 2:28 PM -0700 4/11/04, Ahmed Mustafa wrote:
>What is the correct way to interpret serial interface bandwidth? It is
>usually said that, by default, serial interface bandwidth is equivalent
>to T1 bandwidth.
>
>That doesn't seem to be correct, since the T1 bandwidth is 1536000
>(64000 bps x 24 channels), while the serial port bandwidth is 1544000.

No. The bandwidth of a T1 is 1544000 bps; its payload is 1536000 bps.
Cisco _generally_ considers bandwidth equivalent to 1/bit time, whether
the clocking is internal or external. See the exception below.
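
The 8 kbps difference comes from T1 framing. Each 193-bit frame
carries 24 channels of 8 bits each plus 1 framing bit, and 8000
frames are sent per second:

  193 bits/frame x 8000 frames/sec = 1,544,000 bps  (line rate)
  192 bits/frame x 8000 frames/sec = 1,536,000 bps  (payload)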

>
>Two Questions:
>
>1) If the task states the T1 bandwidth, should we change the serial port
>bandwidth from 1544 to 1536, or leave it alone?

Leave it alone. The bandwidth is 1544 kbps, which happens to be the
default bandwidth of a Cisco serial port.
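
As a sketch -- the interface name here is just an example -- setting
it explicitly is a no-op on most serial ports:

  interface Serial0/0
   bandwidth 1544
  ! 'bandwidth' is in kilobits per second; it affects routing metrics
  ! and utilization statistics, never the physical line rate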

>
>2) In a lab environment, we usually use a DCE/DTE cable, and the link
>runs at whatever clock rate is configured. If I configure my clock rate
>as 64000, then obviously the DCE/DTE would be sending bits at 64000
>bits per second, regardless of what the bandwidth parameter is set to.

There is no automatic linkage between bandwidth and clock rate. This
is the exception I mentioned above. It is common practice to force an
artificially low bandwidth parameter on a link to make it less
preferred.

That will distort the utilization figures in show output and SNMP
statistics, since interface load is reported as a fraction of the
configured bandwidth. But if you know you are setting the bandwidth
artificially, your analysis software can correct for it.
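
As an illustration, in a back-to-back lab setup (interface name again
hypothetical), the two settings sit side by side and do different jobs:

  interface Serial0/1
  ! DCE end of the cable: clock rate sets the real bit rate, in bps
   clock rate 64000
  ! bandwidth stays at its default of 1544 kbps unless changed, so
  ! EIGRP still computes its metric as if this were a T1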

>
>I am confused about how I would actually configure unequal-cost load
>balancing for EIGRP.
>
>The task states to use the T1 bandwidth, so I must change the default
>serial bandwidth from 1544 to 1536, but the clock rate I configured is
>only 64000. The question is how I should go about configuring load
>balancing while keeping everybody happy.
>

Who came up with this task? Changing 1544 to 1536 is silly. The 8 kbps
difference is lost in the noise with any EIGRP variance setting.
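
You can see why from the EIGRP bandwidth term, which is 10^7 divided
by the bandwidth in kbps, scaled by 256:

  10,000,000 / 1544 = 6476  ->  6476 x 256 = 1,657,856
  10,000,000 / 1536 = 6510  ->  6510 x 256 = 1,666,560

That is about a 0.5% change in the bandwidth component of the metric,
nowhere near enough to change which paths any variance setting admits.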

In real-world practice, having one link at 1544 kbps and one at 64
kbps will give truly terrible performance, especially with respect to
out-of-sequence packets.

Even Cisco's general rule is not to do unequal-cost per-packet load
balancing across links with a variance of more than 2 or 3. With a
variance of 24, as in the scenario you describe, the difference in
per-packet serialization delay matters more to the protocols than raw
speed alone.

Per-packet unequal-cost load balancing is really a last-resort tool.
Realistically, it seemed like a good idea when IGRP was first
developed, but it has operationally proven to be more trouble than
it's worth. Per source-destination load balancing, which CEF supports,
works a little better.
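
For completeness, a minimal sketch of what the lab answer presumably
wants; the AS number and network here are placeholders:

  router eigrp 100
   network 10.0.0.0
  ! variance multiplies the best metric; a feasible successor whose
  ! metric falls within that multiple is also installed. Matching a
  ! 64k path against a 1544k path needs a variance around 24; the
  ! exact value depends on the delay component of the metric.
   variance 24

  ip cef
  ! with CEF, per-destination (source-destination) sharing is the
  ! default behavior and is far less painful than per-packet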


