From: Andrew Lee Lissitz (alissitz@corvil.com)
Date: Mon Feb 28 2005 - 11:25:29 GMT-3
Great points, James! With most traditional monitoring tools you are going to
overload the links with monitoring data when polling the remote sites frequently.
This is the trade-off: do you want data that is precise and actionable at the
expense of link bandwidth, or do you want data that fails to capture the
nature, burstiness, and exact BW requirements of the traffic? A tough decision to make!
These statements are true for traditional tools like MRTG, James, but a newer
tool called Corvil sends only about 300 bytes back to the management station every
5 minutes. It does not overload the link with management data, yet it still
gives actionable data that relates to QoS targets (latency / jitter, the BW
required to meet QoS targets, and loss targets).
For more information on this or any monitoring solution, please email or
call me directly. If you would like some whitepapers etc., just let me
know offline. I would enjoy the opportunity to help!
Kindest Regards James and all,
Andrew Lee Lissitz
908.303.4762
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
Keane, James
Sent: Monday, February 28, 2005 5:45 AM
To: Andrew Lee Lissitz; David Heaton; ali; ccielab
Subject: RE: ATM subinterface - and PCR/SCR :: Burst/CIR FrATM mapping?
That's a common misconception about a minimum 5-minute polling interval for
MRTG. For roughly two years now MRTG has been using RRD databases; the
polling interval can be whatever you like, and the row count (the number of data
points in your database) as large as you like, so every real sample is displayed
on the daily graph (not averages of samples, as before).
I take a sample every minute. I could do it every 5 seconds, but I don't want
to cause congestion on the very lines I am concerned about (unfortunately, I
can only view the far end).
You do need to poll the correct OID to get the true value of the load (and
not just the x/255 5-minute average that is the default).
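To illustrate polling the raw counters yourself, here is a minimal Python sketch
(the host address, ifIndex, community string, and link speed are placeholders,
and it assumes net-snmp's snmpget is on the PATH): it reads IF-MIB::ifHCInOctets
twice and works out bits per second from the counter delta.

#!/usr/bin/env python3
# Minimal sketch: derive interface input rate from two SNMP counter samples.
# Assumptions (placeholders, not from this thread): SNMPv2c community 'public',
# ifIndex 1, a 128 kbit/s link, and net-snmp's snmpget installed.
import subprocess
import time

HOST = "10.0.0.1"                 # placeholder management address
IF_INDEX = 1                      # placeholder ifIndex of the (sub)interface
LINK_BPS = 128_000                # placeholder link speed in bits per second
OID = f"IF-MIB::ifHCInOctets.{IF_INDEX}"   # 64-bit input octet counter
                                           # (numeric: .1.3.6.1.2.1.31.1.1.1.6.<ifIndex>)
INTERVAL = 60                     # seconds between the two samples


def get_octets() -> int:
    """Read the input-octet counter once; -Oqv makes snmpget print the value only."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", "public", "-Oqv", HOST, OID], text=True
    )
    return int(out.strip())


first = get_octets()
time.sleep(INTERVAL)
second = get_octets()

# Counter delta -> bits per second (no counter-wrap handling in this sketch).
bps = (second - first) * 8 / INTERVAL
print(f"average input rate over {INTERVAL}s: {bps:.0f} bps "
      f"({100 * bps / LINK_BPS:.1f}% of link)")

Even at one sample a minute, each poll like this is only a couple of small SNMP
packets, so the overhead on the line being measured stays small.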
BUT I accept your point on:
'When sampling BW and application traffic, the sample time needs to be as low
as possible in order to show the true utilization levels and the nature / effect
of bursty traffic.'
I would add: make sure you know the impact the polling software you are using has
on the line, the interface, and the CPU
before you go adding extra latency problems to your load problems!
James
-----Original Message-----
From: Andrew Lee Lissitz [mailto:alissitz@corvil.com]
Sent: 26 February 2005 13:30
To: 'David Heaton'; 'ali'; 'ccielab'
Subject: RE: ATM subinterface - and PCR/SCR :: Burst/CIR FrATM mapping?
Good morning all. I cannot comment on the ATM configs, and I appreciate all
the brilliant folks who post here, but on the BW-monitoring question
I can offer some guidance.
When measuring BW utilization levels, always ensure that your sample intervals
are as small as possible. Typical intervals range from 5 minutes (common),
through 30 seconds (getting there) and 15 seconds (better), down to 5
milliseconds (very good). The smaller the sample interval, the more
accurate the reading.
MRTG and SNMP polling every 5 minutes have always been extremely inaccurate
about the utilization and nature of bursty traffic. Think about this:
voice, Citrix, market-data apps, and basically every QoS-sensitive application
can be disrupted by the latency caused by congestion from other bursty apps.
These periods of congestion last for milliseconds; how can sampling every 5
minutes (or any large sample interval) show you this? The answer is that it
cannot: any burst that occurs within the sample interval is averaged into a
smooth rise or fall. There is no visibility into the nature of the traffic
or into violations of QoS targets (jitter, delay, and packet loss).
When sampling BW and application traffic, the sample time needs to be as low
as possible in order to show the true utilization levels and the nature / effect
of bursty traffic.
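To put a rough number on that, here is a small Python illustration (the
128 kbit/s link and 2-second burst are assumed example values, not real
measurements): a short burst at full line rate all but disappears in a
5-minute average, while a 1-second sample taken during the burst shows the
congestion clearly.

# Hypothetical illustration of how averaging hides bursts.
# Assumed example values: a 128 kbit/s link that is idle except for
# a single 2-second burst at full line rate.
LINK_BPS = 128_000
BURST_SECONDS = 2
WINDOW_5MIN = 300   # seconds in a 5-minute sample

burst_bits = LINK_BPS * BURST_SECONDS                      # bits sent during the burst

avg_5min = burst_bits / WINDOW_5MIN / LINK_BPS             # burst spread over 5 minutes
peak_1sec = min(burst_bits / BURST_SECONDS, LINK_BPS) / LINK_BPS  # 1-second sample inside the burst

print(f"5-minute average utilization: {avg_5min:.1%}")     # ~0.7% -- the link looks idle
print(f"1-second sample during burst: {peak_1sec:.1%}")    # 100% -- the congestion is visible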
HTH Ali Al-Sayyed / GS folk --> kindest regards to all, and have a great
weekend,
Andrew Lee Lissitz
908.303.4762
www.corvil.com
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
David Heaton
Sent: Saturday, February 26, 2005 6:34 AM
To: ali; ccielab
Subject: RE: ATM subinterface - and PCR/SCR :: Burst/CIR FrATM mapping?
Using MRTG is one way to measure the util on a subif...
Also, say you have a 128K frame access with 64K CIR
that comes in via a FrATM link from your carrier,
should you configure the PCR to be 128K and the SCR to be 64K
on your ATM subinterface
or should you configure PCR 128 and SCR 128, and just let
the carrier's ATM switch handle the flow/queuing?
Any negative implications of doing this?
Regards
David
-----Original Message-----
From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
ali
Sent: Monday, 14 February 2005 4:47 PM
To: ccielab
Subject: ATM subinterface
Dear All
How can we find the current bandwidth utilization on an ATM
subinterface?
Is there any way?
Ali Al-Sayyed
CCIE #14265