From: Howard C. Berkowitz (hcb@gettcomm.com)
Date: Wed Jan 21 2004 - 11:34:21 GMT-3
At 10:27 PM -0600 1/20/04, Richard Danu wrote:
>Yes, you are right about connectivity being the key in a successful
>migration. We have static, "clunky" DNS servers that have very old/stale
>records. Cleanup is a work in progress; I was, however, interested in
>creative, out-of-the-box feedback such as yours.
For something not exactly on topic, but still relevant and nicely
available free online, see both RFC 2071 (coauthored with Paul
Ferguson on why you want to clean up addressing, somewhat targeted to
convincing top management that it's a wise thing to do with respect
to life cycle cost) and my own RFC 2072, the "Router Renumbering
Guide". The latter does have a fair bit of technique that refers
specifically to the router part of a readdressing/migration, although
it's a few years old.
>
>I will definitely read your book. A "strategic" and "wholesome" approach is
>what I envisioned, as opposed to simply migrating the questionable
>existing infrastructure I inherited.
>
>Yes, we have some server failover, clustering and load balancing, yet I have
>learned since my arrival that, in a pool of over 80 application servers,
>most of them are paired to share the load 50/50 or even 33/33/33. A single
>system failure in a pair requires immediate manual intervention and
>"resuscitation".
*sigh* I had a healthcare client that insisted on a network that
"could not fail". So, I responded with dual SONET local loops to one
national carrier, a physically separate T1 on copper to a different
ISP with a different major national uplink, fully redundant 7200s,
etc.
Other people actually installed the network. On a later visit, I
went into the computer room, and found the server. Not servers, not
clusters. Innocently inquiring what they planned to do if the server
went down, I was told that everything was OK because they ran a
transaction tape and nightly backups. No hot standby server cluster.
No warm standby backup server in the computer room. No offsite data
center. Not even an examination of development servers to see if one
could be pre-empted for production.
Sadly, this is more common than one might think: many top managers
fixate on servers or on networks, and ignore the relationship between
the two.
>
>I am looking for the best possible solution on server/network architecture
>performance and redundancy, so I can rest easy!
>
>:)
>
>Thank you Howard!
>-- Richard
>
>-----Original Message-----
>From: nobody@groupstudy.com [mailto:nobody@groupstudy.com] On Behalf Of
>Howard C. Berkowitz
>Sent: Tuesday, January 20, 2004 1:30 PM
>To: ccielab@groupstudy.com
>Subject: Re: Datacenter Migration Question
>
>
>At 2:01 PM -0500 1/20/04, rdanu wrote:
>>I am trying to put together the pieces of a migration puzzle. The
>>company I work for is thinking about a possible migration of their
>>data center. I thought about several options, but I'd like to see
>>what some of you seasoned professionals might advise. A brief
>>scenario is described below, if you have any questions feel free to
>>ask. I appreciate any feedback!
>>
>>Current setup:
>>Location A contains 6 production VLANS, and customers access
>>everything via a high speed (OC-3) Internet connection.
>>
>>Migration scenario:
>>
>>The goal is to transition all the servers from Location A to
>>location B. The distance between the locations is approximately 100
>>miles. No down time is allowed.
>
>You know, this "no down time" requirement seems to come up a lot. At
>least here, it always seems to refer to the network. What about
>server downtime? Are there server failover mechanisms now in place?
>
>If there are, one quite rational strategy may be to implement the
>cutover exploiting both the network and the servers. Keep customers
>using the local server, move its backup, and then flip the
>primary/secondary relationship. You might get suboptimal server load
>balancing during the transition, but you aren't likely to have flat
>downtime.
>
>>
>>Servers at Location A have subnet/VLAN information 10.1.1.0,
>>10.1.2.0, 10.1.3.0, 10.1.6.0.
>
>Again in the interest of reliability and flexibility, I'd make a very
>strong case to have the servers accessed by DNS name, not IP address.
>Client addresses should come from DHCP. Yes, I know there are
>managements that don't want to change anything on the host side, but
>they are hurting themselves both in maintenance cost and in the
>availability of failover.
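
To make that concrete -- purely a sketch, with an invented client
subnet and placeholder addresses that are not from the scenario above --
a basic IOS DHCP scope that hands clients their gateway and DNS server
looks something like:

```
! Hypothetical example; 10.1.100.0/24 is an invented client subnet.
ip dhcp excluded-address 10.1.100.1 10.1.100.10
!
ip dhcp pool CLIENT-VLAN100
 network 10.1.100.0 255.255.255.0
 default-router 10.1.100.1
 dns-server 10.1.100.5
 lease 0 8
```

With clients configured this way, and applications reaching servers by
DNS name, a server's address can change during the move without
touching a single desktop.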
>
>I don't immediately see why, if you have DHCP-DNS, you can't have
>different VLANs and host addressing. Things like VRRP very nicely
>make things transparent on the host side. If you are using
>third-party load balancers, they may be able to do much of the work.
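
For instance -- a minimal sketch only, with the router and virtual
addresses invented for illustration -- a VRRP pairing on two IOS
routers fronting one of the server VLANs would look roughly like:

```
! Hypothetical sketch; 10.1.1.1 is the virtual address hosts use
! as their default gateway, .2 and .3 are placeholder real addresses.
! Router 1 (master while it is up):
interface Vlan10
 ip address 10.1.1.2 255.255.255.0
 vrrp 1 ip 10.1.1.1
 vrrp 1 priority 110
!
! Router 2 (takes over if router 1 fails):
interface Vlan10
 ip address 10.1.1.3 255.255.255.0
 vrrp 1 ip 10.1.1.1
 vrrp 1 priority 100
```

The hosts never learn which physical router is forwarding for them,
which is exactly the kind of transparency you want during a migration.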
>
>>
>>A private OC-3 will connect Location A to Location B.
>>
>>Servers will be moved to Location B in batches of 10. There are
>>approximately 150 servers in total.
>>
>>The objective is to keep all servers working together and allow
>>them to co-exist on the same VLANS, at both locations.
>
>Why is having the same VLAN so important, as long as you have exactly
>the functional connectivity you need? If the servers are equivalent
>and are simply load shared, a VRRP virtual address should give you
>the same result. If the servers are on a separate subnet from the
>client, routing can be very flexible in getting you to the right
>place. If the servers have multiple NICs, VLAN capable NICs, or the
>ability to support multiple IP addresses, you have lots more options.
>
>Are you getting the idea that this shouldn't be looked at as a pure
>network problem? Modestly, I will mention my own _WAN Survival
>Guide_ (Wiley), with special reference to the parts on server
>failover in distributed environments. You'll find I have an
>extensive system for categorizing server and server cluster behavior
>-- you really need to do this sort of definition before you can
>design the proper network solution.
>
>>
>>Internet access will continue to be provided through Location A,
>>until all servers have been migrated to Location B.
>>
>>In essence, how could the same exact VLANS (same subnets) co-exist
>>and communicate with each other at both locations via a routed IP
>>network? Is there a method to tunnel VLANS through an IP network?
>
>Several ways, including MPLS and L2TPv3. Whether or not it is a good
>idea is quite another matter.
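
To give the flavor of the L2TPv3 approach -- purely a sketch, with
placeholder loopback/peer addresses, interface, and VC ID -- an
Ethernet pseudowire carrying VLAN 10 across the routed link between
the sites would look something like this on IOS (12.3T or later):

```
! Hypothetical sketch only; addresses and VC ID 10 are invented.
pseudowire-class MIGRATE
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 xconnect 192.0.2.2 10 encapsulation l2tpv3 pw-class MIGRATE
```

Bear in mind that stretching the subnet 100 miles also stretches the
failure domain: broadcasts, spanning tree events, and the like now
travel the private OC-3 too, which is part of why I question whether
it's a good idea.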
>
>>
>>Location A contains CAT5500 Equipment. Location B will have CAT3750
>>Cisco equipment, possibly 4500.
>>
>>Example:
>>Out of a pool of 20 Web servers (Load Balanced at Location A from
>>its Internet connection), about 10 will be moved to Location B in
>>the 1st trip. We have to make sure that these moved servers
>>keep their existing IP addresses. The requirement is to have the 10
>>Servers at Location A, and the 10 Servers at Location B work
>>together, as if they were next to each other.
>
>_______________________________________________________________________
>Please help support GroupStudy by purchasing your study materials from:
>http://shop.groupstudy.com
>
>Subscription information may be found at:
>http://www.groupstudy.com/list/CCIELab.html
This archive was generated by hypermail 2.1.4 : Mon Feb 02 2004 - 09:07:48 GMT-3