Is It Safe to Use the Cloud for Your Critical Applications?
The use of cloud computing for enterprise applications is rapidly gaining in popularity. Companies can avoid the expense of computer systems, datacenters, and IT staff by moving their applications to cloud providers. Cloud computing lets them “pay-as-you-go” for only the resources their applications use.
However, cloud computing has yet to prove that it can ensure the availability required by critical applications. Cloud providers struggle to achieve even three nines of availability. For an application that must be up 100% of the time, nearly nine hours of downtime per year is simply unacceptable.
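As a quick sanity check on these figures, an availability level expressed in "nines" maps directly to allowable downtime per year. A minimal Python sketch of the arithmetic:

```python
# Downtime per year implied by an availability level of N "nines".
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours, averaging leap years

def downtime_hours_per_year(nines: int) -> float:
    """Return allowable downtime (hours/year) for an availability of N nines."""
    availability = 1 - 10 ** (-nines)        # e.g., 3 nines -> 0.999
    return (1 - availability) * HOURS_PER_YEAR

for n in (3, 5):
    print(f"{n} nines: {downtime_hours_per_year(n):.2f} hours/year")
```

Three nines allows roughly 8.8 hours of downtime per year; five nines allows only about five minutes.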
A mixed approach is therefore appropriate: critical applications can run on in-house, fault-tolerant systems, while less critical applications can be moved to the cloud. However, these applications often must interact.
In this issue’s article, “Adding High Availability to the Cloud,” guest author Paul Holenstein introduces methods that use data replication to achieve interoperability between in-house applications and cloud-based applications. Using these techniques, critical applications can continue to achieve five nines or more of availability, while the enterprise benefits from the cost savings of cloud computing. The pros and cons of cloud computing are covered extensively in our seminars on “High Availability Theory and Practice.”
Dr. Bill Highleyman, Managing Editor
On Tuesday, August 12, 2014, the Internet slowed to a crawl for many web sites. For some web sites, the speed was simply pathetic. Others became inaccessible.
The reason was that a long-known limit of the Internet had been breached. The number of routes needed to link the major Internet domains exceeded a default limit in many of the routers that provide the interconnection function. These routers crashed or could not deliver full routing functions. Thus, major portions of the Internet came to a halt.
The problem was in the Border Gateway Protocol (BGP), which interconnects the major Internet domains. Many BGP routers were delivered with a default configuration that allocated room for 512K routes in their forwarding tables, and the global routing table was rapidly approaching this limit. During a mundane maintenance procedure, Verizon announced roughly 15,000 new routes to the Internet, causing the limit to be exceeded. Thousands of routers around the world crashed, and the Internet was brought to its knees.
The 512K route-table limitation has been understood for a long time, and the fact that the Internet was nearing this limit has also been known. However, many system administrators were slow to take corrective action. The problem finally caught up with them.
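On the affected Cisco Catalyst 6500/7600-class routers, the widely reported corrective action was to repartition the forwarding-table memory (TCAM) to make more room for IPv4 routes. The fragment below is an illustration only; the exact command syntax and limits vary by platform and software release, so consult vendor documentation before applying anything like it:

```
! Reallocate TCAM so that up to 1,000K entries are available for IPv4 routes
! (at the expense of the MPLS/IPv6 regions); a reload is required to take effect.
mls cef maximum-routes ip 1000
```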
Companies are moving more IT services to public clouds to take advantage of the economy and flexibility of cloud computing. There are no initial expenditures for equipment, datacenter space, or staff. A company must only cover the costs of the computation, storage, and communication resources that it uses.
However, there remains a reluctance to move critical applications to the public cloud. Notable instances of public-cloud failures and data loss have been frequently reported. A hybrid approach that assigns critical processing to highly available private systems such as HP NonStop servers and noncritical processing to the public cloud is a concept that is gaining momentum. Data replication plays an important role in this approach by serving as the “glue that binds” virtual machines running in the public cloud to highly available private servers handling the critical roles.
The Shadowbase data replication engine is an example of a tool that can implement this function. A company’s critical applications can run on trusted corporate internal systems while allowing the cloud to perform the less critical functions for which the cloud is well-suited. This approach enables businesses to take advantage of the benefits of the cloud while avoiding the pitfalls.
OpenVMS, developed by Digital Equipment Corporation and later inherited by HP through its acquisition of Compaq, began life as VAX/VMS in 1977. It has been running mission-critical systems for over three decades.
In June 2013, HP released its updated roadmap for OpenVMS. To the consternation of OpenVMS users, HP announced a schedule for the end of support for OpenVMS. The then-current version of OpenVMS, version 8.4, would be supported on HP Integrity i2 systems at least until 2020, and mature product support would be extended through at least 2025. Moreover, OpenVMS would not be ported to HP Integrity i4 systems using the eight-core “Poulson” Itanium chips.
HP has now corrected this situation. OpenVMS will be ported to new HP systems and will be supported indefinitely. HP accomplished this turnaround by completing a perpetual and exclusive licensing agreement with VMS Software, Inc. (VSI) to extend indefinitely the lifespan of OpenVMS.
After a year of anguish following HP’s announcement of the OpenVMS sunset, HP has made a U-turn. It has guaranteed the continuing usefulness of one of its most revered operating systems. With this assurance, customers can continue to implement their mission-critical environments with OpenVMS on current and future HP server generations.
Replicate is a powerful data replication engine that can be used to synchronize homogeneous and heterogeneous databases. The databases may be relational or non-relational. Powerful transformation facilities support the conversion of source database formats to those of the target database. All DML and DDL changes can be replicated with latencies measured in seconds or even subseconds.
The initial target database can be created and loaded without having to pause the source applications. Web-service GUI consoles are supplied to initially define and deploy the replication channel and then to monitor and manage it.
Replicate Replication Server appliances can be multithreaded to provide scalability. Multiple Replication Servers can be configured to meet any capacity requirements and can provide redundancy to avoid single points of failure in the replication channel.
Several topologies are supported, including active/active systems for continuous availability. Replicate also supports the integration of on-premises systems with cloud services.
Replicate is currently deployed to offload production systems by moving process-intensive functions to other systems, to provide disaster recovery to remote data centers, and to feed data warehouses, data marts, and Extract, Transform, and Load utilities, among many other uses.
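To make the capture-transform-apply pattern concrete, here is a minimal, hypothetical Python sketch of a change-data-capture pipeline. It is not Replicate's (or Shadowbase's) actual API; all names here are invented for illustration. A per-table transformation is applied to each captured DML change before it is written to a heterogeneous target:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Change:
    """One captured DML change from the source database's log."""
    table: str
    op: str                      # "insert" | "update" | "delete"
    row: dict[str, Any]

class Replicator:
    """Toy change-data-capture pipeline: capture -> transform -> apply."""
    def __init__(self) -> None:
        self.transforms: dict[str, Callable[[dict], dict]] = {}
        self.target: dict[str, dict[Any, dict]] = {}   # table -> {key: row}

    def on(self, table: str, transform: Callable[[dict], dict]) -> None:
        """Register a transformation for one source table."""
        self.transforms[table] = transform

    def apply(self, change: Change) -> None:
        """Transform a captured change and apply it to the target store."""
        row = self.transforms.get(change.table, lambda r: r)(change.row)
        rows = self.target.setdefault(change.table, {})
        if change.op == "delete":
            rows.pop(row["id"], None)
        else:                    # inserts and updates are upserts on the target
            rows[row["id"]] = row

# Example: rename a source column ("amt") to the target schema ("total_usd").
rep = Replicator()
rep.on("orders", lambda r: {"id": r["id"], "total_usd": r["amt"]})
rep.apply(Change("orders", "insert", {"id": 1, "amt": 99.5}))
```

A production engine adds durable queues, transactional ordering, and conflict resolution, but the transform hook is the piece that bridges heterogeneous source and target schemas.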
A challenge for every issue of the Availability Digest is determining which of the many availability topics out there win coveted status as Digest articles. We always regret not focusing our attention on the topics we bypass.
Now with our Twitter presence, we don’t have to feel guilty. This article highlights some of the @availabilitydig tweets that made headlines in recent days.
Sign up for your free subscription at https://availabilitydigest.com/signups.htm
Would You Like to Sign Up for the Free Digest by Fax?
Simply print out the following form, fill it in, and fax it to:
+1 908 459 5543
The Availability Digest is published monthly. It may be distributed freely. Please pass it on to an associate.
Managing Editor - Dr. Bill Highleyman email@example.com.
© 2014 Sombers Associates, Inc., and W. H. Highleyman