Thanks to This Month's Availability Digest Sponsor
In this issue:
Browse through our useful links.
See our article archive for complete articles.
Sign up for your free subscription.
Visit our Continuous Availability Forum.
Check out our seminars.
Check out our writing services.
Are Public Clouds Ready for Prime Time?
Public clouds are attracting more and more applications from companies that wish to eliminate the cost and complexity of running their own data centers. So far, the bulk of the applications that have been moved to the cloud are non-critical, as companies wait for the cloud infrastructure to mature and grow reliable.
Many cloud providers have established multiple data centers, each fault-isolated from the others, to provide availability zones that can be used to replicate applications and databases to protect against failures. In this way, an application running in a zone that has suffered an outage can be moved rapidly to another functioning zone.
Increasingly, companies are moving critical applications to the cloud. Are public clouds ready for applications that just cannot go down? Unfortunately, the jury is still out on this question. A case in point is the recent failure of the Microsoft Azure cloud, described in this issue of the Availability Digest. A faulty upgrade and an improper deployment took down the majority of Azure’s global availability zones for half a day. As we discuss in our seminars, companies must be prepared to continue critical operations in the event of a cloud failure.
Dr. Bill Highleyman, Managing Editor
Public clouds are gaining increasing acceptance by both large and small companies to host their applications. Cloud computing is certainly acceptable for ordinary applications. But is it reliable enough to host a company’s mission-critical applications? Research by Infosys suggests that about four in five enterprises plan to move their mission-critical workloads into the public cloud.
However, there continue to be examples of massive cloud failures that have taken down applications for hours and, in some cases, even days. If a company decides to host a mission-critical application in a public cloud, it must have a plan for continuing to offer the application's services should the cloud fail.
A case in point is the recent global failure of Microsoft's Azure cloud, a failure that lasted for eleven hours. The Azure outage was caused by two factors: a faulty upgrade to its Azure Storage services and an improper deployment to all regions simultaneously. This outage adds to a string of Azure outages over the last few years.
In the last three years, Microsoft Azure outages have exceeded the downtime budget of its three-9s (99.9%) SLA. Azure's availability history raises serious questions about the use of public clouds for mission-critical applications at this time.
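The downtime budget behind an "x 9s" SLA is simple arithmetic. A minimal sketch (the function name is ours, not from the Digest) shows why a single eleven-hour outage blows through a three-9s budget:

```python
def allowed_downtime_hours(nines: int, hours_per_year: float = 8760.0) -> float:
    """Annual downtime permitted by an availability of 'nines' nines."""
    availability = 1 - 10 ** (-nines)       # e.g., 3 nines -> 0.999
    return hours_per_year * (1 - availability)

# Three 9s allow about 8.76 hours of downtime per year, so an
# eleven-hour outage alone exceeds the entire annual budget.
```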
During the last months of 2013, Target, the third largest retailer in the U.S., suffered a card-skimming attack in which hackers were able to obtain the magnetic-stripe data from cards used in Target stores. The personal cardholder data from 110 million payment cards was stolen, and thousands of fraudulent transactions followed. Is there a defense against these data breaches?
The answer is yes – smart cards. A smart card, also called a chip card or an integrated-circuit card (ICC), includes an embedded computer chip that employs cryptographic and risk-management features. In conjunction with a smart-card POS or ATM terminal, these features are designed to thwart skimming, card-cloning, card-counterfeiting, and other fraudulent attacks.
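The core idea can be illustrated with a deliberately simplified sketch. This is not the actual EMV scheme (real chip cards compute 3DES-based MACs under keys derived per card); we substitute HMAC-SHA256 and invented field names purely for illustration:

```python
import hashlib
import hmac

def transaction_cryptogram(card_key: bytes, amount_cents: int,
                           terminal_id: str, counter: int) -> str:
    """Illustrative only: the chip holds a secret key shared with the
    issuer and MACs the transaction details. A skimmer that copies
    static card data cannot forge cryptograms for new transactions,
    because each one depends on the key and a fresh counter."""
    msg = f"{amount_cents}|{terminal_id}|{counter}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()

# The issuer, holding the same key, recomputes the MAC to authorize.
```

Because the counter changes on every transaction, a replayed or cloned cryptogram is rejected, which is the property that defeats the skimming attack described above.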
Smart cards have been in use all around the world except in the U.S. That is about to change. In this article, split over two issues of the Availability Digest, we describe the significant security that smart cards add to payment-card transactions. Part 1, published in the November 2014 issue of the Availability Digest, covered the methods for authorizing smart-card transactions online with the issuer. In Part 2, we discuss the procedures for securely authorizing smart-card transactions offline, without direct issuer involvement.
Service Level Agreements (SLAs) usually include a limit on the amount of downtime that is tolerable for an application. Each application typically has its own SLA requirements.
If an SLA requirement for one or more critical applications exceeds the availability of a single system, the solution is often to configure a second system as a backup. In this way, should the active production system fail, critical applications can be failed over to the backup system and continue in operation.
However, recovery to the backup system takes time; and failovers can fail. When these factors are included in the availability analysis, will an active/backup system configuration meet the higher availability requirements for a mission-critical application?
In this article, we explore the above question. The expected amount of downtime during failover is calculated for a typical active/backup configuration. It is shown that, indeed, the availability of the applications is significantly increased. However, it is also demonstrated that if application availability in excess of five 9s (99.999%) is required, an active/active architecture should be considered. Active/active systems minimize recovery times and eliminate failover faults. There are many examples of active/active systems that have been in service for decades without an outage.
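The effect of failover time and failover faults can be sketched with a simple downtime model. The function and the sample parameters below are our own illustrative assumptions, not figures from the article: each primary failure costs either a brief failover or, if the failover itself fails, a full repair of the primary.

```python
def active_backup_availability(failures_per_year: float,
                               failover_minutes: float,
                               p_failover_fault: float,
                               repair_hours: float) -> float:
    """Availability of an active/backup pair, counting only downtime
    from failovers: a successful failover costs failover_minutes,
    and a failed failover costs a full repair of the primary."""
    downtime_minutes = failures_per_year * (
        (1 - p_failover_fault) * failover_minutes
        + p_failover_fault * repair_hours * 60
    )
    return 1 - downtime_minutes / (365 * 24 * 60)

# Assumed example: 2 failures/year, 5-minute failovers, 5% failover
# faults, 4-hour repairs. The result lands near four 9s -- a big
# improvement over a single system, but short of five 9s, which is
# why the article points to active/active architectures.
a = active_backup_availability(2, 5, 0.05, 4)
```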
Nothing strikes fear in the heart of a corporate executive as much as the loss of some or all of the company's business data, whether via a hard-disk failure, a crashed RAID array, or data tucked "safely" away in the cloud. Fortunately, many companies strive to recover data from damaged media in these situations. One of the largest of these companies is Ace Data Recovery.
Ace will attempt to recover data from hard disks, RAID arrays, solid-state drives (SSDs), tape, mobile devices, cloud environments, Microsoft Exchange, and Microsoft SQL, among others.
Ace has locations all over the United States. Four of these locations (Dallas/Fort Worth, Texas; Houston, Texas; Falls Church, Virginia; and Chicago, Illinois) are equipped with state-of-the-art clean rooms, an imperative for recovering data from hard drives, as a tiny particle under the disk head can scratch the disk.
Ace maintains twenty-five other U.S. locations that can service local customers. These locations arrange for damaged media to be shipped to a location with a clean room if necessary. Ace provides a fixed price for data recovery, and the price is waived if Ace is unable to recover any data.
A challenge for every issue of the Availability Digest is determining which of the many availability topics out there win coveted status as Digest articles. We always regret not being able to focus our attention on the topics we bypass.
Now with our Twitter presence, we don’t have to feel guilty. This article highlights some of the @availabilitydig tweets that made headlines in recent days.
Sign up for your free subscription at https://availabilitydigest.com/signups.htm
Would You Like to Sign Up for the Free Digest by Fax?
Simply print out the following form, fill it in, and fax it to:
+1 908 459 5543
The Availability Digest is published monthly. It may be distributed freely. Please pass it on to an associate.
Managing Editor - Dr. Bill Highleyman email@example.com.
© 2014 Sombers Associates, Inc., and W. H. Highleyman