The digest of current topics on Continuous Processing Architectures. More than Business Continuity Planning.
BCP tells you how to recover from the effects of downtime.
CPA tells you how to avoid the effects of downtime.
Thanks to This Month's Availability Digest Sponsor
In this issue:
Browse through our Useful Links.
Check our article archive for complete articles.
Sign up for your free subscription.
Join us on our Continuous Availability Forum.
Remembering Ken Olsen – An IT Icon
Just as we were going to press, I learned of the passing of Ken Olsen, an entrepreneur supreme whose vision benefits all of us today.
Ken Olsen was my first boss. As a graduate student working as a research assistant at MIT's Lincoln Laboratory in 1957, I was assigned to Ken to work on the first all-transistorized computer ever attempted. I was under Ken when he decided to start his own company using the transistor technology he had developed at Lincoln Labs. The company he founded was Digital Equipment Corporation, which by the late 1980s had become the second-largest computer manufacturer, behind only IBM.
I kept in touch with Ken for several years after he founded DEC. Throughout his career, he retained the personal style of management that I had the privilege of witnessing. He was nurturing and fiercely loyal to his employees. During one visit, when he was showing me around the old New England mill that was DEC's first home, he seemed to know virtually everyone by first name and what they were working on.
Through the PDP and VAX series of computers, Ken's vision of interactivity was a powerful force in moving computing from centralized mainframes into the hands of people. His legacy will be felt by all of us for a long time to come. Look for a longer tribute to Ken in our next issue.
Dr. Bill Highleyman, Managing Editor
Recently, a major academic medical center lost almost all of its IT services for over three days, threatening the welfare of its patients. Virtually all hospital services, including medical records and clinical analyses, were unavailable. Most disturbing is the unnecessary sequence of events that led to the failure, a failure that was estimated to have cost almost $4 million.
A reboot of the data center's servers, necessitated by a network fault, was unsuccessful despite several retries. Ultimately, the problem was traced to changes left in the system following the cancellation of a two-year-old unsuccessful project to provide high availability to the systems.
It turned out that no server reboots had been required during this two-year period. Had reboot procedures been tested periodically, the problem would have been found in a controlled environment.
Sadly for this medical center, an attempt to provide high availability led to no availability.
Data deduplication is a technology that can reduce disk storage-capacity requirements and replication bandwidth requirements by as much as 20:1.
This is certainly a very powerful marketing statement, and it is generally accurate. However, data deduplication comes with a lot of ifs, ands, and buts. In this article, we explore what data deduplication is and its many strengths, along with some caveats.
In simple terms, data deduplication is a method in which a specific block of data in a large database is stored only once. If it appears again, only a pointer to the first occurrence of the block is stored. Since pointers are very small compared to the data blocks, significant reductions in the amount of data stored can be achieved.
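The block-and-pointer scheme described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not any vendor's implementation: it assumes fixed-size blocks and uses a SHA-256 fingerprint as the "pointer" to each stored block.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size for this sketch


def deduplicate(data: bytes):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, refs): store maps fingerprint -> block bytes (one copy
    per unique block); refs holds one small fingerprint per block position.
    """
    store = {}
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # keep only the first occurrence
        refs.append(fp)              # subsequent occurrences cost a pointer
    return store, refs


def rehydrate(store, refs):
    """Reassemble the original data by following the pointers."""
    return b"".join(store[fp] for fp in refs)


# Highly repetitive data deduplicates well: 20 block references,
# but only one unique block actually stored.
data = b"A" * BLOCK_SIZE * 20
store, refs = deduplicate(data)
assert rehydrate(store, refs) == data
assert len(refs) == 20 and len(store) == 1
```

Real systems refine this idea considerably (variable-size blocks, collision handling, reference counting for deletion), but the storage saving comes from the same substitution of small fingerprints for repeated blocks.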
Deduplication can significantly reduce the amount of disk storage needed for disaster-recovery databases and for near-term archiving of database backups. It can also reduce the network capacity required for replicating nonrelational databases.
Deduplication does not replace disk storage for online data that is needed to support applications, nor does it replace magnetic tape for long-term archival of point-in-time backups. However, used properly, it can significantly reduce the data-center footprint of data-storage subsystems.
Since 1989, the Disaster Recovery Journal (DRJ) has sponsored the semiannual Spring World and Fall World conferences dedicated to Business Continuity and Disaster Recovery (BC/DR). Its 44th conference, Spring World 2011, will be held the week of March 27th at Disney's Coronado Springs Resort in Orlando, Florida. The body of the four-day agenda (Sunday, March 27th, to Wednesday, March 30th) includes nine unopposed general sessions, 24 breakout sessions, and 12 workshops.
Several pre-conference and post-conference courses are also scheduled for an additional fee. Pre-conference courses are held all day on Saturday, March 26th, and Sunday morning, March 27th. Post-conference courses are one- to three-day courses held on Wednesday afternoon, all day Thursday, and Friday morning, March 30th to April 1st. Courses and examinations for certain BCP (Business Continuity Planning) certifications are offered following the conference.
DRJ's Spring World 2011 conference continues twenty-two years of distinction in the fields of Business Continuity and Disaster Recovery. With over a week of informational sessions, workshops, certification exam preparation, and qualifying exams, it is the premier educational event for BC/DR professionals.
In a recent Availability Digest article, we discussed a data-center failure experienced by WestHost, a major web-hosting provider, that took down almost 100,000 web sites and email accounts for up to six days. The problem occurred when its data center underwent a standard yearly test of its Inergen fire-suppression system: a maintenance error caused the system to discharge accidentally, releasing the large blast of Inergen gas designed to put out a fire. Surprisingly, this caused many of the disks in the data center to fail.
At the time, no one knew why a release of Inergen gas should have caused such damage to so many hard disks. Subsequent tests by major providers of these systems confirmed that such damage can occur and that it is related to extreme noise generated during the discharge process.
However, these tests also revealed that the noise specifically due to the discharge of the gas is not the primary culprit. The biggest effect on disk drives is caused by the ear-splitting fire alarms that accompany the discharge. There are reasonably simple steps that a data center can take to minimize this damage.
Sign up for your free subscription at https://availabilitydigest.com/signups.htm
Would You Like to Sign Up for the Free Digest by Fax?
Simply print out the following form, fill it in, and fax it to:
+1 908 459 5543
The Availability Digest is published monthly. It may be distributed freely. Please pass it on to an associate.
Managing Editor - Dr. Bill Highleyman email@example.com.
© 2011 Sombers Associates, Inc., and W. H. Highleyman