In this issue:
Browse through our Useful Links.
See our article archive for complete articles.
Sign up for your free subscription.
Visit our Continuous Availability Forum.
Check out our seminars.
Check out our writing services.
We Replay Our Tweets for Our Subscribers
Too many Never Again stories! That is our dilemma. We usually include a major one in each issue. We then summarize several others in our “More Never Agains” articles about twice a year. However, that practice ignores hundreds of outage stories, each of which can help educate us about how we can do things better when it comes to keeping our systems up and running.
To bridge this gap, a few months ago we started tweeting stories that deal with high-availability issues. We tweet two or three dozen stories a month, more than we can publish in the Digest. Since we currently have many more subscribers than tweet followers, we decided to publish a brief summary of our tweets periodically so that everyone can benefit from them. You can find these summaries in our article entitled “@availabilitydig – The Twitter Feed of Outages.” We hope they prove useful to you.
Dr. Bill Highleyman, Managing Editor
Today, with gigabytes of memory and gigahertz of processing speed packed into small laptop computers, we forget (or never knew) what it was like in the 1960s and 1970s to develop major systems. Let’s go back to those days when memory was measured in kilobytes and processor speed was measured in megahertz, and review what miracles (by today’s perception) were achieved. These systems included racetrack wagering, multi-company payrolls, commodities trading, and even putting men on the moon.
Think about what it would take to do this now.
Given today’s technology, it is difficult to understand how complex, real-time systems requiring continuous availability could be built with such small memories and slow processors. These case studies reflect the brilliance of early computer professionals, who built major systems with such minimal resources – a brilliance that has led to today’s plethora of computing capabilities.
Researchers at the Georgia Institute of Technology have discovered an unlikely back door into Apple devices. They demonstrated that they can easily build an Apple device charger that will infect an iPhone or an iPad. Apple has quickly responded with an upgrade to close the security flaw.
The key to compromising an iPhone or an iPad is the fact that such devices are charged through a USB port. The USB port not only provides a means of charging the device’s internal batteries but also serves as a gateway to the device’s operating system and applications. This is, of course, the primary purpose of the USB port – to offer (presumably secure) access to the iOS internals for external devices.
The researchers used the USB port to infect the devices within sixty seconds of their being plugged in. However, malicious chargers do not seem to present an immediate danger. They are too large to be packaged to look like a standard iDevice charger anytime soon, and they are a threat only to the iDevices into which they are plugged and only if those devices are unlocked.
Hardware reliability can be defined as:
Reliability. The ability of an item to perform a required function under given conditions for a given time period.
A similar definition is applied to software reliability – performing the function it was designed to do over a period of time. The inclusion of time in the definition implies that there will be a failure of some sort at some time. As a result, most mathematical models of software reliability include a time element, just as hardware reliability models do.
A number of models, often involving esoteric mathematics, attempt to describe the development and subsequent reliability of a piece of software. Two classes of models are described here, although different authors break down the models in different ways:
· The Software Reliability Growth Model (SRGM) uses the time between failures as its working entity.
· The Defect Density (DD) model, or defect density prediction model, uses fault count or failure intensity as its working entity.
The latter model is the subject of this discussion by our guest author Dr. Terry Critchley.
To handle the complex mathematics of software reliability, many different companies sell reliability prediction software packages; and there are many different reliability prediction methodologies, handbooks, and guidelines.
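As an illustration of the fault-count style of model described above, consider the well-known Goel–Okumoto model, in which the expected number of faults found by time t grows toward a finite total while the failure intensity decays exponentially. This is a minimal sketch, not the specific package or methodology discussed in the article; the parameter values below are assumptions chosen purely for demonstration.

```python
import math

def expected_faults(a, b, t):
    """Goel-Okumoto: expected cumulative faults detected by time t.

    a = total number of latent faults (asymptote)
    b = per-fault detection rate
    """
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(a, b, t):
    """Instantaneous fault-detection rate at time t (derivative of the above)."""
    return a * b * math.exp(-b * t)

# Assumed, illustrative parameters: 120 latent faults, detection rate 0.05/week.
a, b = 120.0, 0.05

for t in (0, 10, 50, 100):
    print(f"week {t:>3}: faults found ~ {expected_faults(a, b, t):6.1f}, "
          f"intensity ~ {failure_intensity(a, b, t):.3f}/week")
```

Note how the failure intensity starts at a*b and falls toward zero as testing proceeds – the model’s way of expressing “reliability growth” in terms of fault counts rather than times between failures.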
Every issue, the Availability Digest faces the challenge of deciding which of the many availability topics out there win coveted status as Digest articles. We always regret not focusing our attention on the topics we bypass.
With our new Twitter presence, we don’t have to feel guilty. This article highlights some of the @availabilitydig tweets that made headlines in recent days.
Sign up for your free subscription at https://availabilitydigest.com/signups.htm
Would You Like to Sign Up for the Free Digest by Fax?
Simply print out the following form, fill it in, and fax it to:
+1 908 459 5543
The Availability Digest is published monthly. It may be distributed freely. Please pass it on to an associate.
Managing Editor - Dr. Bill Highleyman email@example.com.
© 2013 Sombers Associates, Inc., and W. H. Highleyman