EETimes

Embedded Systems November 2000 Vol13_12



For these reasons, tremendous value and power lie in using software to control embedded devices. Still, we need to clearly understand the risks. By understanding the nature of software, we may more effectively build embedded control software while minimizing the risks.

In his article, Brooks states that software has "essential" properties as well as "accidental" ones. [1] The essential properties are inherent and, in a sense, unremovable or unsolvable. They represent the nature of the beast. The accidental properties are coincidental, perhaps just the result of an immature field; these are the ones that might be solved over time. The following sections identify some of the essential properties of software. [2] In order to build safe software, each of these must be dealt with to minimize risk.

Complexity. Software is generally more complex than hardware. The most complex hardware tends to take the form of general-purpose microprocessors. The variety of software that can be written for these hardware systems is almost limitless, and the complexity of such systems can dwarf the complexity of the hardware on which it depends. Consider that software systems consist not only of programs (which may have an infinite number of possible execution paths), but also data, which may be many orders of magnitude greater than the hardware states present in our most complex integrated circuits. The most complex hardware takes the form of ASICs (application-specific integrated circuits), but these are essentially general-purpose microprocessors with accompanying system-specific control software. In such cases, it's still common for the complexity of the software to dwarf that of the hardware.

Error sensitivity. Software can be extremely sensitive to small errors.
It has been said that if architects built houses the way software engineers build software, the first woodpecker that came along would destroy civilization. While the story hurts, it's part of the nature of software that small errors can have huge impacts. Other fields have a notion of "tolerance." For example, some play typically exists in the acceptable range of tensile strength of a mechanical part. There's little in the way of an analogous concept in software. There's no sense in which the software is still fit if some small percentage of the bits change. In some situations, the change of a single bit in a program can mean the difference between successful execution and catastrophic failure.

Difficult to test. For most real software systems, complete and exhaustive testing is an intractable problem. A program consisting of only a few hundred lines of code may require an infinite amount of testing to exhaustively cover all possible cases. Consider a single loop that waits for a key press. What happens if the user presses during the first loop? The second? The third? One can argue that all subsequent iterations of that loop are part of an equivalence class, and the argument would probably be valid. But what if something catastrophic occurs only if the key is pressed during the millionth time through? Testing isn't going to discover that until the millionth test case. Not likely to happen. All testing deals with risk management, and all testers understand the near impossibility of exhaustive testing. And so they deal with equivalence classes based upon assumptions of continuous functions. But when functions suddenly show themselves to be non-continuous (such as the Pentium floating-point bug), you still have a problem.

Correlated failures. Finding the root cause of failures can be extremely challenging with software.
Mechanical engineers (and even electrical engineers) are often concerned with manufacturing failures, and with the rates and conditions that lead things to wear out. But software doesn't really wear out. The bits don't get weak and break. It is true that certain systems can become cluttered with incidental detritus (think Windows 9x), but they don't wear out in the same way a switch or a hard drive will. Most of the failures in software are actually design errors. One can attempt to avoid these failures with redundant systems, but those systems simply duplicate the same design error, which doesn't help much. One can also attempt to avoid these failures by employing competing designs of a system, but the backup may suffer from the same blind spots as the original, despite the fresh design. Or, even more pernicious, the backup may suffer from new and creative blind spots, different from the first but equally harmful.

Lack of professional standards. Software engineering is very much a fledgling field. Individuals who once proudly proclaimed themselves to be "computer programmers" are now typically mildly insulted at the notion of being only a "programmer" and tend to prefer to be called "software engineers." But there are really few, if any, "software engineers" in practice. There are no objective standards for the engineering of software, nor is there any accrediting agency for licensing professional software engineers. In a sense, any programmer can call himself a software engineer, and there's no objective way to argue with that. Steve McConnell argues for the creation of a true discipline of software engineering. [3] Given our increasing dependency on the work of these "engineers" for our lives and
