BRUCE I. BLUM and THOMAS P. SLEIGHT

AN OVERVIEW OF SOFTWARE ENGINEERING

Johns Hopkins APL Technical Digest, Volume 9, Number 3 (1988)

Computer software has become an important component of our defense systems and our everyday lives, but software development is both difficult and costly. This article examines the similarities and differences between software and hardware development, the essence of software, modern practices used to support the software process, and the application of government methods. We also consider the role of standards and technology centers, and conclude with a view into the future.

INTRODUCTION

Software is a part of everyday life at work and at home. Many things we take for granted are software dependent: watches, telephone switches, air-conditioning/heating thermostats, airline reservations, systems that defend our country, financial spreadsheets. The discipline of managing the development and lifetime evolution of this software is called software engineering.

Software costs in the United States totaled about $70 billion in 1985, of which $11 billion was spent by the Department of Defense. 1 Worldwide, spending was about twice that amount, $140 billion. At a growth rate of 12% per year, the United States will spend almost $0.5 trillion annually on software by the turn of the century. Studies in the early 1970s projected that software would rapidly become the dominant component in computer systems costs (Fig. 1). The cost of computing hardware over the last few years has fallen dramatically on a per-unit performance basis. That decrease resulted primarily from the mass production of denser integrated circuits. Software remains labor intensive, and no comparable breakthrough has occurred. Thus, the small increases in software productivity have not overcome the increased cost of human resources.
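As a rough check of this projection (an editorial note, not part of the original article), compounding the 1985 U.S. total at 12 percent per year for the 15 years to 2000 gives

    $70 billion x (1.12)^15  ≈  $70 billion x 5.47  ≈  $383 billion,

that is, roughly $0.4 trillion, consistent in order of magnitude with the "almost $0.5 trillion" figure quoted above.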
There is broad agreement on what is to be avoided but a diversity of opinions regarding the best way to develop and maintain software. We will examine here why software development is so difficult, what methods are currently available to guide the process, how government methods have responded to those difficulties, and what roads to improvement are being explored. This article, oriented to a technical audience with minimal background in software development, presents a survey of many different methods and tools, for that is the nature of the state of the art in software engineering.

THE ESSENCE OF SOFTWARE DEVELOPMENT

The software process, sometimes called the software life cycle, includes all activities related to the life of a software product, from the time of initial concept until final retirement. Because the software product is generally part of some larger system that includes hardware, people, and operating procedures, the software process is a subset of system engineering.

Figure 1. Hardware-software cost trends, 1955-1985 (percent of cost; hardware development/maintenance versus software maintenance).

There are two dimensions to the software process. The first concerns the activities required to produce a product that reliably meets intended needs. The major considerations are what the software product is to do and how it should be implemented. The second dimension addresses the management issues of schedule status, cost, and the quality of the software deliverables.

In a large system development effort, we commonly find the same management tools for both the hardware and software components. These typically are organized as a sequence of steps and are displayed in a "waterfall" diagram. Each step must be complete and verified or validated before the next step can begin; feedback loops to earlier steps are included. A typical sequence is shown in Fig. 2 for software development. The steps are derived from the hardware development model. In fact, only two labels have been changed to reflect the differences in the product under development: software coding and debugging is similar to hardware fabrication, and software module testing is similar to hardware component testing.

Figure 2. Typical software development steps: analysis; analysis of functions; detailed design; code and debug; module test; integration test; system test; operations and maintenance.

This structural similarity in the flow facilitates the coordination and management of hardware and software activities. There are, however, major differences between hardware and software:

1. Hardware engineering has a long history, with physical models that provide a foundation for decision making and handbooks that offer guidance. But software engineering is new; as its name implies, it relies on "soft" models of reality.

2. Hardware normally deals with multiple copies. Thus, the effort to control design decisions and associated documentation can be prorated over the many copies produced. In fact, it is common to reengineer a prototype to include design corrections and reduce manufacturing (i.e., replication) costs. Conversely, software entails negligible reproduction cost; what is delivered is the final evolution of the prototype.

3. Production hardware is expensive to modify. There is, consequently, a major incentive to prove the design before production begins. But software is simply text; it is very easy to change the physical media. (Naturally, the verification of a change is a complex process. Its cost is directly proportional to the number of design decisions already made.)

4. Hardware reliability is a measure of how the parts wear out. Software does not wear out; its reliability provides an estimate of the number of undetected errors.

These differences suggest that, even with a strong parallel between hardware and software, overcommitment to a hardware model may prove detrimental to the software process. Some common errors are:

1. Premature formalization of the specification. Because the design activities cannot begin until the analysis is performed and the specification is complete, there often is a tendency to produce a complete specification before the product needs are understood fully. This frequently results in an invalid system. Unlike hardware, software can be incrementally developed very effectively. When a product is broken down (decomposed) into many small components, with deliveries every few months, the designer can build upon earlier experience, and the final product has fewer errors. Another development approach is to use prototypes as one uses breadboard models to test concepts and build understanding. Of course, only the essence of the prototype is preserved in the specification; its code is discarded.
2. Excessive documentation or control. Software development is a problem-solving activity, and documentation serves many purposes. It establishes a formal mechanism for structuring a solution, communicates the current design decisions, and provides an audit trail for the maintenance process. But documentation demands often go beyond pragmatic needs. The result is a transfer of activity from problem-solving to compliance with external standards, which is counterproductive.

3. The alteration of software requirements to accommodate hardware limitations. Since software is relatively easy to change, there is the perception that deficiencies in hardware can be compensated for by changes to the software. From a systems engineering perspective, this strategy obviously is inappropriate. Although it may be the only reasonable alternative, it clearly represents an undesirable design approach.

4. Emphasis on physical products such as program code. Because code frequently is viewed as a product, there is a tendency to place considerable store in it. The most difficult part of software design, however, is the determination of what the code is to implement. In fact, production of the code and its debugging typically take one-half the time of its design. Also, most errors are errors in design and not in writing code. Therefore, managers should not be too concerned with the amount of code produced if the design team has a firm understanding of how they intend to solve the problem. And programmers should not be encouraged

STRUCTURED ANALYSIS DESCRIPTION

The figure below contains a simple example of the representation used during structured analysis. For this data flow diagram (DFD), we assume that there is a parent DFD, with at least five bubbles or activities. This diagram is an expansion of bubble 5.0, Determine Schedule, of the parent activity.

Typically a DFD contains five to nine bubbles, although only three are shown. Each bubble is labeled with the activity it represents; the data flows to and from each bubble are labeled; and the data stores (i.e., the file containing the work-breakdown-structure [WBS] data) and external elements (i.e., the printer) are identified with their special symbols.

Data flow diagram for 5.0, Determine Schedule (flows shown include Schedule Request and Schedule).

Because the processing for this DFD is clear, there is no need to expand it to another DFD level. Bubble 5.2, Define Schedule, is described in a minispec, which conveys the processing while avoiding the detail required of a programming language. For example, "get and list WBS# and WBS TITLE" is several instructions, and the reenter statement after printing the error message implies a GOTO (not shown).

Finally, the data dictionary defines the major elements in the data flow. Here, WBS is a table with five columns, and Task Group is a set of WBS#s. More detailed definitions of the element formats and index schemes may be delayed until additional information is compiled.

PROCESS (MINI) SPECIFICATION

5.2 Define Schedule
Process
    for each TASK in TASK GROUP
        get and list WBS# and WBS TITLE
        enter START date
        enter STOP date
        if START < STOP then print error and reenter
    end

DATA DICTIONARY

WBS = WBS# + Title + Start + Stop + Resources
Task Group = {WBS#}

Examples of composition techniques include the Jackson System Design and object-oriented design ("design" implying that the process steps have considerable overlap). In the Jackson System Design, the target system is represented as a discrete simulation, and the implementation is considered a set of communicating sequential processes; that is, the method allows for the modeling of the real-world environment as a computer simulation, which then is transformed into a set of sequential programs that can operate asynchronously. Conversely, object-oriented design first identifies the real-world objects that the desired system must interact with and then considers how those objects interact with each other. There are several versions of object-oriented design, but experience with its use is limited.

Programming in the Large: Design

The design process begins after there is a specification establishing what functions the software is to provide. From the discussion of analysis, we see that there is no precise division between analysis (the decision of what is to be done) and design (the determination of how to realize it). There sometimes is a contractual need to establish what the procured software is to provide, so the specification becomes part of the contract that defines the deliverable. In the essential model of the software process, however, there is continuity between analysis and design activities, and the methods often support both activities.

The basic process is one of modeling the software system and adding details until there is sufficient information to convert the design into a realization (i.e., a program). Design always begins with a specification, which is a product of the analysis step. At times, the specification is a formal document establishing a set of requirements. Here, it is important to maintain traceability to ensure that all design decisions are derived from a requirement and that all requirements are satisfied in the design (i.e., there are neither extra features nor omissions). At other times (e.g., in the internal development of a product), the specification is less formal, and additional subjectivity is needed to determine that the design decisions are valid.

For any set of requirements, there are many equally correct designs. The task of the design team is to select among the alternatives those system decisions yielding a design that is, in some way, expected to be better than the others. Studies of this activity indicate that considerable domain experience is required. Also, the ability and training of the team members is some two to four times as important as any other factor in determining the cost to produce an acceptable product.

Design methods are extensions of analysis methods. For example, decomposition techniques use the DFD, and composition methods span the analysis and programming-in-the-large tasks. With decomposition techniques, "structured design" is used to model the interactions among software modules. Rules are available to guide the transition from DFDs to the "structure diagrams" depicting module control flow. As with DFDs, data dictionaries are used to describe the elements in the data flow, and the functions of the modules are detailed as "module specs" in structured English.

Other methods begin with models of the data and their temporal changes, and then derive the processes from those data structures. The Jackson Program Design, for example, models the structure of the data and then builds models of the procedures that reflect that structure.
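To make this data-driven style concrete, the minispec and data dictionary in the structured analysis sidebar above can be connected to an implementation. The sketch below is an editorial illustration, not part of the original article: the Ada language, the record layout, the encoding of dates as day numbers, and all names are assumptions chosen for the example.

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Define_Schedule_Sketch is
       --  Data dictionary entry:  WBS = WBS# + Title + Start + Stop + Resources
       type Day_Count is range 0 .. 100_000;   -- assumed encoding of dates as day numbers
       type WBS_Entry is record
          WBS_Number : Positive;
          Title      : String (1 .. 40);
          Start      : Day_Count;
          Stop       : Day_Count;
          Resources  : Natural;
       end record;

       Item : WBS_Entry :=
         (WBS_Number => 1,
          Title      => ('D', 'e', 's', 'i', 'g', 'n', others => ' '),
          Start      => 120,
          Stop       => 100,
          Resources  => 2);
    begin
       --  Minispec 5.2: list WBS# and WBS TITLE for the task ...
       Put_Line ("WBS#" & Positive'Image (Item.WBS_Number) & "  " & Item.Title);

       --  ... then apply the date consistency check; here an error is reported
       --  when the STOP date precedes the START date, so the user can reenter.
       if Item.Stop < Item.Start then
          Put_Line ("Error: STOP date precedes START date; reenter.");
       end if;
    end Define_Schedule_Sketch;

In a Jackson-style development, the shape of this record (and of the Task Group that contains such records) would drive the structure of the procedures that process it.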
For data processing applications, there are several methods used to define the data model. One widely used method is the entity-relationship model. Here, the entities (e.g., employees, departments) and their relationships (e.g., works in) are identified and displayed graphically. Rules then can be applied to convert this conceptual model into a scheme that can be implemented with a database management system.

We have identified here many different (and often mutually incompatible) methods, but the list is incomplete. Many of those methods use some form of diagram. Most CASE tools support the DFD, structure diagram, Jackson System Design notation, and entity-relationship model. There also are proprietary tool sets that are limited to a single method. One of the benefits that any good method provides is a common approach for detailing a solution and communicating design decisions. Thus, for effective communication, an organization should rely on only a limited number of methods. The DFD and the entity-relationship model are the most broadly disseminated and, therefore, frequently will be the most practical for the communication of concepts.

Programming in the Small: Coding

Coding involves the translation of a design document into an effective and correct program. In the 1970s, the concept of "structured programming" was accepted as the standard approach to produce clear and maintainable programs. The structured program relies on three basic constructs:

1. Sequence: a set of statements executed one after the other.

2. Selection: a branching point at which one of a set of alternatives is chosen as the next statement to be executed (e.g., IF and CASE statements).

3. Iteration: a looping construction causing a block of statements to be repeated (e.g., DO statement).

Every program can be written using only these three constructs. A corollary, therefore, is that the GOTO statement is unnecessary.

Structured programming introduced other concepts as well. Programs were limited to about 50 lines (one page of output). Stepwise refinement was used to guide the top-down development of a program. When a concept was encountered during programming that required expansion, it would be represented as a procedure in the user program and later refined. This method allowed the programmer to defer design activities; it also resulted in programs that were easier to read and understand. To improve comprehension, indentation and white space were used to indicate the program's structure. In time, the flow chart was replaced by the "program design language" (e.g., the minispec), which captured the program structure but omitted many program details.
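As a small illustration of these ideas (an editorial sketch, not from the article), the fragment below uses only the three structured constructs, relies on indentation to show its structure, and defers the step "report the largest value" to a separate procedure in the stepwise-refinement style just described. The Ada language and all names are assumptions made for the example.

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Largest_Demo is
       type Vector is array (Positive range <>) of Integer;
       Sample : constant Vector := (7, 3, 9, 4);

       --  Refinement of the deferred step "report the largest value".
       procedure Report_Largest (V : Vector) is
          Largest : Integer := V (V'First);
       begin
          for I in V'First + 1 .. V'Last loop   -- iteration
             if V (I) > Largest then            -- selection
                Largest := V (I);
             end if;
          end loop;
          Put_Line ("Largest =" & Integer'Image (Largest));
       end Report_Largest;
    begin
       Put_Line ("Scanning four sample values.");  -- sequence: statements in order
       Report_Largest (Sample);
    end Largest_Demo;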
Another concept introduced in the late 1970s was "information hiding," which emerged from analysis of what characteristics should bind together the functions retained in a module (cohesion) and how modules should interact with each other (coupling). The goal of information hiding is to yield a logical description of the function that a module is to perform and to isolate the users of that module from any knowledge of how that function is implemented. Thus, the designers may alter the internal implementation of one module without affecting the rest of the program. This concept was refined and became known as the abstract data type.

A data type defines what kinds of data can be associated with a variable symbol and what operators can be used with it. For example, most languages offer integer, real, and character-string data types. The operator plus (+) has a different meaning for each data type. With an abstract data type, the designer can specify a new data type (e.g., the matrix) and operators that are valid for that data type (e.g., multiplication, inversion, scalar multiplication).

Using the terminology of the Ada 5 programming language, the abstract data type is defined in a package with two parts. The public part includes a definition of the data type and the basic rules for the operations. The private part details how the operations are to be implemented. To use the abstract data type, the programmer includes the package by name and then declares the appropriate variables to be that data type. This is an example of software "reuse." The data type operations are defined once and encapsulated for reuse throughout the software application, thereby reducing the volume of the end product and clarifying its operation.
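A minimal sketch of such a package follows (an editorial illustration; the 2 x 2 matrix representation, the chosen operator set, and all names are assumptions). The public part declares the private type Matrix and its operators; the private part fixes a representation that client code never sees.

    package Matrices is
       --  Public part: the abstract data type and the operations valid for it.
       type Matrix is private;

       function Make (A11, A12, A21, A22 : Float) return Matrix;
       function "*" (A, B : Matrix) return Matrix;          -- matrix multiplication
       function "*" (K : Float; A : Matrix) return Matrix;  -- scalar multiplication
    private
       --  Private part: the hidden representation (2 x 2 in this sketch).
       type Matrix is array (1 .. 2, 1 .. 2) of Float;
    end Matrices;

    package body Matrices is
       function Make (A11, A12, A21, A22 : Float) return Matrix is
       begin
          return ((A11, A12), (A21, A22));
       end Make;

       function "*" (A, B : Matrix) return Matrix is
          R : Matrix := (others => (others => 0.0));
       begin
          for I in 1 .. 2 loop
             for J in 1 .. 2 loop
                for K in 1 .. 2 loop
                   R (I, J) := R (I, J) + A (I, K) * B (K, J);
                end loop;
             end loop;
          end loop;
          return R;
       end "*";

       function "*" (K : Float; A : Matrix) return Matrix is
          R : Matrix;
       begin
          for I in 1 .. 2 loop
             for J in 1 .. 2 loop
                R (I, J) := K * A (I, J);
             end loop;
          end loop;
          return R;
       end "*";
    end Matrices;

A client that names the package in a with clause, declares variables of type Matrix, and writes expressions such as A * B depends only on the public part; the representation declared in the private part can change without any change to that client code.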
Another technique to improve program quality is embodied in the concept of "proof of correctness," meaning that the resulting program is correct with respect to its specification. There are some experimental systems that can prove a program to be formally correct. Such systems have been used to verify key software products, such as a security kernel in an operating system. But proof of correctness usually is applied as a less formal design discipline.

"Fourth generation languages" (4GLs) represent another approach to software development. Here, special tools have been developed for a specific class of application (information processing) that facilitate the development of programs at a very high level. For example, one can produce a report simply by describing the content and format of the desired output; one does not have to describe procedurally how it should be implemented. (Thus, 4GLs generally are described as being nonprocedural or declarative.)

Validation and Verification

In the traditional descriptive flow for software development, the activity that precedes operations and maintenance is called "test." Testing is the process of detecting errors. A good test discovers a previously undetected error. Thus, testing is related to defect removal; it can begin only when some part of the product is completed and there are defects to be removed.

The validation and verification activity includes the process of testing. But it begins well before there is a product to be tested and involves more than the identification of defects. Validation comes from the Latin validus, meaning strength or worth. It is a process of predicting how well the software product will correspond to the needs of the environment (i.e., will it be the right system?). Verification comes from the Latin verus, meaning truth. It determines the correctness of a product with respect to its specification (i.e., is the system right?).

Validation is performed at two levels. During the analysis step, validation supplies the feedback to review decisions about the potential system. Recall that analysis requires domain understanding and subjective decisions. The domain knowledge is used to eliminate improper decisions and to suggest feasible alternatives. The ranking of those alternatives relies on the analysts' experience and judgment. The review of these decisions is a cognitive (rather than a logically formal) activity. There is no concept of formal correctness; in fact, the software's validity can be established only after it is in place. (Prototypes and the spiral model both are designed to deal with the analyst's inability to define a valid specification.)

The second level of validation involves decisions made within the context of the specification produced by the analysis activity. This specification describes what functions should be supported by the software product (i.e., its behavior). The specification also establishes nonfunctional requirements, such as processing time constraints and storage limitations. The product's behavior can be described formally; in fact, the program code is the most complete expression of that formal statement. Nonfunctional requirements, however, can be demonstrated only when the product is complete.

Validation and verification are independent concepts. A product may be correct with respect to the contractual specification, but it may not be perceived as a useful product. Conversely, a product may correspond to the environment's needs even though it deviates from its specified behavior. Also, validation always relies on judgment, but verification can be formalized. Finally, both validation and verification can be practiced before there is code to be tested; failure to exercise quality control early in the development process will result in the multiplication of early errors and a relatively high cost of correction per defect.

Before a formal specification exists (one that can be subjected to logical analysis), the primary method for both verification and validation is the review. In the software domain, this is sometimes called a walk-through or inspection, which frequently includes the review of both design documents and preliminary code. The review process is intended to identify errors and misunderstandings. There also are management reviews that establish decision points before continuing with the next development step. The two types of reviews have different functions, and they should not be confused or combined. Management reviews should occur after walk-throughs have been completed and technical issues resolved.

Most software tests are designed to detect errors, which sometimes can be identified by examining the program text. The tools that review the text are called "static analyzers." Some errors they can detect (such as identifying blocks of code that cannot be reached) can be recognized by compilers. Other forms of analysis rely on specialized, stand-alone software tools. "Dynamic analysis" tests, concerned with how the program operates, are divided into two categories. "White box" tests are designed to exercise the program as implemented. The assumption is that the errors are random; each path of the program, therefore, should be exercised at least once to uncover problems such as the use of the wrong variable or predicate. "Black box" tests evaluate only the function of the program, independent of its implementation.
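The distinction can be illustrated with a small sketch (editorial, not from the article; the routine, the threshold, and the expected values are assumptions). A black-box test checks expected outputs for representative inputs without looking inside the routine; a white-box test is chosen so that every branch of the implementation is exercised at least once.

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Test_Classify is
       --  Unit under test: classify a reading against an assumed threshold.
       function Classify (Reading : Integer) return String is
       begin
          if Reading > 100 then
             return "HIGH";
          else
             return "OK";
          end if;
       end Classify;

       procedure Check (Reading : Integer; Expected : String) is
       begin
          if Classify (Reading) = Expected then
             Put_Line ("pass: reading" & Integer'Image (Reading));
          else
             Put_Line ("FAIL: reading" & Integer'Image (Reading));
          end if;
       end Check;
    begin
       --  Black-box view: expected results for representative inputs,
       --  chosen without reference to the implementation.
       Check (50, "OK");
       Check (150, "HIGH");

       --  White-box view: the two calls above already drive both branches of
       --  the IF statement; this boundary value probes the predicate itself.
       Check (100, "OK");
    end Test_Classify;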
As with equipment testing, software testing is organized into levels. Each program is debugged and tested by the individual programmer. This is called unit testing. Individual programs next are integrated and tested as larger components, which are then function tested to certify that they provide the necessary features. Finally, the full system is tested in an operational setting, and a decision is made to deploy (or use) the product. Naturally, if the software is part of an embedded system, then, at some level, the software tests are integrated with the hardware tests.

Management

We have so far emphasized the essential features of software development; that is, what makes the development process unique for this category of product. Some characteristics of the process make it difficult: the software can be very complex, which introduces the potential for many errors; the process is difficult to model in terms of physical reality; there is always a strong temptation to accommodate change by modifying the programs; and, finally, the product is always subject to change. (In fact, the lifetime cost for adaptation and enhancement of a software product usually exceeds its development cost.)

The management of a software project is similar to the management of any other technical project. Managers must identify the areas of highest risk and the strategies for reducing that risk; they must plan the sequence of project activities and recognize when deviations

for building requirement and design specifications and related code, synthesizing prototypes, performing dynamic assessments, and managing software development projects. At a recent meeting of companies developing and marketing CASE tools, SPC launched an initiative to establish an industrywide consensus on effective tool-to-tool interface standards. Those standards will represent the first steps in building an integrated environment.

MCC was established by 21 shareholder companies in 1983. The consortium has several research programs, ranging from semiconductor packaging to software technology. Each program is sponsored by a subset of participating shareholder companies.

The software technology program focuses on the front end, upstream in the software cycle, where little research has been performed. This program has created a computer-aided software design environment called Leonardo. The environment is to concentrate on requirements capture, exploration, and early design. Academic research has focused on downstream activities, where formalism and automation are more obvious. MCC's research emphasis is on defining and decomposing a large problem into smaller problems and on selecting algorithms, and is geared to teams of professional software engineers working on large, complex systems. Researchers are working on the Leonardo architecture and three components: complex design processes, a design information base, and design visualization.

The corporation does not consider its research complete until it is put to use by the sponsoring companies. Also, MCC believes it is easier to transfer and establish tools and make them consistent than to transfer methodologies and methods.

A VIEW TO THE FUTURE

We began this article with a review of how software differed from hardware and noted that, once the technical manager understands the software process, the management of software is much like that of hardware. We then described the software process, characterized more by variety than by clarity and consistency. Despite our significant accomplishments with software, there remain conflicting methods, limited formal models, and many unsubstantiated biases. But we present below some significant trends in the field.
Formalization

Some new paradigms extend the formalism of the programming language into an executable specification. A specification defines the behavior for all implementations. An executable specification does not exhibit the intended behavior efficiently, whereas a program is an implementation of that behavior, optimized for a specific computer. The distinction matters because there are systems that we know how to specify exactly but do not know how to implement efficiently. For example, one can specify what a chess-playing program should do without being able to describe an efficient implementation. The hope is that the executable specification will supply a prototype for experimentation that ultimately can be transformed into an efficient program. But the concept has not been demonstrated outside the laboratory, and it is not clear how this process model can be managed.

Automatic Verification

In the discussion of validation and verification, we noted that proofs (verification) could be objective only when the parent specification was clear (formal). There is, therefore, considerable interest in working with mathematically formal specifications at an early stage in the design process, since it then would be possible to prove that each detailing step was correct with respect to this high-level source. A theorem prover could be used to automate the process. Testing then would not be necessary, because no errors would exist. Naturally, validation still would be required.

Automated Tools and Environments

There is considerable interest in the use of CASE tools and in program support environments. Unlike the formalisms addressed above, most tools and environments are commercial products that implement techniques developed in the mid-1970s. Thus, these tools and environments respond to a marketplace demand and provide a means for making the application of current practice more efficient. Their primary benefit is one of reducing the manual effort and thereby the number of errors introduced. As new paradigms are introduced, this focus may limit their use or force changes in the approaches taken.

Artificial Intelligence and Knowledge Representation

Although there are many definitions of artificial intelligence and debates about what it accomplishes, it has had an impact on our perceptions of what computers can do and how to approach problems. The software process is one of representing knowledge about a problem in a way that facilitates its transformation (detailing) into an implementable solution. Thus, there are many software engineering methods and tools that owe their origins to artificial intelligence. Some projects, such as the development of object-oriented programming, have been successful and are available to developers; many others still are in the research stage. One can expect that a knowledge orientation to the problem of software design will have considerable impact.

New High-Order Languages

The major advances of the 1960s can be attributed to the use of high-order languages, but it is doubtful that current language improvements will have much impact on productivity. Many proven modern programming concepts have been incorporated into Ada, and the commitment to this language clearly will familiarize developers with those concepts and thereby improve both product quality and productivity.
At another extreme, 4GLs offer tools to end users and designers that, for a narrow application domain, yield a tenfold improvement in productivity at a price in performance. Still, neither high-order languages nor 4GLs can match the improvements we are witnessing in hardware cost performance.

Software Reuse

The concept of software reuse was first perceived in the context of a program library. As new tools have been developed, the goal of reusable components has expanded. For example, Ada packages that encapsulate abstracted code fragments can be shared and reused. The artificial-intelligence-based knowledge perspective also suggests ways to reuse conceptual units having a granularity finer than code fragments and program libraries. Finally, the extension of 4GL techniques provides a mechanism for reusing application-class conventions with a natural human interface.

Training and Domain Specialization

All software development requires some domain knowledge. In the early days of computing, the programmer's knowledge of the new technology was the key, and the domain specialist explained what was needed. Today, almost every recent college graduate knows more about computer science than did those early programmers. Thus, there is an emphasis on building applications. As more tools become available, one can expect software developers to divide into two classes. The software engineer will, as the name implies, practice engineering discipline in the development of complex software products, such as embedded applications and computer tools for end users. The domain specialists will use those tools together with their domain knowledge to build applications that solve problems in their special environment. We can see this trend in the difference between Ada and the 4GLs. Ada incorporates powerful features that are not intuitively obvious; the features are built on a knowledge of computer science and must be learned. The 4GLs, however, offer an implementation perspective that is conceptually close to the end user's view. The software engineer builds the language; the domain specialist uses it.

WHAT OTHERS SAY

What do the experts in software engineering say about the future of this discipline and the hope for significant improvements in productivity? In explaining why the Strategic Defense Initiative is beyond the ability of current (and near-term) software practice, Parnas 15 offered a negative critique of most research paths. He said that the problem involves complex real-time communication demands, adding that there is limited experience in designing programs of this architecture and magnitude and that there is no way to test the system thoroughly. No ongoing approach, he concluded, could overcome these difficulties.

Boehm, 1 in an article on improving productivity, was more positive. Speaking of state-of-the-art software applications, he offered this advice: write less code, reduce rework, and reuse software, especially commercially available products, where possible.

Brooks 16 discusses the possibility of improving software productivity. He has catalogued research directions in some detail and concluded that the biggest payoff would come from buying rather than building, learning by prototyping, building systems incrementally, and, most important to him, training and rewarding great designers.
Of those four recommendations, the first reflects the spinning off of tools that can be used by domain specialists, and the next two relate to the need to build up knowledge about an application before it can be implemented. The last of Brooks's positive approaches recognizes that software design (like every other creative activity) depends on, and is limited by, the individual's ability, experience, understanding, and discipline.

REFERENCES and NOTES

1. B. W. Boehm, "Improving Software Productivity," IEEE Computer 20, 43-57 (1987).
2. B. W. Boehm, "A Spiral Model of Software Development and Enhancement," IEEE Computer 21, 61-72 (1988).
3. B. W. Boehm, "Industrial Software Metrics Top 10 List," IEEE Software, 84-85 (Sep 1987).
4. Two books that are highly recommended are R. Fairley, Software Engineering Concepts, McGraw-Hill, New York (1985), and R. Pressman, Software Engineering: A Practitioner's Approach, 2nd ed., McGraw-Hill, New York (1987).
5. Ada is a registered trademark of the U.S. Government, Ada Joint Project Office.
6. DOD-STD-2167A, "Military Standard Defense System Software Development" (29 Feb 1988).
7. MIL-STD-1679 (Navy), "Military Standard Weapon Software Development" (1 Dec 1978).
8. SECNAVINST 3560.1, "Tactical Digital Systems Documentation Standards" (8 Aug 1974).
9. D. F. Sterne, M. E. Schmid, M. J. Gralia, T. A. Grobicki, and R. A. R. Pearce, "Use of Ada for Shipboard Embedded Applications," Annual Washington Ada Symp., Washington, D.C. (24-26 Mar 1985).
10. S. J. Mellor and P. T. Ward, Structured Development for Real-Time Systems, Prentice-Hall, Englewood Cliffs, N.J. (1986).
11. R. J. A. Buhr, System Design with Ada, Prentice-Hall, Englewood Cliffs, N.J. (1984).
12. G. Tice, "Looking at Standards from the World View," IEEE Software 5, 82 (1988).
13. V. G. Sigillito, B. I. Blum, and P. H. Loy, "Software Engineering in The Johns Hopkins University Continuing Professional Programs," 2nd SEI Conf. on Software Engineering Education, Fairfax, Va. (28-29 Apr 1988).
14. J. Foreman and J. Goodenough, Ada Adoption Handbook: A Program Manager's Guide, CMU/SEI-87-TR-9, Software Engineering Institute (May 1987).
15. D. L. Parnas, "Aspects of Strategic Defense Systems," Commun. ACM 28, 1326-1335 (1985).
16. F. P. Brooks, "No Silver Bullet," IEEE Computer 20, 10-19 (1987).

ACKNOWLEDGMENTS: The authors gratefully acknowledge the very helpful suggestions of J. E. Coolahan, M. J. Gralia, R. S. Grossman, and J. G. Palmer.

THE AUTHORS

BRUCE I. BLUM was born in New York City. He holds M.A. degrees in history (Columbia University, 1955) and mathematics (University of Maryland, 1964). In 1962, he joined APL, where he worked as a programmer in the Computer Center. During 1967-74, he worked in private industry, returning to APL in 1974. His special interests include information systems, applications of computers to patient care, and software engineering. From 1975-83, he served as director of the Clinical Information Systems Division, Department of Biomedical Engineering, The Johns Hopkins University.

THOMAS P. SLEIGHT received his Ph.D. from the State University of New York at Buffalo in 1969. Before joining APL, he spent a year as a postdoctoral fellow at Leicester University, England. At APL, Dr. Sleight has applied computers to scientific defense problems.
He has served as computer systems technical advisor to the Assistant Secretary of the Navy (R&D) and on the Ballistic Missile Defense Advanced Technology Center's Specification Evaluation Techniques Panel. He has participated in the DoD Weapons Systems Software Management Study, which led to the DoD directive on embedded computer software management. Dr. Sleight served as supervisor of the Advanced Systems Design Group from 1977-82 in support of the Aegis Program and the AN/UYK-43 Navy shipboard mainframe computer development and test program. Since 1982, he has served in the Director's Office, where he is responsible for computing and information systems.