Verification and Validation - Lecture Slides | CMSC 435, Study notes of Software Engineering

Material Type: Notes; Professor: Zelkowitz; Class: Software Engineering; Subject: Computer Science; University: University of Maryland; Term: Spring 2009;

cmsc435 - 1  Verification and Validation

cmsc435 - 2  Objectives
● To introduce software verification and validation and to discuss the distinction between them
● To describe the program inspection process and its role in V & V
● To explain static analysis as a verification technique
● To describe the Cleanroom software development process

cmsc435 - 3  The testing process
● Component (or unit) testing
  - Testing of individual program components;
  - Usually the responsibility of the component developer (except sometimes for critical systems);
  - Tests are derived from the developer's experience.
● System testing
  - Testing of groups of components integrated to create a system or sub-system;
  - The responsibility of an independent testing team;
  - Tests are based on a system specification.

cmsc435 - 4  Other forms of testing: Performance testing
● Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.
● Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.

cmsc435 - 9  Verification vs validation
● Verification: "Are we building the product right?"
  - The software should conform to its specification.
  - Usually an a priori process.
● Validation: "Are we building the right product?"
  - The software should do what the user really requires.
  - Usually tested after the software is built.

cmsc435 - 10  V & V goals
● Verification and validation should establish confidence that the software is fit for purpose.
  - Generally this does not mean completely free of defects.
  - Rather, the software must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed.
  - Unfortunately, users accept this.

cmsc435 - 11  Static and dynamic verification
● Software inspections.
Concerned with the analysis of the static system representation to discover problems (static verification).
  - May be supplemented by tool-based document and code analysis.
● Software testing. Concerned with exercising and observing product behavior.
  - The system is executed with test data and its operational behavior is observed.

cmsc435 - 12  Static and dynamic V&V
(figure)

cmsc435 - 13  Program testing
● Testing can reveal the presence of errors, NOT their absence - Dijkstra.
● It is the only validation technique for non-functional requirements, as the software has to be executed to see how it behaves.
● It should be used in conjunction with static verification to provide full V&V coverage.

cmsc435 - 14  Testing and debugging
● Defect testing and debugging are distinct processes.
● Verification and validation is concerned with establishing the existence of defects in a program.
● Debugging is concerned with locating and repairing those errors.
  - Debugging involves formulating a hypothesis about program behavior and then testing that hypothesis to find the system error.

cmsc435 - 19  Inspection pre-conditions
● A precise specification must be available.
● Team members must be familiar with the organization's standards.
● Syntactically correct code or other system representations must be available.
● An error checklist should be prepared.
● Management must accept that inspection will increase costs early in the software process.
● Management should not use inspections for staff appraisal, e.g., finding out who makes mistakes.

cmsc435 - 20  Reading technologies
All of the following techniques are an improvement over simply testing a program; however, some are more effective than others. Several related concepts:

Walkthroughs: The developer of the artifact describes its structure at a meeting. Attendees look for flaws in the structure. Weakness: reviewers do not understand the deep structure, so error finding is weak.
Code reviews: An individual who is not the developer of the artifact reads its text, looking for errors and defects in structure. Quite effective, since the reader does not have the same preconceived notion of what the artifact does.

Review: A meeting to discuss an artifact, less formal than an inspection. A traditional checkpoint in software development.

cmsc435 - 21  (Fagan) Inspections
Developed by Michael Fagan at IBM in 1972. Two approaches toward inspections:
● Part of the development process - used to identify problems
● Part of the quality assurance process - used to find unresolved issues in a finished product
"Fagan inspections" are the former type. The goal is to find defects - a defect is an instance in which a requirement is not satisfied.

cmsc435 - 22  Fagan Inspections
● The development process consists of a series of stages (e.g., system design, design, coding, unit testing, ...)
● Develop exit criteria for any artifact passing from one stage to the next
● Validate, via an inspection meeting, that every artifact correctly passes the exit criteria before starting on the next phase
● Everyone at the meeting must observe that all the exit criteria have been met

cmsc435 - 23  Inspection process
Planning - The author gives the moderator an artifact to be inspected. Materials, attendees, and a schedule for the inspection meeting must be set. High-level documentation is given to attendees in addition to the artifact.
Overview - The moderator assigns inspection roles to participants.
Preparation - The artifact is given to participants before the meeting to allow for study. Participants spend significant time studying the artifact.
Inspection - A defect logging meeting for finding defects. (Don't fix them at the meeting; don't assign blame; don't have managers present; ...)
Rework - The author reworks all defects.
Follow-up - Verification by the inspection moderator that all fixes are effective.
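The defect-logging step above (record each defect with a short description and a severity, defer all fixes to rework) can be sketched as a small data structure. This is an illustrative sketch only; the class names, fields, and severity levels are assumptions, not part of any standard inspection tooling.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    """Illustrative severity levels; real teams define their own scale."""
    MINOR = "minor"
    MAJOR = "major"


@dataclass
class Defect:
    """One entry in the inspection meeting's defect log."""
    location: str       # where in the artifact the defect was found
    description: str    # a few key words; the fix happens later, in rework
    severity: Severity


@dataclass
class InspectionLog:
    """Defects recorded at the meeting: no fixes, no blame assigned."""
    artifact: str
    defects: list[Defect] = field(default_factory=list)

    def log(self, location: str, description: str, severity: Severity) -> None:
        self.defects.append(Defect(location, description, severity))

    def summary(self) -> dict[str, int]:
        """Defect counts per severity, for the moderator's follow-up."""
        counts: dict[str, int] = {}
        for d in self.defects:
            counts[d.severity.value] = counts.get(d.severity.value, 0) + 1
        return counts
```

A meeting would then build one `InspectionLog` per artifact, call `log(...)` for each defect found, and hand `summary()` plus the raw entries to the author for rework.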
cmsc435 - 24  Observations about inspections
● Costly - participants need many hours to prepare
● Intensive - limit inspections to no more than 2 hours
● No personnel evaluation - using inspection results to appraise staff limits honesty in finding defects
● Cannot inspect too much - perhaps 250 non-commented source lines/hour. More than that causes the discovery rate to drop and the need for further inspections to increase.
● Up to 80% of errors found during testing could have been found during an inspection.
● Lucent study (Porter-Votta): inspection meetings often throw away false positives but find few new errors, so eliminate the meeting.

cmsc435 - 29  The inspection process
(figure)

cmsc435 - 30  Inspection procedure
● A system overview is presented to the inspection team.
● Code and associated documents are distributed to the inspection team in advance.
● The inspection takes place and discovered errors are noted.
● Modifications are made to repair discovered errors.
● Re-inspection may or may not be required.

cmsc435 - 31  Inspection checklists
● A checklist of common errors should be used to drive the inspection.
● Error checklists are programming-language dependent and reflect the characteristic errors that are likely to arise in the language.
● In general, the 'weaker' the type checking, the larger the checklist.
● Examples: initialization, constant naming, loop termination, array bounds, etc.

cmsc435 - 32  Inspection checks 1
Data faults:
● Are all program variables initialised before their values are used?
● Have all constants been named?
● Should the upper bound of arrays be equal to the size of the array or size - 1?
● If character strings are used, is a delimiter explicitly assigned?
● Is there any possibility of buffer overflow?
Control faults:
● For each conditional statement, is the condition correct?
● Is each loop certain to terminate?
● Are compound statements correctly bracketed?
● In case statements, are all possible cases accounted for?
● If a break is required after each case in case statements, has it been included?
Input/output faults:
● Are all input variables used?
● Are all output variables assigned a value before they are output?
● Can unexpected inputs cause corruption?

cmsc435 - 33  Inspection checks 2
Interface faults:
● Do all function and method calls have the correct number of parameters?
● Do formal and actual parameter types match?
● Are the parameters in the right order?
● If components access shared memory, do they have the same model of the shared memory structure?
Storage management faults:
● If a linked structure is modified, have all links been correctly reassigned?
● If dynamic storage is used, has space been allocated correctly?
● Is space explicitly de-allocated after it is no longer required?
Exception management faults:
● Have all possible error conditions been taken into account?

cmsc435 - 34  Inspection rate
● 500 statements/hour during overview.
● 125 source statements/hour during individual preparation.
● 90-125 statements/hour can be inspected at the meeting.
● Inspection is therefore an expensive process.
● Limited to a 2-hour block, maximum.
● Inspecting 500 lines costs about 40 hours of effort.

cmsc435 - 39  PBR: Designer role
Develop tests as if you were in the role of the individual who will design the given artifact:
● Are all the necessary objects (e.g., data, types, functions) well defined?
● Are all the interfaces defined and consistent?
● Are all the data types compatible?
● Is all the necessary information available to do the design? Are all the conditions involving all objects specified?
● Are there any points where it is not clear what you should do, either because the requirement is not clear or not consistent?
● Is there anything in the requirements that you cannot implement in the design?
● Do the requirements make sense given what you know about the application or what is specified by the general description?

cmsc435 - 40  Early Studies of PBR
A study of PBR ["The empirical investigation of perspective based reading" by Basili et al.
In Empirical Software Engineering, vol. 1 (1996), 133-164] shows that:
● 3 readers find more errors than an unstructured reading of a document.
● Two classes of documents were read - a "typical" document and a "precise" document. Results were more significant in the "formal" document. In the "typical" document, perspectives found fewer unique errors - why?
● A problem with a study like this: we would like to review two documents A and B, switching technologies. But it is not possible to do PBR on A and then not do PBR on B second. So PBR is always applied to the second document read, and learning effects cannot be discounted.

cmsc435 - 41  Preparing for a PBR inspection
Try to find unique errors - take on a unique role, viewing the artifact from the role of designer, tester, or user.
● Write down questions you need to ask the author
● Work on the artifact at your convenience to find these defects
● Estimate the type and severity level of defects found
● Try to describe each defect in a few key words
● DO NOT TRY TO FIX DEFECTS. Fixing on the fly leads to sloppy fixes and to missing many other defects.
Data collected at the inspection meeting:
● Preparation time
● Size of artifact inspected
● Number of defects found at each severity level
● Number of unresolved questions to ask the author
● For each defect, the author offline fixes the defect, fixes the specification, or decides the problem isn't a defect

cmsc435 - 42  Automated static analysis
● Static analyzers are software tools for source text processing.
● They parse the program text and try to discover potentially erroneous conditions and bring these to the attention of the V & V team.
● They are very effective as an aid to inspections - they are a supplement to, but not a replacement for, inspections.
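As a minimal sketch of what such a tool does, the fragment below implements a single data-use check (names assigned but never read) over Python source using the standard `ast` module. The function name is an assumption for illustration, and the analysis is deliberately simplified: it ignores scoping and control flow, whereas real static analyzers perform many checks flow-sensitively.

```python
import ast


def unused_assignments(source: str) -> list[str]:
    """Flag names that are assigned somewhere but never read.

    A deliberately small, scope-insensitive sketch of one classic
    data-fault check; production lint tools do far more.
    """
    tree = ast.parse(source)
    assigned: set[str] = set()
    used: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):   # name being written
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):  # name being read
                used.add(node.id)
    return sorted(assigned - used)
```

For example, `unused_assignments("x = 1\ny = 2\nprint(x)")` reports `["y"]`, since `y` is written but never read; a reviewer or analyzer would raise exactly this kind of finding to the V & V team.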
cmsc435 - 43  Static analysis checks
Data faults:
● Variables used before initialisation
● Variables declared but never used
● Variables assigned twice but never used between assignments
● Possible array bound violations
● Undeclared variables
Control faults:
● Unreachable code
● Unconditional branches into loops
Input/output faults:
● Variables output twice with no intervening assignment
Interface faults:
● Parameter type mismatches
● Parameter number mismatches
● Non-usage of the results of functions
● Uncalled functions and procedures
Storage management faults:
● Unassigned pointers
● Pointer arithmetic

cmsc435 - 44  Stages of static analysis
● Control flow analysis. Checks for loops with multiple exit or entry points, finds unreachable code, etc.
● Data use analysis. Detects uninitialised variables, variables written twice without an intervening assignment, variables which are declared but never used, etc.
● Interface analysis. Checks the consistency of routine and procedure declarations and their use.

cmsc435 - 49  The Cleanroom process
(figure)

cmsc435 - 50  Cleanroom process characteristics
● No unit testing!
● Formal specification using a state transition model.
● Incremental development where the customer prioritizes increments.
● Structured programming - limited control and abstraction constructs are used in the program.
● Static verification using rigorous inspections before code is ever executed.
● Statistical testing of the system (covered in Ch. 24).

cmsc435 - 51  Formal specification and inspections
● The state-based model is a system specification, and the inspection process checks the program against this model.
● The programming approach is defined so that the correspondence between the model and the system is clear.
● Mathematical arguments (not proofs) are used to increase confidence in the inspection process.

cmsc435 - 52  Cleanroom process teams
● Specification team. Responsible for developing and maintaining the system specification.
● Development team.
Responsible for developing and verifying the software. The software is NOT executed or even compiled during this process.
● Certification team. Responsible for developing a set of statistical tests to exercise the software after development. Reliability growth models are used to determine when reliability is acceptable.

cmsc435 - 53  Cleanroom process evaluation
● The results of using the Cleanroom process have been very impressive, with few discovered faults in delivered systems.
● Independent assessment shows that the process is no more expensive than other approaches.
● There were fewer errors than in a 'traditional' development process.
● "However, the process is not widely used. It is not clear how this approach can be transferred to an environment with less skilled or less motivated software engineers." - Sommerville
  - Not true! - NASA/GSFC Software Engineering Laboratory experience
  - So why is it not used more?

cmsc435 - 54  Postmortem analysis
When the project is completed:
● Design and promulgate a project survey to collect relevant data.
● Collect objective project information.
● Conduct a debriefing meeting.
● Conduct a project history day.
● Publish the results by focusing on lessons learned.