LECTURE NOTES ON SOFTWARE ENGINEERING
B.Tech IV Semester

Ms. B Dhanalaxmi, Assistant Professor
Mr. A. Praveen, Assistant Professor

INFORMATION TECHNOLOGY
INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous)
DUNDIGAL, HYDERABAD - 500 043

SYLLABUS
IV Semester: IT | V Semester: CSE

Course Code: ACS008    Category: Core
Hours/Week: L 3, T 1, P -    Credits: 4
Maximum Marks: CIA 30, SEE 70, Total 100
Contact Classes: 45    Tutorial Classes: 15    Practical Classes: Nil    Total Classes: 60

OBJECTIVES: The course should enable the students to:
I. Learn how to elicit requirements and develop software life cycles.
II. Understand the design considerations for enterprise integration and deployment.
III. Analyze quality assurance techniques and testing methodologies.
IV. Prepare a project plan for a software project that includes estimates of size and effort, a schedule, resource allocation, configuration control, and project risk.

UNIT-I SOFTWARE PROCESS AND PROJECT MANAGEMENT (Classes: 08)
Introduction to software engineering, software process, perspective and specialized process models; Software project management: Estimation: LOC- and FP-based estimation, COCOMO model; Project scheduling: Scheduling, earned value analysis, risk management.

UNIT-II REQUIREMENTS ANALYSIS AND SPECIFICATION (Classes: 09)
Software requirements: Functional and non-functional, user requirements, system requirements, software requirements document; Requirement engineering process: Feasibility studies, requirements elicitation and analysis, requirements validation, requirements management; Classical analysis: Structured system analysis, Petri nets, data dictionary.

UNIT-III SOFTWARE DESIGN (Classes: 09)
Design process: Design concepts, design model, design heuristics, architectural design, architectural styles, and architectural mapping using data flow; User interface design: Interface analysis, interface design; Component-level design: Designing class-based components, traditional components.

UNIT-IV TESTING AND IMPLEMENTATION (Classes: 10)
Software testing fundamentals: Internal and external views of testing, white-box testing, basis path testing, control structure testing, black-box testing, regression testing, unit testing, integration testing, validation testing, system testing and debugging; Software implementation techniques: Coding practices, refactoring.

UNIT-V PROJECT MANAGEMENT (Classes: 09)
Estimation: FP-based, LOC-based, make/buy decision; COCOMO II: Planning, project plan, planning process, RFP, risk management, identification, projection; RMMM: Scheduling and tracking, relationship between people and effort, task set and network, scheduling; EVA: Process and project metrics.

Features of Software
Software has characteristics that make it different from other things human beings build. Features of such a logical system:
• Software is developed or engineered; it is not manufactured in the classical sense, so its quality problems differ from those of manufactured goods.
• Software doesn't "wear out," but it does deteriorate (due to change). Hardware follows a bathtub curve of failure rate: a high failure rate at the beginning, a drop to a steady state, and then a rise as the cumulative effects of dust, vibration, and abuse take hold.
• Although the industry is moving toward component-based construction (analogous to standard screws and off-the-shelf integrated circuits), most software continues to be custom-built.
Modern reusable components encapsulate both data and the processing applied to that data into software parts that can be reused by different programs, e.g. graphical user interfaces, windows, and pull-down menus provided in libraries.

Software Applications
I. System software: such as compilers, editors, and file-management utilities.
II. Application software: stand-alone programs for specific needs.
III. Engineering/scientific software: characterized by "number crunching" algorithms, such as automotive stress analysis, molecular biology, and orbital dynamics.
IV. Embedded software: resides within a product or system (e.g. keypad control of a microwave oven, digital functions of a car's dashboard display).
V. Product-line software: focuses on a limited marketplace or addresses the mass consumer market (word processing, graphics, database management).
VI. WebApps (web applications): network-centric software. As Web 2.0 emerges, more sophisticated computing environments are supported, integrated with remote databases and business applications.
VII. AI software: uses non-numerical algorithms to solve complex problems, e.g. robotics, expert systems, pattern recognition, game playing.

Software Engineering Definition
The seminal definition: [Software engineering is] the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.
The IEEE definition: Software Engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).

Software Engineering: A Layered Technology
• Any engineering approach must rest on an organizational commitment to quality, which fosters a continuous process-improvement culture.
• The process layer is the foundation: it defines a framework of activities for the effective delivery of software engineering technology. It establishes the context in which work products (models, data, reports, and forms) are produced, milestones are established, quality is ensured, and change is managed.
• Methods provide the technical how-to's for building software. They encompass many tasks, including communication, requirements analysis, design modeling, program construction, testing, and support.
• Tools provide automated or semi-automated support for the process and the methods.

Software Process
• A process is a collection of activities, actions, and tasks that are performed when some work product is to be created. It is not a rigid prescription for how to build computer software; rather, it is an adaptable approach that enables the people doing the work to pick and choose the appropriate set of work actions and tasks.
• The purpose of a process is to deliver software in a timely manner and with sufficient quality to satisfy those who have sponsored its creation and those who will use it.

Five Activities of a Generic Process Framework
• Communication: communicate with the customer to understand objectives and gather requirements.
• Planning: creates a "map" that defines the work by describing the tasks, risks, resources, work products, and work schedule.
• Modeling: create a "sketch" of what the system looks like architecturally, how the constituent parts fit together, and other characteristics.
• Construction: code generation and testing.
• Deployment: the software is delivered to the customer, who evaluates it and provides feedback based on the evaluation.
These five framework activities can be applied to all software development regardless of application domain, project size, or complexity of effort, though the details will differ in each case. For many software projects, the framework activities are applied iteratively as the project progresses; each iteration produces a software increment that provides a subset of the overall software features and functionality.

Umbrella Activities
Umbrella activities complement the five framework activities and help the team manage and control progress, quality, change, and risk.
• Software project tracking and control: assess progress against the plan and take actions to maintain the schedule.
• Risk management: assess risks that may affect the outcome and quality.
• Software quality assurance: define and conduct activities to ensure quality.
• Technical reviews: assess work products to uncover and remove errors before they propagate to the next activity.
• Measurement: define and collect process, project, and product measures to ensure that stakeholders' needs are met.
• Software configuration management: manage the effects of change throughout the software process.
• Reusability management: define criteria for work-product reuse and establish mechanisms to achieve reusable components.
• Work product preparation and production: create work products such as models, documents, logs, forms, and lists.

Adapting a Process Model
The process should be agile and adaptable to problems. The process adopted for one project may differ significantly from the process adopted for another, depending on the problem, the project, the team, and the organizational culture. Among the differences are:
• the overall flow of activities, actions, and tasks, and the interdependencies among them
• the degree to which actions and tasks are defined within each framework activity
• the degree to which work products are identified and required
• the manner in which quality assurance activities are applied
• the manner in which project tracking and control activities are applied
• the overall degree of detail and rigor with which the process is described
• the degree to which the customer and other stakeholders are involved with the project
• the level of autonomy given to the software team
• the degree to which team organization and roles are prescribed

Process Pattern Types
• Stage patterns: define a problem associated with a framework activity for the process. A stage pattern typically includes multiple task patterns; for example, Establishing Communication would incorporate the task pattern Requirements Gathering, among others.
• Task patterns: define a problem associated with a software engineering action or work task relevant to successful software engineering practice.
• Phase patterns: define the sequence of framework activities that occur within the process, even when the overall flow of activities is iterative in nature. Examples include the Spiral Model and Prototyping.

An Example of a Process Pattern
• Describes an approach that may be applicable when stakeholders have a general idea of what must be done but are unsure of specific software requirements.
• Pattern name. Requirements Unclear.
• Intent. This pattern describes an approach for building a model that can be assessed iteratively by stakeholders in an effort to identify or solidify software requirements.
• Type. Phase pattern.
• Initial context.
The following conditions must be met: (1) stakeholders have been identified; (2) a mode of communication between stakeholders and the software team has been established; (3) the overriding software problem to be solved has been identified by stakeholders; (4) an initial understanding of project scope, basic business requirements, and project constraints has been developed.
• Problem. Requirements are hazy or nonexistent. Stakeholders are unsure of what they want.
• Solution. A description of the prototyping process would be presented here.
• Resulting context. A software prototype that identifies basic requirements (modes of interaction, computational features, processing functions) is approved by stakeholders. Following this, (1) the prototype may evolve through a series of increments to become the production software, or (2) the prototype may be discarded.
• Related patterns. Customer Communication, Iterative Design, Iterative Development, Customer Assessment, Requirement Extraction.

Process Assessment and Improvement
• The existence of a software process is no guarantee that software will be delivered on time, that it will meet the customer's needs, or that it will exhibit the technical characteristics that lead to long-term quality.
• A number of different approaches to software process assessment and improvement have been proposed over the past few decades:
• Standard CMMI Assessment Method for Process Improvement (SCAMPI): provides a five-step process assessment model that incorporates five phases: initiating, diagnosing, establishing, acting, and learning. The SCAMPI method uses the SEI CMMI as the basis for assessment [SEI00].
• CMM-Based Appraisal for Internal Process Improvement (CBA IPI): provides a diagnostic technique for assessing the relative maturity of a software organization; uses the SEI CMM as the basis for the assessment [Dun01].
• SPICE (ISO/IEC 15504): a standard that defines a set of requirements for software process assessment. The intent of the standard is to assist organizations in developing an objective evaluation of the efficacy of any defined software process [ISO08].
• ISO 9001:2000 for Software: a generic standard that applies to any organization that wants to improve the overall quality of the products, systems, or services that it provides. The standard is therefore directly applicable to software organizations and companies [Ant06].

Prescriptive Process Models
• Classic process models: Waterfall Model (Linear Sequential Model)
• Incremental process models: Incremental Model
• Evolutionary software process models: Prototyping, Spiral Model, Concurrent Development Model

1. Classic Process Models - Waterfall Model (Linear Sequential Model)
• The waterfall model, sometimes called the classic life cycle, is the oldest paradigm for software engineering. When requirements are well defined and reasonably stable, development proceeds in a linear fashion.
• The waterfall model suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction, and deployment, culminating in ongoing support of the completed software.

The V Model
A variation of the waterfall model, the V-model, depicts the relationship of quality assurance actions to the actions associated with communication, modeling, and early code-construction activities. The team first moves down the left side of the V to refine the problem requirements.
Once code is generated, the team moves up the right side of the V, performing a series of tests that validate each of the models created as the team moved down the left side. The V-model thus provides a way of visualizing how verification and validation actions are applied to earlier engineering work.

The problems that are sometimes encountered when the waterfall model is applied are:
• Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly; as a result, changes can cause confusion as the project team proceeds.
• It is often difficult for the customer to state all requirements explicitly. The waterfall model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
• The customer must have patience. A working version of the program(s) will not be available until late in the project time span. A major blunder, if undetected until the working program is reviewed, can be disastrous.

2. Incremental Process Models - Incremental Model
• Used when initial requirements are reasonably well defined but the overall scope of the development effort precludes a purely linear process, or when there is a compelling need to deliver a limited set of new functions now and expand them in a later system release.
• It combines elements of linear and parallel process flows. Each linear sequence produces a deliverable increment of the software.
• The first increment is often a core product; many supplementary features are added later, after users work with it and evaluate it, so that subsequent modifications better meet their needs.
• The incremental process model focuses on the delivery of an operational product with each increment. Early increments are stripped-down versions of the final product, but they do provide capability that serves the user.

Spiral Model
• Each pass around the spiral results in adjustments to the project plan. Cost and schedule are adjusted based on feedback, and the number of iterations is adjusted by the project manager.
• The spiral model is well suited to developing large-scale systems, since the software evolves as the process progresses and risk can be understood and properly reacted to at each stage. Prototyping is used to reduce risk.
• However, it may be difficult to convince customers that the approach is controllable, and it demands considerable risk-assessment expertise.

Concurrent Model
• Allows a software team to represent iterative and concurrent elements of any of the process models. For example, the modeling activity defined for the spiral model is accomplished by invoking one or more of the following actions: prototyping, analysis, and design.
• Any activity may be in any one of its states at a given time. For example, the communication activity may have completed its first iteration and be in the awaiting-changes state while the modeling activity, previously inactive, makes a transition into the under-development state. If the customer indicates changes in requirements, the modeling activity moves from the under-development state into the awaiting-changes state.
• Concurrent modeling is applicable to all types of software development and provides an accurate picture of the current state of a project. Rather than confining software engineering activities, actions, and tasks to a sequence of events, it defines a process network. Each activity, action, or task on the network exists simultaneously with other activities, actions, or tasks, and events generated at one point trigger transitions among the states.
Specialized Process Models
Specialized process models take on many of the characteristics of one or more of the traditional models. However, these models tend to be applied when a specialized or narrowly defined software engineering approach is chosen.
• Component-Based Development
• The Formal Methods Model
• Aspect-Oriented Software Development

1. Component-Based Development
Commercial off-the-shelf (COTS) software components, developed by vendors who offer them as products, provide targeted functionality with well-defined interfaces that enable the components to be integrated into the software that is to be built. These components can be packaged as conventional software modules, object-oriented packages, or packages of classes. The steps involved in component-based development are:
• Available component-based products are researched and evaluated for the application domain in question.
• Component integration issues are considered.
• A software architecture is designed to accommodate the components.
• Components are integrated into the architecture.
• Comprehensive testing is conducted to ensure proper functionality.
The component-based development model leads to software reuse, and reusability gives software engineers a number of measurable benefits: reported results include a 70 percent reduction in development cycle time, an 84 percent reduction in project cost, and a productivity index of 26.2, compared with an industry norm of 16.9.

2. Formal Methods Model
• The formal methods model encompasses a set of activities that leads to a formal mathematical specification of computer software.
• Formal methods enable software engineers to specify, develop, and verify a computer-based system by applying a rigorous mathematical notation.
• Development of formal models is quite time consuming and expensive.
• Extensive training is needed to apply formal methods.
• It is difficult to use the model as a communication mechanism for technically unsophisticated customers.

3. Aspect-Oriented Software Development
• The aspect-oriented approach is based on the principle of identifying common program code within certain aspects and placing the common procedures outside the main business logic.
• The process of aspect orientation and software development may include modeling, design, programming, reverse engineering, and re-engineering.
• The domain of AOSD includes applications, components, and databases.
• Interaction with and integration into other paradigms is carried out with the help of frameworks, generators, programming languages, and architecture description languages (ADLs).

The Unified Process, Personal and Team Process Models
• The Unified Process is an iterative and incremental development process. It divides the project into four phases:
1. Inception
2. Elaboration
3. Construction
4. Transition
• The Inception, Elaboration, Construction, and Transition phases are divided into a series of time-boxed iterations. (The Inception phase may also be divided into iterations for a large project.)
• Each iteration results in an increment: a release of the system that contains added or improved functionality compared with the previous release.
• Although most iterations include work in most of the process disciplines (e.g. requirements, design, implementation, testing), the relative effort and emphasis change over the course of the project.
• Risk focused: the Unified Process requires the project team to focus on addressing the most critical risks early in the project life cycle.
The deliverables of each iteration, especially in the Elaboration phase, must be selected to ensure that the greatest risks are addressed first.
• Inception Phase
– Inception is the smallest phase in the project, and ideally it should be quite short. A long Inception phase usually indicates excessive up-front specification, which is contrary to the spirit of the Unified Process.
– Typical goals for the Inception phase:
• Establish a justification or business case for the project
• Establish the project scope and boundary conditions
• Outline the use cases and key requirements that will drive the design tradeoffs
• Outline one or more candidate architectures
• Identify risks
• Prepare a preliminary project schedule and cost estimate
– The Lifecycle Objective Milestone marks the end of the Inception phase.
• Elaboration Phase
– During the Elaboration phase the project team is expected to capture a majority of the system requirements. The primary goals of Elaboration are to address known risk factors and to establish and validate the system architecture.
– Common processes undertaken in this phase include the creation of use case diagrams, conceptual diagrams (class diagrams with only basic notation), and package diagrams (architectural diagrams).

Software Project Management: Estimation
Estimation is the attempt to determine how much money, effort, resources, and time it will take to build a specific software-based system or product. Estimation involves answering the following questions:
1. How much effort is required to complete each activity?
2. How much calendar time is needed to complete each activity?
3. What is the total cost of each activity?
Project cost estimation and project scheduling are normally carried out together. The costs of development are primarily the costs of the effort involved, so the effort computation is used in both the cost and the schedule estimates. Some cost estimation should be done before detailed schedules are drawn up; these initial estimates may be used to establish a budget for the project or to set a price for the software for a customer.
There are three parameters involved in computing the total cost of a software development project:
• Hardware and software costs, including maintenance
• Travel and training costs
• Effort costs (the costs of paying software engineers)
The following costs are all part of the total effort cost:
1. Costs of providing, heating, and lighting office space
2. Costs of support staff such as accountants, administrators, system managers, cleaners, and technicians
3. Costs of networking and communications
4. Costs of central facilities such as a library or recreational facilities
5. Costs of social security and employee benefits such as pensions and health insurance

Factors affecting software pricing

Introduction to LOC- and FP-based Estimation
Function Points:
• STEP 1: Measure size in terms of the amount of functionality in a system. Function points are computed by first calculating an unadjusted function point count (UFP).
Counts are made for the following categories:
– External inputs: items provided by the user that describe distinct application-oriented data (such as file names and menu selections)
– External outputs: items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these)
– External inquiries: interactive inputs requiring a response
– External files: machine-readable interfaces to other systems
– Internal files: logical master files in the system
• STEP 2: Multiply each count by a weight factor, according to the complexity (simple, average, or complex) of the parameter, as given by a standard weighting table.
• STEP 3: Sum the weighted counts to obtain the total UFP (unadjusted function points).
• STEP 4: Rate each technical complexity factor from 0 to 5 according to its importance to the system.
• STEP 5: Sum the resulting ratings to obtain DI (the degree of influence).
• STEP 6: The technical complexity factor is given by the formula
  TCF = 0.65 + 0.01 * DI
• STEP 7: Function points are then given by
  FP = UFP * TCF

Relation between LOC and FP
  LOC = Language Factor * FP
where LOC is lines of code and FP is function points.

The Basic COCOMO model computes effort as a function of program size. The Basic COCOMO equation is:
  E = a * KLOC^b
Effort coefficients for the three modes of Basic COCOMO:

  Mode           a     b
  Organic        2.4   1.05
  Semi-detached  3.0   1.12
  Embedded       3.6   1.20

The Intermediate COCOMO model computes effort as a function of program size and a set of cost drivers. The Intermediate COCOMO equation is:
  E = a * KLOC^b * EAF
where the coefficients a and b for the three modes are the same as above, and the total EAF (effort adjustment factor) is the product of the selected cost-driver factors. The adjusted effort, in person-months, is:
  APM = (Total EAF) * PM

A development process typically consists of the following stages:
• Requirements Analysis
• Design (High Level + Detailed)
• Implementation & Coding
• Testing (Unit + Integration)

Error Estimation
Calculate the estimated number of errors in your design, i.e. the total errors found in requirements, specifications, code, user manuals, and bad fixes:
– Adjust the function points calculated in STEP 1: AFP = FP^1.25
– Use the following table for calculating error estimates.

• Average Staffing Equation
  FSP = PM / TDEV
where FSP means full-time-equivalent software personnel, PM is effort in person-months, and TDEV is development time in months.

COCOMO is defined in terms of three different models:
– the Basic model,
– the Intermediate model, and
– the Detailed model.
The more complex models account for more of the factors that influence software projects and make more accurate estimates. One of the most important factors contributing to a project's duration and cost is the development mode:
• Organic Mode: The project is developed in a familiar, stable environment, and the product is similar to previously developed products. The product is relatively small and requires little innovation.
• Semidetached Mode: The project's characteristics are intermediate between Organic and Embedded.
• Embedded Mode: The project is characterized by tight, inflexible constraints and interface requirements. An embedded-mode project requires a great deal of innovation.
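To make the formulas above concrete, here is a minimal sketch that combines FP-based sizing with Basic/Intermediate COCOMO. The category weights, complexity ratings, and the language factor of 50 LOC per FP in the example are illustrative assumptions, and the function names are invented for this sketch; only the formulas themselves come from the notes.

```python
# Minimal sketch: FP-based sizing feeding Basic/Intermediate COCOMO.
# The (a, b) coefficients are the Basic COCOMO values from the notes;
# everything passed into the example call is an illustrative placeholder.

COCOMO_MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def function_points(weighted_counts, complexity_ratings):
    """STEP 3: UFP = sum of weighted category counts.
    STEPS 4-7: DI = sum of 0..5 ratings, TCF = 0.65 + 0.01*DI, FP = UFP*TCF."""
    ufp = sum(weighted_counts)
    di = sum(complexity_ratings)
    tcf = 0.65 + 0.01 * di
    return ufp * tcf

def cocomo_effort(kloc, mode="organic", eaf=1.0):
    """E = a * KLOC^b (Basic COCOMO); multiply by EAF for Intermediate."""
    a, b = COCOMO_MODES[mode]
    return a * (kloc ** b) * eaf

# Five category counts already multiplied by their weights, and fourteen
# technical complexity factors each rated 3 ("average influence").
fp = function_points([30, 25, 12, 21, 10], [3] * 14)
kloc = 50 * fp / 1000            # assumed language factor: 50 LOC per FP
effort = cocomo_effort(kloc, mode="organic")
print(f"FP = {fp:.1f}, KLOC = {kloc:.2f}, effort = {effort:.1f} person-months")
```

With these toy numbers the sketch yields roughly 105 FP, about 5 KLOC, and on the order of 14 person-months in organic mode; changing the mode or the EAF shows how strongly the exponent b and the cost drivers influence the estimate.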
The three development modes compare as follows:

  Feature                                                       Organic     Semidetached   Embedded
  Organizational understanding of product and objectives        Thorough    Considerable   General
  Experience in working with related software systems           Extensive   Considerable   Moderate
  Need for conformance with pre-established requirements        Basic       Considerable   Full
  Need for conformance with external interface specifications   Basic       Considerable   Full
  Concurrent development of new hardware and procedures         Some        Moderate       Extensive
  Need for innovative architectures and algorithms              Minimal     Some           Considerable
  Premium on early completion                                   Low         Medium         High
  Product size range                                            <50 KDSI    <300 KDSI      All

SCHEDULING
• You've selected an appropriate process model.
• You've identified the software engineering tasks that have to be performed.
• You've estimated the amount of work and the number of people, you know the deadline, and you've even considered the risks.
• Now it's time to connect the dots. That is, you have to create a network of software engineering tasks that will enable you to get the job done on time.
• Once the network is created, you have to assign responsibility for each task, make sure it gets done, and adapt the network as risks become reality.

Why is it important?
• In order to build a complex system, many software engineering tasks occur in parallel.
• The result of work performed during one task may have a profound effect on work to be conducted in another task.
• These interdependencies are very difficult to understand without a schedule.
• It is also virtually impossible to assess progress on a moderate or large software project without a detailed schedule.

What are the steps?
• The software engineering tasks dictated by the software process model are refined for the functionality to be built.
• Effort and duration are allocated to each task, and a task network (also called an "activity network") is created in a manner that enables the software team to meet the delivery deadline established.

Basic Concepts of Project Scheduling
Common reasons schedules slip:
• An unrealistic deadline established by someone outside the software development group and forced on managers and practitioners within the group.
• Changing customer requirements that are not reflected in schedule changes.
• An honest underestimate of the amount of effort and/or the number of resources that will be required to do the job.
• Predictable and/or unpredictable risks that were not considered when the project commenced.
• Technical difficulties that could not have been foreseen in advance.

What should we do when management demands a deadline that is impossible?
• Perform a detailed estimate using historical data from past projects.
• Determine the estimated effort and duration for the project.
• Using an incremental process model, develop a software engineering strategy that will deliver critical functionality by the imposed deadline, but delay other functionality until later. Document the plan.
• Meet with the customer and, using the detailed estimate, explain why the imposed deadline is unrealistic.

Project Scheduling
I. Basic Principles
II. The Relationship Between People and Effort
III. Effort Distribution

• Software project scheduling is an action that distributes estimated effort across the planned project duration by allocating the effort to specific software engineering tasks.
• During the early stages of project planning, a macroscopic schedule is developed.
• As the project gets under way, each entry on the macroscopic schedule is refined into a detailed schedule.
1. Basic Principles of Project Scheduling
1. Compartmentalization: The project must be compartmentalized into a number of manageable activities and tasks. To accomplish compartmentalization, both the product and the process are refined.
2. Interdependency: The interdependency of each compartmentalized activity or task must be determined. Some tasks must occur in sequence, while others can occur in parallel; other activities can occur independently.
3. Time allocation: Each task to be scheduled must be allocated some number of work units (e.g., person-days of effort). In addition, each task must be assigned a start date and a completion date, and it must be decided whether work will be conducted on a full-time or part-time basis.
4. Effort validation: Every project has a defined number of people on the software team. The project manager must ensure that no more than the allocated number of people have been scheduled at any given time.
5. Defined responsibilities: Every task that is scheduled should be assigned to a specific team member.
6. Defined outcomes: Every task that is scheduled should have a defined outcome. For software projects, the outcome is normally a work product (e.g., the design of a component) or a part of a work product. Work products are often combined in deliverables.
7. Defined milestones: Every task or group of tasks should be associated with a project milestone. A milestone is accomplished when one or more work products has been reviewed for quality and has been approved.
Each of these principles is applied as the project schedule evolves.

2. The Relationship Between People and Effort
• In a small software development project a single person can analyze requirements, perform design, generate code, and conduct tests. As the size of a project increases, more people must become involved.
• There is a common myth that is still believed by many managers who are responsible for software development projects: "If we fall behind schedule, we can always add more programmers and catch up later in the project."
• Unfortunately, adding people late in a project often has a disruptive effect, causing schedules to slip even further: the people who are added must learn the system, and the people who teach them are the same people who were doing the work.
3. Tracking Progress for an OO Project
Technical milestone: OO analysis complete
o All hierarchy classes defined and reviewed
o Class attributes and operations defined and reviewed
o Class relationships defined and reviewed
o Behavioral model defined and reviewed
o Reusable classes identified
Technical milestone: OO design complete
o Subsystems defined and reviewed
o Classes allocated to subsystems and reviewed
o Task allocation established and reviewed
o Responsibilities and collaborations identified
o Attributes and operations designed and reviewed
o Communication model created and reviewed
Technical milestone: OO programming complete
o Each new design model class implemented
o Classes extracted from the reuse library implemented
o Prototype or increment built
Technical milestone: OO testing
o The correctness and completeness of the OOA and OOD models reviewed
o Class-responsibility-collaboration network developed and reviewed
o Test cases designed, and class-level tests conducted for each class
o Test cases designed, cluster testing completed, and classes integrated
o System-level tests complete

Scheduling for WebApp Projects
• WebApp project scheduling distributes estimated effort across the planned time line (duration) for building each WebApp increment. This is accomplished by allocating the effort to specific tasks.
• The overall WebApp schedule evolves over time. During the first iteration, a macroscopic schedule is developed; this type of schedule identifies all WebApp increments and projects the dates on which each will be deployed.
• As the development of an increment gets under way, the entry for the increment on the macroscopic schedule is refined into a detailed schedule. Here, specific development tasks (required to accomplish an activity) are identified and scheduled.

EARNED VALUE ANALYSIS
• It is reasonable to ask whether there is a quantitative technique for assessing progress as the software team moves through the work tasks allocated to the project schedule. Such a technique does exist; it is called earned value analysis (EVA).
• To determine the earned value, the following steps are performed:
1. The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule. During estimation, the work (in person-hours or person-days) of each software engineering task is planned; hence BCWS_i is the effort planned for work task i. To determine progress at a given point along the project schedule, the value of BCWS is the sum of the BCWS_i values for all work tasks that should have been completed by that point in time on the project schedule.
2. The BCWS values for all work tasks are summed to derive the budget at completion (BAC):
   BAC = sum of BCWS_k over all tasks k
3. Next, the value for budgeted cost of work performed (BCWP) is computed. BCWP is the sum of the BCWS values for all work tasks that have actually been completed by a point in time on the project schedule.
• Given values for BCWS, BAC, and BCWP, important progress indicators can be computed:
   Schedule performance index: SPI = BCWP / BCWS
   Schedule variance: SV = BCWP - BCWS
• SPI indicates the efficiency with which the project is utilizing scheduled resources; an SPI value close to 1.0 indicates efficient execution of the project schedule. SV is simply an absolute indication of variance from the planned schedule.
• Percent scheduled for completion = BCWS / BAC gives the percentage of work that should have been completed by time t.
• Percent complete = BCWP / BAC gives a quantitative indication of the percent of completeness of the project at a given point in time t.
• It is also possible to compute the actual cost of work performed (ACWP): the sum of the effort actually expended on work tasks that have been completed by a point in time on the project schedule. It is then possible to compute
   Cost performance index: CPI = BCWP / ACWP
   Cost variance: CV = BCWP - ACWP
A CPI value close to 1.0 provides a strong indication that the project is within its defined budget; CV is an absolute indication of cost savings (against planned costs) or shortfall at a particular stage of a project. A minimal sketch of these computations follows.
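The sketch below computes the indicators defined above from a list of task records; the record layout and field names are invented for illustration and are not part of the notes.

```python
# Earned-value sketch. Each task carries its planned effort (BCWS_i, in
# person-days), whether it should be finished by now, and whether it is.

def earned_value(tasks):
    bac = sum(t["bcws"] for t in tasks)                       # budget at completion
    bcws = sum(t["bcws"] for t in tasks if t["scheduled"])    # planned so far
    bcwp = sum(t["bcws"] for t in tasks if t["completed"])    # earned so far
    acwp = sum(t["actual"] for t in tasks if t["completed"])  # actually spent
    return {
        "SPI": bcwp / bcws,            # schedule performance index
        "SV": bcwp - bcws,             # schedule variance
        "pct_scheduled": bcws / bac,   # % of work that should be done by now
        "pct_complete": bcwp / bac,    # % of work actually done by now
        "CPI": bcwp / acwp,            # cost performance index
        "CV": bcwp - acwp,             # cost variance
    }

tasks = [
    {"bcws": 10, "actual": 12, "scheduled": True,  "completed": True},
    {"bcws": 8,  "actual": 7,  "scheduled": True,  "completed": True},
    {"bcws": 6,  "actual": 0,  "scheduled": True,  "completed": False},
    {"bcws": 12, "actual": 0,  "scheduled": False, "completed": False},
]
print(earned_value(tasks))  # SPI = 18/24 = 0.75: the project is behind schedule
```

An SPI of 0.75 and a negative SV flag the slip long before the deadline arrives, which is exactly the early-warning role EVA is meant to play.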
RFP RISK MANAGEMENT
A hazard is any real or potential condition that can cause injury, illness, or death to personnel; damage to or loss of a system, equipment, or property; or damage to the environment. Put more simply, a hazard is a threat of harm, and a hazard can lead to one or several consequences.
A risk is:
• the expectation of a loss or damage (consequence);
• the combined severity and probability of a loss;
• the long-term rate of loss;
• a potential problem (leading to a loss) that may or may not occur in the future.
• Risk management is a set of practices and support tools to identify, analyze, and treat risks explicitly.
• Treating a risk means understanding it better, avoiding or reducing it (risk mitigation), or preparing for the risk to materialize.
• Risk management tries to reduce both the probability that a risk will occur and the impact (loss) caused by risks that do.

Reactive versus Proactive Risk Strategies
• The majority of software teams rely solely on reactive risk strategies. At best, a reactive strategy monitors the project for likely risks, and resources are set aside to deal with them should they become actual problems.
• More commonly, the software team does nothing about risks until something goes wrong; then the team flies into action in an attempt to correct the problem rapidly. This is often called fire-fighting mode.
• A considerably more intelligent strategy for risk management is to be proactive. A proactive strategy begins long before technical work is initiated: potential risks are identified, their probability and impact are assessed, and they are ranked by importance.
• The software team then establishes a plan for managing risk. The primary objective is to avoid risk, but because not all risks can be avoided, the team works to develop a contingency plan that will enable it to respond in a controlled and effective manner.
• Risk always involves two characteristics: uncertainty (the risk may or may not happen; there are no 100-percent-probable risks) and loss (if the risk becomes a reality, unwanted consequences or losses will occur).
• When risks are analyzed, it is important to quantify the level of uncertainty and the degree of loss associated with each risk.
NON-FUNCTIONAL REQUIREMENTS
• A non-functional requirement is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors.
• The plan for implementing non-functional requirements is detailed in the system architecture.
• Non-functional requirements are often called qualities of a system. Other terms for non-functional requirements are "constraints", "quality attributes", "quality goals", "quality of service requirements", and "non-behavioral requirements".
• These define system properties and constraints, e.g. reliability, response time, and storage requirements. Constraints include I/O device capability, system representations, etc.
• Process requirements may also be specified, mandating a particular CASE system, programming language, or development method.
• Non-functional requirements may be more critical than functional requirements: if they are not met, the system may be useless.

Non-Functional Requirements Classifications
• Product requirements: requirements which specify that the delivered product must behave in a particular way, e.g. execution speed, reliability, etc.
• Organisational requirements: requirements which are a consequence of organisational policies and procedures, e.g. process standards used, implementation requirements, etc.
• External requirements: requirements which arise from factors external to the system and its development process, e.g. interoperability requirements, legislative requirements, etc.

Non-Functional Requirements Examples
• Product requirement: The user interface for the system shall be implemented as simple HTML without frames or Java applets.
• Organisational requirement: The system development process and deliverable documents shall conform to the process and deliverables defined in XYZCo-SP-STAN-95.
• External requirement: The system shall not disclose any personal information about customers apart from their name and reference number to the operators of the system.
Non-Functional Requirements Measures

  Property     Measures
  Speed        Processed transactions/second; user/event response time; screen refresh time
  Size         Megabytes; number of ROM chips
  Ease of use  Training time; number of help frames
  Reliability  Mean time to failure; probability of unavailability; rate of failure occurrence; availability
  Robustness   Time to restart after failure; percentage of events causing failure; probability of data corruption on failure
  Portability  Percentage of target-dependent statements; number of target systems

User Requirements and System Requirements

Business Requirements
• A high-level business objective of the organization that builds a product, or of a customer who procures it.
• Generally stated by the business owner or sponsor of the project. Examples: a system is needed to track the attendance of employees; a system is needed to account for the inventory of the organization.

Contents of Business Requirements:
• Purpose, in scope, out of scope, targeted audiences
• Use case diagrams
• Data requirements
• Non-functional requirements
• Interface requirements
• Limitations
• Risks
• Assumptions
• Reporting requirements
• Checklists

User Requirements
• A user requirement refers to a function that the user requires a system to perform.
• User requirements are made through statements in natural language and diagrams of the services the system provides and its operational constraints; they are written for customers.
• User requirements are set by the client and confirmed before system development. For example, in a system for a bank the user may require a function to calculate interest over a set time period.

System Requirements
• A system requirement is a more technical requirement, often relating to the hardware or software required for a system to function.
– A system requirement may be something like "The system must run on a server with IIS."
– System requirements may also include validation requirements such as "File upload is limited to .xls format."
• System requirements are used mainly by developers throughout the development life cycle; the client will usually have less interest in these lower-level requirements.
• They are set out in a structured document giving detailed descriptions of the system's functions, services, and operational constraints.

The Software Requirements Specification (SRS) Document
These might be object models and data-flow models • System Evolution – This should describe the fundamental assumptions on which the system is based and anticipated changes due to hardware evolution, changing user needs etc • Appendices – These should provide detailed, specific information which is related to the application which is being developed. E.g. Appendices that may include hardware and database descriptions. • Index – Several indexes to the document may be included Requirements Engineering Processes: 41 • A customer says ― I know you think you understand what I said, but what you don‘t understand is what I said is not what I mean‖ • Requirement engineering helps software engineers to better understand the problem to solve. • It is carried out by software engineers (analysts) and other project stakeholders • It is important to understand what the customer wants before one begins to design and build a computer based system • Work products include user scenarios, functions and feature lists, analysis models Requirements engineering (RE) is a systems and software engineering process which covers all of the activities involved in discovering, documenting and maintaining a set of requirements for a computer-based system The processes used for RE vary widely depending on the application domain, the people involved and the organisation developing the requirements. • Activities within the RE process may include: – Requirements elicitation - discovering requirements from system stakeholders – Requirements Analysis and negotiation - checking requirements and resolving stakeholder conflicts – Requirements specification (Software Requirements Specification)- documenting the requirements in a requirements document – System modeling - deriving models of the system, often using a notation such as the Unified Modeling Language – Requirements validation - checking that the documented requirements and models are consistent and meet stakeholder needs – Requirements management - managing changes to the requirements as the system is developed and put into use • Requirements Engineering Processes: Feasibility studies 42 • The purpose of feasibility study is not to solve the problem, but to determine whether the problem is worth solving. • A feasibility study decides whether or not the proposed system is worthwhile. • The feasibility study concentrates on the following area. – Operational Feasibility – Technical Feasibility – Economic Feasibility • A short focused study that checks – If the system contributes to organisational objectives; – If the system can be engineered using current technology and within budget; – If the system can be integrated with other systems that are used. Based on information assessment (what is required), information collection and report writing. • Questions for people in the organisation – What if the system wasn‘t implemented? – What are current process problems? – How will the proposed system help? – What will be the integration problems? – Is new technology needed? What skills? – What facilities must be supported by the proposed system? Requirement Elicitation and Analysis: Requirement discovery, Interviewing, Requirements analysis in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. 
Requirements analysis is critical to the success of a systems or software project. The requirements should be documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirements analysis includes three types of activities – Eliciting requirements: The task of identifying the various types of requirements from various sources including project documentation, (e.g. the project charter or definition), business process documentation, and stakeholder interviews. This is sometimes also called requirements gathering. – Analyzing requirements: determining whether the stated requirements are clear, 45 Goals of the Interview – At each level, each phase, and with each interviewee, an interview may be conducted to: • Gather information on the company • Gather information on the function • Gather information on processes or activities • Uncover problems • Conduct a needs determination • Verification of previously gathered facts • Gather opinions or viewpoints • Provide information • Obtain leads for further interviews Interviews are two types of interview – Closed interviews where a pre-defined set of questions are answered. – Open interviews where there is no pre-defined agenda and a range of issues are explored with stakeholders. • Normally a mix of closed and open-ended interviewing is undertaken. • Interviews are not good for understanding domain requirements – Requirements engineers cannot understand specific domain terminology; – Some domain knowledge is so familiar that people find it hard to articulate or think that it isn‘t worth articulating. • Effective Interviewers – Interviewers should be open-minded, willing to listen to stakeholders and should not have pre-conceived ideas about the requirements. – They should prompt the interviewee with a question or a proposal and should not simply expect them to respond to a question such as ‗what do you want‘. • Information form interviews supplement other information about the system from documents, user observations, and so on • Sometimes, apart from information from documents, interviews may be the only source of information about the system requirements • It should be used alongside other requirements elicitation techniques 46 Scenarios, Use cases, Ethnography: 1. Scenarios: • Scenarios are real-life examples of how a system can be used. • Scenarios can be particularly useful for adding detail to an outline requirements description. • Each scenario covers one or more possible interactions • Several forms of scenarios can be developed, each of which provides different types of information at different levels of detail about the system • Scenarios may be written as text, supplemented by diagrams, screen shots and so on. • A scenario may include – A description of the starting situation; – A description of the normal flow of events; – A description of what can go wrong; – Information about other concurrent activities that might be going on at the same time – A description of the system state when the scenario finishes. Scenario-based elicitation involves working with stakeholders to identify scenarios and to capture details to be included in these scenarios. Scenarios may be written as text, supplemented by diagrams, screen shots, etc. Alternatively, a more structured approach such as event scenarios or use cases may be used. 2. 
2. Use Cases
• Use cases are a scenario-based technique in the UML which identifies the actors in an interaction and describes the interaction itself.
• A set of use cases should describe all possible interactions with the system.
• Sequence diagrams may be used to add detail to use cases by showing the sequence of event processing in the system.
• The use-case approach helps with requirements prioritization. (Example: an article-printing use case.)
A use case can have high priority because:
– it describes one of the business processes that the system enables;
– many users will use it frequently;
– a favoured user class requested it;
– it provides a capability that is required for regulatory compliance;
– other system functions depend on its presence.

Social and Organisational Factors
• Software systems are used in a social and organisational context. This can influence or even dominate the system requirements.
• Social and organisational factors are not a single viewpoint but have influences on all viewpoints.
• Good analysts must be sensitive to these factors, but there is currently no systematic way to tackle their analysis.

3. Ethnography
• A social scientist spends a considerable time observing and analysing how people actually work.
• People do not have to explain or articulate their work.
• Social and organisational factors of importance may be observed.
• Ethnographic studies have shown that work is usually richer and more complex than suggested by simple system models.

Focused Ethnography
• Developed in a project studying the air traffic control process.
• Combines ethnography with prototyping.
• Prototype development results in unanswered questions which focus the ethnographic analysis.
• The problem with ethnography is that it studies existing practices, which may have a historical basis that is no longer relevant.

Ethnography and Prototyping
The ethnography informs the development of the prototype, so that fewer prototype refinement cycles are required. Furthermore, the prototyping focuses the ethnography by identifying problems and questions that can then be discussed with the ethnographer.

Volatile Requirements
– These are requirements that are likely to change during the system development process or after the system has become operational.
– Examples of volatile requirements are requirements resulting from government health-care policies or health-care charging mechanisms.

Traceability
Traceability is concerned with the relationships between requirements, their sources, and the system design.
• Source traceability: links from requirements to the stakeholders who proposed them.
• Requirements traceability: links between dependent requirements.
• Design traceability: links from the requirements to the design.
• Requirements storage: requirements should be managed in a secure, managed data store.
• Change management: the process of change management is a workflow process whose stages can be defined and the information flow between these stages partially automated.
• Traceability management: automated retrieval of the links between requirements. A minimal sketch of such a store follows.
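As an illustration of traceability management, here is a minimal sketch of a store that records typed links between requirements and retrieves them automatically; the link types and identifiers are invented for the example and are not part of the notes.

```python
# Minimal traceability-store sketch: typed links between requirement IDs
# (source, requirements, and design traceability) with simple retrieval.

from collections import defaultdict

class TraceabilityStore:
    def __init__(self):
        # maps link type -> requirement ID -> set of linked items
        self.links = defaultdict(lambda: defaultdict(set))

    def add(self, link_type, req_id, target):
        """Record a link, e.g. ('source', 'R12', 'stakeholder:ops-team')."""
        self.links[link_type][req_id].add(target)

    def trace(self, link_type, req_id):
        """Automated retrieval of the links for one requirement."""
        return sorted(self.links[link_type][req_id])

store = TraceabilityStore()
store.add("source", "R12", "stakeholder:ops-team")       # source traceability
store.add("depends-on", "R12", "R7")                     # requirements traceability
store.add("design", "R12", "component:ReportGenerator")  # design traceability

print(store.trace("design", "R12"))  # ['component:ReportGenerator']
```

In practice such links live in a requirements management or CASE tool rather than in an in-memory dictionary, but the retrieval operation is the same idea.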
Requirements Management Planning
• During the requirements engineering process, one has to plan:
– Requirements identification
• How requirements are individually identified;
– A change management process
• The process followed when analysing a requirements change;
– Traceability policies
• The amount of information about requirements relationships that is maintained;
– CASE tool support
• The tool support required to help manage requirements change.
• Change management should apply to all proposed changes to the requirements.
• Principal stages
– Problem analysis: discuss the requirements problem and propose the change;
– Change analysis and costing: assess the effects of the change on other requirements;
– Change implementation: modify the requirements document and other documents to reflect the change.
Classical Analysis: Structured System Analysis
• Throughout the phases of analysis and design, the analyst should proceed step by step, obtaining feedback from users and analyzing the design for omissions and errors.
• Moving too quickly to the next phase may require the analyst to rework portions of the design that were produced earlier.
• Structured methods organize a project into small, well-defined activities and specify the sequence and interaction of these activities.
• They use diagrammatic and other modeling techniques to give a more precise (structured) definition that is understandable by both users and developers.
• Structured analysis provides clear requirements statements that everyone can understand, and is a firm foundation for subsequent design and implementation.
• Part of the problem with systems analysts simply asking 'the right questions' is that it is often difficult for a technical person to describe system concepts in a way the user can understand.
• Structured methods generally include the use of easily understood, non-technical diagrammatic techniques.
• It is important that these diagrams do not contain computer jargon or technical detail that the user won't understand, and does not need to understand.
High-Level Petri Nets
• The classical Petri net was invented by Carl Adam Petri in 1962.
• A lot of research has been conducted on them (more than 10,000 publications).
• Until 1985 they were mainly used by theoreticians.
• Since the 80's their practical use has increased because of the introduction of high-level Petri nets and the availability of many tools.
• High-level Petri nets are Petri nets extended with:
o colour (for the modelling of attributes)
o time (for performance analysis)
o hierarchy (for the structuring of models, DFDs)
Why do we need Petri nets?
– Petri nets can be used to rigorously define a system (reducing ambiguity, making the operations of a system clear, allowing us to prove properties of a system, etc.).
– They are often used for distributed systems (with several subsystems acting independently) and for systems with resource sharing.
– Since more than one transition in a Petri net may be active at the same time (and we do not know which will 'fire' first), Petri nets are non-deterministic.
A Petri net is a network composed of places (drawn as circles) and transitions (drawn as rectangles). Connections are directed and run between a place and a transition, or between a transition and a place (e.g. between p1 and t1, or between t1 and p2). Tokens (drawn as black dots) are the dynamic objects.
Enabling Condition
– Transitions are the active components; places and tokens are passive components.
– A transition is enabled if each of its input places contains at least one token. In the example, transition t1 is not enabled; transition t2 is enabled.
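To make the enabling rule concrete, here is a minimal Python sketch of a classical (low-level) Petri net, using the two-transition example just described: p1 feeds t1, p2 feeds t2, and only p2 holds a token. The class and method names are illustrative assumptions, not a standard library.

```python
# A minimal sketch of a classical Petri net with an enabling check
# and a firing rule (one token consumed per input place, one token
# produced per output place).

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        # A transition is enabled if every input place holds a token.
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:                  # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:                 # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"p1": 0, "p2": 1, "p3": 0})
net.add_transition("t1", inputs=["p1"], outputs=["p3"])
net.add_transition("t2", inputs=["p2"], outputs=["p3"])

print(net.enabled("t1"))  # False: p1 holds no token
print(net.enabled("t2"))  # True: p2 holds a token
net.fire("t2")
print(net.marking)        # {'p1': 0, 'p2': 0, 'p3': 1}
```

When two enabled transitions share an input place, the net itself does not say which fires first; that choice is exactly the non-determinism discussed next.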
Non-Determinism in Petri Nets
Two transitions may fight for the same token: a conflict. Even if there are two tokens, there is still a conflict. The next transition to fire (t1 or t2) is arbitrary (non-deterministic).
Data Dictionary
I. A tool for recording and processing information (metadata) about the data that an organization uses.
II. A central catalogue for metadata.
III. Can be integrated within the DBMS or be separate.
IV. May be referenced during system design, programming, and by actively-executing programs.
V. Can be used as a repository for common code (e.g. library routines).
Quality guidelines:
• A design should contain distinct representations of data, architecture, interfaces, and components.
• A design should lead to data structures that are appropriate for the classes to be implemented and are drawn from recognizable data patterns.
• A design should lead to components that exhibit independent functional characteristics.
• A design should lead to interfaces that reduce the complexity of connections between components and with the external environment.
• A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.
• A design should be represented using a notation that effectively communicates its meaning.
Quality attributes
• Functionality is assessed by evaluating the feature set and capabilities of the program, the generality of the functions that are delivered, and the security of the overall system.
• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, the mean-time-to-failure, the ability to recover from failure, and the predictability of the program.
• Performance is measured by processing speed, response time, resource consumption, throughput, and efficiency.
• Supportability combines the ability to extend the program, adaptability, serviceability, testability, compatibility, configurability, the ease with which a system can be installed, and the ease with which problems can be localized.
2. The Evolution of Software Design:
• The evolution of software design is a continuing process that has now spanned almost six decades.
• All these methods have a number of common characteristics:
1. A mechanism for the translation of the requirements model into a design representation,
2. A notation for representing functional components and their interfaces,
3. Heuristics for refinement and partitioning, and
4. Guidelines for quality assessment.
Regardless of the design method that is used, you should apply a set of basic concepts to data, architectural, interface, and component-level design. These concepts are considered in the sections that follow.
Design Concepts:
1. Abstraction
– Abstraction is the process by which data and programs are defined with a representation similar in form to its meaning (semantics), while hiding away the implementation details.
– Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time.
– At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At the lower levels of abstraction, a more detailed description of the solution is provided.
Abstraction can be:
• Data abstraction: a named collection of data that describes a data object. A data abstraction for 'door' would encompass a set of attributes that describe the door (e.g. door type, swing direction, opening mechanism, weight, dimensions).
• Procedural abstraction: a sequence of instructions that has a specific and limited function. The name of the procedural abstraction implies the function, but specific details are suppressed; e.g. 'open' for a door implies a long sequence of procedural steps (walk to the door, reach out and grasp the knob, turn the knob and pull the door, step away from the moving door, etc.). The procedural abstraction 'open' would make use of information contained in the attributes of the data abstraction 'door'.
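As a small illustration of the 'door'/'open' example above, the following Python sketch pairs a data abstraction with a procedural abstraction. The attribute and function names are assumptions made for the example, not taken from the notes.

```python
# Data abstraction: a named collection of attributes describing a door.
from dataclasses import dataclass

@dataclass
class Door:
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float
    is_open: bool = False

def open_door(door: Door) -> None:
    """Procedural abstraction: 'open' names a whole sequence of steps
    (walk to the door, grasp the knob, turn, pull, ...) while the
    details stay suppressed; it uses the data abstraction's attributes."""
    if door.opening_mechanism == "knob":
        pass  # turn knob, then pull or push according to swing_direction
    door.is_open = True

front = Door("panel", "inward", "knob", 25.0)
open_door(front)
print(front.is_open)  # True
```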
2. Architecture
– Architecture is the structure or organization of program components (modules), the manner in which these components interact, and the structure of the data that are used by the components.
Architectural design can be represented using one or more of the following models:
– Structural models represent architecture as an organized collection of program components.
– Framework models increase the level of design abstraction by attempting to identify repeatable architectural design frameworks that are encountered in similar types of application.
– Dynamic models address the behavioural aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events.
– Process models focus on the design of the business or technical process that the system must accommodate.
– Functional models can be used to represent the functional hierarchy of a system.
3. Patterns
– A design pattern describes a design structure that solves a particular design problem within a specific context.
– Each design pattern provides a description that enables a designer to determine:
• Whether the pattern is applicable to the current work
• Whether the pattern can be reused
• Whether the pattern can serve as a guide for developing a similar, but functionally or structurally different pattern
4. Separation of Concerns
• Separation of concerns is a design concept that suggests that any complex problem can be more easily handled if it is subdivided into pieces that can each be solved and/or optimized independently.
5. Refactoring
– Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure. It is an important design activity suggested by many agile methods: a reorganization technique that simplifies the design (or code) of a component without changing its function or behavior.
– When software is refactored, the existing design is examined for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed or inappropriate data structures, or any other design failure that can be corrected to yield a better design.
6. Aspects
• As requirements analysis occurs, a set of "concerns" is uncovered. These concerns "include requirements, use cases, features, data structures, quality-of-service issues, variants, intellectual property boundaries, collaborations, patterns and contracts".
• Ideally, a requirements model can be organized in a way that allows you to isolate each concern (requirement) so that it can be considered independently.
• In practice, however, some of these concerns span the entire system and cannot be easily compartmentalized. As design begins, requirements are refined into a modular design representation.
• Consider two requirements, A and B. Requirement A crosscuts requirement B "if a software decomposition [refinement] has been chosen in which B cannot be satisfied without taking A into account". An aspect is a representation of such a crosscutting concern.
7. Object-Oriented Design Concepts
• The object-oriented (OO) paradigm is widely used in modern software engineering.
• OO design concepts include classes and objects, inheritance, messages, and polymorphism, among others.
8. Design Classes
– As the design model evolves, a set of design classes is defined that:
• Refine the analysis classes by providing design detail that will enable the classes to be implemented;
• Create a new set of design classes that implement a software infrastructure to support the business solution.
The Design Model:
The design model can be viewed along two dimensions:
– The process dimension indicates the evolution of the design model as design tasks are executed as part of the software process.
– The abstraction dimension represents the level of detail as each element of the analysis model is transformed into a design equivalent and then refined iteratively.
The design model elements are as follows:
• Data design elements
• Architectural design elements
• Interface design elements
• Component-level design elements
• Deployment-level design elements
Figure: Dimensions of the design model. Along the abstraction dimension, analysis-model elements (class diagrams, analysis packages, CRC models, collaboration diagrams, use-case text and diagrams, activity and swimlane diagrams, data-flow and control-flow diagrams, processing narratives, state and sequence diagrams) are refined along the process dimension into design-model elements: architecture elements, interface elements (technical interface design, navigation design, GUI design), component-level elements (design class realizations, subsystems, collaboration, component, activity and sequence diagrams), and deployment-level elements (deployment diagrams).
1. Data design elements
– Data design creates a model of data and/or information that is represented at a high level of abstraction.
– This data model is then refined into progressively more implementation-specific representations that can be processed by the computer-based system.
– At the architectural level: databases and files. At the component level: data structures.
2. Architectural design elements
• The architectural design for software is the equivalent of the floor plan of a house. The floor plan depicts the overall layout of the rooms; their size, shape, and relationship to one another; and the doors and windows that allow movement into and out of the rooms. The floor plan gives us an overall view of the house. Architectural design elements give us an overall view of the software.
– The architectural model is derived from:
• Information about the application domain for the software to be built;
• Specific requirements model elements such as data flow diagrams or analysis classes, their relationships and collaborations for the problem at hand;
• The availability of architectural patterns and styles.
3. Interface design elements
– The interface design elements for software tell how information flows into and out of the system and how it is communicated among the components defined as part of the architecture.
– Important elements of interface design:
• The user interface (UI): usability design incorporates aesthetic elements (e.g., layout, color, graphics, interaction mechanisms), ergonomic elements (e.g., information layout and placement, metaphors, UI navigation), and technical elements (e.g., UI patterns, reusable components). In general, the UI is a unique subsystem within the overall application architecture.
• External interfaces to other systems, devices, networks, or other producers or consumers of information. The design of external interfaces requires definitive information about the entity to which information is sent or from which it is received.
• Internal interfaces between various design components. The design of internal interfaces is closely aligned with component-level design.
Architectural Descriptions
• Each of us has a mental image of what the word architecture means. In reality, however, it means different things to different people.
• The implication is that different stakeholders will see an architecture from different viewpoints that are driven by different sets of concerns.
• An architectural description is actually a set of work products that reflect different views of the system.
• An architectural description of a software-based system must exhibit characteristics that are analogous to those noted for an office building.
• Developers want clear, decisive guidance on how to proceed with design.
• Customers want a clear understanding of the environmental changes that must occur and assurances that the architecture will meet their business needs.
• Other architects want a clear, salient understanding of the architecture's key aspects. Each of these "wants" is reflected in a different view represented using a different viewpoint.
Architectural Decisions
• Each view developed as part of an architectural description addresses a specific stakeholder concern.
• To develop each view (and the architectural description as a whole), the system architect considers a variety of alternatives and ultimately decides on the specific architectural features that best meet the concern.
• Therefore, architectural decisions themselves can be considered to be one view of the architecture.
• The reasons that decisions were made provide insight into the structure of a system and its conformance to stakeholder concerns.
Software Architectural Styles
1. A Brief Taxonomy of Architectural Styles
2. Architectural Patterns
3. Organization and Refinement
• The software that is built for computer-based systems exhibits one of many architectural styles.
• Each style describes a system category that encompasses:
– A set of component types that perform a function required by the system;
– A set of connectors (subroutine call, remote procedure call, data stream, socket) that enable communication, coordination, and cooperation among components;
– Constraints that define how components can be integrated to form the system;
– Semantic models that enable a designer to understand the overall properties of a system by analyzing the known properties of its constituent parts.
A Brief Taxonomy of Architectural Styles
• Data-centered architectures. A data store (e.g., a file or database) resides at the center of this architecture and is accessed frequently by other components that update, add, delete, or otherwise modify data within the store.
• The figure illustrates a typical data-centered style. Client software accesses a central repository. In some cases the data repository is passive; that is, client software accesses the data independent of any changes to the data or the actions of other client software. A variation on this approach transforms the repository into a "blackboard" that notifies client software of changes to data of interest.
Figure: Data-centered style.
Data-flow architectures
• This architecture is applied when input data are to be transformed through a series of computational or manipulative components into output data.
• A pipe-and-filter pattern has a set of components, called filters, connected by pipes that transmit data from one component to the next.
• Each filter works independently of those components upstream and downstream, is designed to expect data input of a certain form, and produces data output (to the next filter) of a specified form.
• However, the filter does not require knowledge of the workings of its neighboring filters.
Call-and-return architectures
• This architectural style enables you to achieve a program structure that is relatively easy to modify and scale.
• A number of substyles exist within this category:
• Main program/subprogram architectures: this classic program structure decomposes function into a control hierarchy where a "main" program invokes a number of program components that in turn may invoke still other components. The figure illustrates an architecture of this type.
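A minimal Python sketch of the pipe-and-filter style described above: three hypothetical filters connected by generator "pipes". Each filter assumes only the form of the data flowing through the pipe, not the workings of its neighbours.

```python
# Pipe-and-filter sketch: each filter is an independent generator;
# composition of generators plays the role of the pipes.

def read_source(lines):           # filter 1: produce raw records
    for line in lines:
        yield line.strip()

def validate(records):            # filter 2: pass only well-formed records
    for r in records:
        if r and not r.startswith("#"):
            yield r

def transform(records):           # filter 3: convert to output form
    for r in records:
        yield r.upper()

pipeline = transform(validate(read_source(["# comment", "alpha", "", "beta"])))
print(list(pipeline))  # ['ALPHA', 'BETA']
```

Because each filter only agrees on the data format, any filter can be replaced or reordered without the others changing, which is the main appeal of the style.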
Architectural Design
• As architectural design begins, the software to be developed must be put into context; that is, the design should define the external entities (other systems, devices, people) that the software interacts with and the nature of the interaction. The steps are:
1. Represent the system in context
2. Define archetypes
3. Refine the architecture into components
4. Describe instantiations of the system
1. Represent the System in Context
• Use an architectural context diagram (ACD) that shows:
– The identification and flow of all information into and out of a system;
– The specification of all interfaces;
– Any relevant support processing from/by other systems.
• An ACD models the manner in which software interacts with entities external to its boundaries.
• An ACD identifies systems that interoperate with the target system:
– Superordinate systems
• Systems that use the target system as part of some higher-level processing scheme;
– Subordinate systems
• Systems that are used by the target system and provide data or processing necessary to complete target system functionality;
– Peer-level systems
• Systems that interact on a peer-to-peer basis with the target system (information is either produced or consumed by the peers and the target system);
– Actors
• People or devices that interact with the target system to produce or consume data.
2. Define Archetypes
• Archetypes indicate the important abstractions within the problem domain (i.e., they model information).
• An archetype is a class or pattern that represents a core abstraction that is critical to the design of an architecture for the target system.
• Only a relatively small set of archetypes is required in order to design even relatively complex systems.
• The target system architecture is composed of these archetypes:
– They represent stable elements of the architecture;
– They may be instantiated in different ways based on the behavior of the system;
– They can be derived from the analysis class model.
• The archetypes and their relationships can be illustrated in a UML class diagram.
Archetypes in Software Architecture (home security function)
• Node – represents a cohesive collection of input and output elements of the home security function.
• Detector/Sensor – an abstraction that encompasses all sensing equipment that feeds information into the target system.
• Indicator – an abstraction that represents all mechanisms (e.g., alarm siren, flashing lights, bell) for indicating that an alarm condition is occurring.
• Controller – an abstraction that depicts the mechanism that allows the arming or disarming of a node. If controllers reside on a network, they have the ability to communicate with one another.
Figure: Archetypes and their attributes.
Figure: Archetypes and their methods.
3. Refine the Architecture into Components
• Based on the archetypes, the architectural designer refines the software architecture into components to illustrate the overall structure and architectural style of the system.
• These components are derived from various sources:
– The application domain provides application components, which are the domain classes in the analysis model that represent entities in the real world;
– The infrastructure domain provides design components (i.e., design classes) that enable application components but have no business connection.
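The archetypes above can be rendered as classes. The following Python sketch is one possible reading of the Node, Detector/Sensor, Indicator, and Controller archetypes for the home-security function; the attribute and method names are assumptions made for illustration, not taken from the notes.

```python
class Detector:
    """Abstraction over all sensing equipment feeding the system."""
    def read(self) -> bool:
        return False  # a concrete sensor would report its real state

class Indicator:
    """Abstraction over alarm sirens, flashing lights, bells, ..."""
    def signal(self) -> None:
        print("alarm condition!")

class Node:
    """A cohesive collection of input (detector) and output (indicator) elements."""
    def __init__(self, detectors, indicators):
        self.detectors = detectors
        self.indicators = indicators

    def poll(self) -> None:
        # If any detector trips, drive every indicator on this node.
        if any(d.read() for d in self.detectors):
            for i in self.indicators:
                i.signal()

class Controller:
    """Mechanism that arms or disarms a node."""
    def __init__(self, node: Node):
        self.node = node
        self.armed = False

    def arm(self) -> None:
        self.armed = True

    def disarm(self) -> None:
        self.armed = False
```

Each archetype stays stable while being instantiated in different ways (different sensor types, different indicators), which is exactly the role archetypes play in the architecture.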
Architectural Mapping Using Data Flow
• Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be mapped into a specific architectural style.
– Information must enter and exit the software in an "external world" form. Such externalized data must be converted into an internal form for processing. Information enters along paths that transform external data into an internal form; these paths are identified as incoming flow.
– Incoming data are transformed through a transform center and move along paths that now lead "out" of the software. Data moving along these paths are called outgoing flow.
• Transaction Flow
– Information flow is often characterized by a single data item, called a transaction, that triggers other data flow along one of many paths.
– Transaction flow is characterized by data moving along an incoming path that converts external world information into a transaction.
– The transaction is evaluated and, based on its value, flow along one of many action paths is initiated. The hub of information from which many action paths emanate is called a transaction center.
Transform Mapping
1. Review the fundamental system model.
2. Review and refine data flow diagrams for the software.
3. Determine whether the DFD has transform or transaction flow characteristics.
4. Isolate the transform center by specifying incoming and outgoing flow boundaries.
5. Perform "first-level factoring."
6. Perform "second-level factoring."
7. Refine the first-iteration architecture using design heuristics for improved software quality.
Figure: Transform mapping. A data flow model (transforms a through j) is mapped into a call-and-return program structure with control modules x1 through x4, which is then refined by factoring.
Transaction Mapping
1. Review the fundamental system model.
2. Review and refine data flow diagrams for the software.
3. Determine whether the DFD has transform or transaction flow characteristics.
4. Isolate the transaction center and the flow characteristics along each of the action paths.
5. Map the DFD onto a program structure amenable to transaction processing.
6. Factor and refine the transaction structure and the structure of each action path.
7. Refine the first-iteration architecture using design heuristics for improved software quality.
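As an illustration of a transaction center, the sketch below evaluates a hypothetical transaction value and initiates flow along one of several action paths. The transaction names and handlers are invented for the example.

```python
# Transaction-center sketch: the hub from which action paths emanate.

def action_add(data):    print("add:", data)
def action_delete(data): print("delete:", data)
def action_update(data): print("update:", data)

DISPATCH = {
    "add": action_add,
    "delete": action_delete,
    "update": action_update,
}

def transaction_center(transaction: str, data) -> None:
    # Evaluate the transaction and, based on its value, initiate
    # flow along exactly one of the action paths.
    handler = DISPATCH.get(transaction)
    if handler is None:
        raise ValueError(f"unknown transaction: {transaction}")
    handler(data)

transaction_center("add", {"id": 7})  # prints: add: {'id': 7}
```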
Interface Analysis
1. User Analysis
• The user's mental image may be vastly different from the software engineer's design model. Information from a broad array of sources can be used to understand the users:
– User interviews
– Sales input
– Marketing input
– Support input
2. Task Analysis and Modeling
The goal of task analysis is to answer the following questions:
• What work will the user perform in specific circumstances?
• What tasks and subtasks will be performed as the user does the work?
• What specific problem domain objects will the user manipulate as work is performed?
• What is the sequence of work tasks (the workflow)?
• What is the hierarchy of tasks?
Techniques that are applied to the user interface:
– Use cases
– Task elaboration
– Object elaboration
– Workflow analysis
– Hierarchical representation
Key interface characteristics:
1. Each user implements different tasks via the interface; therefore, the look and feel of the interface designed for the patient will be different from the one defined for pharmacists or physicians.
2. The interface design for pharmacists and physicians must accommodate access to and display of information from secondary information sources (e.g., access to inventory for the pharmacist and access to information about alternative medications for the physician).
3. Analysis of Display Content
For modern applications, display content can range from character-based reports (e.g., a spreadsheet) and graphical displays (e.g., a histogram, a 3-D model, a picture of a person) to specialized information (e.g., audio or video files). These data objects may be:
(1) Generated by components (unrelated to the interface) in other parts of an application;
(2) Acquired from data stored in a database that is accessible from the application;
(3) Transmitted from systems external to the application in question.
How do we determine the format and aesthetics of content displayed as part of the UI?
• Are different types of data assigned to consistent geographic locations on the screen (e.g., photos always appear in the upper right-hand corner)?
• Can the user customize the screen location for content?
• Is proper on-screen identification assigned to all content?
• If a large report is to be presented, how should it be partitioned for ease of understanding?
• Will mechanisms be available for moving directly to summary information for large collections of data?
• Will graphical output be scaled to fit within the bounds of the display device that is used?
• How will color be used to enhance understanding?
• How will error messages and warnings be presented to the user?
The answers to these (and other) questions will help you to establish requirements for content presentation.
4. Analysis of the Work Environment
• In some applications the user interface for a computer-based system is placed in a "user-friendly location" (e.g., proper lighting, good display height, easy keyboard access), but in others (e.g., a factory floor or an airplane cockpit), lighting may be suboptimal, noise may be a factor, a keyboard or mouse may not be an option, and display placement may be less than ideal.
• The interface designer may be constrained by factors that mitigate against ease of use.
• In addition to physical environmental factors, the workplace culture also comes into play.
• Will system interaction be measured in some manner (e.g., time per transaction or accuracy of a transaction)? Will two or more people have to share information before an input can be provided? How will support be provided to users of the system?
Interface Design Steps:
1. Applying Interface Design Steps
2. User Interface Design Patterns
3. Design Issues
1. Applying Interface Design Steps
• The definition of interface objects and the actions that are applied to them is an important step in interface design.
• Once the objects and actions have been defined and elaborated iteratively, they are categorized by type: target, source, and application objects are identified.
• A source object (e.g., a report icon) is dragged and dropped onto a target object (e.g., a printer icon).
• An application object represents application-specific data that are not directly manipulated as part of screen interaction.
• For example, a mailing list is used to store names for a mailing. The list itself might be sorted, merged, or purged (menu-based actions), but it is not dragged and dropped via user interaction.
2. User Interface Design Patterns
• Graphical user interfaces have become so common that a wide variety of user interface design patterns has emerged.
• A design pattern is an abstraction that prescribes a design solution to a specific, well-bounded design problem.
• A vast array of interface design patterns has been proposed over the past decade.
3. Design Issues
As the design of a user interface evolves, four common design issues almost always surface:
1. System response time
2. User help facilities
3. Error information handling
4. Command labeling
Designing Class-Based Components
1. Basic Design Principles
2. Component-Level Design Guidelines
3. Cohesion
4. Coupling
Component-level design focuses on the elaboration of analysis classes (problem-domain-specific classes) and the definition and refinement of infrastructure classes. The purpose of using design principles is to create designs that are more amenable to change and to reduce the propagation of side effects when changes do occur.
1. Basic Design Principles
– Single Responsibility Principle
– Open-Closed Principle
– Liskov Substitution Principle
– Dependency Inversion Principle
– Interface Segregation Principle
2. Component-Level Design Guidelines
In addition to the principles discussed, a set of pragmatic design guidelines can be applied as component-level design proceeds. These guidelines apply to components, their interfaces, and the dependency and inheritance characteristics that have an impact on the resultant design.
– Components: naming conventions should be established for components that are specified as part of the architectural model and then refined and elaborated as part of the component-level model.
Designing Traditional Components
1. Graphical Design Notation
A flowchart is quite simple pictorially: a box indicates a processing step, a diamond represents a logical condition, and arrows show the flow of control. The sequence construct is represented as two processing boxes connected by a line (arrow) of control.
2. Tabular Design Notation
The following steps are applied to develop a decision table:
1. List all actions that can be associated with a specific procedure (or module).
2. List all conditions (or decisions made) during execution of the procedure.
3. Associate specific sets of conditions with specific actions, eliminating impossible combinations of conditions; alternatively, develop every possible permutation of conditions.
4. Define rules by indicating what action(s) occurs for a set of conditions.
3. Program Design Language
• Program design language (PDL), also called structured English or pseudocode, incorporates the logical structure of a programming language with the free-form expressive ability of a natural language (e.g., English).
• Narrative text (e.g., English) is embedded within a programming-language-like syntax. Automated tools (e.g., [Cai03]) can be used to enhance the application of PDL.
• A basic PDL syntax should include constructs for component definition, interface description, data declaration, block structuring, condition constructs, repetition constructs, and input-output (I/O) constructs.
• PDL can be extended to include keywords for multitasking and/or concurrent processing, interrupt handling, interprocess synchronization, and many other features.
UNIT-IV: Testing and Implementation
Software testing fundamentals: internal and external views of testing, white box testing, basis path testing, control structure testing, black box testing, regression testing, unit testing, integration testing, validation testing, system testing and debugging; software implementation techniques: coding practices, refactoring.
Software Testing Fundamentals
The goal of testing is to find errors, and a good test is one that has a high probability of finding an error. Therefore, you should design and implement a computer-based system or a product with "testability" in mind. At the same time, the tests themselves must exhibit a set of characteristics that achieve the goal of finding the most errors with a minimum of effort.
Testability. James Bach provides the following definition for testability: "Software testability is simply how easily [a computer program] can be tested." The following characteristics lead to testable software.
Operability. "The better it works, the more efficiently it can be tested." If a system is designed and implemented with quality in mind, relatively few bugs will block the execution of tests, allowing testing to progress without fits and starts.
Observability. "What you see is what you test." Inputs provided as part of testing produce distinct outputs. System states and variables are visible or queriable during execution. Incorrect output is easily identified.
Internal errors are automatically detected and reported. Source code is accessible.
Controllability. "The better we can control the software, the more the testing can be automated and optimized." All possible outputs can be generated through some combination of input, and I/O formats are consistent and structured. All code is executable through some combination of input. Software and hardware states and variables can be controlled directly by the test engineer. Tests can be conveniently specified, automated, and reproduced.
Decomposability. "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting." The software system is built from independent modules that can be tested independently.
Simplicity. "The less there is to test, the more quickly we can test it." The program should exhibit functional simplicity (e.g., the feature set is the minimum necessary to meet requirements), structural simplicity (e.g., architecture is modularized to limit the propagation of faults), and code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability. "The fewer the changes, the fewer the disruptions to testing." Changes to the software are infrequent, controlled when they do occur, and do not invalidate existing tests. The software recovers well from failures.
Understandability. "The more information we have, the smarter we will test." The architectural design and the dependencies between internal, external, and shared components are well understood. Technical documentation is instantly accessible, well organized, specific and detailed, and accurate. Changes to the design are communicated to testers.
What is a good test?
Test characteristics. Kaner, Falk, and Nguyen [Kan93] suggest the following attributes of a "good" test:
A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail. Ideally, the classes of failure are probed. For example, one class of potential failure in a graphical user interface is the failure to recognize proper mouse position. A set of tests would be designed to exercise the mouse in an attempt to demonstrate an error in mouse position recognition.
A good test is not redundant. Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose (even if it is subtly different).
A good test should be "best of breed" [Kan93]. In a group of tests that have a similar intent, time and resource limitations may mitigate toward the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.
A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into one test case, the possible side effects associated with this approach may mask errors. In general, each test should be executed separately.
Internal and External Views of Testing
• Any engineered product (and most other things) can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
(2) Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, internal operations are performed according to specifications and all internal components have been adequately exercised.
• The first test approach takes an external view and is called black-box testing. The second requires an internal view and is termed white-box testing.
• Black-box testing alludes to tests that are conducted at the software interface. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
• White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software and collaborations between components are tested by exercising specific sets of conditions and/or loops.
• At first glance it might seem that exhaustive white-box testing would lead to "100 percent correct programs": all we need do is define all logical paths, develop test cases to exercise them, and evaluate the results; that is, generate test cases to exercise program logic exhaustively. In practice this is impossible for all but the smallest programs.
• Instead, a limited number of important logical paths can be selected and exercised, and important data structures can be probed for validity.
Basis Path Testing
1. Flow Graph Notation
A flow graph depicts the logical control flow of a program: each node (circle) represents one or more procedural statements, and each edge (arrow) represents flow of control. A node containing a condition is called a predicate node.
2. Independent Program Paths
• An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides you with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
• Cyclomatic complexity has a foundation in graph theory and provides you with an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
3. Deriving Test Cases
The following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
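A worked example of the three equivalent computations of V(G), assuming a small hypothetical flow graph with 9 edges, 8 nodes, and 2 predicate nodes (say, an if and a while):

```python
# Cyclomatic complexity computed three ways for one flow graph.

edges = 9        # E: flow graph edges
nodes = 8        # N: flow graph nodes
predicates = 2   # P: predicate (decision) nodes

v_by_edges_nodes = edges - nodes + 2   # V(G) = E - N + 2  -> 3
v_by_predicates  = predicates + 1      # V(G) = P + 1      -> 3

# For a planar flow graph, the region count (enclosed regions plus
# the outer region) also equals E - N + 2, so all three agree.
assert v_by_edges_nodes == v_by_predicates == 3
print(v_by_edges_nodes)  # 3: at most 3 tests cover a basis set of paths
```

V(G) = 3 means a basis set of three linearly independent paths exists, so three well-chosen test cases guarantee every statement executes at least once.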
4. Graph Matrices
• A data structure called a graph matrix can be quite useful for developing a software tool that assists in basis path testing.
• A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on the flow graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes.
• In a simple example, each node on the flow graph is identified by a number, while each edge is identified by a letter. A letter entry is made in the matrix to correspond to a connection between two nodes; for example, node 3 is connected to node 4 by edge b.
• The graph matrix is nothing more than a tabular representation of a flow graph.
• By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing.
• The link weight provides additional information about control flow. In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
Control Structure Testing
• Although basis path testing is simple and highly effective, it is not sufficient in itself.
• Other variations on control structure testing are necessary. These broaden testing coverage and improve the quality of white-box testing.
1. Condition Testing
• Condition testing is a test-case design method that exercises the logical conditions contained in a program module.
• A simple condition is a Boolean variable or a relational expression, possibly preceded by one NOT (¬) operator.
• A relational expression takes the form E1 <relational-operator> E2, where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠, >, or ≥.
• A compound condition is composed of two or more simple conditions, Boolean operators, and parentheses.
• The condition testing method focuses on testing each condition in the program to ensure that it does not contain errors.
2. Data Flow Testing
• The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.
• To illustrate the data flow testing approach, assume that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables.
• For a statement with S as its statement number:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
• If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a path from statement S to statement S' that contains no other definition of X.
3. Loop Testing
• Loops are the cornerstone of the vast majority of all algorithms implemented in software.
• Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs.
• Four different classes of loops can be defined: simple loops, nested loops, concatenated loops, and unstructured loops.
1. Simple loops: the following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n - 1, n, and n + 1 passes through the loop.
Black-Box Testing
2. Equivalence Partitioning
• Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
• Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.
• Equivalence classes may be defined according to the following guidelines:
– If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
– If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
– If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
– If an input condition is Boolean, one valid and one invalid class are defined.
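Applying the range guideline above to a hypothetical input (an integer age field valid from 18 to 65) yields one valid and two invalid equivalence classes; the sketch below draws one representative test case from each. The field and its range are assumptions made for the example.

```python
# Equivalence partitioning for a range-constrained input.

def accept_age(age: int) -> bool:
    return 18 <= age <= 65   # the (assumed) behaviour under test

equivalence_classes = {
    "valid: 18 <= age <= 65": 40,   # representative from the valid class
    "invalid: age < 18":      11,   # representative below the range
    "invalid: age > 65":      80,   # representative above the range
}

for label, value in equivalence_classes.items():
    expected = label.startswith("valid")
    assert accept_age(value) == expected, label
print("one representative per equivalence class behaves as expected")
```

Any other member of a class is assumed to behave like its representative, which is why one test per class suffices under this method.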
3. Boundary Value Analysis
• A greater number of errors occurs at the boundaries of the input domain than in the "center" of the input domain.
• It is for this reason that boundary value analysis (BVA) has been developed as a testing technique.
• Boundary value analysis leads to a selection of test cases that exercise bounding values.
• BVA leads to the selection of test cases at the "edges" of each class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
• Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature-versus-pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., a table has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
Most software engineers intuitively perform BVA to some degree. By applying these guidelines, boundary testing will be more complete, thereby having a higher likelihood of error detection.
4. Orthogonal Array Testing
• Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing.
• The orthogonal array testing method is particularly useful in finding region faults, an error category associated with faulty logic within a software component.
• For example, when a train ticket has to be verified, factors such as the number of passengers, ticket number, seat numbers and train numbers have to be tested together, which becomes difficult when a tester verifies inputs one by one. It is more efficient to combine inputs and test them together; here the orthogonal array testing method can be used.
• When orthogonal array testing occurs, an L9 orthogonal array of test cases is created.
• The L9 orthogonal array has a "balancing property": test cases are "dispersed uniformly throughout the test domain."
• To illustrate the use of the L9 orthogonal array, consider the send function for a fax application.
• Four parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete values. For example, P1 takes on the values: P1 = 1 (send it now), P1 = 2 (send it one hour later), P1 = 3 (send it after midnight). P2, P3, and P4 would also take on values of 1, 2, and 3, signifying other send functions.
• If a "one input item at a time" testing strategy were chosen, the following sequence of tests (P1, P2, P3, P4) would be specified: (1,1,1,1), (2,1,1,1), (3,1,1,1), (1,2,1,1), (1,3,1,1), (1,1,2,1), (1,1,3,1), (1,1,1,2), and (1,1,1,3).
• The orthogonal array testing approach enables you to provide good test coverage with far fewer test cases than the exhaustive strategy. An L9 orthogonal array for the fax send function is illustrated below.
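The sketch below shows a standard L9 orthogonal array applied to the four fax-send parameters and checks the balancing property: for every pair of factors, all nine level combinations appear exactly once. The mapping of rows to test runs is illustrative.

```python
# The standard L9 array: 9 test cases for 4 factors at 3 levels each
# (exhaustive testing would need 3**4 = 81 cases).
from itertools import combinations

L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Balancing property: any two columns jointly cover all 3 x 3 = 9
# level combinations exactly once.
for i, j in combinations(range(4), 2):
    pairs = {(row[i], row[j]) for row in L9}
    assert len(pairs) == 9

for case in L9:
    print("run send() with (P1, P2, P3, P4) =", case)
```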
Regression Testing
• When any modification or change is made to the application, even a small change to the code, it can introduce unexpected issues. Along with the new changes, it becomes very important to test whether the existing functionality is still intact. This can be achieved by doing regression testing.
• The purpose of regression testing is to find bugs that may have been introduced accidentally because of new changes or modifications.
• During confirmation testing, the defect got fixed and that part of the application started working as intended. But there is a possibility that the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these 'unexpected side effects' of fixes is to do regression testing.
• This also ensures that bugs found and fixed earlier are not re-created.
• Usually regression testing is done with automation tools, because the same tests are carried out again and again, and it would be very tedious and time-consuming to do them manually.
• During regression testing, test cases are prioritized depending upon the changes done to the feature or module in the application. The feature or module where changes were made is given priority for testing.
• This testing becomes very important when there are continuous modifications or enhancements done to the application or product. These changes or enhancements should NOT introduce new issues in the existing tested code.
• This helps in maintaining the quality of the product along with the new changes in the application.
• Example: assume that there is an application which maintains the details of all the students in a school. This application has four buttons: Add, Save, Delete and Refresh. All the buttons' functionalities are working as expected. Recently a new button, 'Update', is added to the application. This 'Update' button functionality is tested and confirmed to be working as expected. But at the same time it becomes very important to know that the introduction of this new button does not impact the other existing buttons' functionality. Along with the 'Update' button, all the other buttons' functionality is tested in order to find any new issues in the existing code. This process is known as regression testing.
When to use regression testing:
1. Any new feature is added
2. Any enhancement is done
3. Any bug is fixed
4. Any performance-related issue is fixed
Advantages of regression testing:
• It helps us to make sure that any changes, like bug fixes or enhancements to a module or application, have not impacted the existing tested code.
• It ensures that bugs fixed earlier are not re-created.
• Regression testing can be done using automation tools.
• It helps in improving the quality of the product.
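As a minimal sketch of an automated regression suite for the student-records example above, the following Python unittest code re-runs the existing Add/Delete checks alongside a new test for the 'Update' behaviour. The class and method names are hypothetical, invented for the example.

```python
import unittest

class StudentRegistry:
    def __init__(self):
        self.students = {}

    def add(self, sid, name):
        self.students[sid] = name

    def delete(self, sid):
        self.students.pop(sid, None)

    def update(self, sid, name):
        # the newly added behaviour that triggered this regression run
        if sid not in self.students:
            raise KeyError(sid)
        self.students[sid] = name

class RegressionSuite(unittest.TestCase):
    def setUp(self):
        self.reg = StudentRegistry()
        self.reg.add(1, "Asha")

    # Existing behaviour, re-run automatically after every change:
    def test_add_still_works(self):
        self.reg.add(2, "Ravi")
        self.assertEqual(self.reg.students[2], "Ravi")

    def test_delete_still_works(self):
        self.reg.delete(1)
        self.assertNotIn(1, self.reg.students)

    # New behaviour whose introduction must not break the tests above:
    def test_update(self):
        self.reg.update(1, "Asha R.")
        self.assertEqual(self.reg.students[1], "Asha R.")

if __name__ == "__main__":
    unittest.main()
```

Because the whole suite runs on every change, a fix or new feature that accidentally breaks Add or Delete is caught immediately, which is exactly the point of regression testing.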