Chapter 14 Proposed Systems, Study notes of Computer Networks

Chapter 14: Proposed Systems

14.1. Introduction

The preceding two chapters have discussed the parameterization of queueing network models of existing systems and evolving systems. In this chapter we consider models of proposed systems: major new systems and subsystems that are undergoing design and implementation.

The process of design and implementation involves continual tradeoffs between cost and performance. Quantifying the performance implications of various alternatives is central to this process. It also is extremely challenging. In the case of existing systems, measurement data is available. In the case of evolving systems, contemplated modifications often are straightforward (e.g., a new CPU within a product line), and limited experimentation may be possible in validating a baseline model. In the case of proposed systems, these advantages do not exist. For this reason, it is tempting to rely on seat-of-the-pants performance projections, which all too often prove to be significantly in error. The consequences can be serious, for performance, like reliability, is best designed in rather than added on.

Recently, progress has been made in evolving a general framework for projecting the performance of proposed systems. There has been a confluence of ideas from software engineering and performance evaluation, with queueing network models playing a central role. The purpose of this chapter is to present the elements of this framework. In Section 14.2 we review some early efforts. In Section 14.3 we discuss, in a general setting, some of the components necessary to achieve a good understanding of the performance of a proposed system. In Section 14.4 we describe two specific approaches.

14.2. Background

User satisfaction with a new application system depends to a significant extent on the system's ability to deliver performance that is acceptable and consistent. In this section we describe several early attempts at assessing the performance of large systems during the design stage. Some common themes will be evident; these will be discussed in the next section.

In the mid-1960s GECOS III was being designed by General Electric as an integrated batch and timesharing system. After the initial design was complete, two activities began in parallel: one team began the implementation, while another developed a simulation model to project the effects of subsequent design and implementation decisions. The simulation modelling team came out second best. The model was not debugged until several months after a skeletal version of the actual system was operational. Thus, many of the design questions that might have been answered by the model were answered instead by the system. The model could not be kept current. The projections of the model were not trusted, because the system designers lacked confidence in the simulation methodology. This attempt to understand the interactions among design decisions throughout the project lifetime failed. Other attempts have been more successful.

In the late 1960s TSO was being developed as a timesharing subsystem for IBM's batch-oriented MVT operating system. During final design and initial implementation of the final system, an earlier prototype was measured in a test environment, and a queueing network model was parameterized from these measurements and from detailed specifications of the final design.
The average response time projected by the model was significantly lower than that measured for the prototype. However, the design team had confidence in the model because a similar one had been used successfully for MIT's CTSS system (see Section 6.3.1). The team checked the prototype for conformance with specifications and detected a discrepancy: the scheduler had been implemented with an unnecessary locking mechanism that created a software bottleneck. When this was corrected, the projections of the model and the behavior of the prototype were compatible.

14.3. A General Framework

Various parts of this process would be repeated as the performance analysts seek additional information, as the design evolves, and as the results of the analysis indicate specific areas of concern. An important aspect of any tool embodying this approach is the support that it provides for this sort of iteration and successive refinement.

It should be clear that what has been outlined is a methodical approach to obtaining queueing network model inputs, an approach that could be of value in any modelling study, not just an evaluation of a proposed system. (For example, see the case study in Section 2.4.) It also should be clear that this approach, since it forces meaningful communication between various "interested parties", can be a valuable aid in software project management.

14.3.2. An Example

Here is a simple example that illustrates the application of this general approach. A store-and-forward computer communication network is being designed. Our objective is to project the performance of this network, given information about the planned usage, the software design, and the supporting hardware.

The topology (star) and the protocol (polling) of the network are known. The system is to support three kinds of messages: STORE, FORWARD, and FLASH. From the functional specifications, the arrival rate, priority, and response time requirement of each message type can be obtained. Each message type has different characteristics and represents a non-trivial portion of the workload, so it is natural to view each as a separate workload component and to assign each to a different class. Given knowledge of the intended protocol, a fourth class is formulated, representing polling overhead. Further refinements of this class structure are possible during project evolution.

The software specifications for each class are imprecise in the initial stages. Only high-level information about software functionality, flow of control, and processing requirements is available. A gross estimate of CPU and I/O resource requirements for each class is obtained: the CPU requirement specifies an estimated number of instructions for each message of the type, and the I/O requirement an estimated number of logical I/O operations. For STORE messages, as an example, the I/O consists of a read to an index to locate the message storage area, a write to store the message, and a write to update the index. No indication is given here about file placements or device characteristics. Instead, the logical properties of the software are emphasized, to serve as a basis for further refinement when the software design becomes more mature.

Physical device characteristics are identified: speed, capacity, file placement, etc. A CPU is characterized by its MIPS rate and its number of processors. A disk is characterized by its capacity, average seek time, rotation time, transfer rate, and the assignment of files to it.

From consideration of the software specifications and the device characteristics, service demands can be estimated. As a simple example, a software designer may estimate 60,000 CPU instructions for a STORE message, and a hardware configuration analyst may estimate a CPU MIPS rate of 0.40. This leads to a STORE service demand for the CPU of 0.15 seconds. This admittedly is a crude estimate, but it serves as a basis, and more detail can be incorporated subsequently.
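This kind of estimate is easy to mechanize. The following is a minimal sketch (Python) of the calculation; the per-I/O cost is built from seek, rotational latency, and transfer time, and the disk figures are hypothetical, since the text above supplies only the CPU numbers.

```python
def cpu_demand(instructions, mips):
    """CPU service demand in seconds: instruction count / (MIPS * 10^6)."""
    return instructions / (mips * 1_000_000)

def disk_demand(logical_ios, avg_seek_s, rotation_s, transfer_s_per_io):
    """Disk service demand in seconds, assuming each logical I/O costs one
    seek, half a rotation (average latency), and one block transfer."""
    per_io = avg_seek_s + rotation_s / 2 + transfer_s_per_io
    return logical_ios * per_io

# STORE message, using the CPU figures from the example above.
d_cpu = cpu_demand(instructions=60_000, mips=0.40)        # 0.15 seconds

# Three logical I/Os (read index, write message, write index);
# the disk timings below are hypothetical placeholders.
d_disk = disk_demand(logical_ios=3, avg_seek_s=0.030,
                     rotation_s=0.0167, transfer_s_per_io=0.002)

print(f"STORE CPU demand:  {d_cpu:.3f} s")
print(f"STORE disk demand: {d_disk:.3f} s")
```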
At this point, a queueing network model of the design, incorporating classes, devices, and service demands, can be constructed and evaluated to give an initial assessment of performance. Alternatives can be evaluated to determine their effect on performance. Sensitivity analyses can be used to identify potential trouble spots, even at this early stage of the project.
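To make the step from parameters to projections concrete, here is a minimal sketch of how such an evaluation might be carried out, assuming an open multiclass product-form model with no priority or blocking effects. All of the arrival rates, and every demand except the STORE CPU figure, are invented purely for illustration.

```python
# Open multiclass queueing network: utilization and response time projections.
# demands[c][k] = service demand (seconds) of class c at device k.
demands = {
    "STORE":   {"CPU": 0.15, "DISK": 0.12},   # CPU figure from the text; others assumed
    "FORWARD": {"CPU": 0.10, "DISK": 0.08},
    "FLASH":   {"CPU": 0.05, "DISK": 0.04},
    "POLL":    {"CPU": 0.02, "DISK": 0.00},   # polling overhead class
}
arrival_rates = {"STORE": 1.0, "FORWARD": 0.8, "FLASH": 0.2, "POLL": 2.0}  # msgs/sec, assumed

devices = {k for per_class in demands.values() for k in per_class}

# Device utilizations: U_k = sum over classes of lambda_c * D_ck
util = {k: sum(arrival_rates[c] * demands[c].get(k, 0.0) for c in demands)
        for k in devices}
assert all(u < 1.0 for u in util.values()), "model is saturated"

# Per-class response times for an open product-form network:
# R_c = sum over devices of D_ck / (1 - U_k)
response = {c: sum(d / (1.0 - util[k]) for k, d in demands[c].items())
            for c in demands}

for k in sorted(devices):
    print(f"{k}: utilization {util[k]:.2f}")
for c, r in response.items():
    print(f"{c}: projected response time {r:.2f} s")
```

Swapping in revised demands or arrival rates and re-running the evaluation is exactly the kind of sensitivity analysis mentioned above.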
One of the strengths of this approach is the ability to handle easily changes in the workload, software, and hardware. In the example, no internal module flow of control was specified, and processing requirements were gross approximations. As the design progresses, the individual modules begin to acquire a finer structure, as reflected in Figure 14.1. This can be reflected by modifying the software specifications. This structure acquires multiple levels of detail as the design matures. The submodules at the leaves of the tree represent detailed information about a particular operation; the software designer has more confidence in the resource estimates specified for these types of modules. The total resource requirements for a workload are found by appropriately summing the resource requirements at the various levels in the detailed module structure. Software specifications thus can be updated as more information becomes available.

Figure 14.1 - Refinement of Software Specifications (a tree decomposing the STORE, FLASH, and FORWARD message types into submodules such as FETCH-INDEX, DETERMINE-MSG-DESTINATION, WRITE-MSG, STORE-MSG, READ-MSGS, UPDATE-INDEX, WRITE-INDEX, and ALERT-DESTINATION)

The important features we have illustrated in this example include the identification of workload, software, and hardware at the appropriate level of detail, the transformation of these high-level components into queueing network model parameters, and the ability to represent changes in the basic components.

14.3.3. Other Considerations

The design stage of a proposed system has received most of our attention. This is where the greatest leverage exists to change plans. However, it is important to continue the performance projection effort during the life of the project. Implementation, testing, and maintenance/evolution follow design. Estimates indicate that the largest proportion of the cost of software comes from the maintenance/evolution stage.

Given the desirability of tracking performance over the software lifetime, it is useful to maintain a repository of current information about important aspects of the project (e.g., procedure structure within software modules). If the repository is automated in database form, software designers and implementors are more likely to keep it current.

A prerequisite for the success of the approach we have outlined is that management be prepared to listen to the recommendations rather than adopting an expedient approach. Budgeting time and manpower for performance projection may lengthen the development schedule somewhat, but the benefits can be significant.

14.4. Tools and Techniques

14.4.2. ADEPT

The second technique to be discussed is ADEPT (A Design-based Evaluation and Prediction Technique), developed in the late 1970s. Using ADEPT, resource requirements are specified both as average values and as maximum (upper bound) values. The project design is likely to be suitable if the performance specifications are satisfied for the upper bounds. Sensitivity analyses can show the system components for which more accurate resource requirements must be specified. These components should be implemented first, to provide early feedback and allow more accurate forecasts.

The software structure of the proposed application is determined through performance walkthroughs and is described using a graph representation, with software components represented as nodes and links between these components represented as arcs. Because the software design usually results from a top-down successive refinement process, these graphs are tree-structured, with greater detail towards the leaves. An example is found in Figure 14.2, where three design levels are shown. Each component that is not further decomposed has a CPU time estimate and a number of I/O accesses associated with it.

Figure 14.2 - Example Execution Graphs (three design levels: a top-level QUERY graph with parser, send message, DB control system, locate descriptive data, and retrieve descriptive data components; a second-level FETCH internal parts graph; and a third-level graph with locate subassembly, retrieve subassembly, locate pieces of subassembly, and sort lists components)

The graphs are analyzed to determine elapsed time and resource requirements for the entire design by a bottom-up procedure. The time and resource requirements of the leaf nodes are used to calculate the requirements of the nodes one level up, and so on up to the root node. A static analysis, assuming no interference between modules, is performed to derive best case, average case, and worst case behavior. The visual nature of the execution graphs can help to point out design optimizations, such as moving invariant components out of loops.

Additional techniques handle other software and hardware characteristics introduced as the design matures. These characteristics include data dependencies (for which counting parameters are introduced), competition for resources (for which queueing network analysis software is used), and concurrent processing (in which locking and synchronization are important).
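The bottom-up roll-up itself is straightforward to express in code. The following is a minimal sketch, not ADEPT itself: the node names, estimates, and per-I/O time are all invented for illustration. Each undivided component carries an average and an upper-bound CPU estimate plus an I/O count, loop nodes carry repetition counts, and requirements are summed from the leaves toward the root under the static, no-interference assumption described above.

```python
from dataclasses import dataclass, field

IO_TIME = 0.030  # assumed seconds per I/O, used to turn I/O counts into elapsed time

@dataclass
class Node:
    name: str
    cpu_avg: float = 0.0          # average CPU seconds (leaf estimate)
    cpu_max: float = 0.0          # upper-bound CPU seconds (leaf estimate)
    ios: int = 0                  # I/O accesses (leaf estimate)
    repetitions: int = 1          # loop count applied to this subtree
    children: list["Node"] = field(default_factory=list)

    def rollup(self):
        """Bottom-up, contention-free roll-up of CPU, upper-bound CPU, and I/Os."""
        cpu_avg, cpu_max, ios = self.cpu_avg, self.cpu_max, self.ios
        for child in self.children:
            c_avg, c_max, c_ios = child.rollup()
            cpu_avg += c_avg
            cpu_max += c_max
            ios += c_ios
        return (cpu_avg * self.repetitions,
                cpu_max * self.repetitions,
                ios * self.repetitions)

# Hypothetical execution graph for a query component.
root = Node("QUERY", children=[
    Node("PARSE", cpu_avg=0.02, cpu_max=0.04, ios=0),
    Node("LOCATE-DATA", repetitions=3, children=[
        Node("FETCH-INDEX", cpu_avg=0.010, cpu_max=0.02, ios=2),
        Node("READ-RECORD", cpu_avg=0.015, cpu_max=0.03, ios=1),
    ]),
    Node("SORT-LIST", cpu_avg=0.05, cpu_max=0.12, ios=0),
])

cpu_avg, cpu_max, ios = root.rollup()
print(f"average case: {cpu_avg + ios * IO_TIME:.3f} s elapsed "
      f"({cpu_avg:.3f} s CPU, {ios} I/Os)")
print(f"upper bound:  {cpu_max + ios * IO_TIME:.3f} s elapsed "
      f"({cpu_max:.3f} s CPU, {ios} I/Os)")
```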
ADEPT was used to project the performance of a database component of a proposed CAD/CAM system. Only preliminary design specifications were available, including a high-level description of the major functional modules. A small example from that study will be discussed. A transaction builds a list of record occurrences that satisfy given qualifications, and returns the first qualified occurrences to the user at a terminal. It issues FIND FIRST commands to qualify record occurrences and FIND NEXT commands to return the occurrences. The execution graphs for the FIND commands have the structure shown in Figure 14.3. The performance goal for processing this transaction was an average response time of under 5 seconds; the computing environment was a Cyber 170 computer running the NOS operating system.

A performance walkthrough produced a typical usage scenario from an engineering user and descriptions of the processing steps for the FIND commands from a software designer. Resource estimates for the transaction components were based on the walkthrough information. Many optimistic assumptions were made, but the best case response time was predicted to be 6.1 seconds, not meeting the goal (see Figure 14.3). About 43% of this elapsed time (2.6 seconds) was actual CPU requirement. Thus, it was clear at the design stage that response times would be unacceptably long because of excessive CPU requirements.

Figure 14.3 - Transaction Steps and Projections (each of the three FIND FIRST steps is estimated at 0.488 CPU seconds, 27 I/Os, and 1.514 seconds elapsed; each FIND NEXT occurrence, of which there are ten in total, at 0.116 CPU seconds, 1 I/O, and 0.154 seconds elapsed; the best-case totals are 2.624 CPU seconds, 91 I/Os, and 6.082 seconds elapsed)
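The totals quoted above can be checked with a few lines of arithmetic, assuming the per-step estimates and occurrence counts recovered from Figure 14.3 (three FIND FIRST steps and ten FIND NEXT occurrences):

```python
# Per-step best-case estimates recovered from Figure 14.3.
find_first = {"cpu": 0.488, "ios": 27, "elapsed": 1.514}   # 3 occurrences
find_next  = {"cpu": 0.116, "ios": 1,  "elapsed": 0.154}   # 10 occurrences in total

totals = {key: round(3 * find_first[key] + 10 * find_next[key], 3)
          for key in find_first}
cpu_share = totals["cpu"] / totals["elapsed"]

print(totals)                                          # {'cpu': 2.624, 'ios': 91, 'elapsed': 6.082}
print(f"CPU share of elapsed time: {cpu_share:.0%}")   # about 43%
```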