Chapter 11: I/O Management and Disk Scheduling

External devices that engage in I/O with computer systems can be grouped into three categories:

• Human readable: Suitable for communicating with the computer user. Examples include printers and terminals, the latter consisting of a video display, keyboard, and perhaps other devices such as a mouse.
• Machine readable: Suitable for communicating with electronic equipment. Examples are disk drives, USB keys, sensors, controllers, and actuators.
• Communication: Suitable for communicating with remote devices. Examples are digital line drivers and modems.

------------------------------------------------------------------------

There are great differences across classes and even substantial differences within each class. Among the key differences are the following:

• Data rate: There may be differences of several orders of magnitude between the data transfer rates of different devices.
• Application: The use to which a device is put influences the software and policies in the operating system and supporting utilities. For example, a disk used for files requires the support of file management software.
• Complexity of control: A printer requires a relatively simple control interface; a disk is much more complex. The effect of these differences on the operating system is filtered to some extent by the complexity of the I/O module that controls the device.
• Unit of transfer: Data may be transferred as a stream of bytes or characters (e.g., terminal I/O) or in larger blocks (e.g., disk I/O).
• Data representation: Different data encoding schemes are used by different devices, including differences in character code and parity conventions.
• Error conditions: The nature of errors, the way in which they are reported, their consequences, and the available range of responses differ widely from one device to another.

------------------------------------------------------------------------

Three Techniques for Performing I/O

Programmed I/O
▲ The processor issues an I/O command on behalf of a process to an I/O module; that process then busy waits for the operation to be completed before proceeding (sketched below).

Interrupt-driven I/O
▲ The processor issues an I/O command on behalf of a process.
▲ If non-blocking, the processor continues to execute instructions from the process that issued the I/O command.
▲ If blocking, the next instruction the processor executes is from the OS, which will put the current process in a blocked state and schedule another process.

Direct Memory Access (DMA)
The DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
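As a concrete illustration of the first technique, the following sketch (in C, with the memory-mapped register addresses DEV_STATUS and DEV_DATA invented purely for illustration; real values are platform-specific) shows the busy-wait loop that characterizes programmed I/O. Under interrupt-driven I/O this loop disappears: the device notifies the processor asynchronously instead.

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers; addresses and bit
       layout are invented for illustration and are platform-specific. */
    #define DEV_STATUS ((volatile uint8_t *)0x40000000u) /* bit 0 = ready */
    #define DEV_DATA   ((volatile uint8_t *)0x40000004u)
    #define STATUS_READY 0x01u

    /* Programmed I/O: the processor itself polls the status register
       and moves every byte, busy waiting between transfers. */
    void pio_read(uint8_t *buf, int n)
    {
        for (int i = 0; i < n; i++) {
            while ((*DEV_STATUS & STATUS_READY) == 0)
                ;                  /* busy wait: no useful work is done */
            buf[i] = *DEV_DATA;    /* the CPU performs the transfer itself */
        }
    }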
------------------------------------------------------------------------

Evolution of the I/O Function

The evolutionary steps can be summarized as follows:
1. The processor directly controls a peripheral device. This is seen in simple microprocessor-controlled devices.
2. A controller or I/O module is added. The processor uses programmed I/O without interrupts.
3. The same configuration as step 2 is used, but now interrupts are employed. The processor need not spend time waiting for an I/O operation to be performed, thus increasing efficiency.
4. The I/O module is given direct control of memory via DMA. It can now move a block of data to or from memory without involving the processor, except at the beginning and end of the transfer (see the sketch below).
5. The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O. The central processing unit (CPU) directs the I/O processor to execute an I/O program in main memory.
6. The I/O module has a local memory of its own and is, in fact, a computer in its own right. With this architecture, a large set of I/O devices can be controlled, with minimal processor involvement. A common use for such an architecture has been to control communications with interactive terminals. The I/O processor takes care of most of the tasks involved in controlling the terminals.
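A minimal sketch of step 4, assuming an invented descriptor layout and controller-start routine (real DMA controllers define their own registers): the processor only describes the transfer and fields a single interrupt once the whole block has moved.

    #include <stdint.h>

    /* Invented DMA descriptor; a real controller defines its own layout. */
    struct dma_desc {
        uint32_t src;   /* source address (device or memory)  */
        uint32_t dst;   /* destination address in main memory */
        uint32_t len;   /* number of bytes in the block       */
    };

    static volatile int dma_done;

    /* Stand-in for programming the controller's registers. */
    static void dma_start(const struct dma_desc *d) { (void)d; }

    /* Interrupt handler: runs once, after the entire block is transferred. */
    void dma_completion_isr(void)
    {
        dma_done = 1;
    }

    void read_block_via_dma(uint32_t dev_addr, uint32_t mem_addr, uint32_t len)
    {
        struct dma_desc d = { dev_addr, mem_addr, len };
        dma_done = 0;
        dma_start(&d);      /* hand the whole transfer to the DMA module */
        while (!dma_done) {
            /* the processor is free here; the OS would normally run
               another process rather than spin */
        }
    }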
------------------------------------------------------------------------

Logical Structure of the I/O Function

The following layers are involved:

• Logical I/O: The logical I/O module deals with the device as a logical resource and is not concerned with the details of actually controlling the device. It is concerned with managing general I/O functions on behalf of user processes.
• Device I/O: The requested operations and data (buffered characters, records, etc.) are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering techniques may be used to improve utilization.
• Scheduling and control: The actual queuing and scheduling of I/O operations occurs at this layer, as well as the control of the operations. Thus, interrupts are handled at this layer and I/O status is collected and reported. This is the layer of software that actually interacts with the I/O module and hence the device hardware.
• Directory management: At this layer, symbolic file names are converted to identifiers that reference the file either directly or indirectly through a file descriptor or index table. This layer is also concerned with user operations that affect the directory of files, such as add, delete, and reorganize.
• File system: This layer deals with the logical structure of files and with the operations that can be specified by users, such as open, close, read, and write. Access rights are also managed at this layer.
• Physical organization: Just as virtual memory addresses must be converted into physical main memory addresses, taking into account the segmentation and paging structure, logical references to files and records must be converted to physical secondary storage addresses, taking into account the physical track and sector structure of the secondary storage device. Allocation of secondary storage space and main storage buffers is generally treated at this layer as well.

In summary:
• Logical I/O: deals with the device as a logical resource
• Device I/O: converts requested operations into sequences of I/O instructions
• Scheduling and control: performs actual queuing and control operations
• Directory management: concerned with user operations affecting files
• File system: logical structure and operations
• Physical organization: converts logical names to physical addresses

------------------------------------------------------------------------

I/O Buffering

• Processes must wait for I/O to complete before proceeding.
  - To avoid deadlock, certain pages must remain in main memory during I/O.
• It may be more efficient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made.

A block-oriented device stores information in blocks that are usually of fixed size, and transfers are made one block at a time. It is possible to reference data by its block number. Disks and USB keys are examples.

A stream-oriented device transfers data in and out as a stream of bytes, with no block structure. Terminals, printers, communication ports, and most other devices that are not secondary storage are examples.

------------------------------------------------------------------------

Suppose that a user process wishes to read blocks of data from a disk one at a time, and the data are to be read into a data area within the address space of the user process. The simplest way is to execute an I/O command to the disk unit and then wait for the data to become available. The waiting could either be busy waiting (continuously testing the device status) or process suspension on an interrupt.

There are two problems with this approach. First, the program is hung up waiting for the relatively slow I/O to complete. Second, this approach to I/O interferes with swapping decisions by the operating system. If a process issues an I/O command, is suspended awaiting the result, and then is swapped out prior to the beginning of the operation, the process is blocked waiting on the I/O event, and the I/O operation is blocked waiting for the process to be swapped in. To avoid this deadlock, the user memory involved in the I/O operation must be locked in main memory immediately before the I/O request is issued. The same considerations apply to an output operation: if a block is being transferred from a user process area directly to an I/O module, the process is blocked during the transfer and may not be swapped out.

To avoid these overheads and inefficiencies, it is sometimes convenient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made. This technique is known as buffering.

------------------------------------------------------------------------

No Buffer
• Without a buffer, the OS directly accesses the device when it needs to.

Single Buffer
• The simplest type of support that the operating system can provide is single buffering (sketched after this list).
• When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation.
• In the case of line-at-a-time I/O, the buffer can be used to hold a single line.
• The user process is suspended during input, awaiting the arrival of the entire line.
• For output, the user process can place a line of output in the buffer and continue processing.
• It need not be suspended unless it has a second line of output to send before the buffer has been emptied by the first output operation.
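For block-oriented input, single buffering with read-ahead can be sketched as follows (plain C with stand-in driver routines rather than real kernel code; wait_for_transfer and start_next_transfer are placeholders invented for this sketch):

    #include <string.h>

    #define BLOCK_SIZE 512

    static char sys_buf[BLOCK_SIZE];   /* the single system buffer */

    /* Placeholders for the device driver. */
    static void start_next_transfer(void) { /* begin filling sys_buf */ }
    static void wait_for_transfer(void)    { /* block until sys_buf is full */ }

    /* Single buffering: the completed block is moved into user space and
       the next block is requested immediately (read-ahead), so the device
       works while the user process computes. */
    void os_read_block(char *user_area)
    {
        wait_for_transfer();                    /* previous read-ahead done  */
        memcpy(user_area, sys_buf, BLOCK_SIZE); /* move block to user space  */
        start_next_transfer();                  /* read ahead the next block */
    }

Because the device transfers into the system buffer rather than user memory, the user process's pages need not stay locked in main memory for the duration of the device transfer, which eases the swapping problem described earlier.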
Double Buffer
▲ Use two system buffers instead of one.
▲ A process can transfer data to or from one buffer while the operating system empties or fills the other buffer (see the sketch after this list).
▲ Known as double buffering or buffer swapping.
▲ For stream-oriented input, we are again faced with the two alternative modes of operation.
▲ For line-at-a-time I/O, the user process need not be suspended for input or output, unless the process runs ahead of the double buffers.
▲ For byte-at-a-time operation, the double buffer offers no particular advantage over a single buffer of twice the length. In both cases, the producer/consumer model is followed.
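A minimal sketch of buffer swapping for block-oriented input, again with stand-in driver calls (start_transfer and wait_transfer are placeholders invented here): the OS fills one buffer while the user process drains the other.

    #include <string.h>

    #define BLOCK_SIZE 512

    static char buf[2][BLOCK_SIZE];
    static int  filling;    /* index of the buffer the device is filling */

    /* Placeholders for the device driver. */
    static void start_transfer(char *dst) { (void)dst; }
    static void wait_transfer(void)       { }

    /* Double buffering: hand the completed buffer to the user process and
       immediately start filling the other one, overlapping the device
       transfer with the process's computation. */
    void os_read_block(char *user_area)
    {
        wait_transfer();                    /* buf[filling] is now full */
        int ready = filling;
        filling = 1 - filling;              /* swap the buffers' roles  */
        start_transfer(buf[filling]);       /* device fills one buffer...   */
        memcpy(user_area, buf[ready], BLOCK_SIZE); /* ...while the user     */
                                                   /* consumes the other    */
    }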
Circular Buffer
• When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual buffer being one unit in the circular buffer.

------------------------------------------------------------------------

RAID Level 1

3. Recovery from a failure is simple. When a drive fails, the data may still be accessed from the second drive.

The principal disadvantage of RAID 1 is the cost; it requires twice the disk space of the logical disk that it supports. RAID 1 is likely to be limited to drives that store system software and data and other highly critical files. In these cases, RAID 1 provides real-time backup of all data, so that in the event of a disk failure, all of the critical data is still immediately available.

In a transaction-oriented environment, RAID 1 can achieve high I/O request rates if the bulk of the requests are reads. In this situation, the performance of RAID 1 can approach double that of RAID 0. However, if a substantial fraction of the I/O requests are write requests, there may be no significant performance gain over RAID 0. RAID 1 may also provide improved performance over RAID 0 for data-transfer-intensive applications with a high percentage of reads. Improvement occurs if the application can split each read request so that both disk members participate.

• Redundancy is achieved by the simple expedient of duplicating all the data.
• There is no "write penalty".
• When a drive fails, the data may still be accessed from the second drive.
• The principal disadvantage is the cost.

RAID Level 2

RAID levels 2 and 3 make use of a parallel access technique. In a parallel access array, all member disks participate in the execution of every I/O request. Data striping is used; in the case of RAID 2 and 3, the strips are very small, often as small as a single byte or word.

With RAID 2, an error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks. Typically, a Hamming code is used, which is able to correct single-bit errors and detect double-bit errors. Although RAID 2 requires fewer disks than RAID 1, it is still rather costly: the number of redundant disks is proportional to the log of the number of data disks.

• Makes use of a parallel access technique
• Data striping is used
• Typically a Hamming code is used
• An effective choice in an environment in which many disk errors occur

RAID Level 3

RAID 3 is organized in a similar fashion to RAID 2. The difference is that RAID 3 requires only a single redundant disk, no matter how large the disk array. RAID 3 employs parallel access, with data distributed in small strips. Instead of an error-correcting code, a simple parity bit is computed for the set of individual bits in the same position on all of the data disks.

Redundancy: In the event of a drive failure, the parity drive is accessed and data is reconstructed from the remaining devices. Once the failed drive is replaced, the missing data can be restored on the new drive and operation resumed.

Performance: Because data are striped in very small strips, RAID 3 can achieve very high data transfer rates. Any I/O request will involve the parallel transfer of data from all of the data disks. For large transfers, the performance improvement is especially noticeable. On the other hand, only one I/O request can be executed at a time; thus, in a transaction-oriented environment, performance suffers.

• Requires only a single redundant disk, no matter how large the disk array
• Employs parallel access, with data distributed in small strips
• Can achieve very high data transfer rates

RAID Level 4

RAID levels 4 through 6 make use of an independent access technique. In an independent access array, each member disk operates independently, so that separate I/O requests can be satisfied in parallel. Because of this, independent access arrays are more suitable for applications that require high I/O request rates and are relatively less suited for applications that require high data transfer rates.

As in the other RAID schemes, data striping is used. In the case of RAID 4 through 6, the strips are relatively large. With RAID 4, a bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.

RAID 4 involves a write penalty when an I/O write request of small size is performed. Each time a write occurs, the array management software must update not only the user data but also the corresponding parity bits (see the parity sketch at the end of this section).

• Makes use of an independent access technique
• A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk
• Involves a write penalty when an I/O write request of small size is performed

RAID Level 5

RAID 5 is organized in a similar fashion to RAID 4. The difference is that RAID 5 distributes the parity strips across all disks. A typical allocation is a round-robin scheme: for an n-disk array, the parity strip is on a different disk for the first n stripes, and the pattern then repeats. The distribution of parity strips across all drives avoids the potential I/O bottleneck of the single parity disk found in RAID 4.

• Similar to RAID 4, but distributes the parity strips across all disks
• A typical allocation is a round-robin scheme
• Has the characteristic that the loss of any one disk does not result in data loss
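Both RAID 4 and RAID 5 rest on simple XOR parity. The sketch below (generic C, not any particular array implementation) shows the parity calculation, the read-modify-write update behind the small-write penalty, and one common round-robin convention for placing RAID 5 parity strips.

    #include <stdint.h>
    #include <stddef.h>

    /* Parity strip for one stripe: the bitwise XOR of all data strips. */
    void compute_parity(uint8_t *parity, uint8_t *const data[],
                        int ndata, size_t strip_len)
    {
        for (size_t i = 0; i < strip_len; i++) {
            uint8_t p = 0;
            for (int d = 0; d < ndata; d++)
                p ^= data[d][i];
            parity[i] = p;
        }
    }

    /* The small-write penalty: updating one strip requires reading the
       old data and old parity first, because
       new_parity = old_parity ^ old_data ^ new_data.
       That is two extra reads and one extra write per small write. */
    void update_parity(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data, size_t strip_len)
    {
        for (size_t i = 0; i < strip_len; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

    /* RAID 5 round-robin placement: the parity strip rotates across the
       n disks so no single disk becomes a bottleneck (one common
       convention; real arrays may rotate in the opposite direction). */
    int parity_disk(long stripe, int ndisks)
    {
        return (int)(stripe % ndisks);
    }

With this placement, the parity strip falls on a different disk for each of the first n stripes, and the pattern then repeats, matching the round-robin scheme described above.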