I/O Management and Disk Scheduling - Lecture Notes | CSE 410, Study notes of Operating Systems

Material Type: Notes; Class: Operating Systems; Subject: Computer Science & Engineering; University: Michigan State University; Term: Spring 2004;


Operating Systems — CSE 410, Spring 2004
I/O Management and Disk Scheduling
Stephen Wagner, Michigan State University

Categories of I/O Devices

• Human readable
• Machine readable
• Communications

Communication I/O Devices

• Networks
• Digital line drivers
• Modems

Differences in I/O Devices

• Data rate
• Application
• Complexity of control

Data Rate Differences

• Different I/O devices have different data rates
• There may be several orders of magnitude difference between the data rates of different devices
• The data rate affects how the OS communicates with the I/O device

Complexity Differences

• Unit of transfer
– Data may be transferred as a stream of bytes for a terminal or in larger blocks for a disk
• Data representation
• Error conditions

Techniques for Performing I/O

• Programmed I/O
– Processor busy-waits for the I/O to complete
– Even the fastest I/O devices are slow compared to the processor
• Interrupt-driven I/O
– An I/O command is issued
– The processor continues executing another process
– The I/O module sends an interrupt when it is done

Direct Memory Access (DMA)

• The DMA module controls the exchange of data between main memory and an I/O device
• The processor is interrupted only after the entire block has been transferred
• The processor does not have to be involved in moving every piece of data

Evolution of I/O

• I/O module is a separate processor
• I/O processor
– The I/O module has its own memory
– It's a computer in its own right
• Local disk → NetApp

Details of DMA

• The processor sends the following information to the DMA module:
– Whether a read or a write is requested
– The address of the I/O device involved
– The starting location in memory to read from or write to (physical address)
– The number of words to be read or written

[Figure 11.2: Typical DMA Block Diagram — data count, data register, address register, and control logic, connected to the data lines, address lines, DMA request/acknowledge, interrupt, read, and write signals]
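The four request parameters listed above can be illustrated with a toy simulation (hypothetical code, not from the lecture — the names are invented): the CPU fills in a descriptor, the DMA "module" moves the whole block on its own, and only one interrupt is raised at the end rather than one per word.

```python
# Toy model of a DMA block transfer (illustrative only; names are invented).
# The CPU fills in a request descriptor, then the DMA "module" moves the
# whole block without further CPU involvement and raises one interrupt.

from dataclasses import dataclass

@dataclass
class DMARequest:
    is_read: bool      # read from device into memory, or write to device
    device: list       # stands in for the I/O device's data
    mem_start: int     # starting physical address in "memory"
    word_count: int    # number of words to transfer

def dma_transfer(memory: list, req: DMARequest) -> str:
    for i in range(req.word_count):
        if req.is_read:
            memory[req.mem_start + i] = req.device[i]
        else:
            req.device[i] = memory[req.mem_start + i]
    return "interrupt: transfer complete"   # one interrupt per block, not per word

memory = [0] * 8
disk_block = [10, 20, 30, 40]
status = dma_transfer(memory, DMARequest(True, disk_block, 2, 4))
print(status)   # interrupt: transfer complete
print(memory)   # [0, 0, 10, 20, 30, 40, 0, 0]
```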
DMA: Cycle Stealing

• DMA takes control of the system bus from the CPU to transfer data to and from memory
• A cycle is "stolen" from the CPU: the CPU does not have to do any work, but it cannot use the bus while the DMA module is using it
• The instruction cycle is suspended so data can be transferred
• The CPU pauses for one bus cycle
• No interrupts occur

[Figure 11.3: DMA and Interrupt Breakpoints During an Instruction Cycle — DMA breakpoints can occur before each processor cycle (fetch instruction, decode instruction, fetch operand, execute instruction, store result), while the interrupt breakpoint occurs only after the instruction cycle completes]

DMA: Cycle Stealing (continued)

• Cycle stealing causes the CPU to execute more slowly
• The number of required bus cycles can be cut by integrating the DMA and I/O functions
• Hardware can provide a path between the DMA module and the I/O module that does not include the system bus
• The DMA module can then talk to the I/O module without interfering with the processor

Operating System Design Issues

• Efficiency
– Most I/O devices are extremely slow compared to main memory and the CPU
– Multiprogramming allows some processes to wait on I/O while another process executes
– I/O still cannot keep up with processor speed
– Swapping is used to bring in additional ready processes
• Generality
– It is desirable to handle all I/O devices in a uniform manner
– Hide most of the details of device I/O in lower-level routines so that processes see devices in general terms such as read, write, open, close, lock, unlock

[Figure 11.5: A Model of I/O Organization — (a) local peripheral device: user processes → logical I/O → device I/O → scheduling & control → hardware; (b) communications port: user processes → communication architecture → device I/O → scheduling & control → hardware; (c) file system: user processes → directory management → file system → physical organization → device I/O → scheduling & control → hardware]

I/O Buffering
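The "generality" goal — one uniform read/write view over very different devices — can be sketched as follows (hypothetical code; the class and device names are invented for illustration, and a real driver layer is far more involved):

```python
# Minimal sketch of a uniform device interface (names are invented).
# User code sees only read/write; device-specific details live in the
# lower-level driver classes.

class Device:
    def read(self, n: int) -> bytes: raise NotImplementedError
    def write(self, data: bytes) -> int: raise NotImplementedError

class Terminal(Device):
    """Character device: transfers a stream of bytes."""
    def __init__(self, typed: bytes):
        self.buf = typed
    def read(self, n: int) -> bytes:
        data, self.buf = self.buf[:n], self.buf[n:]
        return data
    def write(self, data: bytes) -> int:
        return len(data)              # pretend the bytes were displayed

class Disk(Device):
    """Block device: transfers fixed-size blocks."""
    BLOCK = 4
    def __init__(self, nblocks: int):
        self.blocks = [bytes(self.BLOCK)] * nblocks
    def read(self, n: int) -> bytes:
        return b"".join(self.blocks[: n // self.BLOCK])
    def write(self, data: bytes) -> int:
        self.blocks[0] = data[: self.BLOCK]
        return self.BLOCK

def copy(src: Device, dst: Device, n: int) -> int:
    """Works on any pair of devices -- the point of a uniform interface."""
    return dst.write(src.read(n))

term = Terminal(b"hello")
disk = Disk(nblocks=2)
print(copy(term, disk, 4))    # 4
```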
• The operating system assigns a buffer in main memory for an I/O request
• This allows the OS to evict the pages of a process waiting on I/O
• Block-oriented buffering
– Input transfers are made to the buffer
– The block is moved to user space when needed
– Another block is moved into the buffer

[Figure 11.6: I/O Buffering Schemes (input) — (a) no buffering: the device transfers directly to the user process; (b) single buffering: device → OS buffer → move to user process; (c) double buffering: the device fills one OS buffer while the other is moved to the user process; (d) circular buffering: the device fills a ring of OS buffers]

Double Buffering

• With a single buffer, the user process must copy the data out of the buffer before the I/O system reads the next block into it
• This may prevent us from using the processor and the I/O device efficiently
• The solution is to use two buffers
• A process can transfer data from one buffer while the operating system empties or fills the other

Disks

[Figure 11.17: Disk Data Layout — concentric tracks divided into sectors S1…SN, separated by inter-sector and inter-track gaps]

Disk Performance Parameters

• To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector
• Seek time
– The time it takes to position the head at the desired track
• Rotational delay (latency)
– The time it takes for the beginning of the sector to reach the head

[Figure 11.7: Timing of a Disk I/O Transfer — wait for device, wait for channel, then seek, rotational delay, and data transfer while the device is busy]

Timing Comparison

• A file occupies all 320 sectors on each of 8 adjacent tracks (2,560 sectors in all)
• Sequential access
– Average seek: 10 ms
– Rotational delay: 3 ms
– Read first track (320 sectors): 6 ms
– Read each subsequent track: 3 + 6 = 9 ms
– Total time = (10 + 3 + 6) + 7 × 9 = 82 ms
• Random access
– Average seek: 10 ms
– Rotational delay: 3 ms
– Read one sector: 0.01875 ms
– Total time = 8 × 320 × (10 + 3 + 0.01875) ms = 33,328 ms

Disk Scheduling Policies

• FIFO
– Processes requests sequentially
– Fair to all processes
– Approaches random scheduling in performance if there are many processes
• Priority
– The goal is not to optimize disk usage but to meet other objectives
– Short batch jobs may have higher priority
– Provides good interactive response time

Disk Scheduling and Elevators

• Moving a disk head is somewhat like moving an elevator
• Which floor should the elevator head to next?
• With FIFO, you would go straight to your floor, not stopping to pick people up or let them off on the way
• With SSTF, the elevator may bounce around the middle floors and only rarely make it to the top or bottom

Disk Scheduling Policies (continued)

• SCAN (elevator algorithm)
– The head moves in one direction only, serving all requests it encounters in the order encountered
– When it reaches the last track, or the last track with any requests on it, it reverses direction
• C-SCAN (circular SCAN)
– The head moves in one direction only, serving all requests it encounters in the order encountered
– When it reaches the last track, or the last track with any requests on it, it returns to the first track
– It serves requests only while moving in the "forward" direction

Disk Scheduling Algorithms

Name         Description                        Remarks
--- Selection according to requestor ---
RSS          Random scheduling                  For analysis and simulation
FIFO         First in, first out                Fairest of them all
PRI          Priority by process                Control outside of disk queue management
LIFO         Last in, first out                 Maximize locality and resource utilization
--- Selection according to requested item ---
SSTF         Shortest service time first        High utilization, small queues
SCAN         Back and forth over disk           Better service distribution
C-SCAN       One way with fast return           Lower service variability
N-step-SCAN  SCAN of N records at a time        Service guarantee
FSCAN        N-step-SCAN with N = queue size    Load sensitive
             at beginning of SCAN cycle

RAID

• RAID: Redundant Array of Independent (Inexpensive) Disks
• The rate of improvement in secondary storage performance is slow
• Uses multiple disks to improve efficiency and reliability

RAID 0, 1, and 2

[Figure 11.9: RAID Levels (page 1 of 2) — (a) RAID 0 (non-redundant): strips 0–15 striped across four disks; (b) RAID 1 (mirrored): the same strips duplicated on a second set of disks; (c) RAID 2 (redundancy through Hamming code): data bits b0–b3 with check disks f0(b), f1(b), f2(b)]

RAID 3

[Figure 11.9: RAID Levels (page 2 of 2), part (d) — RAID 3 (bit-interleaved parity): data bits b0–b3 with a single parity disk P(b)]
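Parity in RAID levels 3 through 5 is a bitwise XOR across the data strips, which lets any single lost strip be rebuilt from the survivors. A small sketch (hypothetical code, not from the lecture):

```python
# XOR parity as used conceptually in RAID 3/4/5 (illustrative sketch).
# The parity strip is the bitwise XOR of the data strips; XOR-ing the
# parity with the surviving strips regenerates a single lost strip.

def parity(strips: list) -> bytes:
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]
p = parity(data)                      # stored on the parity disk

# Disk 1 fails: rebuild its strip from the other strips plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])             # True
```

The same XOR trick is why a RAID 4/5 small write needs a read-modify-write of the parity block: the new parity is old parity XOR old data XOR new data.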
RAID 4, 5, and 6

[Figure 11.9: RAID Levels (page 2 of 2), parts (e)–(g) — (e) RAID 4 (block-level parity): blocks 0–15 striped across four disks with a dedicated parity disk holding P(0–3)…P(12–15); (f) RAID 5 (block-level distributed parity): parity blocks P(0–3)…P(16–19) rotated across all disks; (g) RAID 6 (dual redundancy): each stripe carries two independent check blocks, P and Q]

Disk Cache

• A buffer in main memory used to store disk sectors
• Contains a copy of some of the sectors that are stored on disk
• Similar to virtual memory: reads from and writes to memory are much faster than reads from and writes to disk
• Not all sectors can be stored in memory, so we sometimes have to evict sectors

Replacement Policies for Disk Caches

• Least Recently Used (LRU)
• Least Frequently Used (LFU)

UNIX SVR4 I/O

[Figure 11.14: UNIX I/O Structure — the file subsystem sits above the buffer cache (block devices) and the character devices, both of which are handled by device drivers]
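The LRU replacement policy described above can be sketched with a tiny sector cache (hypothetical code; a real buffer cache also handles write-back, pinning, and concurrency):

```python
# Minimal LRU disk cache sketch (illustrative only).
# OrderedDict keeps sectors in recency order: move_to_end on every hit,
# and evict the least recently used sector when the cache is full.

from collections import OrderedDict

class LRUDiskCache:
    def __init__(self, capacity: int, disk: dict):
        self.capacity = capacity
        self.disk = disk                     # sector number -> sector data
        self.cache = OrderedDict()

    def read(self, sector: int) -> bytes:
        if sector in self.cache:             # hit: just refresh recency
            self.cache.move_to_end(sector)
            return self.cache[sector]
        data = self.disk[sector]             # miss: slow "disk" access
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        self.cache[sector] = data
        return data

disk = {n: bytes([n]) * 4 for n in range(10)}
cache = LRUDiskCache(capacity=2, disk=disk)
cache.read(1); cache.read(2); cache.read(1); cache.read(3)
print(list(cache.cache))   # [1, 3] -- sector 2 was least recently used
```

Swapping `popitem(last=False)` for a minimum-count lookup over per-sector counters would turn the same skeleton into the LFU policy.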