Operating Systems
Lecture 25: I/O Management

Roadmap
– I/O Devices – 11.1
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID
– Disk Cache

Reading a Disk Sector: Step 1
(Figure: CPU chip – register file, ALU, bus interface – connected over the I/O bus to a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and main memory.)
• The CPU initiates a disk read by writing a command, a logical block number, and a destination memory address to a port (address) associated with the disk controller.

Reading a Disk Sector: Step 2
• The disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.

Reading a Disk Sector: Step 3
• When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., asserts a special "interrupt" pin on the CPU).

Disk Operation (Single-Platter View)
• The disk surface spins at a fixed rotational rate around the spindle.
• By moving radially, the arm can position the read/write head over any track.
• The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air.

Disk Operation (Multi-Platter View)
• The read/write heads are attached to a single arm and move in unison from cylinder to cylinder.

Disk Performance Parameters
• The actual details of a disk I/O operation depend on many things.
– In general, a disk I/O transfer consists of wait time, seek time, rotational delay, and data transfer time.
Disk Access Time Example
• Given:
– Rotational rate = 7,200 RPM
– Average seek time = 9 ms
– Avg # sectors/track = 400
• Derive the average time to access a target sector:
– Tavg rotation = 1/2 × (60 secs/7,200 rotations) × 1000 ms/sec ≈ 4 ms
– Tavg transfer = (60 secs/7,200 rotations) × (1/400 rotations/sector) × 1000 ms/sec ≈ 0.02 ms
– Taccess = Tavg seek + Tavg rotation + Tavg transfer = 9 ms + 4 ms + 0.02 ms ≈ 13 ms

Disk Scheduling Policies
• Access time is dominated by seek time and rotational latency.
– If sector access requests select tracks at random, disk performance is poor.
• To compare disk scheduling policies, consider a disk head initially located at track 100.
– Assume a disk with 200 tracks and a request queue holding randomly ordered requests.
• The requested tracks, in the order received by the disk scheduler, are:
– 55, 58, 39, 18, 90, 160, 150, 38, 184

First-In, First-Out (FIFO)
• Processes requests sequentially, in order of arrival
• Fair to all processes
• Approaches random scheduling in performance if there are many processes
• Service order: 100 → 55, 58, 39, 18, 90, 160, 150, 38, 184
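The access-time arithmetic in the example above can be checked with a short script. This is a sketch (the function name is mine); note it keeps the exact rotation figure of 4.17 ms, where the slide rounds to 4 ms, so the total comes out as 13.19 ms rather than 13.02 ms.

```python
# Sketch: average disk access time = avg seek + half a rotation + one-sector transfer.
def avg_access_time_ms(rpm: float, avg_seek_ms: float, sectors_per_track: int) -> float:
    ms_per_rotation = 60_000 / rpm                     # 60,000 ms per minute
    t_rotation = ms_per_rotation / 2                   # on average, half a rotation
    t_transfer = ms_per_rotation / sectors_per_track   # one sector passes under the head
    return avg_seek_ms + t_rotation + t_transfer

# Values from the example: 7,200 RPM, 9 ms average seek, 400 sectors/track.
print(round(avg_access_time_ms(7_200, 9.0, 400), 2))   # → 13.19
```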
Comparison of Disk Scheduling Algorithms
(All policies start at track 100; SCAN and C-SCAN move in the direction of increasing track number.)

(a) FIFO               (b) SSTF               (c) SCAN               (d) C-SCAN
Next    Tracks         Next    Tracks         Next    Tracks         Next    Tracks
track   traversed      track   traversed      track   traversed      track   traversed
 55        45           90        10          150        50          150        50
 58         3           58        32          160        10          160        10
 39        19           55         3          184        24          184        24
 18        21           39        16           90        94           18       166
 90        72           38         1           58        32           38        20
160        70           18        20           55         3           39         1
150        10          150       132           39        16           55        16
 38       112          160        10           38         1           58         3
184       146          184        24           18        20           90        32
Average seek           Average seek           Average seek           Average seek
length = 55.3          length = 27.5          length = 27.8          length = 35.8
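The four policies in the comparison can be simulated in a few lines. The sketch below (function names are mine) reproduces the average seek lengths in the table from the same request queue. (SSTF comes out as 27.56 exactly, which the table shows as 27.5.)

```python
# Simulate FIFO, SSTF, SCAN, and C-SCAN on the example request queue.

def fifo(start, requests):
    # Service in arrival order.
    return list(requests)

def sstf(start, requests):
    # Always pick the pending request closest to the current head position.
    pending, order, head = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(start, requests):
    # Sweep toward increasing track numbers, then reverse direction.
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

def cscan(start, requests):
    # One-way sweep upward, then a fast return to the lowest pending track.
    up = sorted(t for t in requests if t >= start)
    low = sorted(t for t in requests if t < start)
    return up + low

def avg_seek(start, order):
    # Mean number of tracks traversed per request.
    total, head = 0, start
    for t in order:
        total += abs(t - head)
        head = t
    return total / len(order)

start = 100
queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
for name, policy in [("FIFO", fifo), ("SSTF", sstf), ("SCAN", scan), ("C-SCAN", cscan)]:
    print(f"{name:7s} {avg_seek(start, policy(start, queue)):.1f}")
```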
Disk Scheduling Algorithms

Table 11.3 Disk Scheduling Algorithms

Name          Description                                  Remarks
Selection according to requestor:
RSS           Random scheduling                            For analysis and simulation
FIFO          First in, first out                          Fairest of them all
PRI           Priority by process                          Control outside of disk queue management
LIFO          Last in, first out                           Maximize locality and resource utilization
Selection according to requested item:
SSTF          Shortest service time first                  High utilization, small queues
SCAN          Back and forth over disk                     Better service distribution
C-SCAN        One way with fast return                     Lower service variability
N-step-SCAN   SCAN of N records at a time                  Service guarantee
FSCAN         N-step-SCAN with N = queue size at           Load sensitive
              beginning of SCAN cycle
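The N-step-SCAN entry in Table 11.3 can be sketched as follows. This is a simplified illustration (names and the segment size are mine, and the queue is treated as static): the pending queue is frozen into sub-queues of at most N requests, and each sub-queue is serviced with one SCAN sweep, so a request's wait is bounded by the sub-queues ahead of it.

```python
# Sketch of N-step-SCAN: service the queue in segments of at most N requests,
# each segment with one SCAN pass; later arrivals would join a later segment.

def scan_order(head, tracks):
    # One SCAN pass: sweep upward from the head, then downward.
    up = sorted(t for t in tracks if t >= head)
    down = sorted((t for t in tracks if t < head), reverse=True)
    return up + down

def n_step_scan(head, queue, n):
    order = []
    for i in range(0, len(queue), n):
        segment = queue[i:i + n]          # requests frozen for this sub-queue
        for track in scan_order(head, segment):
            order.append(track)
            head = track
    return order

print(n_step_scan(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], 4))
# → [58, 55, 39, 18, 38, 90, 150, 160, 184]
```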
Roadmap
– I/O Devices
– Organization of the I/O Function
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– RAID – 11.6

RAID 0 – Striped
• Not a true RAID – no redundancy
• Disk failure is catastrophic
• Very fast due to parallel reads and writes

RAID 1 – Mirrored
• Redundancy through duplication instead of parity
• Read requests can be made in parallel
• Simple recovery from disk failure

RAID 4 – Block-Level Parity
• A bit-by-bit parity strip is calculated across corresponding strips on each data disk
• The parity bits are stored in the corresponding strip on the parity disk

Next Lecture
• From Chapter 11 we covered only 11.1, 11.5, and 11.6
• Next lecture we start Chapter 12 on File Management

RAID 2 (Using Hamming Code)
• Synchronized disk rotation
• Bit-level data striping with extremely small strips
• A Hamming code is used to correct single-bit errors and detect double-bit errors

RAID 3 – Bit-Interleaved Parity
• Similar to RAID 2, but stores simple bit-interleaved parity on a single redundant drive instead of a Hamming code

Least Frequently Used (LFU)
• The block that has experienced the fewest references is replaced
• A counter is associated with each block
• The counter is incremented each time the block is accessed
• When replacement is required, the block with the smallest count is selected

Frequency-Based Replacement
(Figure: (a) FIFO – the cache is divided into a new section (MRU end) and an old section (LRU end). A re-reference to a block in the new section leaves its count unchanged; a re-reference in the old section sets count := count + 1; on a miss, the new block is brought in with count := 1. (b) Use of three sections – new, middle, and old.)
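Returning to the RAID slides above: the parity used by RAID 3 and RAID 4 is a plain XOR across corresponding strips, which is what makes single-disk recovery simple. A minimal sketch (strip contents and function names are invented for illustration):

```python
# Sketch: a parity strip is the bytewise XOR of the corresponding data strips,
# so any single lost strip can be rebuilt by XOR-ing the survivors with parity.

def parity_strip(strips):
    # XOR all strips together, byte by byte (all strips same length).
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving_strips, parity):
    # XOR of the parity strip with all surviving strips yields the lost strip.
    return parity_strip(list(surviving_strips) + [parity])

strips = [b"disk0", b"disk1", b"disk2"]
p = parity_strip(strips)
lost = strips[1]                                  # pretend disk 1 failed
recovered = rebuild([strips[0], strips[2]], p)
assert recovered == lost
```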
LRU Disk Cache Performance
(Figures: disk cache miss rate (%) plotted against cache size in megabytes for several systems, including IBM MVS, IBM SVS, and VAX UNIX; the miss rate falls as cache size grows.)
Figure 11.10 Some Disk Cache Performance Results Using LRU
Figure 11.11 Disk Cache Performance Using Frequency-Based Replacement [ROBI90]
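The LRU policy measured in Figure 11.10 can be sketched with an ordered map, so the least recently used block is always first in line for eviction. A minimal illustration (the class, capacity, and block contents are invented):

```python
from collections import OrderedDict

# Sketch of an LRU disk cache: an ordered map from block number to data,
# kept in recency order so eviction pops from the least-recently-used end.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()            # block number -> cached data

    def get(self, block_no):
        if block_no not in self.blocks:
            return None                        # miss: caller reads disk, then put()
        self.blocks.move_to_end(block_no)      # mark as most recently used
        return self.blocks[block_no]

    def put(self, block_no, data):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        self.blocks[block_no] = data

cache = LRUCache(2)
cache.put(1, "a")
cache.put(2, "b")
cache.get(1)          # block 1 is now most recently used
cache.put(3, "c")     # evicts block 2, the least recently used
print(cache.get(2))   # → None
```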