Operating Systems: Lecture 20 - Disk Structure, Addressing, and Scheduling

Lecture notes from a university operating systems course, focusing on storage. The lecture covers the need for storage, the structure of disks, disk addressing, disk scheduling, and the various types of secondary storage. It also discusses the history of storage and the future of technologies such as solid-state drives and holographic storage.

Partial preview of the text

Operating Systems
CMPSC 473 Storage
April 3, 2008 - Lecture 20

Outline
• Disk structure: physical and logical
• Disk addressing
• Disk scheduling
• Management

Past & Present in Storage
• 1956: IBM 305 RAMAC
  – 5 MB capacity (50 disks, each 24" in diameter)
• 2008: Seagate Savvio 15K
  – 73.4 GB capacity, 2.5" diameter
  – can read/write the complete works of Shakespeare 15 times per second

Storage Hierarchy
• [figure: pyramid from expensive and fast to cheap and slow: registers, L1 cache, L2 cache, main memory, secondary storage, tertiary storage]

Secondary Storage
• Generally, magnetic disks provide the bulk of secondary storage in systems
  – future alternative: solid-state drives? (e.g., the MacBook Air)
  – MEMS and NEMS (nanotech)
  – holographic storage: data read from intersecting laser beams (www.inphase-technologies.com)

Disk Operation
• Platters start moving from rest (spin-up time)
  – lots of mass to start moving
• Heads find the right track (seek time)
  – the arm is powered by an actuator motor; it accelerates and coasts, then slows down and settles on the correct track (servo-guided)
• The disk rotates until the correct sector is found (rotational latency)
  – contingent on platter diameter and RPM (the Savvio 15K rotates 250 times per second)
• Have to stop the platters (spin-down time)

Addressing Disks
• Old days: CHS (cylinder-head-sector)
  – the physical characteristics of the disk are supplied to the operating system
  – the OS specifies exactly where on the physical disk to read and write data
• Nowadays: cylinders are not uniform
  – more data can be stored on outer tracks than on inner tracks (zoned bit recording)
  – why? a function of constant angular velocity (CAV) vs. the constant linear velocity (CLV) found in CD-ROMs

Logical Block Addressing (LBA)
• The OS sees the drive as an array of blocks
  – the first block is LBA 0, the next block is LBA 1, and so on
  – the disk firmware takes care of managing the physical location of the data
• Block: the smallest unit of data accessible through the OS
  – can be the size of a sector (512 bytes) up to the size of a page (often 4 KB); defined by the kernel

First-Come First-Served (FCFS)
• Service the requests in order of arrival
• Head movement of 640 cylinders for the example request queue (a simulation reproducing the FCFS and SSTF totals follows after these slides)

Shortest Seek Time First (SSTF)
• Service the request with the minimum seek time from the current head position (like SJF)
• Head movement of 236 cylinders

SCAN (Elevator) Algorithm
• The arm moves from one end of the disk to the other, then reverses direction (like an elevator)
• Head movement of 208 cylinders

Choosing a Disk Scheduling Algorithm
• SSTF: increased performance over FCFS
• SCAN, C-SCAN: good for heavy loads
  – less chance of starvation
• C-LOOK: good overall
• File allocation plays a role
  – contiguous allocation limits head movement
• Note: these figures consider only seek time
  – rotational latency is also important, but hard for the OS to account for (it doesn't have the physical drive characteristics)
  – drive controllers implement some queueing and request coalescing

Why not have the drive controller do all the scheduling?
• It would be more efficient, but...
• The OS knows about constraints that the disk doesn't
  – demand paging takes priority over application I/O
  – writes take priority over reads if the cache is almost full
  – guaranteeing write ordering (e.g., journaling, data flushing)
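The head-movement totals on the scheduling slides refer to a request-queue figure that is not included in this preview. As a minimal sketch, assuming the classic textbook example (head starting at cylinder 53; requests 98, 183, 37, 122, 14, 124, 65, 67), which happens to reproduce the 640- and 236-cylinder totals quoted above, FCFS and SSTF can be compared like this:

```python
# Sketch: total head movement under two disk-scheduling policies.
# The request queue below is an assumption (the classic textbook example);
# the slides' own queue appears only in a figure not included here.

def fcfs(head, requests):
    """Head movement when requests are serviced in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf(head, requests):
    """Head movement when the closest pending request is served next."""
    pending = list(requests)
    total = 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

if __name__ == "__main__":
    head = 53                                    # assumed starting cylinder
    queue = [98, 183, 37, 122, 14, 124, 65, 67]  # assumed request queue
    print("FCFS:", fcfs(head, queue), "cylinders")  # 640 with these inputs
    print("SSTF:", sstf(head, queue), "cylinders")  # 236 with these inputs
```

The SCAN figure of 208 cylinders depends on whether the head sweeps all the way to the disk edge or stops at the last pending request, so it is left out of this sketch.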
Aside: Linux I/O Schedulers
• Linus Elevator (default in the 2.4 kernel)
  – merges adjacent requests and sorts the request queue
  – can lead to starvation in some cases, though: a big push to change it for the 2.6 kernel
• Deadline I/O Scheduler
  – merges and sorts requests, plus an expiration timer
  – multiple queues to minimize seeks while ensuring requests don't starve
• Anticipatory I/O Scheduler
  – waits a few ms after a read request to see if another one is made (high probability); acts like the deadline scheduler otherwise
  – loses time if wrong, but a big win if right

Low-Level (Physical) Formatting
• Divides the disk into sectors for the disk controller to read and write
  – sector numbers, error-correcting codes (ECC), and other identifying information (e.g., servo control data) are written to each sector
• Usually only done at the factory
  – can restore the factory configuration (reinitialize)

High-Level (Logical) Formatting
• Before formatting, the OS needs to partition the disk into 1 or more groups of cylinders
  – why more than 1? root vs. swap partitions, dual boot, etc.
• Write a file system onto the disk
  – structures such as a file allocation table (FAT - DOS) or inodes (UNIX)
• Write the boot block (boot sector)

Boot Process
• Bootstrapping starts from a program in ROM
• The boot loader reads a bootstrap program from the boot block
  – on PCs: the Master Boot Record (MBR), the first sector on the disk (446 bytes of boot code, then a 64-byte partition table; a parsing sketch follows after these slides)
• Second-stage boot loader: a program whose location is pointed to from the MBR
  – NTLDR on Windows, LILO/GRUB on Linux
  – chooses the partition to boot from and starts the OS

Linux Swap Structures
• [figure: a swap area (a swap partition or swap file) divided into page slots, tracked by a swap map]

Attaching Disks to Networks
• NAS: network-attached storage - RPCs between host and storage
  – e.g., NFS (what we use), iSCSI
• SAN: storage area network
  – multiple connected storage arrays; servers connect directly to the SAN
• Becoming more like each other
  – e.g., the Open Storage Networking proposal (from NetApp) combines elements of each

SCSI vs IDE/ATA
• Originally the difference was speed, but with the serial ATA (SATA) interface, speeds have caught up
• SCSI supports more drives on a bus, but SATA can be beneficial for small numbers of drives
• Why pay more for SCSI? The disks are manufactured differently
  – assumed to be server (enterprise) vs. personal use
• Often faster (e.g., 15K disks are usually only SCSI)
• SCSI drives are better constructed (O-ring sealing, air flow, more rigidity); stronger actuator motors; more reliable
• ATA is cheap, though: a 1 TB SATA drive costs less than a 73 GB SCSI drive
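To make the MBR layout from the boot-process slide concrete, here is a minimal sketch that checks the boot signature and parses the four 16-byte partition entries that follow the 446 bytes of boot code. The file name mbr.img is only a placeholder for any 512-byte dump of a disk's first sector.

```python
# Sketch: parsing the partition table of a 512-byte MBR sector, following the
# layout on the boot-process slide: 446 bytes of boot code, then four 16-byte
# partition entries, then the 0x55AA boot signature.
import struct

def parse_mbr(sector: bytes):
    if len(sector) != 512:
        raise ValueError("an MBR is exactly one 512-byte sector")
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    partitions = []
    for i in range(4):                        # four primary partition entries
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag, ptype, lba_start, num_sectors = struct.unpack_from(
            "<B3xB3xII", entry)               # skip the legacy CHS fields
        if ptype != 0:                        # type 0 marks an unused slot
            partitions.append({
                "bootable": boot_flag == 0x80,
                "type": ptype,
                "lba_start": lba_start,       # start expressed as an LBA
                "sectors": num_sectors,
            })
    return partitions

if __name__ == "__main__":
    # Hypothetical image file of the first sector; any 512-byte dump works.
    with open("mbr.img", "rb") as f:
        for p in parse_mbr(f.read(512)):
            print(p)
```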