Hard Disk Technology: Understanding Disk Tracks, Sectors, and RAID Levels (study notes, Computer Architecture and Organization)

An in-depth exploration of hard disk technology, covering topics such as glass substrates, read and write mechanisms, data organization, constant angular velocity (CAV), multiple zone recording, disk track format, disk performance parameters, and RAID levels. Learn about the structure of hard disks, the role of glass in improving their reliability, and the techniques used to increase storage capacity and improve data access.

Typology: Study notes · Academic year 2010/2011 · Uploaded by hamit1990 on 09/02/2011
Device Subsystems

Magnetic Disk
• A disk is a circular platter constructed of a non-magnetic material called the substrate (traditionally aluminum), coated with a magnetizable material (iron oxide, i.e. rust).
• Glass substrates are now used. Their advantages:
  – Improved surface uniformity, which increases reliability
  – Reduction in surface defects, which reduces read/write errors
  – Greater ability to withstand shock and damage
  – Better stiffness

Constant Angular Velocity (CAV)
• The speed at which bits pass under the head differs between a bit near the center of the disk and a bit near the outside. This variation can be avoided by increasing the spacing between bits recorded in the outer segments of the disk.
• The information can then be scanned at the same rate by rotating the disk at a fixed speed (CAV).
• Advantages:
  – Individual blocks of data can be directly addressed by track and sector.
  – The head can be moved quickly to a specific address.
• Disadvantage: the amount of data that can be stored on the long outer tracks is the same as on the short inner tracks, so storage density is reduced.

Multiple Zone Recording
• To increase density, the surface is divided into a number of zones.
• Within a zone, the number of bits per track is constant.
• Zones farther from the center contain more bits per track than zones closer to the center.
• Storage capacity increases, at the cost of more complex circuitry.
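The capacity gain from multiple zone recording can be illustrated with a small back-of-the-envelope calculation. All of the geometry numbers below (track count, zone count, bits per track, per-zone scaling) are invented for illustration only:

```python
# Illustrative comparison of CAV vs. multiple zone recording capacity.
# All geometry numbers are assumptions made up for this sketch.

TRACKS = 10_000
BITS_INNER_TRACK = 500_000   # bits that fit on the innermost track

# CAV: every track holds the same number of bits as the innermost one.
cav_capacity = TRACKS * BITS_INNER_TRACK

# Multiple zone recording: split the tracks into zones; tracks in outer
# zones hold more bits (here assumed to grow 10% per zone outward).
ZONES = 10
mzr_capacity = 0
for z in range(ZONES):
    tracks_in_zone = TRACKS // ZONES
    bits_per_track = int(BITS_INNER_TRACK * (1 + z * 0.1))
    mzr_capacity += tracks_in_zone * bits_per_track

print(f"CAV capacity: {cav_capacity} bits")
print(f"MZR capacity: {mzr_capacity} bits")
print(f"gain: {mzr_capacity / cav_capacity:.2f}x")
```

With these made-up numbers the zoned layout stores about 1.45 times as much as pure CAV, at the cost of the more complex circuitry the notes mention.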
(Figure: CAV vs. multiple zone recording track layouts)

Disk Track Format
• ID field – used to locate a particular sector
• Synch byte – delimits the beginning of the field
• Track number – identifies a track on the surface
• Head number – identifies a head
• The ID and data fields carry error-detecting codes (CRC)

Disk Performance Parameters
• Seek time: on a movable-head system, the time it takes to position the head at the desired track
• Rotational delay (latency): the time it takes for the beginning of the sector to reach the head
• Access time = seek time + rotational delay: the time it takes to get into position to read or write
• Transfer time: the time required to transfer the data,
  T = b / (rN), where b is the number of bytes to be transferred, r is the rotation speed (revolutions per second), and N is the number of bytes on a track
• Average access time:
  Ta = Ts + 1/(2r) + b/(rN), where Ts is the average seek time

RAID (Redundant Array of Independent Disks)
• Originally "redundant array of inexpensive disks"
• A multiple-disk database design; not a hierarchy
• 7 levels (6 in common use)
• A set of physical disk drives viewed by the OS as a single logical drive
• Data are distributed across the physical drives of the array
• Redundant disk capacity is used to store parity information, which makes data recoverable after a disk failure
• Goals: improve access time and improve reliability

RAID Level 0
• Not a true member of the RAID family – it includes no redundancy; it is used to improve performance.
• User and system data are distributed across all disks in the array in strips.
• Imagine a large logical disk containing all the data. It is divided into strips (physical blocks or sectors) that are mapped round-robin to the strips of the array.
• A set of logically consecutive strips that maps exactly one strip to each array member is referred to as a stripe.
• + If two I/O requests are pending for two different blocks of data, there is a good chance the blocks reside on different disks and can be serviced in parallel.
• + If a single I/O request is for multiple logically contiguous strips, up to n strips can be handled in parallel.
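The timing formulas above can be made concrete with a worked example. The drive parameters below (4 ms seek, 7200 RPM, 500 sectors of 512 bytes per track, a 4 KiB transfer) are assumptions chosen only to exercise the formulas:

```python
# Worked example of T = b/(rN) and Ta = Ts + 1/(2r) + b/(rN).
# All drive parameters are illustrative assumptions.

Ts = 0.004          # average seek time: 4 ms
rpm = 7200
r = rpm / 60        # rotation speed in revolutions per second (120)
N = 512 * 500       # bytes per track: 500 sectors of 512 bytes (assumed)
b = 4096            # bytes to transfer: one 4 KiB block

rotational_delay = 1 / (2 * r)   # on average, half a revolution
transfer_time = b / (r * N)      # T = b / (rN)
Ta = Ts + rotational_delay + transfer_time

print(f"rotational delay: {rotational_delay * 1000:.2f} ms")
print(f"transfer time:    {transfer_time * 1000:.3f} ms")
print(f"average access:   {Ta * 1000:.2f} ms")
```

Note that for a small transfer the seek time and rotational delay dominate: the 4 KiB transfer itself takes only about 0.13 ms of the roughly 8.3 ms total.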
RAID Level 1 (Mirroring)
(Figure: RAID Level 1 – each strip is duplicated on a mirror disk)

RAID Level 2 (Redundancy through Hamming Code)
• Utilizes parallel access techniques – all disks participate in the execution of every I/O request.
• The spindles of the individual drives are synchronized so that each disk head is in the same position on every disk at any given time.
• Data striping uses very small strips (a single byte or word).
• An error-correcting code is calculated across corresponding bits on each data disk, and the code bits are stored in the corresponding bit positions on multiple parity disks.
• With a Hamming code, the number of parity (redundant) disks is proportional to the logarithm of the number of data disks.
• On a read, all disks are accessed simultaneously. The requested data and the associated error-correcting code are delivered to the array controller, which can detect and correct single-bit errors.
• On a write, all disks must be accessed.
• A good choice only for environments in which many disk errors occur; given the high reliability of modern disks and disk drives, it is rarely used.

RAID Level 3 (Bit-Interleaved Parity)
• Similar to RAID 2 – parallel access with data distributed in small strips.
• Requires only a single redundant disk, because it uses a single parity bit for the set of individual bits in the same position on all of the data disks.
• If drives X0 through X3 contain data and X4 contains the parity bits, then
  X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
• Redundancy: in the case of a disk failure, the data can be reconstructed. If drive X1 fails, it can be rebuilt as
  X1(i) = X4(i) ⊕ X3(i) ⊕ X2(i) ⊕ X0(i)
• Performance: can achieve high transfer rates, but only one I/O request can be executed at a time (better suited to large data transfers in non-transaction-oriented environments).
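The RAID 3 parity and reconstruction equations above can be demonstrated directly, since XOR is its own inverse. The byte values are arbitrary example data for one bit (here, byte) position i on each disk:

```python
# Sketch of RAID 3 parity: X4 stores the XOR of the data disks X0-X3,
# so any single failed data disk can be rebuilt from the survivors.
# The byte values are arbitrary example data.

x0, x1, x2, x3 = 0x12, 0x34, 0x56, 0x78   # one byte position per data disk
x4 = x3 ^ x2 ^ x1 ^ x0                    # parity: X4(i) = X3(i) xor ... xor X0(i)

# Suppose drive X1 fails; rebuild it from the survivors plus the parity:
rebuilt_x1 = x4 ^ x3 ^ x2 ^ x0
assert rebuilt_x1 == x1
print(f"rebuilt X1 = {rebuilt_x1:#04x}")
```

This is exactly the second equation in the notes: XOR-ing the parity with all surviving disks cancels every term except the missing one.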
RAID Level 6
• Two different parity calculations are carried out and stored in separate blocks on different disks.
  – Example: XOR plus an independent data-check algorithm. This makes it possible to regenerate data even if two disks containing user data fail.
• Number of disks required = N + 2, where N is the number of disks required for the data.
• Provides very high data availability.
• Incurs a substantial write penalty, because each write affects two parity blocks.

Comparison of RAID Levels

| Category | Level | Description | I/O request rate (read/write) | Data transfer rate (read/write) | Typical application |
| --- | --- | --- | --- | --- | --- |
| Striping | 0 | Nonredundant | Large strips: excellent | Small strips: excellent | Applications requiring high performance for non-critical data |
| Mirroring | 1 | Mirrored | Good/fair | Fair/fair | System drives; critical files |
| Parallel access | 2 | Redundant via Hamming code | Poor | Excellent | |
| Parallel access | 3 | Bit-interleaved parity | Poor | Excellent | Applications with large I/O request sizes, such as imaging and CAD |
| Independent access | 4 | Block-interleaved parity | Excellent/fair | Fair/poor | |
| Independent access | 5 | Block-interleaved distributed parity | Excellent/fair | Fair/poor | High request rate, read-intensive, data lookup |
| Independent access | 6 | Block-interleaved dual distributed parity | Excellent/poor | Fair/poor | Applications requiring extremely high availability |

Comparison of RAID Levels: failures tolerated and check space overhead (for 8 data disks)

| RAID level | Description | Failures tolerated, check disks | Pros | Cons |
| --- | --- | --- | --- | --- |
| 0 | Nonredundant striped | 0 failures, 0 check disks | No space overhead | No protection |
| 1 | Mirrored | 1 failure, 8 check disks | No parity calculation; fast recovery; small writes faster than higher RAID levels; fast reads | Highest check storage overhead |
| 2 | Memory-style ECC | 1 failure, 4 check disks | Does not rely on the failed disk to self-diagnose | ~log2 check storage overhead |
| 3 | Bit-interleaved parity | 1 failure, 1 check disk | Low check overhead; high bandwidth for large reads or writes | No support for small, random reads or writes |
| 4 | Block-interleaved parity | 1 failure, 1 check disk | Low check overhead; more bandwidth for small reads | Parity disk is a small-write bottleneck |
| 5 | Block-interleaved distributed parity | 1 failure, 1 check disk | Low check overhead; more bandwidth for small reads and writes | Small writes require 4 disk accesses |
| 6 | Row-diagonal parity, EVEN-ODD | 2 failures, 2 check disks | Protects against 2 disk failures | Small writes require 6 disk accesses; 2x check overhead |
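The RAID 6 idea of two independent parity calculations can be sketched with the common P + Q scheme: P is plain XOR parity and Q is a weighted XOR over the finite field GF(2^8). The field arithmetic, the 4-disk layout, and the example bytes below are illustrative assumptions for a minimal sketch, not any specific product's implementation:

```python
# Minimal sketch of RAID 6-style dual parity (P + Q) over GF(2^8):
# two independent checks allow recovery from TWO failed data disks.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    # Brute-force inverse; fine for a demonstration.
    return next(b for b in range(1, 256) if gf_mul(a, b) == 1)

def pq_parity(data):
    """P = XOR of all disks; Q = XOR of each disk weighted by 2^i."""
    P, Q = 0, 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, i), d)
    return P, Q

def recover_two(data, P, Q, j, k):
    """Rebuild failed data disks j and k from the survivors plus P and Q."""
    A, B = P, Q
    for i, d in enumerate(data):
        if i not in (j, k):      # failed disks contribute nothing known
            A ^= d
            B ^= gf_mul(gf_pow(2, i), d)
    # Now A = d_j ^ d_k and B = 2^j*d_j ^ 2^k*d_k; solve the 2x2 system.
    gj, gk = gf_pow(2, j), gf_pow(2, k)
    dj = gf_mul(gf_mul(A, gk) ^ B, gf_inv(gj ^ gk))
    dk = A ^ dj
    return dj, dk

disks = [0x11, 0x22, 0x33, 0x44]           # one byte per data disk (N = 4)
P, Q = pq_parity(disks)                    # the two extra check disks (N + 2)
dj, dk = recover_two(disks, P, Q, 1, 3)    # pretend disks 1 and 3 failed
assert (dj, dk) == (disks[1], disks[3])
print("recovered:", hex(dj), hex(dk))
```

This also makes the write penalty in the table concrete: updating one data byte forces both P and Q to be recomputed, so every small write touches the data block plus two parity blocks.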