
Memory Hierarchy: Understanding the Organization of Computer Storage - Prof. Jian-Guo Liu, Papers of Mathematics

An overview of the memory hierarchy in computing systems, including storage technologies, locality of reference, and caching mechanisms. It covers topics such as random-access memory (RAM), read-only memory (ROM), and various types of DRAM and nonvolatile memory. The document also discusses the organization of memory modules and the typical bus structure connecting the CPU and memory.

Typology: Papers


Uploaded on 07/30/2009

koofers-user-2m4

Partial preview of the text

The Memory Hierarchy
Oct. 3, 2002
15-213 "The course that gives CMU its Zip!" (class12.ppt, F'02)

Topics:
- Storage technologies and trends
- Locality of reference
- Caching in the memory hierarchy

Random-Access Memory (RAM)

Key features:
- RAM is packaged as a chip.
- The basic storage unit is a cell (one bit per cell).
- Multiple RAM chips form a memory.

Static RAM (SRAM):
- Each cell stores a bit with a six-transistor circuit.
- Retains its value indefinitely, as long as it is kept powered.
- Relatively insensitive to disturbances such as electrical noise.
- Faster and more expensive than DRAM.

Dynamic RAM (DRAM):
- Each cell stores a bit with a capacitor and one transistor.
- The value must be refreshed every 10-100 ms.
- Sensitive to disturbances.
- Slower and cheaper than SRAM.

SRAM vs. DRAM Summary

        Tran. per bit  Access time  Persist?  Sensitive?  Cost  Applications
SRAM    6              1x           Yes       No          100x  Cache memories
DRAM    1              10x          No        Yes         1x    Main memories, frame buffers

Conventional DRAM Organization
- A d x w DRAM stores dw total bits, organized as d supercells of w bits each.
- Example: a 16 x 8 DRAM chip holds 16 supercells of 8 bits, arranged as 4 rows x 4 columns. The memory controller sends a 2-bit row/column address on the addr lines and moves 8 bits at a time on the data lines; the chip keeps the currently selected row in an internal row buffer.

Reading DRAM Supercell (2,1):
- Step 1(a): The row access strobe (RAS) selects row 2.
- Step 1(b): Row 2 is copied from the DRAM array into the internal row buffer.
- Step 2(a): The column access strobe (CAS) selects column 1.
- Step 2(b): Supercell (2,1) is copied from the row buffer to the data lines, and eventually back to the CPU.
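The RAS/CAS two-step above can be sketched in a few lines. This is an illustrative model only, not a real controller interface; the function name and layout constants are assumptions. A linear supercell index splits into the row sent with RAS and the column sent with CAS:

```python
# Sketch (assumed names, illustrative only): addressing supercells in a
# 16 x 8 DRAM, i.e. d = 16 supercells of w = 8 bits, laid out 4 rows x 4 cols.
NUM_COLS = 4

def supercell_address(i):
    """Map a linear supercell index to the (row, col) pair that the memory
    controller would send as RAS (row) and then CAS (column)."""
    return divmod(i, NUM_COLS)

# The slides' supercell (2,1) corresponds to linear index 2*4 + 1 = 9.
row, col = supercell_address(9)  # -> (2, 1)
```

The two-phase addressing is why the row buffer matters: once RAS has latched a row, further CAS accesses to the same row skip the slow array read.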
Memory Modules
- Example: a 64 MB memory module consisting of eight 8M x 8 DRAMs.
- A 64-bit doubleword at main memory address A is striped across the eight chips, each of which supplies one byte from its supercell (i, j) (row = i, col = j): DRAM 0 holds bits 0-7, DRAM 1 bits 8-15, ..., DRAM 7 bits 56-63.
- The memory controller reassembles the eight bytes into the 64-bit doubleword sent to the CPU.

Enhanced DRAMs
All enhanced DRAMs are built around the conventional DRAM core.
- Fast page mode DRAM (FPM DRAM): accesses the contents of a row with [RAS, CAS, CAS, CAS, CAS] instead of [(RAS,CAS), (RAS,CAS), (RAS,CAS), (RAS,CAS)].
- Extended data out DRAM (EDO DRAM): enhanced FPM DRAM with more closely spaced CAS signals.
- Synchronous DRAM (SDRAM): driven by the rising clock edge instead of asynchronous control signals.
- Double data-rate synchronous DRAM (DDR SDRAM): an enhancement of SDRAM that uses both clock edges as control signals.
- Video RAM (VRAM): like FPM DRAM, but output is produced by shifting the row buffer; dual ported (allows concurrent reads and writes).

Disk Geometry
- Disks consist of platters, each with two surfaces.
- Each surface consists of concentric rings called tracks.
- Each track consists of sectors separated by gaps.

Disk Geometry (Multiple-Platter View)
- Aligned tracks form a cylinder: cylinder k consists of track k on every surface of every platter on the spindle.

Disk Capacity
Capacity: the maximum number of bits that can be stored.
- Vendors express capacity in units of gigabytes (GB), where 1 GB = 10^9 bytes.
Capacity is determined by these technology factors:
- Recording density (bits/in): the number of bits that can be squeezed into a 1-inch segment of a track.
- Track density (tracks/in): the number of tracks that can be squeezed into a 1-inch radial segment.
- Areal density (bits/in^2): the product of recording density and track density.
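The byte-slicing in the memory module can be made concrete with a small model. This is a sketch of the idea, not real hardware or a real controller API; the function names are invented for illustration:

```python
# Illustrative model of the 64 MB module: each of the eight 8M x 8 DRAMs
# supplies one byte of a 64-bit doubleword. DRAM 0 holds bits 0-7,
# DRAM 1 bits 8-15, ..., DRAM 7 bits 56-63.

def split_doubleword(dw):
    """Return the eight bytes of a 64-bit value, indexed by DRAM chip number."""
    return [(dw >> (8 * chip)) & 0xFF for chip in range(8)]

def assemble_doubleword(chip_bytes):
    """The memory controller's job: reassemble the bytes from DRAMs 0..7."""
    dw = 0
    for chip, b in enumerate(chip_bytes):
        dw |= b << (8 * chip)
    return dw

value = 0x0123456789ABCDEF
assert split_doubleword(value)[0] == 0xEF          # DRAM 0 supplies bits 0-7
assert assemble_doubleword(split_doubleword(value)) == value
```

Splitting the word this way lets all eight chips be read in parallel with the same (row, col) address, so the module delivers a full doubleword per access.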
Modern disks partition tracks into disjoint subsets called recording zones:
- Each track in a zone has the same number of sectors, determined by the circumference of the innermost track in the zone.
- Each zone has a different number of sectors/track.

Computing Disk Capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x (# tracks/surface) x (# surfaces/platter) x (# platters/disk)

Example:
- 512 bytes/sector
- 300 sectors/track (on average)
- 20,000 tracks/surface
- 2 surfaces/platter
- 5 platters/disk

Capacity = 512 x 300 x 20,000 x 2 x 5 = 30,720,000,000 bytes = 30.72 GB

Disk Operation (Single-Platter View)
- The disk surface spins at a fixed rotational rate around the spindle.
- The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air.
- By moving radially, the arm can position the read/write head over any track.

Disk Operation (Multi-Platter View)
- The arms move in unison, so the read/write heads step together from cylinder to cylinder.

Disk Access Time
The average time to access a target sector is approximated by:
- Taccess = Tavg seek + Tavg rotation + Tavg transfer

Seek time (Tavg seek):
- Time to position the heads over the cylinder containing the target sector.
- Typical Tavg seek = 9 ms.

Rotational latency (Tavg rotation):
- Time waiting for the first bit of the target sector to pass under the read/write head.
- Tavg rotation = 1/2 x 1/RPM x 60 secs/1 min.

Transfer time (Tavg transfer):
- Time to read the bits in the target sector.
- Tavg transfer = 1/RPM x 1/(avg # sectors/track) x 60 secs/1 min.

Disk Access Time Example
Given:
- Rotational rate = 7,200 RPM
- Average seek time = 9 ms
- Avg # sectors/track = 400

Derived:
- Tavg rotation = 1/2 x (60 secs / 7,200 RPM) x 1000 ms/sec = 4 ms.
- Tavg transfer = (60 secs / 7,200 RPM) x (1/400) track x 1000 ms/sec = 0.02 ms.
- Taccess = 9 ms + 4 ms + 0.02 ms, roughly 13 ms.

Important points:
- Access time is dominated by seek time and rotational latency.
- The first bit in a sector is the most expensive; the rest are essentially free.
- SRAM access time is about 4 ns/doubleword, DRAM about 60 ns:
  - disk is about 40,000 times slower than SRAM,
  - and about 2,500 times slower than DRAM.

Logical Disk Blocks
Modern disks present a simpler abstract view of the complex sector geometry:
- The set of available sectors is modeled as a sequence of b-sized logical blocks (0, 1, 2, ...).
The mapping between logical blocks and actual (physical) sectors:
- Is maintained by a hardware/firmware device called the disk controller.
- The controller converts requests for logical blocks into (surface, track, sector) triples.
- It also allows the controller to set aside spare cylinders for each zone, which accounts for the difference between "formatted capacity" and "maximum capacity".

I/O Bus
- Inside the CPU chip, the register file and ALU connect through the bus interface to the system bus.
- An I/O bridge joins the system bus to the memory bus (and main memory) and to the I/O bus.
- The I/O bus connects a USB controller (mouse, keyboard), a graphics adapter (monitor), and a disk controller (disk), and provides expansion slots for other devices such as network adapters.

Reading a Disk Sector (1)
- The CPU initiates a disk read by writing a command, a logical block number, and a destination memory address to a port (address) associated with the disk controller.

Reading a Disk Sector (2)
- The disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.
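The two worked examples above (disk capacity and disk access time) can be checked numerically. The figures below are the slides' own; the variable names are invented for illustration:

```python
# Disk capacity: bytes/sector x avg sectors/track x tracks/surface
#                x surfaces/platter x platters/disk
capacity_bytes = 512 * 300 * 20_000 * 2 * 5
assert capacity_bytes == 30_720_000_000     # 30.72 GB, with 1 GB = 10^9 bytes

# Disk access time at 7,200 RPM, 400 sectors/track, 9 ms average seek.
ms_per_rev = 60.0 / 7200 * 1000             # one full revolution, in ms
t_rotation = 0.5 * ms_per_rev               # ~4.17 ms (the slide rounds to 4)
t_transfer = ms_per_rev / 400               # ~0.02 ms for one sector
t_access = 9.0 + t_rotation + t_transfer    # ~13 ms, dominated by the seek
```

Note how small t_transfer is compared with the seek and rotational terms: once the head reaches the sector, reading its bits is nearly free.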
Memory Hierarchies
Some fundamental and enduring properties of hardware and software:
- Fast storage technologies cost more per byte and have less capacity.
- The gap between CPU and main memory speed is widening.
- Well-written programs tend to exhibit good locality.

These fundamental properties complement each other beautifully. They suggest an approach for organizing memory and storage systems known as a memory hierarchy.

An Example Memory Hierarchy
Smaller, faster, and costlier (per byte) devices sit at the top; larger, slower, and cheaper (per byte) devices sit at the bottom:
- L0: registers. CPU registers hold words retrieved from the L1 cache.
- L1: on-chip L1 cache (SRAM). Holds cache lines retrieved from the L2 cache.
- L2: off-chip L2 cache (SRAM). Holds cache lines retrieved from main memory.
- L3: main memory (DRAM). Holds disk blocks retrieved from local disks.
- L4: local secondary storage (local disks). Local disks hold files retrieved from disks on remote network servers.
- L5: remote secondary storage (distributed file systems, Web servers).

Caches
Cache: a smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.

Fundamental idea of a memory hierarchy:
- For each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.

Why do memory hierarchies work?
- Programs tend to access the data at level k more often than they access the data at level k+1.
- Thus, the storage at level k+1 can be slower, and thus larger and cheaper per bit.
- Net effect: a large pool of memory that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.

Caching in a Memory Hierarchy
- The larger, slower, cheaper storage device at level k+1 is partitioned into blocks (numbered 0-15 in the slide's example).
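The "net effect" claim above can be illustrated with a back-of-the-envelope average access time. The hit rate and the formula framing here are assumptions for illustration; only the ~4 ns SRAM and ~60 ns DRAM figures come from the slides:

```python
# Sketch: if most accesses hit the fast level k, the average access time
# stays close to the fast level's, even though capacity comes from level k+1.
def avg_access_time(hit_rate, t_fast_ns, t_slow_ns):
    """Expected time per access: hits cost t_fast; misses cost t_fast plus
    the penalty of going down to the slower level."""
    return hit_rate * t_fast_ns + (1.0 - hit_rate) * (t_fast_ns + t_slow_ns)

# With the slides' rough numbers (SRAM ~4 ns, DRAM ~60 ns) and an assumed
# 95% hit rate, the average stays much closer to 4 ns than to 60 ns:
t = avg_access_time(0.95, 4.0, 60.0)  # 7.0 ns
```

This is exactly why good program locality matters: it is what keeps the hit rate high enough for the hierarchy to pay off.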
- Data is copied between levels in block-sized transfer units.
- The smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1 (blocks 8, 9, 14, and 3 in the slide's example).

General Caching Concepts
A program needs object d, which is stored in some block b.

Cache hit:
- The program finds b in the cache at level k (e.g., a request for block 14 when 14 is already cached).

Cache miss:
- b is not at level k, so the level k cache must fetch it from level k+1 (e.g., a request for block 12).
- If the level k cache is full, then some current block must be replaced (evicted). Which one is the "victim"?
  - Placement policy: where can the new block go? E.g., b mod 4.
  - Replacement policy: which block should be evicted? E.g., LRU.

General Caching Concepts (cont.)
Types of cache misses:
- Cold (compulsory) miss
  - Cold misses occur because the cache is empty.
- Conflict miss
  - Most caches limit blocks at level k+1 to a small subset (sometimes a singleton) of the block positions at level k.
  - E.g., block i at level k+1 must be placed in block (i mod 4) at level k.
  - Conflict misses occur when the level k cache is large enough, but multiple data objects all map to the same level k block.
  - E.g., referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time.
- Capacity miss
  - Occurs when the set of active cache blocks (the working set) is larger than the cache.
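The conflict-miss example above can be simulated directly. This is a minimal sketch of the slides' (i mod 4) placement policy, not any particular hardware; the function name is invented for illustration:

```python
# Toy direct-mapped cache: block i of level k+1 may live only in slot
# (i mod 4) of a 4-slot level-k cache, so blocks 0 and 8 conflict.
def count_misses(refs, num_slots=4):
    slots = [None] * num_slots     # the cache starts empty (cold)
    misses = 0
    for block in refs:
        slot = block % num_slots   # placement policy: b mod num_slots
        if slots[slot] != block:   # miss: cold, conflict, or capacity
            misses += 1
            slots[slot] = block    # fetch from level k+1, evicting the old block
    return misses

assert count_misses([0, 8, 0, 8, 0, 8]) == 6   # conflict: every reference misses
assert count_misses([0, 1, 2, 3, 0, 1]) == 4   # only the four cold misses
```

The first trace shows the pathology: the cache has four free slots, yet blocks 0 and 8 keep evicting each other because the placement policy forces both into slot 0.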
Examples of Caching in the Hierarchy

Cache Type            What Cached           Where Cached         Latency (cycles)  Managed By
Registers             4-byte word           CPU registers        0                 Compiler
TLB                   Address translations  On-chip TLB          0                 Hardware
L1 cache              32-byte block         On-chip L1           1                 Hardware
L2 cache              32-byte block         Off-chip L2          10                Hardware
Virtual memory        4-KB page             Main memory          100               Hardware + OS
Buffer cache          Parts of files        Main memory          100               OS
Network buffer cache  Parts of files        Local disk           10,000,000        AFS/NFS client
Browser cache         Web pages             Local disk           10,000,000        Web browser
Web cache             Web pages             Remote server disks  1,000,000,000     Web proxy server