Notes on Course Mechanics, Course Objectives, Information | COMP 411, Study notes of Computer Architecture and Organization

Material Type: Notes; Professor: McMillan; Class: Computer Organization; Subject: COMPUTER SCIENCE; University: University of North Carolina - Chapel Hill; Term: Fall 2006;


L01 - Introduction, Comp411, Fall 2006 (8/23/2006)
(Partial preview: only selected slides from the lecture are included.)

Welcome to Comp 411!
"I thought this course was called 'Computer Organization'" (illustration: David Macaulay)
1) Course Mechanics
2) Course Objectives
3) Information
(A course previously known as Comp 120)

Meet the Crew…
Lectures: Leonard McMillan (SN-258), Office Hours: M 2-3
TA: TBA (find out next Monday)
Book: Patterson & Hennessy, Computer Organization & Design, 3rd Edition, ISBN 1-55860-604-1
(However, you won't need it for the next couple of weeks.)

Goal 1: Demystify Computers
Strangely, most people (even some computer scientists) are afraid of computers. We are only afraid of things we do not understand!
"I do not fear computers. I fear the lack of them." - Isaac Asimov (1920-1992)
"Fear is the main source of superstition, and one of the main sources of cruelty. To conquer fear is the beginning of wisdom." - Bertrand Russell (1872-1970)

Goal 2: Power of Abstraction
Define a function, develop a robust implementation, and then put a box around it. Abstraction enables us to create unfathomable machines called computers.
Why do we need ABSTRACTION? Imagine a billion --- 1,000,000,000

The key to building systems with >1G components
Personal Computer: Hardware & Software
Circuit Board: ≈8 / system, 1-2G devices
Integrated Circuit: ≈8-16 / PCB, 0.25M-16M devices
Module: ≈8-16 / IC, 100K devices
Cell: ≈1K-10K / Module, 16-64 devices
Gate: ≈2-16 / Cell, 8 devices
MOSFET
(and, underlying it all, a scheme for representing information)

Our Plan of Attack…
 Understand how things work, by alternating between low-level (bottom-up) and high-level (top-down) concepts
 Encapsulate our understanding using appropriate abstractions
 Study organizational principles: abstractions, interfaces, APIs
 Roll up our sleeves and design at each level of the hierarchy
 Learn engineering tricks
   - history
   - systematic design approaches
   - diagnose, fix, and avoid bugs

What is "Computation"?
Computation is about "processing information"
- Transforming information from one form to another
- Deriving new information from old
- Finding information associated with a given input
- "Computation" describes the motion of information through time
- "Communication" describes the motion of information through space

What is "Information"?
information, n. Knowledge communicated or received concerning a particular fact or circumstance.
A Computer Scientist's Definition: Information resolves uncertainty. Information is simply that which cannot be predicted. The less predictable a message is, the more information it conveys!
"Duke won again." Tell me something new…
"10 Problem sets, 2 quizzes, and a final!"

Quantifying Information (Claude Shannon, 1948)
Suppose you're faced with N equally probable choices, and I give you a fact that narrows it down to M choices.
Then I've given you log2(N/M) bits of information.
Examples:
 information in one coin flip: log2(2/1) = 1 bit
 roll of a single die: log2(6/1) ≈ 2.6 bits
 outcome of a football game: 1 bit (well, actually, "they won" may convey more information than "they lost"…)
Information is measured in bits (binary digits) = the number of 0/1's required to encode the choice(s).

Example: Sum of 2 dice
The possible sums are 2 through 12, and the information in each outcome is:
i2  = log2(36/1) = 5.170 bits
i3  = log2(36/2) = 4.170 bits
i4  = log2(36/3) = 3.585 bits
i5  = log2(36/4) = 3.170 bits
i6  = log2(36/5) = 2.848 bits
i7  = log2(36/6) = 2.585 bits
i8  = log2(36/5) = 2.848 bits
i9  = log2(36/4) = 3.170 bits
i10 = log2(36/3) = 3.585 bits
i11 = log2(36/2) = 4.170 bits
i12 = log2(36/1) = 5.170 bits
The average information provided by the sum of 2 dice (the entropy):
i_ave = Σ (i = 2..12) p_i · log2(1/p_i) = Σ (i = 2..12) (M_i/36) · log2(36/M_i) = 3.274 bits

Show Me the Bits!
Can the sum of two dice REALLY be represented using 3.274 bits? If so, how?
The fact is, the average information content is a strict *lower bound* on how small a representation we can achieve. In practice, it is difficult to reach this bound, but we can come very close.

Huffman Coding
A simple *greedy* algorithm for approximating an entropy-efficient encoding:
1. Find the 2 items with the smallest probabilities.
2. Join them into a new *meta* item whose probability is the sum.
3. Remove the two items and insert the new meta item.
4. Repeat from step 1 until there is only one item.
[Figure: Huffman decoding tree for the dice sums, built by repeatedly merging the two least-probable items (e.g., 2 and 12 at 1/36 each) until a single 36/36 root remains.]

Converting Tree to Encoding
Once the *tree* is constructed, label its edges consistently and follow the paths from the largest *meta* item (the root) to each of the real items to find the encoding.
[Figure: the same Huffman decoding tree with 0/1 labels on its edges.]
2  - 10011
3  - 0101
4  - 011
5  - 001
6  - 111
7  - 101
8  - 110
9  - 000
10 - 1000
11 - 0100
12 - 10010

Encoding Efficiency
How does this encoding strategy compare to the information content of the roll?
b_ave = (1/36)·5 + (2/36)·4 + (3/36)·3 + (4/36)·3 + (5/36)·3 + (6/36)·3 + (5/36)·3 + (4/36)·3 + (3/36)·4 + (2/36)·4 + (1/36)·5 = 3.306 bits
Pretty close. Recall that the lower bound was 3.274 bits. However, an efficient encoding (as defined by having an average code size close to the information content) is not always what we want!

Property 1: Parity
The sum of the bits in each symbol is even. (This is how errors are detected.)
2  - 1111000 = 1 + 1 + 1 + 1 + 0 + 0 + 0 = 4
3  - 1111101 = 1 + 1 + 1 + 1 + 1 + 0 + 1 = 6
4  - 0011    = 0 + 0 + 1 + 1 = 2
5  - 0101    = 0 + 1 + 0 + 1 = 2
6  - 0110    = 0 + 1 + 1 + 0 = 2
7  - 0000    = 0 + 0 + 0 + 0 = 0
8  - 1001    = 1 + 0 + 0 + 1 = 2
9  - 1010    = 1 + 0 + 1 + 0 = 2
10 - 1100    = 1 + 1 + 0 + 0 = 2
11 - 1111110 = 1 + 1 + 1 + 1 + 1 + 1 + 0 = 6
12 - 1111011 = 1 + 1 + 1 + 1 + 0 + 1 + 1 = 6
How much information is in the last bit?
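The entropy figure from the "Sum of 2 dice" slide is easy to reproduce in a few lines of code. The sketch below (Python, added here for illustration; it is not part of the original slides, and the variable names are mine) computes the information content of each possible sum and then the probability-weighted average, arriving at the same 3.274 bits:

    from math import log2

    # Number of ways to roll each sum with two fair dice; p(s) = ways[s] / 36.
    ways = {s: 6 - abs(s - 7) for s in range(2, 13)}

    entropy = 0.0
    for s, w in sorted(ways.items()):
        p = w / 36
        info = log2(1 / p)            # bits of information in learning the sum is s
        entropy += p * info
        print(f"i{s} = log2(36/{w}) = {info:.3f} bits")

    print(f"average (entropy) = {entropy:.3f} bits")   # prints 3.274

Running it reproduces the per-sum values listed above (5.170 bits for a 2, 2.585 bits for a 7, and so on) as well as the 3.274-bit average.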
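The greedy merge-the-two-smallest procedure from the "Huffman Coding" slide can likewise be prototyped directly. Here is a minimal sketch (again Python, not from the slides; the huffman function and variable names are mine) that builds a prefix code for the dice-sum probabilities with a heap and reports the average code length. Ties between equal probabilities may be broken differently than in the slides' tree, so the individual codewords can differ from the table above, but any Huffman code for this distribution has the same 3.306-bit average length.

    import heapq
    from itertools import count

    def huffman(probs):
        """Build a prefix code (symbol -> bit string) for a {symbol: probability} dict."""
        tiebreak = count()   # ensures heap entries never have to compare the dicts
        heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)    # the two items with smallest probability...
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))   # ...become one meta item
        return heap[0][2]

    ways = {s: 6 - abs(s - 7) for s in range(2, 13)}
    probs = {s: w / 36 for s, w in ways.items()}
    code = huffman(probs)
    avg_bits = sum(probs[s] * len(code[s]) for s in code)
    print(code)
    print(f"average code length = {avg_bits:.3f} bits")   # ~3.306 bits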
Property 2: Separation
Each encoding differs from all others by at least two bits in their overlapping parts. This difference is called the "Hamming distance."
"A Hamming distance of 1 is needed to uniquely identify an encoding."
[Table: pairwise comparison of the eleven encodings (for sums 2 through 12), with x marking the bit positions where a pair disagrees; every pair has at least two x's.]

A Short Detour
It is illuminating to consider Hamming distances geometrically. Given 2 bits, the largest Hamming distance that we can achieve between 2 encodings is 2. This allows us to detect 1-bit errors if we encode 1 bit of information using 2 bits. With 3 bits we can find 4 encodings with a Hamming distance of 2, allowing the detection of 1-bit errors when 3 bits are used to encode 2 bits of information. We can also identify 2 encodings with Hamming distance 3. This extra distance allows us to detect 2-bit errors. However, we could use this extra separation differently.
[Figures: the 2-bit square and 3-bit cube of encodings, showing vertex sets separated by Hamming distances of 2 and 3.]

(He wouldn't try 5D!)
It takes 5 bits before we can find more than 2 encodings separated by a Hamming distance of 3. Shown on the right are four 5-bit encodings {00000, 01111, 10101, 11010} separated by a Hamming distance of at least 3. With this code we can correct any 1-bit error, and we can detect some (but not all) 2-bit errors. We'll stop here, because we really need a computer to go much beyond 5D, and we'll need to build one first!
[Figure: the 5-bit hypercube of all 32 encodings, with the four codewords highlighted.]

An alternate error correcting code
We can generalize the notion of parity in order to construct error-correcting codes. Instead of computing a single parity bit for an entire encoding, we can allow multiple parity bits over different subsets of bits. Consider the following technique for encoding 25 bits. This approach is easy to implement, but it is not optimal in terms of the number of bits used.
[Figure: a 5x5 grid of data bits with a parity bit appended to each row and to each column.]
1-bit errors will cause both a row and a column parity error, uniquely identifying the errant bit for correction. An extra parity bit allows us to detect 1-bit errors in the parity bits. Many 2-bit errors can also be corrected. However, 2-bit errors involving the same row or column cannot be corrected.

Summary
Information resolves uncertainty.
• Choices equally probable:
  - N choices narrowed down to M → log2(N/M) bits of information
• Choices not equally probable:
  - choice_i with probability p_i → log2(1/p_i) bits of information
  - average number of bits = Σ p_i · log2(1/p_i)
  - use variable-length encodings

Next time: Technology and Abstraction (what makes a computer tick…)
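To make the Hamming-distance discussion above concrete, here is a short sketch (Python, added for illustration; not part of the original slides, and the hamming and correct helpers are mine). It checks that the four 5-bit encodings from the "5D" slide really are pairwise at distance 3 or more, and uses nearest-codeword decoding to repair a single flipped bit:

    def hamming(a, b):
        """Number of bit positions in which two equal-length encodings differ."""
        return sum(x != y for x, y in zip(a, b))

    # The four 5-bit encodings from the slides; every pair is at Hamming distance >= 3.
    codewords = ["00000", "01111", "10101", "11010"]
    for i, a in enumerate(codewords):
        for b in codewords[i + 1:]:
            print(a, b, hamming(a, b))      # prints distances of 3 or 4

    def correct(received):
        """Decode to the nearest codeword; with distance >= 3, any 1-bit error is repaired."""
        return min(codewords, key=lambda c: hamming(c, received))

    print(correct("10111"))                 # 10101 with one bit flipped -> decodes to 10101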
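The row/column parity scheme from the "alternate error correcting code" slide can be sketched the same way. In the toy version below (Python, illustrative only; the function names are mine and the 25 data bits are arbitrary, not the grid shown on the slide), a single flipped data bit trips exactly one row parity and one column parity, which pinpoints it for correction:

    def encode_2d(bits):
        """Arrange 25 data bits in a 5x5 grid and compute a parity bit per row and per column."""
        grid = [list(bits[r * 5:(r + 1) * 5]) for r in range(5)]
        row_par = [sum(row) % 2 for row in grid]
        col_par = [sum(grid[r][c] for r in range(5)) % 2 for c in range(5)]
        corner = sum(row_par) % 2    # extra parity over the parity bits (detects errors there)
        return grid, row_par, col_par, corner

    def correct_2d(grid, row_par, col_par):
        """A single flipped data bit shows up as one bad row parity and one bad column parity."""
        bad_rows = [r for r in range(5) if sum(grid[r]) % 2 != row_par[r]]
        bad_cols = [c for c in range(5)
                    if sum(grid[r][c] for r in range(5)) % 2 != col_par[c]]
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            grid[bad_rows[0]][bad_cols[0]] ^= 1    # flip the errant bit back
        return grid

    data = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0,
            0, 0, 1, 0, 1, 1, 0, 0, 1, 0]         # 25 arbitrary data bits
    grid, row_par, col_par, corner = encode_2d(data)
    grid[2][3] ^= 1                                # inject a 1-bit error
    repaired = correct_2d(grid, row_par, col_par)
    print(sum(repaired, []) == data)               # True: the error was found and fixed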