Introduction to Coding Theory                                CMU: Spring 2010

Notes 1: Introduction, linear codes

January 2010

Lecturer: Venkatesan Guruswami        Scribe: Venkatesan Guruswami

The theory of error-correcting codes and, more broadly, information theory originated in Claude Shannon's monumental work "A mathematical theory of communication", published over 60 years ago in 1948. Shannon's work gave a precise measure of the information content in the output of a random source in terms of its entropy. The noiseless coding theorem (or source coding theorem) informally states that n i.i.d. random variables each with entropy H(X) can be compressed into n(H(X) + ε) bits with negligible probability of information loss, and conversely that compression into n(H(X) − ε) bits would entail almost certain information loss.

More directly relevant to this course is Shannon's noisy coding theorem, which considered communication of a message (say consisting of k bits output by a source coder) over a noisy communication channel whose behavior is given by a stochastic channel law. The noisy coding theorem states that every such channel has a precisely defined real number called its capacity, which quantifies the maximum rate at which reliable communication is possible on that channel. More precisely, given a noisy channel with capacity C, if information is transmitted at rate R (meaning k = nR message bits are communicated in n uses of the channel), then if R < C there exist coding schemes (comprising an encoder/decoder pair) that guarantee a negligible probability of miscommunication, whereas if R > C then, regardless of the coding scheme, the probability of error at the receiver is bounded below by some constant (which increases as R increases). (Later, a strong converse to the Shannon coding theorem was proved, which shows that when R > C, the probability of miscommunication goes exponentially (in k) to 1.)

Shannon's theorem was one of the early uses of the probabilistic method; it asserted the existence of good coding schemes at all rates below capacity, but did not give any efficient method to construct a good code or, for that matter, to verify that a certain code was good.

We will return to Shannon's probabilistic viewpoint, and in particular his noisy coding theorem, in a couple of lectures, but we will begin by introducing error-correcting codes from a more combinatorial/geometric viewpoint, focusing on aspects such as the minimum distance of the code. This viewpoint was pioneered by Richard Hamming in his celebrated 1950 paper "Error detecting and error correcting codes". The Hamming approach is more suitable for tackling worst-case/adversarial errors, whereas the Shannon theory handles stochastic/probabilistic errors. This corresponds to a rough dichotomy in coding theory results: while the two approaches have somewhat different goals and face somewhat different limits and challenges, they share many common constructions, tools, and techniques. Further, by considering meaningful relaxations of the adversarial noise model or of the requirement on the encoder/decoder, it is possible to bridge the gap between the Shannon and Hamming approaches. (We will see some results in this vein during the course.)

The course will be roughly divided into the following interrelated parts. We will begin with results on the existence and limitations of codes, both in the Hamming and Shannon approaches.
This will highlight some criteria for judging when a code is good, and we will follow up with several explicit constructions of "good" codes (we will encounter basic finite field algebra during these constructions). While this part will mostly have a combinatorial flavor, we will keep track of important algorithmic challenges that arise. This will set the stage for the algorithmic component of the course, which will deal with efficient (polynomial time) algorithms to decode some important classes of codes. This in turn will enable us to approach the absolute limits of error-correction "constructively," via explicit coding schemes and efficient algorithms (for both worst-case and probabilistic error models).

Codes, and ideas behind some of the good constructions, have also found many exciting "extraneous" applications, such as in complexity theory, cryptography, pseudorandomness, and explicit combinatorial constructions. (For example, in the spring 2009 offering of 15-855 (the graduate complexity theory course), we covered in detail the Sudan-Trevisan-Vadhan proof of the Impagliazzo-Wigderson theorem that P = BPP under an exponential circuit lower bound for E, based on a highly efficient decoding algorithm for Reed-Muller codes.) Depending on time, we may mention or discuss some of these applications of coding theory towards the end of the course, though given that there is plenty to discuss even restricting ourselves to primarily coding-theoretic motivations, this is unlikely.

We now look at some simple codes and give the basic definitions concerning codes. But before that, we will digress with some recreational mathematics and pose a famous "hat" puzzle, which happens to have close connections to the codes we will soon introduce (that's your hint, if you haven't seen the puzzle before!).

Guessing hat colors

The following puzzle made the New York Times in 2001.

15 players enter a room and a red or blue hat is placed on each person's head. The color of each hat is determined by a coin toss, with the outcome of one coin toss having no effect on the others. Each person can see the other players' hats but not his own. No communication of any sort is allowed, except for an initial strategy session before the game begins. Once they have had a chance to look at the other hats, the players must simultaneously guess the color of their own hats or pass. The group wins the game if at least one player guesses correctly and no players guess incorrectly.

One obvious strategy for the players, for instance, would be for one player to always guess "red" while the other players pass. This would give the group a 50 percent chance of winning the prize. Can the group achieve a higher probability of winning (probability being taken over the initial random assignment of hat colors)? If so, how high a probability can they achieve?

(The same game can be played with any number of players. The general problem is to find a strategy for the group that maximizes its chances of winning the prize.)

1 A simple code

Suppose we need to store 64 bit words in such a way that they can be correctly recovered even if a single bit per word gets flipped. One way is to store each information bit by duplicating it

[...]

1. C has minimum distance 2t + 1.

2. C can be used to correct all t symbol errors.

3. C can be used to detect all 2t symbol errors.

4. C can be used to correct all 2t symbol erasures. (In the erasure model, some symbols are erased and the rest are intact, and we know the locations of the erasures. The goal is to fill in the values of the erased positions, using the values of the unerased positions and the redundancy of the code.)
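As a concrete illustration of these conditions for t = 1 (and of the "duplication" idea from the simple code above), here is a minimal Python sketch, not part of the notes, using the 3-fold binary repetition code {000, 111}, which has minimum distance 3: nearest-codeword decoding corrects any single error, a membership test detects any 2 errors, and any 2 erasures can be filled in.

```python
# A toy illustration (not from the notes) of the distance-vs-correction
# conditions for t = 1, using the 3-fold binary repetition code {000, 111},
# which has minimum distance 3 = 2t + 1.

CODE = ["000", "111"]  # the two codewords

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def correct_errors(received):
    """Correct up to t = 1 bit error by decoding to the nearest codeword."""
    return min(CODE, key=lambda c: hamming_distance(c, received))

def detect_errors(received):
    """Detect up to 2t = 2 bit errors: report whether the word is corrupted."""
    return received not in CODE

def correct_erasures(received):
    """Correct up to 2t = 2 erasures ('?'): find the unique consistent codeword."""
    consistent = [c for c in CODE
                  if all(r in ("?", ci) for r, ci in zip(received, c))]
    assert len(consistent) == 1, "more erasures than the code can handle"
    return consistent[0]

if __name__ == "__main__":
    print(correct_errors("010"))    # one flipped bit   -> '000'
    print(detect_errors("011"))     # two flipped bits  -> True (detected)
    print(correct_erasures("1??"))  # two erased bits   -> '111'
```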
3 Linear codes

A general code might have no structure and not admit any representation other than listing the entire codebook. We now focus on an important subclass of codes with additional structure, called linear codes. Many of the important and widely used codes are linear.

Linear codes are defined over alphabets Σ which are finite fields. Throughout, we will denote by $\mathbb{F}_q$ the finite field with q elements, where q is a prime power. (Later on in the course, it will be valuable to have a good grasp of the basic properties of finite fields and field extensions. For now, we can safely think of q as a prime, in which case $\mathbb{F}_q$ is just $\{0, 1, \ldots, q-1\}$ with addition and multiplication defined modulo q.)

Definition 7 (Linear code) If Σ is a field and $C \subseteq \Sigma^n$ is a subspace of $\Sigma^n$, then C is said to be a linear code.

As C is a subspace, there exists a basis $c_1, c_2, \ldots, c_k$, where k is the dimension of the subspace. Any codeword can be expressed as a linear combination of these basis vectors. We can write these vectors in matrix form as the columns of an $n \times k$ matrix. Such a matrix is called a generator matrix.

Definition 8 (Generator matrix and encoding) Let $C \subseteq \mathbb{F}_q^n$ be a linear code of dimension k. A matrix $G \in \mathbb{F}_q^{n \times k}$ is said to be a generator matrix for C if its k columns span C.

The generator matrix G provides a way to encode a message $x \in \mathbb{F}_q^k$ (thought of as a column vector) as the codeword $Gx \in C \subseteq \mathbb{F}_q^n$. Thus a linear code has an encoding map $E : \mathbb{F}_q^k \to \mathbb{F}_q^n$, which is the linear transformation $x \mapsto Gx$.

Comment: Many coding texts define the "transpose" version, where the rows of the $k \times n$ generator matrix span the code. We prefer the above definition since it is customary to treat vectors as column vectors (even in these coding texts), and it is therefore nice to multiply by vectors on the right and avoid taking transposes of vectors. Note that a linear code admits many different generator matrices, corresponding to the different choices of basis for the code as a vector space.

Notation: A q-ary linear code of block length n and dimension k will be referred to as an $[n, k]_q$ code. Further, if the code has minimum distance d, it will be referred to as an $[n, k, d]_q$ code. When the alphabet size q is clear from the context, or not very relevant to the discussion, we omit the subscript.

Example 3 Some simple examples of binary linear codes:

• The binary parity check code: this is an $[n, n-1, 2]_2$ code consisting of all vectors in $\mathbb{F}_2^n$ of even Hamming weight.

• The binary repetition code: this is an $[n, 1, n]_2$ code consisting of the two vectors $0^n$ and $1^n$.

• The Hamming code discussed above is a $[7, 4, 3]_2$ linear code.

Exercise 1 Show that for a linear code C, its minimum distance equals the minimum Hamming weight of a nonzero codeword of C, i.e., $\Delta(C) = \min_{c \in C,\, c \neq 0} \mathrm{wt}(c)$.

Exercise 2 (Systematic or standard form) Let C be an $[n, k]_q$ linear code. Prove that, after permuting coordinates if necessary, C has a generator matrix of the form $[I_k \mid G']^T$, where $I_k$ is the $k \times k$ identity matrix and $G'$ is some $k \times (n-k)$ matrix.

A generator matrix of the form $[I_k \mid G']^T$ is said to be in systematic form. When such a generator matrix is used for encoding, the encoding is called systematic: the first k symbols of the codeword are just the message symbols, and the remaining $n-k$ symbols comprise redundant check symbols. Thus, after permuting coordinates if needed, every linear code admits a systematic encoding. The above-mentioned encoding map for the [7, 4, 3] Hamming code was systematic.
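To make the encoding map of Definition 8 concrete, here is a minimal Python sketch (an illustration, not part of the notes) that computes $x \mapsto Gx$ over $\mathbb{F}_2$, using the $7 \times 4$ generator matrix of the $[7, 4, 3]_2$ Hamming code given in Section 3.2 below; since its top four rows form $I_4$, this encoding is systematic.

```python
import numpy as np

# Generator matrix (columns span the code) of the [7,4,3]_2 Hamming code,
# as given in Section 3.2; its top 4 rows form I_4, so encoding is systematic.
G = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1]], dtype=int)

def encode(x):
    """Encoding map E: F_2^4 -> F_2^7, x |-> Gx (arithmetic mod 2)."""
    x = np.asarray(x, dtype=int)
    return (G @ x) % 2

if __name__ == "__main__":
    msg = [1, 0, 1, 1]
    print(encode(msg))  # first 4 symbols are the message, last 3 are checks
```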
3.1 Parity check matrices

The following is a good way to flex your basic linear algebra muscles (Exercise 2 is a useful way to proceed):

Exercise 3 Prove that C is an $[n, k]_q$ code if and only if there is a matrix $H \in \mathbb{F}_q^{(n-k) \times n}$ of full row rank such that $C = \{ c \in \mathbb{F}_q^n \mid Hc = 0 \}$.

In other words, C is the nullspace of H. Such a matrix H is called a parity check matrix for C. A linear code can thus be compactly represented by either its generator matrix or its parity check matrix. The minimum distance of a code has the following characterization in terms of the parity check matrix.

Lemma 9 If H is a parity check matrix of a linear code C, then $\Delta(C)$ equals the minimum number of columns of H that are linearly dependent.

3.2 Hamming code revisited

The Hamming code is best understood through the structure of its parity check matrix. This will also allow us to generalize Hamming codes to larger lengths. We defined the $C_{\mathrm{Ham}} = [7, 4, 3]_2$ Hamming code using the generator matrix

$$G = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{bmatrix}.$$

If we define the matrix

$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix},$$

then one can check that HG = 0 and that H is a parity check matrix for $C_{\mathrm{Ham}}$. Note that H has a rather nice structure: its columns are the integers 1 to 7 written in binary.

Correcting single errors with the Hamming code: Suppose that y is a corrupted version of some (unknown) codeword $c \in C_{\mathrm{Ham}}$, with a single bit flipped. We know by the distance property of $C_{\mathrm{Ham}}$ that c is uniquely determined by y. In particular, a naive method to determine c would be to flip each bit of y and check whether the resulting vector is in the nullspace of H. A slicker (and faster) way to correct y is as follows. We know that $y = c + e_i$, where $e_i$ is the column vector of all zeros except a single 1 in the i-th position. Note that $Hy = H(c + e_i) = Hc + He_i = He_i$, which is the i-th column of H. The i-th column of H is the binary representation of i, and thus this method recovers the location i of the error.

Definition 10 (Syndrome) The vector Hy is said to be the syndrome of y.

Generalized Hamming codes: Define $H_r$ to be the $r \times (2^r - 1)$ matrix whose i-th column is the binary representation of i. This matrix must contain $e_1$ through $e_r$, which are the binary representations of all powers of two from 1 to $2^{r-1}$, and thus has full row rank. Now we can define the r-th generalized Hamming code

$$C_{\mathrm{Ham}}^{(r)} = \{ c \in \mathbb{F}_2^{2^r - 1} \mid H_r c = 0 \}$$

to be the binary code with parity check matrix $H_r$.

Lemma 11 $C_{\mathrm{Ham}}^{(r)}$ is a $[2^r - 1,\ 2^r - 1 - r,\ 3]_2$ code.

Proof: Since $H_r$ has rank r, it follows that the dimension of $C_{\mathrm{Ham}}^{(r)}$ equals $2^r - 1 - r$. By Lemma 9, we need to check that no two columns of $H_r$ are linearly dependent, and that there are 3 linearly dependent columns. The former holds since the columns of $H_r$ are nonzero and distinct; the latter holds since, for instance, the columns corresponding to 1, 2, and 3 sum to zero.
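The single-error correction procedure described above is easy to carry out in code. The following is a minimal Python sketch (an illustration, not part of the notes) for the $[7, 4, 3]_2$ Hamming code: compute the syndrome $Hy$ and, if it is nonzero, read it as the binary representation of the error location.

```python
import numpy as np

# Parity check matrix of the [7,4,3]_2 Hamming code: column i (1-indexed)
# is the binary representation of i, most significant bit on top.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def correct_single_error(y):
    """Correct at most one flipped bit in y using the syndrome Hy."""
    y = np.array(y, dtype=int)
    syndrome = (H @ y) % 2
    # Read the syndrome as a binary number: it equals the error location i
    # (1-indexed), or 0 if y is already a codeword.
    i = int("".join(map(str, syndrome)), 2)
    if i > 0:
        y[i - 1] ^= 1  # flip the erroneous bit back
    return y

if __name__ == "__main__":
    c = np.array([1, 0, 1, 1, 0, 1, 0])  # a codeword of C_Ham (Hc = 0)
    y = c.copy()
    y[4] ^= 1                            # corrupt position 5 (1-indexed)
    print(correct_single_error(y))       # recovers c
```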
[...]

Definition 16 (Hadamard code) The binary Hadamard code $\mathrm{Had}_r$ is a $[2^r, r]_2$ linear code whose $2^r \times r$ generator matrix has all r-bit vectors as its rows. Thus the encoding map for the Hadamard code encodes $x \in \mathbb{F}_2^r$ by a string in $\mathbb{F}_2^{2^r}$ consisting of the dot products $\langle x, a \rangle$ for every $a \in \mathbb{F}_2^r$. The Hadamard code can also be defined over $\mathbb{F}_q$, by encoding a message in $\mathbb{F}_q^k$ with its dot product with every vector in $\mathbb{F}_q^k$.

We note that the Hadamard code is the most redundant linear code in which no two codeword symbols are equal in every codeword.

Hadamard codes have an excellent distance property:

Lemma 17 The Hadamard code $\mathrm{Had}_r$ (as well as the Simplex code) has minimum distance $2^{r-1}$. The q-ary Hadamard code of dimension r has distance $(1 - 1/q)q^r$.

Proof: We prove that for $x \neq 0$, $\langle x, a \rangle \neq 0$ for exactly $2^{r-1}$ (i.e., half of the) elements $a \in \mathbb{F}_2^r$. Assume for definiteness that $x_1 = 1$. Then for every a, $\langle x, a \rangle + \langle x, a + e_1 \rangle = x_1 = 1$, and therefore exactly one of $\langle x, a \rangle$ and $\langle x, a + e_1 \rangle$ equals 1. The proof for the q-ary case is similar. □

We will later see that binary codes cannot have relative distance more than 1/2 (unless they only have a fixed constant number of codewords). Thus the relative distance of Hadamard codes is optimal, but their rate is (necessarily) rather poor.

Comment: The first order Reed-Muller code is a code that is closely related to the Hadamard code. Linear-algebraically, it is simply the subspace spanned by the Hadamard code and the all-ones vector (i.e., the union of the Hadamard code H and its coset H + 1). It maps a message $m_1, m_2, \ldots, m_r, m_{r+1}$ to $(m_1 a_1 + \cdots + m_r a_r + m_{r+1})_{a \in \mathbb{F}_2^r}$, or equivalently to the evaluations $(M(a))_{a \in \mathbb{F}_2^r}$ of the r-variate polynomial $M(X_1, X_2, \ldots, X_r) = \sum_{i=1}^r m_i X_i + m_{r+1}$. It is a $[2^r, r+1, 2^{r-1}]_2$ code. We will later see that no binary code of block length n and relative distance 1/2 can have more than 2n codewords, so the first order Reed-Muller code is optimal in this sense.

Code families and asymptotically good codes

The Hamming and Hadamard codes exhibit two extremes in the trade-off between rate and distance. Hamming codes have rate approaching 1 (and in fact optimal rate), but their distance is only 3. Hadamard codes have (optimal) relative distance 1/2, but their rate approaches 0. A natural question this raises is whether there are codes which have both good rate and good relative distance (say, with neither of them approaching 0 for large block lengths). To formulate this question formally, and also because our focus in this course is on the asymptotic behavior of codes for large block lengths, we consider code families. Specifically, we define a family of codes to be an infinite collection $\mathcal{C} = \{C_i \mid i \in \mathbb{N}\}$, where $C_i$ is a $q_i$-ary code of block length $n_i$, with $n_i > n_{i-1}$ and $q_i \geq q_{i-1}$. Most of the constructions of codes we will encounter in this book will naturally belong to an infinite family of codes that share the same general structure and properties (and it is usually these asymptotic properties that will often guide our constructions).

We have allowed the alphabet sizes $q_i$ of the codes in the family to also grow with i (and $n_i$). Code families where $q_i = q$ for all i, for some fixed q, will be of special interest (and turn out to be more challenging to understand and construct). We will call such families code families over a fixed alphabet, or more specifically q-ary code families.

The notions of rate and relative distance naturally extend to code families. The rate of an infinite family of codes $\mathcal{C}$ is defined as
$$R(\mathcal{C}) = \liminf_{i} \left\{ \frac{k_i}{n_i} \right\}.$$
The (relative) distance of a family of codes $\mathcal{C}$ equals
$$\delta(\mathcal{C}) = \liminf_{i} \left\{ \frac{\Delta(C_i)}{n_i} \right\}.$$
A q-ary family of codes is said to be asymptotically good if both its rate and relative distance are bounded away from zero, i.e., if there exist constants $R_0 > 0$ and $\delta_0 > 0$ such that $R(\mathcal{C}) \geq R_0$ and $\delta(\mathcal{C}) \geq \delta_0$.
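As an illustration of these definitions (a sketch, not part of the notes), the following Python snippet computes the rate and relative distance for the first few members of the generalized Hamming family $[2^r - 1,\ 2^r - 1 - r,\ 3]_2$ and the Hadamard family $[2^r,\ r,\ 2^{r-1}]_2$, with parameters taken from Lemmas 11 and 17: the former has rate tending to 1 but relative distance tending to 0, the latter has relative distance 1/2 but rate tending to 0, so neither family by itself is asymptotically good.

```python
# Rate k/n and relative distance d/n for the first few members of the two
# code families discussed above (parameters from Lemmas 11 and 17).

def hamming_params(r):
    n = 2**r - 1
    return n, n - r, 3                 # [2^r - 1, 2^r - 1 - r, 3]_2

def hadamard_params(r):
    return 2**r, r, 2**(r - 1)         # [2^r, r, 2^(r-1)]_2

if __name__ == "__main__":
    for r in range(3, 11):
        n1, k1, d1 = hamming_params(r)
        n2, k2, d2 = hadamard_params(r)
        print(f"r={r:2d}  Hamming:  R={k1/n1:.3f} delta={d1/n1:.3f}   "
              f"Hadamard: R={k2/n2:.3f} delta={d2/n2:.3f}")
    # Hamming: R -> 1, delta -> 0.  Hadamard: R -> 0, delta = 1/2.
    # Neither family keeps both bounded away from 0, so neither is
    # asymptotically good.
```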
In this terminology, the question raised above concerning (binary) codes with both good rate and good distance can be phrased as: "Are there asymptotically good binary code families?" For a code family, achieving a large rate and a large relative distance are naturally conflicting goals, and there are fundamental trade-offs between these. We will study some of these in the course. A good deal is known about the existence (or even explicit construction) of good codes, as well as about the limitations of codes, but the best possible asymptotic trade-off between rate and relative distance for binary codes remains a fundamental open question. In fact, the asymptotic bounds have seen no improvement since 1977!