Channel Coding and Its Applications to Space and Satellite Communications

EE 521, Fall 2000
Instructor: Dr. Steve F. Russell
Submitter: Xiaowen Liu
Department of Electrical and Computer Engineering
Iowa State University
December 4, 2000

Abstract

Shannon's capacity theory has shown that it is theoretically possible to transmit information over a channel at any rate R <= C with an arbitrarily small error probability. This report reviews the background of channel coding theory, including block and convolutional codes, decoding algorithms, and channel capacity theory, and surveys the channel coding techniques developed over the last 50 years that allow channel capacity to be approached in space and satellite communications. A great advance has occurred in practical system design, narrowing the gap between real systems and the capacity bound. The examples chosen clearly illustrate the importance of channel coding methods in modern digital applications.

Introduction

In 1948, Shannon demonstrated that, by proper encoding of the information, errors induced by a noisy channel can be reduced to any desired level without sacrificing the rate of information transmission. Since Shannon's work, a great deal of effort has gone into designing efficient encoding and decoding methods for error detection and correction in noisy environments.

Types of Codes

There are two types of codes in common use today: block codes and convolutional codes.

Block codes are a class of parity-check codes, described here with the (n, k, d) notation. Figure 1 gives a simplified model of a block code encoder: the encoder transforms a block of k message bits from the information source into a longer block of n bits, called a codeword. The (n - k) bits that the encoder adds to each data block are called redundant bits, parity bits, or check bits; they carry no information. The ratio k/n is called the code rate.

Figure 1. A simplified model of a block code encoder (k input bits in, an n-bit codeword out)

The minimum Hamming distance of the code is d. In general, the error-correcting capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is

    t = floor((d - 1) / 2),

and the error-detecting capability e is

    e = d - 1.

The other type of code is the convolutional code, described with the (n, k, K) notation. Here n does not define a block or codeword length as it does for a block code. The integer K is a parameter known as the constraint length; it is the number of k-tuple stages in the encoding shift register. Figure 2 gives an example of a (4, 2, 2) convolutional encoder to help illustrate the constraint length K: the message bits are shifted into the encoder k = 2 bits at a time, the register holds K = 2 such two-bit stages, and four modulo-2 adders produce the n = 4 output bits.

Figure 2. A (4, 2, 2) convolutional encoder (input k = 2, constraint length K = 2, four modulo-2 adders)
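To make the encoding operation concrete, here is a minimal Python sketch of an (n, k, K) convolutional encoder in the report's notation. The tap sets and the rate-1/2, K = 3 example at the end (generators 7 and 5 octal) are illustrative assumptions, not the exact connections of Figure 2.

```python
def conv_encode(bits, k, K, taps):
    """Encode a bit list with an (n, k, K) convolutional code.

    taps is a list of n tuples; each tuple holds indices into the
    k*K-bit shift register (index 0 = newest bit) that are XORed
    to form one output bit per k-bit input step.
    """
    reg = [0] * (k * K)                  # shift register: K stages of k bits
    out = []
    for i in range(0, len(bits), k):
        reg = bits[i:i + k] + reg[:-k]   # shift k new message bits in
        for tap in taps:                 # one modulo-2 sum per output line
            out.append(sum(reg[t] for t in tap) % 2)
    return out

# Illustrative example: a rate-1/2, K = 3 encoder (k = 1, n = 2) with the
# classic generators 111 and 101 (octal 7, 5) -- an assumed code, chosen
# because its four-state trellis is small enough to follow by hand.
msg = [1, 0, 1, 0, 0]                    # last two zeros flush the register
print(conv_encode(msg, k=1, K=3, taps=[(0, 1, 2), (0, 2)]))
# -> [1, 1, 1, 0, 0, 0, 1, 0, 1, 1], i.e. branch words 11 10 00 10 11
```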
Sequential Convolutional Decoding Algorithm

The earliest convolutional decoding algorithm is the sequential decoding algorithm, originally proposed by Wozencraft and subsequently modified by Fano. A sequential decoder works by generating hypotheses about the transmitted codeword sequence and computing a metric between these hypotheses and the received signal. It moves forward as long as the metric indicates that its choices are likely; otherwise it backs up and changes hypotheses until it finds a likely one. The number of states searched is independent of the constraint length, which makes it possible to decode convolutional codes with very large constraint lengths. The major disadvantage of sequential decoding is that the number of state metrics searched is a random variable. At low SNR, the received sequences must be buffered while the decoder searches for a likely hypothesis; if the average symbol arrival rate exceeds the average symbol decode rate, the buffer overflows and data are lost. An important characteristic of a sequential decoder is therefore its probability of buffer overflow.

Viterbi Convolutional Decoding Algorithm

Viterbi discovered the Viterbi decoding algorithm in 1967. It greatly reduces the computational load by taking advantage of the special structure of the code trellis. Figure 3 gives a (2, 1, 3) convolutional encoder, and Figure 4 gives the trellis diagram of this encoder. In Figure 4, the branch words are the code symbols that would be expected from the encoder output as a result of each state transition. In Figure 5, each branch of the decoder trellis is labeled with the Hamming distance between the received code symbols and the corresponding branch word.

The Viterbi algorithm states that if any two paths in the trellis merge to a single state, one of them can always be eliminated in the search for an optimum path. Figure 6 gives an example, showing two paths merging at time t5 into state 00. Define the cumulative Hamming path metric of a given path at time ti as the sum of the branch Hamming distance metrics along the path up to time ti. In Figure 6 the upper path has the higher metric, so it cannot be part of the optimum path and is eliminated. Viterbi decoding thus consists of computing the metrics of the paths entering each state and eliminating all but the best of them.

The major disadvantage of the Viterbi algorithm is that, while the error probability decreases exponentially with the constraint length, the number of code states, and hence the decoding complexity, grows exponentially with the constraint length. On the other hand, its computational complexity is independent of the channel characteristics, whereas a sequential decoder incurs large delays at low SNR because it needs more time to find a likely path on a very noisy channel.
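The survivor-selection step can be made concrete with a minimal hard-decision Viterbi decoder in Python for the rate-1/2, K = 3 code of the encoder sketch above (generators 7 and 5 octal; again an illustrative assumption, not necessarily the code of Figure 3). The state is the pair of previous input bits, the branch metric is Hamming distance, and whenever two paths merge into a state only the lower-metric survivor is kept.

```python
def viterbi_decode(received):
    """Hard-decision Viterbi decoding; received holds two bits per step."""
    INF = float('inf')
    metrics = {0: 0, 1: INF, 2: INF, 3: INF}    # encoder starts in state 00
    paths = {s: [] for s in range(4)}
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics = {s: INF for s in range(4)}
        new_paths = {}
        for s in range(4):
            if metrics[s] == INF:               # state not yet reachable
                continue
            s1, s2 = (s >> 1) & 1, s & 1        # the two previous input bits
            for b in (0, 1):                    # hypothesized input bit
                v0 = b ^ s1 ^ s2                # branch word bit, generator 111
                v1 = b ^ s2                     # branch word bit, generator 101
                ns = (b << 1) | s1              # next state
                m = metrics[s] + (v0 != r0) + (v1 != r1)  # cumulative Hamming metric
                if m < new_metrics[ns]:         # survivor selection at the merge
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)        # state with the lowest metric
    return paths[best]

# The codeword 11 10 00 10 11 of message 1 0 1 0 0, with one bit flipped
# in the third branch word, is still decoded correctly:
print(viterbi_decode([1, 1, 1, 0, 1, 0, 1, 0, 1, 1]))   # -> [1, 0, 1, 0, 0]
```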
Shannon-Hartley Capacity Theorem

Shannon's capacity theory states that the capacity C of a band-limited additive white Gaussian noise (AWGN) channel with bandwidth W, a channel model that approximately represents many practical digital communication systems, is given by

    C = W log2(1 + Es/N0)  bits per second,    (1)

where Es is the average signal energy in each signaling interval of duration T = 1/W and N0 is the noise power spectral density. In this formulation of the capacity theorem, one quadrature signal is transmitted in each T-second signaling interval, and the nominal channel bandwidth is W = 1/T. The theorem shows that for any transmission rate R less than or equal to the channel capacity C, there exists a coding scheme that achieves an arbitrarily small probability of error; conversely, if R is greater than C, no coding scheme can achieve reliable performance.

For the discussion that follows, it is useful to introduce the parameter eta, called the spectral (or bandwidth) efficiency: eta is the average number of information bits transmitted per signaling interval of duration T seconds. Reliable transmission then requires

    0 < eta < C/W = log2(1 + eta Eb/N0),    (2)

where Eb is the average energy per information bit, related to Es by

    Es/N0 = eta Eb/N0.    (3)

Equation (2) follows from substituting (3) into (1); rearranging (2) gives the bound

    Eb/N0 > (2^eta - 1)/eta.    (4)

The bound (4) expresses the fundamental tradeoff between spectral efficiency and SNR, and is plotted as the capacity bound in Figure 7. It can be interpreted as giving the minimum SNR required to achieve a given spectral efficiency with an arbitrarily small probability of error. For example, in Figure 7, uncoded QPSK with eta = 2 achieves BER = 10^-5 at Eb/N0 = 9.6 dB, while coding can provide approximately error-free communication at the same spectral efficiency at Eb/N0 = 1.8 dB; the maximum power (or coding) gain is therefore 9.6 - 1.8 = 7.8 dB.

Figure 7. Capacity curves and the performance of several coding schemes

Alternatively, the capacity bound can be interpreted as giving the maximum spectral efficiency at which a system may operate reliably at a fixed SNR. For example, where uncoded QPSK with eta = 2 achieves BER = 10^-5 at Eb/N0 = 9.6 dB, coding can provide reliable communication at a spectral efficiency of eta = 5.7 at the same Eb/N0 = 9.6 dB; the maximum spectral efficiency (or rate) gain is 5.7 - 2 = 3.7 bits per signal.
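Both interpretations are easy to check numerically. The short Python sketch below evaluates the bound (4) for the minimum Eb/N0 at a given spectral efficiency, and inverts it by bisection for the maximum spectral efficiency at a fixed Eb/N0; it reproduces the 1.8 dB and eta = 5.7 figures quoted above for QPSK.

```python
import math

def min_ebn0_db(eta):
    """Minimum Eb/N0 in dB at spectral efficiency eta, from bound (4)."""
    return 10 * math.log10((2 ** eta - 1) / eta)

def max_eta(ebn0_db, lo=1e-6, hi=60.0):
    """Largest eta satisfying bound (4) at a fixed Eb/N0 in dB, found by
    bisection; (2**eta - 1)/eta is increasing in eta, so this is valid."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if min_ebn0_db(mid) <= ebn0_db:
            lo = mid                     # mid is still achievable
        else:
            hi = mid
    return lo

print(min_ebn0_db(2.0))   # ~1.76 dB: the capacity limit at eta = 2
print(max_eta(9.6))       # ~5.7 bits/signal at Eb/N0 = 9.6 dB
```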
The material above provides the background needed for the rest of this report, which surveys the progress made in applying coding techniques to the area of space and satellite communications.

Mariner System

The Mariner system used a Reed-Muller (RM) code from the family with parameters (n, k, d) = (2^(k-1), k, 2^(k-2)). The full soft-decision decoding algorithm for these codes, based on the Hadamard transform, was developed by Green at the Jet Propulsion Laboratory (JPL); the decoder was later called the Green machine.

                          Code Rate (bits/signal)   Eb/N0 (dB)
    Uncoded BPSK                   1.0                  9.6
    Mariner System                 6/32                 6.4
    Coding Gain (dB)                --                  3.2
    Bandwidth Expense              32/6                  --

    Table 1. The performance of the Mariner system with the (32, 6, 16) RM code

Table 1 gives the performance of the (32, 6, 16) RM code. Even with the significant bandwidth expansion, the coding gain achieved by the Mariner system was very modest, because the RM code, like a typical block code, has a large number of nearest-neighbor codewords.

Pioneer System

A significant advance in deep-space communication was the invention of sequential decoding in the late 1960s, which made powerful long-constraint-length convolutional codes with soft-decision decoding practical. Thus the Pioneer 10 Jupiter fly-by mission in 1972 and the Pioneer 11 Saturn fly-by mission in 1973 both used a (2, 1, 32) nonsystematic convolutional code constructed by Massey and Costello, decoded with a sequential decoder. Table 2 gives the performance of the Pioneer system: the powerful long-constraint-length convolutional code brought a significant improvement over the Mariner system.

However, as described in the introduction, sequential decoding has a nonzero probability of buffer overflow due to its variable decoding delay. The performance of convolutional codes with sequential decoding is therefore ultimately limited by the computational cutoff rate R0 shown in Figure 8, the rate at which the average number of computations performed by a sequential decoder becomes unbounded.

                          Code Rate (bits/signal)   Eb/N0 (dB)
    Uncoded BPSK                   1.0                  9.6
    Pioneer System                 1/2                  2.7
    Coding Gain (dB)                --                  6.9
    Bandwidth Expense              2/1                   --

    Table 2. The performance of the Pioneer system with the (2, 1, 32) nonsystematic convolutional code and sequential decoder

NASA/ESA Planetary Standard Codes

To avoid the cutoff-rate limitation of sequential decoding, the Viterbi algorithm was adopted in the decoder. Viterbi decoding has a fixed delay and is therefore not limited by the computational cutoff rate. In 1977 the Voyager 1 and 2 space missions to Jupiter and Saturn used a (2, 1, 7) convolutional code with Viterbi decoding. This code and the (3, 1, 7) convolutional code were both adopted as the NASA/ESA Planetary Standard Codes by the Consultative Committee for Space Data Systems (CCSDS).

Table 3 shows that the coding gain of the Planetary (2, 1, 7) convolutional code is 5.1 dB, which is 1.8 dB worse than the Pioneer system's 6.9 dB because of the short constraint length.

                          Code Rate (bits/signal)   Eb/N0 (dB)
    Uncoded BPSK                   1.0                  9.6
    (2,1,7) Planetary Code         1/2                  4.5
    Coding Gain (dB)                --                  5.1
    Bandwidth Expense              2/1                   --

    Table 3. The performance of the Planetary (2, 1, 7) convolutional code and Viterbi decoder

As noted in the introduction, the complexity of the Viterbi algorithm grows exponentially with the constraint length, which is why only K = 7 was used. In the 1980s the Planetary Standard Codes also played a major role in military satellite communications, and convolutional encoding with Viterbi decoding will continue to be used in earth-orbiting satellite communication systems into the new century: the Globalstar and Iridium systems use K = 9, rate-1/2 and K = 7, rate-3/4 convolutional codes, respectively.

Concatenated Voyager System

More coding gain can be achieved with code concatenation, introduced by Forney. Concatenation is a scheme in which two codes, an "inner code" and an "outer code," are used in cascade. The inner code should produce a moderate BER with modest complexity; the outer code can be more complex and should correct virtually all of the residual errors from the inner decoder. The CCSDS Telemetry Standard combines a short-constraint-length inner convolutional code with Viterbi decoding and a powerful Reed-Solomon (RS) outer code, as shown in Figure 9. This concatenated scheme was employed on the Voyager system, and Table 4 gives the performance comparison of these systems.

Turbo Codes

Another way to improve on the performance of the CCSDS concatenation standard is iterative decoding. Hagenauer and Hoeher proposed such an approach with the introduction of the Soft-Output Viterbi Algorithm (SOVA). In the SOVA, reliability information about each decoded bit is appended to the output of a Viterbi decoder, and an outer decoder that accepts soft inputs can use this reliability information to improve its performance. If the outer decoder also provides reliability information at its output, decoding can iterate between the inner and outer decoders. In general, this iterative decoding technique brings an additional coding gain of up to about 1.0 dB.

A significant new discovery, the turbo code, was introduced by Berrou, Glavieux, and Thitimajshima. Turbo codes combine convolutional codes with a pseudorandom interleaver and maximum a posteriori probability (MAP) iterative decoding to achieve performance very close to Shannon's capacity boundary. As shown in Figure 10, an information bit sequence is encoded by a simple recursive systematic rate-1/2 convolutional encoder to generate one check bit sequence. The same information bit sequence is permuted by a very long interleaver and then encoded by a second recursive systematic rate-1/2 convolutional encoder to generate a second check bit sequence. The information bits and both check bit sequences are transmitted, so the overall code rate is 1/3.
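The encoder structure just described can be sketched in a few lines of Python. The recursive systematic convolutional (RSC) generator pair (feedback 1 + D + D^2, feedforward 1 + D^2) and the seeded pseudorandom interleaver below are illustrative assumptions, not the parameters of Figure 10.

```python
import random

def rsc_parity(bits):
    """Parity sequence of a rate-1/2 RSC encoder with feedback
    polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2."""
    s1 = s2 = 0                          # two-bit register, all-zero start
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2                  # recursive (feedback) bit
        parity.append(a ^ s2)            # feedforward taps 1 and D^2
        s2, s1 = s1, a                   # shift the register
    return parity

def turbo_encode(bits, seed=42):
    perm = list(range(len(bits)))
    random.Random(seed).shuffle(perm)    # pseudorandom interleaver
    p1 = rsc_parity(bits)                # first check bit sequence
    p2 = rsc_parity([bits[i] for i in perm])   # second, on permuted bits
    # Multiplex systematic bit + both parity bits: overall rate 1/3.
    return [b for triple in zip(bits, p1, p2) for b in triple]

msg = [1, 0, 1, 1, 0, 1, 0, 0]
print(turbo_encode(msg))                 # three output bits per message bit
```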
Decoding proceeds as follows. The received sequences corresponding to the information bit sequence and the first check bit sequence are decoded by a soft-decision decoder for the first convolutional code. The output of this decoder is a sequence of soft decisions, one for each bit of the information sequence. These soft decisions, the reliability information, are used by a similar decoder for the second convolutional code, which in turn generates better soft decisions that the first decoder can use in the next iteration. After decoding in this way for 10 to 20 cycles, hard decisions are finally made on the information bits.

Figure 10. The "turbo" encoding/decoding system

The major disadvantages of a turbo code are its long decoding delay, caused by the iterative decoding, and its weak performance at lower BERs. From Figure 8 and Table 4, we can see that the turbo code performs 3.8 dB better than the Planetary (2, 1, 7) code at the same decoding complexity, and is also 1.0 dB better than the BVD code while using twice the spectral efficiency of the BVD code. A turbo coding scheme is now being standardized for future deep-space missions.

Conclusions

This project report has surveyed the development and progress of channel coding in space and satellite communications. The significant improvements in coding techniques have always been tied to the decoding methods: new decoding methods were devised to solve the problems of the old ones. Every decoding method has its own advantages and disadvantages, and designers must find a tradeoff among coding gain, computational complexity, delay, and spectral efficiency. By reviewing the history of the different space missions, we can come to the following conclusions.

1. Convolutional codes became the most common choice in applications because of the relative ease of soft-decision decoding of convolutional codes.

2. Parallel concatenated convolutional codes and iterative decoding will replace convolutional codes in many applications that can tolerate significant decoding delay. The turbo code is a very good illustration that creative use of known coding structures and decoding methods can bring practical performance close to the capacity bound.