GB2386039A - Dual termination of turbo codes - Google Patents

Dual termination of turbo codes

Info

Publication number
GB2386039A
Authority
GB
United Kingdom
Prior art keywords
encoding
series
decoding
data
data items
Prior art date
Legal status
Granted
Application number
GB0204910A
Other versions
GB2386039B (en)
GB0204910D0 (en)
Inventor
Luke Hebbes
Peter Raymond Ball
Ronald Roy Malyan
Current Assignee
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to GB0204910A
Publication of GB0204910D0
Publication of GB2386039A
Application granted
Publication of GB2386039B
Anticipated expiration
Current status: Expired - Fee Related


Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29: Combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957: Turbo codes and decoding
    • H03M13/2993: Implementing the return to a predetermined state, i.e. trellis termination
    • H03M13/37: Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39: Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905: Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3933: Decoding in probability domain


Abstract

The invention is concerned with terminating both encoders in a turbo code arrangement (parallel-concatenated recursive systematic convolutional codes) using separate tail bit series to force a predetermined state, such as the zero state, in each of the encoders. The arrangement comprises a first encoding unit 50 for encoding input data, an interleaving unit 60 for reordering the input data, and a second encoding unit 70 for encoding the reordered input data. The arrangement also comprises first and second termination units 80, 90 for generating corresponding first and second series of tail bits which are encoded after the data encoding operations so as to drive the encoding units to a predetermined final state. A turbo decoder (Fig. 20) performs an iterative turbo decoding operation in which the backward passes in a second decoding unit are initialised based on the predetermined final state of the second encoding unit.

Description

DATA ENCODING AND DECODING APPARATUS AND A DATA ENCODING AND DECODING METHOD

The present invention relates to data encoding and decoding apparatus and a data encoding and decoding method, for example for encoding and decoding data which are to be transmitted in a communications network, particularly a communications network prone to attenuation and noise.
The DSL (Digital Subscriber Line) family of communications channels is able to provide high-bandwidth transmission capability over relatively noisy, poor-quality lines, for example over the existing POTS (Plain Old Telephone Service) copper twisted-pair lines which remain in general use for telephony. For example, in the case of ADSL (Asymmetric DSL), data rates of between 16 and 640 kbps upstream and between 1.5 and 9 Mbps downstream can be provided.
Because of inherent performance limitations associated with any communications channel, and the need to maintain a realistic power budget, all channels have a speed-distance trade-off, that is to say that if transmission is to be over a greater distance then the data rate must be reduced. Similarly, if the data rate is to be increased then the maximum achievable transmission distance will be reduced. For example, the following table represents a typical speed-distance relationship for downstream ADSL services:
Data rate    Maximum cable length
1.5 Mbps     5.5 km
2.1 Mbps     4.9 km
6.3 Mbps     3.7 km
8.5 Mbps     2.7 km
For VDSL (Very high-speed DSL) much higher data rates of up to 55 Mbps are envisaged, and a typical speed-distance relationship might be as follows:
Data rate    Maximum cable length
13 Mbps      1,500 m
26 Mbps      1,000 m
55 Mbps      300 m
As is apparent from these tables, the achievable range (and therefore number of potential customers) is severely limited when using ADSL over POTS lines, and is especially so for VDSL at high data rates.
Providing a broadband access network with optical fibre all the way to the home (FTTH) can be prohibitively expensive. An alternative is a combination of fibre cable feeding neighbourhood optical network units (ONUs), with the "last mile" connection being by way of existing or new copper.
This fibre-to-the-neighbourhood (FTTN) topology encompasses fibre-to-the-curb (FTTC) with short drops and fibre-to-the-basement (FTTB) serving tall buildings with vertical drops.
The DSL family of technologies provides an enabling technology for FTTN, although as shown above the reach of the final copper link can be very limited, especially in the case of VDSL. The use of channel coding is desirable on DSL channels not only because the channels are not very reliable, but also so that the "speed-distance curve" can be pushed further. Fig. 1 of the accompanying drawings illustrates how the achievable data rate (speed) typically drops off with increasing transmission distance. With effective channel coding techniques the curve can be shifted away from the origin. Therefore, transmission can be achieved either at a higher speed over the same distance or at the same speed over a greater distance.
The speed-distance trade-off is an important consideration for telecommunications companies who wish to offer full-rate DSL to their customers. Rather than incurring the expense of laying new cables and installing new ground equipment, better channel coding could instead be employed to reach customers beyond the normal reach of the current technology. With improved channel coding this distance could be increased to cover a greater proportion of the population.
There is an increasing demand for such broadband multimedia services to be delivered over access links to individual consumers. These access links will often have a low utilisation but will also have the requirement that their costs are to be kept to a minimum. To achieve the low cost there is pressure for these access links to be relatively narrow-band.
Obvious applications of these coding techniques are in broadcast communications, from satellite to mobile telephone networks, which are very narrow-band.
Other examples of links that are now being introduced are cable modems, to allow cable operators to provide interactive digital TV and Web services to their customers, and ADSL technology to extend the bandwidth of normal telephone lines as described above.
ADSL technology will then allow traditional telecommunication service operators to extend the use of their telephony access network for the provision of broadband services similar to those to be offered by the cable operators.
Digital TV typically requires data rates in excess of 100 Mbps and cable modems will typically have data rates of 4 Mbps to be shared between several customers.
For ADSL this might be 1.5 Mbps for a single customer.
There is, therefore, a requirement for source coding to compress the source data so as to reduce the source data rate to that which can be accommodated by the
access technology. The source coding that has been adopted for the provision of interactive digital TV services is MPEG-2. Although this method of source coding is effective at reducing the source data rate to that which can be transported on cable modems and ADSL technology, it does make the resultant code very "brittle" and hence very susceptible to errors. Raw MPEG-2 data requires Bit Error Rates (BER) in the region of 10^-9 to provide an acceptable service.
Due to the stringent demands placed on the access links by source coding, such as MPEG-2, it is generally necessary also to apply channel coding. With channel coding, redundancy is introduced into the source data to allow for error detection and correction. These coding schemes will be required to produce a high coding gain so that such low BERs over copper lines can be achieved. The access links can be modelled in different ways, but they have the underlying structure of Additive White Gaussian Noise (AWGN) channels.
Channel coding methods have to be adopted that provide the required error performance at an acceptably low cost.
The theoretical limits to channel coding were defined by Shannon in 1948. Formulating the main variables in determining the maximum data rate from a discrete-input memoryless channel for an arbitrarily small error rate, the theoretical maximum data rate for a transmission medium was shown to be given by the following formula:

C = B log2(1 + S/N)    (1)

where C is the channel capacity, B is the channel bandwidth, and S/N is the Signal-to-Noise Ratio (SNR) expressed as a power ratio.
From Equation (1) the normalised channel capacity versus the channel SNR can be plotted, and this is shown in Fig. 2 of the accompanying drawings.
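Equation (1) can be exercised numerically; the following is an illustrative sketch only (the function name and the example figures are ours, not taken from the patent):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Equation (1): maximum error-free data rate of an AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# For example, a 1 MHz channel at an SNR of 30 dB (a power ratio of 1000):
c = shannon_capacity(1e6, 10 ** (30 / 10))
# c is roughly 9.97 Mbps
```

Dividing both sides by B gives the normalised capacity C/B (in bits/s/Hz) plotted against SNR in Fig. 2.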
This plot shows asymptotic behaviour. Practical systems can exist only below this curve. Equation (1) sets a limit on transmission rate, not on error probability. However, there does exist a limiting value of SNR, below which there can be no error-free communication at any information rate. This limiting value of SNR can be calculated as follows. Firstly,
the following identity is used:

lim(x→0) (1 + x)^(1/x) = e

Now, let x = (Eb/N0)(C/B), noting that S/N = (Eb C)/(N0 B) = (Eb/N0)(C/B). Then, from Equation (1):

C/B = log2(1 + x) = x log2((1 + x)^(1/x))

so that

1 = (Eb/N0) log2((1 + x)^(1/x))

Finally, taking the limit as C/B → 0 (and hence x → 0):

Eb/N0 = 1/log2(e) ≈ 0.693

or, in decibels, -1.59 dB.
This value of the SNR is called the Shannon Limit.
It is not possible, in practice, to reach the Shannon Limit, since it is asymptotic and, therefore, unbounded. It would require unbounded bandwidth and implementation complexity to achieve this limit.
However, with good coding techniques, this limit can be approached with practical systems.
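The approach to this limit can be checked numerically from Equation (1); this is an illustrative sketch only, and the helper name is ours:

```python
import math

def min_ebno(spectral_efficiency):
    """Minimum Eb/N0 (as a power ratio) at rate C/B, rearranged from Equation (1)."""
    x = spectral_efficiency  # C/B in bits/s/Hz
    return (2 ** x - 1) / x

# Shrinking C/B towards zero drives the required Eb/N0 down towards the limit.
for cb in (1.0, 0.1, 0.001):
    print(cb, 10 * math.log10(min_ebno(cb)))  # tends towards -1.59 dB

# As C/B -> 0 the ratio tends to ln 2, about 0.693, i.e. about -1.59 dB.
```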
Three ways to improve the error performance of such channels can be identified. The first is to reduce the code rate of the channel coding by increasing the error correction capability of the channel code. Unfortunately, this increase in redundancy has the problem of increasing channel bandwidth, which will increase cost. The second is to increase the channel capacity by increasing the signal power. This has the disadvantages of increasing costs
and crosstalk interference between channels. The third is to increase the code symbol length, but again this has the disadvantage of increasing the size and complexity of the source code, and hence cost.
Although POTS (plain old telephone service) copper twisted-pair (CTP) lines are not high quality they are generally more reliable than a satellite or a mobile link. Also, the data rates on POTS lines are much higher than on mobile links, and consumers require real-time transmission without the inherent delay incurred when utilising satellites. Thus a different set of criteria arises when using channel coding on DSL channels compared with their use on satellite links, for example.
The delay incurred by channel coding must be kept to a minimum. As will be described in further detail below, this can be achieved by limiting various internal parameters of the encoder and/or the decoder, such as the interleaver size, the encoder constraint length and the number of decoder iterations (these terms are explained below). However, reducing any of these will have the side effect of reducing the performance of the code. Although DSL channels are generally more reliable than mobile or satellite channels, they must still perform well with the minimum of delay and it is therefore desirable to find ways of reducing delay whilst having the minimum impact on the overall performance, and conversely ways to improve performance without increasing the delay.
Turbo codes, which were introduced by Berrou, Glavieux and Thitimajshima in 1993, have been shown to have performance within 1 dB of the theoretical channel capacity defined by Shannon as described above. In general, the closer to the Shannon Limit a code performs the more complex the decoder is. This decoder complexity becomes prohibitively great, such that suboptimal codes and decoding have to be used.
Turbo codes do outperform many of the more traditional coding techniques, but the complexity of the decoder is still an issue. In general, the more complex the decoding is, the longer it will take to perform. As has been discussed above, it is desirable that delay on DSL channels due to channel coding is reduced as far as possible. So-called 'good' codes must, therefore, be designed to reduce delay whilst maintaining performance.
Turbo codes use an iterative decoding algorithm and have been observed generally to converge to a reliable codeword. One problem with turbo codes is that they appear to reach a performance floor at a BER of about 10^-6. This is not actually an error floor, but it appears as such because turbo codes exhibit such a steep curve at lower SNR. This performance floor can be pushed to lower BER by changing some of the properties of turbo codes, some of which have already been described.
It is desirable to modify the standard turbo code structure to produce a coding method by which the performance floor can be pushed to lower BER.
It is also desirable to devise a coding scheme which allows good performance with relatively small data block sizes and relatively few decoder iterations.
It is also desirable to devise methods by which turbo codes can be designed to have improved performance without increasing the inherent delay.
According to a first aspect of the present invention there is provided data encoding and decoding apparatus comprising: encoding means for performing a first encoding operation to encode a basic series of data items into a first series of encoded data items and for performing a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items;
terminating means for generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations; and a turbo decoder having first and second decoding means corresponding respectively to said first and second encoding operations for performing an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
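The termination idea can be made concrete with a toy sketch (illustrative only: the generator polynomials, function names and bit sequence are chosen by us and are not taken from the patent). For a recursive systematic convolutional encoder with generators (1, 5/7), each tail bit is chosen to cancel the feedback sum, so the shift registers flush to zero:

```python
def rsc_encode(bits, state=(0, 0)):
    """Toy RSC encoder, constraint length 3, generators (1, 5/7) in octal."""
    s1, s2 = state
    out = []
    for u in bits:
        a = u ^ s1 ^ s2      # feedback sum (recursive polynomial 7 = 111)
        p = a ^ s2           # parity bit   (forward polynomial  5 = 101)
        out.append((u, p))   # systematic bit plus parity bit
        s1, s2 = a, s1
    return out, (s1, s2)

def tail_bits(state):
    """Tail series that forces the feedback to zero, flushing the registers."""
    s1, s2 = state
    tail = []
    for _ in range(2):       # memory order m = 2, so m tail bits
        tail.append(s1 ^ s2) # chosen so the feedback sum a becomes 0
        s1, s2 = 0, s1
    return tail

# Encode data, then encode the tail: the encoder lands in the all-zero state.
data = [1, 1, 0, 1]
_, state = rsc_encode(data)
_, final = rsc_encode(tail_bits(state), state)
# final == (0, 0)
```

In the claimed arrangement this is done independently for each encoding operation, so both constituent encoders finish in a known state.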
An embodiment of the present invention provides an advantage that the backward passes in the second decoding means, corresponding to the second encoding operation which encodes said reordered series of data items, can be initialised based on a known state. This additional information allows the weights of many state transitions in the decoding means to be set to zero and results in improved performance, allowing smaller data block sizes and/or fewer decoder iterations. In another embodiment the backward passes in the first decoding means are also initialised based on a known state, being the predetermined final state of the encoding means following the first encoding operation.
When the encoding means are initialised to a predetermined starting state, the forward passes in the decoding means can also be initialised based on that starting state. The data items can be any type of information, for example data or control information.
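The initialisation described in the preceding paragraphs can be sketched in the log domain (an illustrative fragment; the function and its shape are our own, not the patent's). The same idea serves the backward passes at a known final state and the forward passes at a known starting state:

```python
import math

def init_metrics(num_states, known_state=None):
    """Log-domain trellis metrics for the end (or start) of a pass.

    If the encoder state at this end of the block is known, all
    probability mass goes on that state; otherwise every state is
    taken to be equally likely."""
    if known_state is not None:
        return [0.0 if s == known_state else -math.inf
                for s in range(num_states)]
    return [-math.log(num_states)] * num_states

# A terminated 4-state encoder: only the known final state is possible.
print(init_metrics(4, known_state=0))  # [0.0, -inf, -inf, -inf]
```

Setting the other states to minus infinity is what zeroes the weights of the impossible state transitions at that end of the trellis.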
In one embodiment of the present invention, said turbo decoder is operable to pass extrinsic information relating to the data items from one decoding means to another, and to retain extrinsic information relating to the tail items for use only within the decoding
means to which the tail items relate without passing it to another decoding means. The extrinsic information may represent the likelihood of a decoded data item in said series of decoded data items being of a particular type. There is no need for extrinsic information relating to the tail to be passed between constituent decoding units since the decoding units operate on different tails.
In one embodiment of the present invention said encoding means comprise a single encoding unit which performs said first and second encoding operations one after the other. This allows for a simple construction.
In another embodiment said encoding means comprise first and second encoding units to perform said first and second encoding operations respectively. The first and second encoding units may perform their respective encoding operations in parallel, thus allowing for faster encoding speed.
In an embodiment of the present invention, the or each encoding unit may be, for example, a convolutional encoder, a recursive encoder, or a recursive systematic encoder. The use of other types of encoding schemes in the encoders is also possible, for example ring codes.
The present invention provides an advantage that it is straightforward to provide a first encoding unit which is of a different structure to the second encoding unit, since each encoding unit can be terminated separately according to the structure of that encoding unit. Thus the generator of the first encoding unit may be different to that of the second encoding unit, while the constraint length and order of the first and second encoding units must be the same.
The predetermined final state can be the zero state, or any other state so long as it is known. The final state of one encoding unit need not be the same as that of another encoding unit. Likewise, the predetermined starting state can be the zero state, or
any other state so long as it is known. The starting state of one encoding unit need not be the same as that of another encoding unit.
Interleaving means may be provided for producing the reordered series of data items by first appending the first basic series of tail items to the basic series of data items and then permuting the resulting combined series of items. In this way the first basic series of tail items is encoded by the first and second encoding units.
Apparatus embodying the present invention is able to terminate both encoding units to a known state whilst still being able to permute the basic series of items in a pseudo-random fashion to produce the reordered series of data items encoded by the second encoding unit. The ability to use a pseudo-random interleaver is particularly beneficial in communications applications since it provides a reordered (or interleaved) series of data items having an order lacking any apparent relationship with the order of the basic (or systematic) series of data items from which the interleaved series is derived. This provides particularly good immunity to noise and good overall performance.
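A pseudo-random interleaver of the kind referred to above can be sketched as a seeded permutation; this is an illustrative stand-in only (the patent's own generator is the subject of Fig. 26), and the names are ours:

```python
import random

def make_interleaver(length, seed=0):
    """A seeded pseudo-random permutation of item positions."""
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(items, perm):
    """Reorder the systematic series into the interleaved series."""
    return [items[i] for i in perm]

def deinterleave(items, perm):
    """Invert the permutation, restoring the systematic order."""
    out = [None] * len(items)
    for pos, src in enumerate(perm):
        out[src] = items[pos]
    return out

# De-interleaving restores the original (systematic) order exactly.
perm = make_interleaver(8, seed=7)
data = [1, 0, 1, 1, 0, 0, 1, 0]
assert deinterleave(interleave(data, perm), perm) == data
```

Because the permutation is invertible, the decoder can de-permute received items back to the systematic order, as described below.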
When the basic series of data items is transmitted from the encoding means to the turbo decoder, the basic series of data items does not have to be transmitted in the original order but may be permuted before transmission and then de-permuted upon reception to return the series to the original order. The first and/or the said second basic series of tail items may also be transmitted to the turbo decoder.
In an embodiment of the present invention, the encoding means may be further operable to perform one or more further encoding operations to encode one or more corresponding further reordered series of data items, being the basic series of data items arranged in
different respective orders, into one or more corresponding further series of encoded data items. In this case, the terminating means may also be operable to terminate the encoding means following at least one of the one or more further encoding operations by generating at least one corresponding further basic series of tail items which is or are encoded by said encoding means after the at least one further encoding operation and which is or are suitable to drive said encoding means to a predetermined final state following that operation or those operations. This produces at least one corresponding further series of encoded tail items. Some benefit may still be obtained from the present invention even if not all of such further encoding operations are terminated.
According to a second aspect of the present invention there is provided a data encoding and decoding method comprising the steps of: employing encoding means to perform a first encoding operation to encode a basic series of data items into a first series of encoded data items; employing said encoding means to perform a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items; generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations; and employing first and second decoding means corresponding respectively to said first and second encoding operations to perform an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said
predetermined final state following the second encoding operation.
According to a third aspect of the present invention there is provided data encoding apparatus comprising: encoding means for performing a first encoding operation to encode a basic series of data items into a first series of encoded data items and for performing a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items; and terminating means for generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations.
According to a fourth aspect of the present invention there is provided a data encoding method for producing a code comprising the steps of: employing encoding means to perform a first encoding operation to encode a basic series of data items into a first series of encoded data items for inclusion in said code; employing said encoding means to perform a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items for inclusion in said code; and generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations, thereby producing first and second series of encoded tail items for inclusion in said code. Such a method may further comprise the step of including said basic series of data items in said code. It may also comprise the step
of including said first and/or said second basic series of tail items in said code.
According to a fifth aspect of the present invention there is provided data decoding apparatus for decoding codes produced by a method embodying the fourth aspect of the present invention, said data decoding apparatus comprising a turbo decoder having first and second decoding means corresponding respectively to said first and second encoding operations for performing an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
According to a sixth aspect of the present invention there is provided a data decoding method for decoding codes produced by a method embodying the fourth aspect of the present invention, said data decoding method comprising the step of employing first and second decoding means corresponding respectively to said first and second encoding operations to perform an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
According to a seventh aspect of the present invention there is provided a code produced by a data encoding method comprising: a first portion having a first series of encoded data items produced by employing encoding means to perform a first encoding operation to encode a basic series of data items; a second portion having a second series of encoded data items produced by employing said encoding means to perform a second encoding operation to encode a
reordered series of data items, being said basic series of data items arranged in a different order; and third and fourth portions respectively having first and second series of encoded tail items produced by generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations.
Such codes could be stored on a computer-readable medium and could also be embodied in a data signal transmitted between remote stations.
Apparatus and methods embodying the present invention as described above are applicable to any form of communications network. For example, a user equipment in a mobile communications network, a DSL modem or a satellite transmission/reception station may be provided with encoding apparatus and/or decoding apparatus embodying the present invention. Likewise, a base station in a mobile communications network may comprise encoding apparatus and/or decoding apparatus embodying the present invention.
Reference will now be made, by way of example, to the accompanying drawings, in which: Fig. 1, discussed hereinbefore, is a graph illustrating the speed-distance trade-off in a communications channel ; Fig. 2, also discussed hereinbefore, is a graph showing the theoretical maximum achievable data rate plotted against the signal-to-noise ratio of the channel; Fig. 3 is a graph for use in illustrating the difference between soft and hard decoding ; Fig. 4 is a block diagram showing an example of a convolutional encoder ; Fig. 5 is a table for use in explaining the operation of the convolutional encoder of Fig. 4;
<Desc/Clms Page number 15>
Fig. 6 is a trellis diagram which shows the available state changes in the encoder of Fig. 4; Fig. 7 shows the path through the trellis diagram for the encoding of the bit sequence 101101; Fig. 8 is a trellis diagram for use in explaining the Viterbi decoding algorithm; Fig. 9 is a table corresponding to the trellis diagram of Fig. 8 which shows the paths which are chosen in the Viterbi decoding process; Fig. 10 is a block diagram showing the structure of an interleaved encoder; Fig. 11 is a block diagram showing an example of a non-systematic convolutional encoder; Fig. 12 is a block diagram showing an example of a recursive systematic convolutional encoder; Fig. 13 is a block diagram showing an example of a parallel-concatenated recursive systematic convolutional encoder; Fig. 14 is a block diagram showing the structure of a turbo decoder; Fig. 15 shows the possible paths through a trellis diagram of an unterminated encoder; Fig. 16 shows the possible paths through the trellis diagram of a terminated encoder; Fig. 17 is a block diagram showing a technique for terminating a recursive systematic convolutional encoder; Fig. 18 is a block diagram showing the structure of a single-terminated parallel-concatenated encoder; Fig. 19 is a block diagram showing a doubleterminated parallel-concatenated encoder embodying the present invention; Fig. 20 is a block diagram showing a decoder embodying the present invention; Fig. 21 is a graph comparing the performance of single and double-terminated encoding schemes;
Fig. 22 is another graph comparing the performance of single and double-terminated encoding schemes; Fig. 23 is a graph showing how the number of decoder calculations varies with the block size; Fig. 24 is a graph showing how the performance of a code varies with signal-to-noise ratio when the number of decoder calculations is relatively constant; Fig. 25 is a graph showing how the performance of a code is affected by the resolution used for the extrinsic data within the decoder; Fig. 26 is a block diagram showing the structure of an example pseudo-random interleaver generator; Fig. 27 is a table showing simulation results of a single-terminated turbo encoder; Fig. 28 is a table showing simulation results of a double-terminated encoder embodying the present invention; Fig. 29 is a flow chart for explaining a method embodying the present invention for producing a code; and Fig. 30 is a block diagram for explaining a code embodying the present invention.
Embodiments of the present invention are closely related to the turbo code architecture. Therefore, before a detailed discussion of preferred embodiments of the present invention, the turbo code structure will first be described and specific problems associated with the turbo code structure will be highlighted.
However, an explanation of the turbo code architecture relies necessarily on a background knowledge of block codes and recursive codes upon which the turbo code is based. Therefore an explanation of coding architectures related to turbo codes is presented before the explanation of turbo codes themselves.
Error correcting codes can be classified generally according to whether they employ memory in the encoding process or not. Convolutional codes are examples of
the former, while block codes are examples of the latter. Most codes are classified as linear by having the property that any two code words can be added in modulo-2 arithmetic to produce a third code word.
To have error detection and correction, redundancy must be incorporated. Error correction requires more redundancy than error detection, and, in general, the redundancy required increases as the number of bits to be corrected increases. However, there is a limit as to how many errors can be detected or corrected. This limit is based on the "Hamming distance" of the code.
The Hamming distance is defined as the minimum 'distance' between any two valid codewords. This 'distance' is defined as the number of codeword bits that are different between valid codewords. For example, a Hamming distance of 2 signifies that at least two bits must be in error for a message to be incorrectly accepted as correct. In general, a Hamming distance of e + 1 allows at least e errors to be detected. The concept of distance is a very important property, and most decoding algorithms rely on this idea.
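The Hamming distance between two codewords can be computed directly by counting differing bit positions. The following is a minimal illustrative sketch (the function name and example codewords are not from the source document):

```python
def hamming_distance(x, y):
    """Number of bit positions in which two equal-length codewords differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

# Two hypothetical 7-bit codewords; a code whose minimum distance is
# e + 1 over all codeword pairs can detect at least e errors.
d = hamming_distance("1011010", "1001011")
print(d)  # 2
```

With a distance of 2 between these two words, a single-bit error in either one can never be mistaken for the other, matching the "e + 1 detects e errors" rule above.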
Block codes offer a simple coding technique and, as such, do not generally offer the best performance possible. However, the more complex coding techniques to be described below are essentially similar to block codes, and so an explanation of block codes will provide a useful background. The Reed-Solomon code is a special block code that achieves the largest possible code minimum distance for any linear code with the same encoder input and output block lengths. For Reed-Solomon codes, error probability is an exponentially decreasing function of block length, whereas the decoding complexity is only proportional to a small power of the block length. The Reed-Solomon code, for this reason, is widely used in communications.
Block codes transform a block of k message bits into an n-bit codeword by adding n-k redundant bits that are algebraically related to the k message bits.
The channel encoder for an (n, k) linear block code generates bits at the rate:

R0 = (n/k) Rs

where Rs is the information rate of the source, r = k/n is known as the code rate, and R0 is the channel data rate.
Block codes in which the message bits are also transmitted unaltered are known as systematic codes. A systematic structure divides the codeword into two parts, the k message bits and the (n-k) parity bits.
The (n-k) parity bits are linear sums of the k message bits, where each of the (n-k) equations are linearly independent (that is, no equation in the set can be expressed as a linear combination of the remaining equations).
Consider vectors x and y representing two codewords; the Hamming distance d(x, y) between the codewords is defined as above. The minimum distance dmin of a linear block code determines the error correcting properties of the code. As the information code words are k bits in length, there are 2^k code vectors, which can be transmitted with equal probability. The best approach to decoding code vectors that do not match exactly any of the 2^k valid code vectors is to adopt the 'maximum likelihood detection' strategy. This makes the assumption that few errors are more likely than many errors. This results in the decoder selecting the code vector y which is closest to the received vector, that is, the code with the smallest Hamming distance to the errored received code word. The number of errored bits that can be corrected using this maximum likelihood detection technique is given by the following two expressions:
-"""--I for an even dmin 2 min-1 for an odd dmi.
2 Convolutional codes will now be discussed, with the convolutional encoder and the associated decoding process being dealt with separately. The convolutional encoding process is a discrete-time convolution of the input sequence with the impulse response of the encoder. A convolutional encoder operates on the incoming message sequence continuously in a serial manner, and can be modelled as a finite-state machine consisting of an M-stage shift register. An L-bit message sequence produces a coded output sequence of length n (L + M) bits, where n is the number of output bits for each input bit. The code rate is given by: r=----- ? !- bits/symbol provided L > > M. n (L +M) n Fig. 4 shows one example of a convolutional encoder. The convolutional encoder 1 of Fig. 4 comprises a three-bit shift register 2 having three bit registers 21, 22 and 23, and two exclusive-OR (XOR) gates 31 and 32. The first bit register 21 receives the' sequence of bits in the input bit stream, and this is passed along the chain of registers 21 to 23 as each new bit in the input bit stream is received. The three inputs of the first XOR gate 31 are connected respectively to the three bit registers 21, 22 and 23 and the two inputs of the second XOR gate 32 are connected respectively to the bit registers 21 and 23.
For every bit in the input bit stream, two bits are output from the encoder 1, one from the first XOR gate 31 and the other from the second XOR gate 32. These bit pairs are multiplexed in alternating time slots by multiplexer 4 to produce the output bit stream. The encoder 1 shown in Fig. 4 is known as a (2,1) convolutional encoder since there is one information
bit (and one parity bit) in every two bits of code data. The encoder has a "constraint length" of K = 3, which is the number of bit registers 21 to 23.
As convolutional codes are not block codes, the convolutional encoder 1 of Fig. 4 might be expected to have a code rate of exactly 1/2, since two output bits are generated for every input bit. However, the convolutional encoder 1 can be forced into a block structure using periodic truncation. At the end of each block the encoder 1 flushes the remaining bits out of the register by appending zeros to the input bit stream. This is called "termination" of the encoder.
In the Fig. 4 example, only two zeros would actually need to be appended to this code as, although it may not leave three zeros in the shift register 2, when the first bit from the next stream is input into bit register 21, the two trailing zeros will be shifted into bits 22 and 23 as desired, expelling the unknown bit in register 23 from the end of the previous block. Appending zeros in this way brings the effective code rate down. However, as the number of bits before the periodic truncation increases, the code rate approaches 1/2.

The structure of the encoder 1 can be referred to in short-hand notation by reference to two "generators" G1 and G2. The generators G1 and G2 are numbers which define the respective input connections of the first and second XOR gates 31 and 32 to the shift register bits, and are calculated by summing the binary value of each bit to which a connection is made, starting from the bit 23 furthest from the input bit stream. Thus G1 = (1+2+4) = 7 and G2 = (1+4) = 5. It is usual to use octal notation for these generators.
The available state changes of this encoder are shown in the table of Fig. 5. This table shows the current "state" of the encoder 1 (the contents of the first two shift registers 21 and 22) along with the possible single-bit inputs and the two-bit output which
results from that input. Only the content of the first two registers 21 and 22 is required to define the state of the encoder 1 because the bit contained in the third register 23 is shifted out of that register when an input bit is shifted into register 21. The contents of registers 21 and 22 are referred to as the "state" of the encoder. The resulting state of the encoder after the incoming bit is input is also shown in the table.
From the table of Fig. 5, the output of the encoder 1 for given inputs can be calculated. For example, if the starting state of the encoder 1 is "00" and if the input bit sequence 101 is presented to the convolutional encoder 1 of Fig. 4, the following output is obtained: 11 10 00 10 11. These are pairs of outputs from G1 and G2 respectively. In this example, two extra zeros have been input to flush the register and ensure that the full code is used.
This coding scheme can be easily modelled, since the code can be generated by the addition, modulo 2, of the six-bit binary codewords, corresponding to an input bit of either 1 or 0, shifted by two bits each time.
In the case shown above, the two 6-bit codewords are 11 10 11 for an input bit 1, and 00 00 00 for an input bit 0. So, to encode an input of 101 this gives 11 (10+00) (11+00+11) (00+10) 11, which is the same result as above, namely 11 10 00 10 11.
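The Fig. 4 encoder can be simulated with a few lines of code. The sketch below (function name and argument names are illustrative, not from the source) models the three-bit shift register directly and reproduces the worked example, in which 101 plus two flush zeros encodes to 11 10 00 10 11:

```python
def conv_encode(bits, flush=2):
    """(2,1) convolutional encoder of Fig. 4: K = 3, generators G1 = 7, G2 = 5.
    State (s0, s1) holds the contents of bit registers 21 and 22."""
    s0 = s1 = 0
    out = []
    for u in bits + [0] * flush:       # append zeros to terminate the block
        g1 = u ^ s0 ^ s1               # XOR gate 31: registers 21, 22 and 23
        g2 = u ^ s1                    # XOR gate 32: registers 21 and 23
        out += [g1, g2]                # multiplexed output pair
        s0, s1 = u, s0                 # shift the register
    return out

print(conv_encode([1, 0, 1]))  # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1] i.e. 11 10 00 10 11
```

The same function reproduces the longer example used later for the trellis path: an effective input of 10110100 encodes to 11 10 00 01 01 00 10 11.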
The information in the table of Fig. 5 can also be represented diagrammatically, in what is called a "trellis diagram". The trellis diagram also provides a useful way of viewing the decoding process which is described below.
The trellis diagram for this example encoder 1 is shown in Fig. 6. All possible states of the encoder 1 are shown to the left of the trellis diagram, and all possible transitions from one state to another are illustrated by the solid-line paths. Above each path
is shown the two-bit output produced by the state transition associated with that path.
The process of encoding an example input of 101101 is illustrated in Fig. 7 by the path which is taken through the trellis diagram, taking into account that two zeros must be appended to this input. The path will always start and finish in the state of "00". At each stage if the input is a 1 then the lower path is taken, and if it is a 0 then the upper path is taken.
Above each path in Fig. 7 is shown the output produced by that state change. Therefore, an input of effectively 10110100 encodes to 11 10 00 01 01 00 10 11 as shown in Fig. 7.
The algorithm to implement the convolutional encoder is relatively straightforward. It was mentioned above that bits could be encoded by the addition modulo 2 of their "standard sequence". For the encoder 1 of Fig. 4, the standard sequence for a 1 is 11 10 11, and for a 0 is 00 00 00. An array of 2K elements (a(i)) is needed for the standard sequences, where K is the constraint length; in this case the array has 6 elements. Also, an array of 2K - 2 (or 2M, as M = K - 1) elements (b(i)) is needed for the shift register 'memory'; in this case the array has 4 elements. The latter array must be initialised to zero. The iterative process is then as follows:
* initialise a(1) to a(2K) to the standard sequence for the input bit;
* for i = 1 to 2M, calculate a(i) = (a(i) + b(i)) mod 2;
* for i = 1 to 2M, calculate b(i) = a(i + 2);
* the output is given by a(1) and a(2).
On the final iteration, the remaining a(i) must be sent. A straightforward way to program addition modulo 2 is (x + y) mod 2 = abs(x - y) = |x - y|.
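The iterative standard-sequence procedure above can be sketched as follows (an illustrative implementation, with hypothetical names; it agrees with the shift-register description of the same Fig. 4 encoder, including the |x - y| form of modulo-2 addition):

```python
def conv_encode_seq(bits, flush=2):
    """Encode by modulo-2 addition of shifted 'standard sequences' (K = 3)."""
    STD = {1: [1, 1, 1, 0, 1, 1], 0: [0, 0, 0, 0, 0, 0]}  # 2K = 6 elements
    b = [0, 0, 0, 0]                       # 2M = 4 memory elements, initialised to zero
    out = []
    for u in bits + [0] * flush:
        a = list(STD[u])                   # a(i): standard sequence for the input bit
        for i in range(4):
            a[i] = abs(a[i] - b[i])        # a(i) = (a(i) + b(i)) mod 2 = |a - b|
        out += a[:2]                       # the output is a(1) and a(2)
        b = a[2:]                          # b(i) = a(i + 2)
    return out

print(conv_encode_seq([1, 0, 1]))  # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

The result matches the worked example above (101 plus flush zeros gives 11 10 00 10 11), confirming that the array procedure and the shift-register view are equivalent.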
A convolutional code may be decoded by applying the principle of maximum likelihood decoding to minimum distance decoding. The principle is that the fewest errors are the most likely. The path through the trellis diagram with the minimum distance from the received bits can then be taken as the decoded data.
This is implemented by choosing the path whose coded sequence differs from the received sequence in the fewest number of places. In most cases the 'brute force' comparison of all possible paths is not practical due to the number of possible paths.
Therefore, the path through the trellis is chosen during the actual process of decoding. There is a choice of suboptimal decoders, which approximate maximum likelihood decoding, or optimal ones like the Viterbi algorithm.
The Viterbi decoding algorithm will now be briefly discussed since it is a commonly-used algorithm and, unlike some suboptimal algorithms, it is not restricted to special applications. This technique involves following through all possible paths in a trellis. When paths converge to a common state, the paths that would imply the larger distance from the received sequence are rejected. The remaining paths are known as the survivor paths, as is explained below.
There are several problems with this type of decoding. Firstly, Viterbi decoding produces a delay in the receiver, as decisions are made after many bits are received. Secondly, as the constraint length increases, so does the complexity. Many paths are possible with convolutional codes, and as the constraint length increases, the number of paths increases very rapidly due to the increase in the number of states, and each of these paths has to be investigated. Among other problems is the fact that the decoder needs to know how long to keep on trying when
the survivors do not agree on the decoding of earlier data.
Consider again the example used above in relation to the Fig. 4 encoder in which a data input sequence of 101101 (with two zeros appended) gives a codeword of 11 10 00 01 01 00 10 11. Now assume that there was a transmission error in the ninth bit, so that the received codeword is 11 10 00 01 11 00 10 11. Fig. 8 shows the possible paths through the trellis diagram.
For each of the first four received bit pairs, the output associated with one of the two possible paths from the present state to the next state agrees with the received bit pair, so that path is chosen. Fig. 8 shows the plurality of possible paths after receiving the bit in error. The selection of paths continues throughout the decoding.
After receiving the bit in error, the decoding continues as follows. Firstly, the distances dmin of the convergent paths A to H of Fig. 8 are calculated.
These distances are shown in the table of Fig. 9 together with the sequence of three bit pairs defining the corresponding path. The survivor paths are then chosen, which are shown by ticks in the table of Fig.
9. There is one survivor path chosen for each destination state, that path being the path having the smaller distance dmin associated with it. Fig. 9 also shows the next round of decision-making, and clearly shows the chosen path, which is the path having the shortest combination of paths. In this example the chosen path is G and D'. The received bit sequence has therefore been decoded correctly as 11 10 00 01 01 00 10 11.
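The survivor-path selection just described can be sketched in code. The following is an illustrative hard-decision Viterbi decoder for the Fig. 4 code (function and variable names are not from the source); it reproduces the worked example, correcting the error in the ninth received bit:

```python
def viterbi_decode(pairs):
    """Hard-decision Viterbi decoder for the (7, 5), K = 3 code of Fig. 4.
    Keeps one survivor path per state; the trellis starts and ends in state (0, 0)."""
    INF = float("inf")
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: 0 if s == (0, 0) else INF for s in states}
    path = {s: [] for s in states}
    for r1, r2 in pairs:
        new_metric = {s: INF for s in states}
        new_path = {}
        for s in states:
            if metric[s] == INF:
                continue                              # state not yet reachable
            for u in (0, 1):                          # hypothesise each input bit
                g1, g2 = u ^ s[0] ^ s[1], u ^ s[1]    # expected encoder output
                d = metric[s] + (g1 != r1) + (g2 != r2)   # Hamming branch metric
                ns = (u, s[0])                        # next state after the shift
                if d < new_metric[ns]:                # keep only the survivor
                    new_metric[ns], new_path[ns] = d, path[s] + [u]
        metric, path = new_metric, new_path
    return path[(0, 0)]                               # terminated: all-zero end state

# Received word with an error in the ninth bit (cf. Fig. 8):
rx = [(1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (0, 0), (1, 0), (1, 1)]
print(viterbi_decode(rx))  # [1, 0, 1, 1, 0, 1, 0, 0] -- the error is corrected
```

The decoded bits 10110100 are the original 101101 plus the two flush zeros, as in the example above.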
Implementation of the Viterbi decoder is more complex than for the encoder. In the case of the aforementioned convolutional encoder, it is required to keep track of 4 paths. In the general case, it is required to keep track of 2^M paths. The algorithm
works out the minimum distance for the 2^K possible paths, then chooses the 2^M converging paths with the shortest distance. At the end of the sequence of input bits, the path with the shortest distance is chosen as the decoded sequence. The path memory required is given by the equation: u = h · 2^(K-1) = h · 2^M, where h is the length of the information bit path history per state. It has been demonstrated that a value of h of 4 or 5 times the code constraint length is sufficient for near-optimum decoder performance.
The technique known as interleaving can also be introduced to combat burst errors on the communications channel. There are two ways of interleaving in channel coding. Both have very similar effects, but they are achieved in different ways. The basic form of interleaving uses several encoded blocks of data, and interleaves the bits from different blocks as will be described below. For example, consider a (7,4) Hamming code, i.e. one having a block size of 7 consisting of 4 information bits and 3 parity bits. Since the Hamming code has a minimum distance dmin = 3, independent of the number of parity bits, this code can detect two errors and correct one. Therefore, however long a burst error is, the Hamming code will not be able to correct it.
Indeed, it will probably decode the data wrongly and may not even discover an error. For a block code, the number of bits in error that can be corrected by using the maximum likelihood detection technique is given by (dmin - 1)/2 or (dmin - 2)/2, depending on whether dmin is odd or even.
Consider the case where seven blocks of seven bits are sent, encoded with a Hamming code. Now assume that a burst error of seven bits in length is encountered, and that it causes the whole of the third block to be in error. Due to the properties of the Hamming code, the entire block of data will be lost. Worse still, if the burst error spans two blocks both blocks would be
lost, assuming both blocks had a minimum of two errors. However, if an interleaving scheme had been employed no data would have been lost at all.
Consider the case where all seven blocks of data are encoded in the same way as before, but are transmitted in a different order. Seven blocks of seven bits are still transmitted, but the respective first bits from each block are sent first. The second block transmitted is then made up from the respective second bits from the encoded blocks, and so on. If the entire third block is now lost to a burst error, when the blocks are reconstructed again only one error in each block is encountered (the third bit), which the Hamming code is capable of correcting. Therefore, as long as the burst is at most seven bits in length, all burst errors can be corrected. The scheme works as follows:
Encoded data: d11, d12, d13, d14, d15, d16, d17, d21, d22, d23, d24, d25, d26, d27, d31, d32, d33, ...
Transmission: d11, d21, d31, d41, d51, d61, d71, d12, d22, d32, d42, d52, d62, d72, d13, d23, d33, ...
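The transmission reordering above is a simple transpose, which can be sketched as follows (the labels dij are illustrative placeholders for encoded bits; function names are not from the source):

```python
def interleave(blocks):
    """Transmit the first bits of all blocks, then all second bits, and so on."""
    return [blocks[j][i] for i in range(len(blocks[0])) for j in range(len(blocks))]

def deinterleave(stream, n_blocks, block_len):
    """Reconstruct the original blocks from the interleaved stream."""
    return [[stream[i * n_blocks + j] for i in range(block_len)]
            for j in range(n_blocks)]

# Seven 7-bit codewords labelled d11..d77 (placeholder data, not real bits).
blocks = [[f"d{j}{i}" for i in range(1, 8)] for j in range(1, 8)]
tx = interleave(blocks)
# A 7-bit burst wiping out tx[14:21] destroys only the third bit of each
# codeword -- a single error per block, which a (7,4) Hamming code corrects.
```

After de-interleaving, each reconstructed block contains at most one bit from any 7-bit burst, which is exactly the property the text relies on.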
With the second type of interleaving, the transmitted bits from different blocks are not interleaved, but instead the original data is interleaved before sending to the encoder. Two identical encoders are used, both working on one block of data bits, but in a different order. The interleaver permutes the data bits sent to the second encoder. In this way, both constituent encoders are working on the same bits, but in a different order.
This has the dual benefits of not only permuting the data for better burst error performance, but also giving a different "perspective" on the data.
The structure of such a parallel-concatenated encoder is shown in Fig. 10. The parallel-concatenated encoder 19 of Fig. 10 comprises a first encoding unit 5, a second encoding unit 7, an interleaving unit 6 and
a multiplexer 18. The input data stream that is encoded by the first encoding unit 5 is also sent via the interleaving unit 6 to the second encoding unit 7.
The sets of output bits from the two encoding units 5 and 7 are sent one after the other. The outputs from the first encoding unit 5 are sent first, then those from the second encoding unit 7. This does reduce the code rate, but it is possible to "puncture" the code, i.e. send only a fraction of the encoded bits (usually half), thereby maintaining a better code rate.
The corresponding decoder has two identical decoding units, one for each encoding unit. The output from the second decoding unit must also be deinterleaved, so that the output is again in the same order as the original data input to the encoder. There are several decoding algorithms for decoding this type of system. The different perspective of the data given by this type of interleaving can make these decoding algorithms more reliable when used in such a system.
As mentioned above, turbo codes have been shown to have performance within 1 dB of the theoretical channel capacity defined by Shannon as described above. In general, the closer to the Shannon Limit a code performs the more complex the decoder is. This decoder complexity becomes prohibitively great, such that suboptimal codes and decoding have to be used.
Traditionally, trellis codes have reduced the decoder complexity by means of the Viterbi algorithm as described above in which the various options to be correlated with the received signal are continually trimmed as decoding progresses. However, with turbo codes, no trimming is done, as the received data is decoded as one block, increasing the complexity.
Conversely, as described in more detail below, turbo codes use an iterative decoding process and this iterative combining of weak constituent codes (a weak code is e.g. one with a small effective free distance,
or unreliable performance) gives rise to a strong received code with a reduction in the overall complexity. This has the dual benefits of reducing both the scope for errors and the decoder complexity.
Turbo codes are parallel-concatenated, recursive systematic convolutional (RSC) codes. The term "parallel-concatenated" refers to the parallel concatenation of constituent encoders as described above with reference to Fig. 10. It has been shown that RSC codes can perform better than the best non-systematic convolutional (NSC) codes at any signal-to-noise ratio (SNR). It has also been shown that the bit error rate (BER) of these RSC codes is lower than that of a classical systematic code with the same memory at high SNR. It can be shown mathematically that recursion is important to the function and performance of this parallel concatenated structure. These turbo codes, therefore, can provide performance improvements over more conventional coding schemes.
An RSC code is obtained by applying a feedback loop to a NSC code (hence the term "recursive"), and setting one of the outputs to be the same as the input bit sequence (hence the term "systematic"). This can be illustrated by reference to Fig. 11, which shows a NSC encoder 8, and Fig. 12, which shows an equivalent RSC encoder 11 employing feedback.
The NSC encoder 8 of Fig. 11 comprises four memory elements 91 to 94 connected in series as a shift register, and two XOR gates 101 and 102 connected selectively to the memory elements 91 to 94 and to the input with generators G1 = 37o and G2 = 21o respectively. An explanation of the generator notation was provided above with reference to Fig. 4; an input to the first XOR gate 101 direct from the input contributes an amount 2^4 to the generator G1. The subscript "o" denotes the use of octal notation; in binary notation the generators are 11111b and 10001b
respectively while in decimal they are 31d and 17d. The input data stream consists of a series of bits dk, while the encoded output consists of a first series of bits Xk, taken from the output of the first XOR gate 101, and a second series of bits yk, taken from the output of the second XOR gate 102.
The RSC encoder 11 of Fig. 12 also comprises four
memory elements 121 to 124 connected in series as a shift register, and two XOR gates 131 and 132 connected selectively to the memory elements 121 to 124 and to the input with generators G1 = 37o and G2 = 21o respectively.
However, in the RSC encoder 11 the first XOR gate 131 is connected between the input data stream and the first memory element 121, and also receives a feedback connection from each memory element identified by the first generator G1. The input data stream consists of a series of bits dk, while the encoded output consists
of a first (systematic) series of bits Xk, which is the same as the input series of bits dk, and a second series of bits yk, taken from the output of the second XOR gate 132. Both encoders 8 and 11 have memory M = 4.
Considering the RSC encoder 11 of Fig. 12, if the initial state of the memories is 0000, then it can be shown that an input data sequence dk of 1001 would produce the following memory states in memory elements 121 to 124 respectively after each input bit has been presented: 1000, 1100, 0110, 1011. The first series of output bits Xk is the same as the input series dk, i.e. 1001, and the second series of output bits Yk is 1101.
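This behaviour can be checked with a short simulation of the Fig. 12 encoder. The sketch below (names are illustrative) applies the feedback taps of G1 = 37o and the output taps of G2 = 21o, and reproduces the stated memory states and outputs for the input 1001 from the all-zero initial state:

```python
def rsc_encode(d_bits):
    """RSC encoder of Fig. 12: G1 = 37o (feedback taps), G2 = 21o, memory M = 4.
    Returns (systematic bits X, parity bits Y, memory states after each bit)."""
    mem = [0, 0, 0, 0]                            # memory elements 121 to 124
    X, Y, states = [], [], []
    for d in d_bits:
        a = d ^ mem[0] ^ mem[1] ^ mem[2] ^ mem[3] # feedback: all four G1 taps
        y = a ^ mem[3]                            # G2 = 10001b: input and last tap
        mem = [a] + mem[:3]                       # shift a into the register
        X.append(d)                               # systematic output equals input
        Y.append(y)
        states.append(list(mem))
    return X, Y, states

X, Y, states = rsc_encode([1, 0, 0, 1])
print(Y)       # [1, 1, 0, 1]
print(states)  # [[1,0,0,0], [1,1,0,0], [0,1,1,0], [1,0,1,1]]
```

The printed states correspond to 1000, 1100, 0110, 1011 and the parity sequence to 1101, as stated above.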
The turbo encoder uses the parallel-concatenated structure shown in Fig. 10, with both encoding units 5 and 7 of Fig. 10 being an RSC encoder as described above with reference to Fig. 12. An example turbo encoder 20 is shown in Fig. 13, comprising two encoding units 15 and 17 with memory M = 4, each encoder being a (37o, 21o) RSC as shown in Fig. 12. Thus the first
encoding unit 15 comprises four memory elements 1121 to 1124 and two XOR gates 1131 and 1132 connected in the same way as correspondingly-labelled elements described above with reference to Fig. 12. The second encoding unit 17 comprises four memory elements 2121 to 2124 and two XOR gates 2131 and 2132 connected in the same way as correspondingly-labelled elements of Fig. 12. Two outputs are taken: the first (systematic) output data stream Xk is the same as the input bit sequence dk, and the second output data stream yk is either a bit from the first encoding unit 15 or from the second encoding unit 17, controlled by multiplexer 18.
As described above with reference to Fig. 10, an important part of this encoder 20 is the interleaving unit 16. The interleaving unit 16 permutes the block of input bits dk before it is passed to the second encoding unit 17. In this way, although both of the encoding units 15 and 17 are working on the same block of bits dk, they are in a different order. It is therefore likely that when one encoding unit produces a low-weight codeword, the other will produce a high-weight codeword. This combination of weak codes can, therefore, produce a powerful combined code.
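The Fig. 13 structure can be sketched by combining two of the RSC constituents with a permutation and a punctured parity multiplexer. This is an illustrative skeleton only: the permutation passed in and the alternating puncturing pattern are assumptions for demonstration, not the patent's specific interleaver.

```python
def turbo_encode(d_bits, perm):
    """Parallel-concatenated RSC sketch of Fig. 13: the systematic stream Xk is
    the data itself; parity Yk alternates between the two constituent encoders."""
    def rsc_parity(bits):                 # one (37o, 21o) RSC constituent
        mem = [0, 0, 0, 0]
        out = []
        for d in bits:
            a = d ^ mem[0] ^ mem[1] ^ mem[2] ^ mem[3]
            out.append(a ^ mem[3])
            mem = [a] + mem[:3]
        return out
    y1 = rsc_parity(d_bits)                        # encoding unit 15: natural order
    y2 = rsc_parity([d_bits[p] for p in perm])     # encoding unit 17: interleaved
    # Multiplexer 18 punctures: even positions from y1, odd positions from y2.
    y = [y1[k] if k % 2 == 0 else y2[k] for k in range(len(d_bits))]
    return d_bits, y
```

With the identity permutation both constituents see the same order, so the punctured parity collapses to the single-encoder parity; with a genuine permutation the two encoders give the two different "perspectives" described above.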
It has been suggested that these codes have many features in common with 'random' block codes, which are known to approach Shannon-limit performance as the block size increases. However, having a large block size is not possible in many practical applications due to the complexity of the decoding algorithm. Turbo codes have similar performance to these 'random' codes but, by employing iterative algorithms, they do not have the decoding complexity.
There will now be presented a more detailed discussion of the implementation of turbo codes, including the algorithms used to generate and decode turbo codes and how to generate a pseudo-random interleaver.
The equations governing these codes are based on the convolutional encoder algorithm described by Berrou, Glavieux and Thitimajshima in their paper entitled 'Near Shannon Limit Error-correcting Coding and Decoding: Turbo Codes (1)', IEEE Int. Conf. on Comm., May 1993. This paper describes a binary rate R = 1/2 convolutional encoder with constraint length K and memory M = K - 1. The rate is calculated from the number of information bits transmitted divided by the total number of bits transmitted. The input to this encoder at time k is then the data bit dk, and the corresponding codeword Ck is the binary couple (Xk, Yk) where
Xk = Σ (i = 0 to K-1) g1i · dk-i modulo 2, g1i = 0, 1   (2)
Yk = Σ (i = 0 to K-1) g2i · dk-i modulo 2, g2i = 0, 1   (3)
where G1: {g1i} and G2: {g2i} are the two encoder generators.
In the case of the RSC code, however, it is necessary to take the feedback loop into account. If the encoder is as shown in Fig. 12, then X is output as the input data and is also the feedback feeding the Y output. In this case, the following equations are obtained, which are modified from (2) and (3) above:
Xk = dk
Yk = Σ (i = 0 to K-1) g2i · ak-i mod 2
where
ak = dk + Σ (i = 1 to K-1) g1i · ak-i mod 2
One possible implementation of the pseudo-random interleaver is discussed in a paper by Benedetto and Divsalar entitled 'Turbo Codes: Performance Analysis, Design and Iterative Decoding', Globecom '97, Nov. 4th, Phoenix. An interleaver with length N = 2^m - 1 can be
produced by using a shift-register with feedback connections made according to a primitive polynomial of degree m. This is then loaded with a non-zero codeword, and cycled through all 2^m - 1 different binary words. The resultant order can then be used to permute blocks of data bits.
For example, if a primitive polynomial of degree 3 such as D³ + D² + 1 is used, the structure of the pseudo-random generator would be as shown in Fig. 26. With the initial codeword 101 loaded into such a pseudo-random interleaver, the following sequence is obtained: 101, 010, 001, 100, 110, 111 & 011. In this way the permutation [1234567] → [5214673] is obtained.
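The generator can be sketched as a 3-stage linear feedback shift register. The tap placement below (feedback from the first and third stages, shifting right) is an assumption chosen so that the register reproduces exactly the state sequence 101, 010, 001, 100, 110, 111, 011 given above; other primitive-polynomial conventions would give a different but equally valid permutation.

```python
def lfsr_permutation(seed=(1, 0, 1)):
    """Cycle a 3-stage shift register through all 7 non-zero states; reading
    each state as a binary number yields the interleaver permutation."""
    s = list(seed)
    order = []
    for _ in range(7):
        order.append(s[0] * 4 + s[1] * 2 + s[2])   # state read as an integer
        s = [s[0] ^ s[2]] + s[:2]                  # shift right, feedback at left
    return order

print(lfsr_permutation())  # [5, 2, 1, 4, 6, 7, 3]
```

Reading the seven states as integers gives exactly the permutation [1234567] → [5214673] quoted in the text.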
The turbo decoder takes the form as shown in Fig.
14, comprising first and second decoding units 25 and 27 which are serially concatenated, an interleaving unit 26, a demultiplexer 28, and first and second de-interleaving units 29 and 30. The input of the decoder 24 is the received binary couple (Xk, Yk). The Yk input data stream (parity data) is the combination of Y1k and Y2k from the turbo encoder 20 discussed above with reference to Fig. 13. The demultiplexer 28 switches the incoming Yk data to the first decoding unit 25 or second decoding unit 27 depending on the constituent encoding unit (15 or 17 of Fig. 13) concerned. When the input is switched to one decoding unit, the input to the other is set to zero. The Xk input data stream (systematic data) is fed to the first decoding unit 25 and to the second decoding unit 27 via the interleaving unit 26. The first decoding unit 25 outputs extrinsic data Z1k (to be explained below) which is interleaved by interleaving unit 26 and input to the
second decoding unit 27 in a permuted order Z1p[k].
Likewise, the second decoding unit 27 outputs extrinsic data Z2p[k] which is de-interleaved by the first de-interleaving unit 29 and input to the first decoding unit 25 in a permuted order Z2k. The extrinsic data
Z2p[k] is de-interleaved by the second de-interleaving unit 30 on the last iteration, for the final decoded output to be calculated.
The turbo decoder 24 is a soft-input/soft-output (SISO) algorithm as will now be briefly explained.
Binary signals sent over channels are not simply a series of 1's and 0's, they are signal levels with thresholds. This information can be used by the decoder as a confidence interval. Fig. 3 shows the likelihood P(z) of decoding a received signal at a level Z(t) in an Additive White Gaussian Noise (AWGN) channel as a 1 or a 0. Decoding as either a 1 or a 0 is hard decoding. Soft decision decoding can be achieved by employing a "soft" input having quantization levels, effectively giving a confidence interval for each incoming bit. Fig. 3 shows eight levels (3 bits) of quantization.
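An eight-level soft input can be sketched as a simple uniform quantizer. The thresholds and signal range below are illustrative assumptions (the source does not specify them); the point is only that each received level maps to a 3-bit confidence value rather than a hard 0 or 1:

```python
def quantize_soft(z, levels=8, lo=-1.0, hi=1.0):
    """Map a received signal level to one of `levels` soft values (here 3 bits).
    0 means a confident '0', levels - 1 a confident '1'; middle values are
    uncertain. Range and uniform spacing are illustrative assumptions."""
    step = (hi - lo) / levels
    q = int((z - lo) / step)
    return max(0, min(levels - 1, q))      # clamp out-of-range levels

print(quantize_soft(0.9), quantize_soft(-0.9), quantize_soft(0.05))  # 7 0 4
```

A hard decoder would reduce all three of these received levels to single bits; the soft values preserve how close each level was to the decision threshold.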
For a Gaussian channel, eight-level quantization of the input results in a performance improvement of approximately 2 dB. Analogue, or infinite-level quantization, results in a 2.2 dB performance improvement over hard decision decoding, which is only 0.2 dB better than eight-level quantization. Greater quantization than eight levels can produce little performance gain, and will require greater processing power. Although soft decision decoding may require greater processing power, it produces a significant improvement in performance.
The turbo decoder 24 is a soft-input/soft-output (SISO) algorithm in that it not only makes use of soft-decision decoding on the input, but, as is described in detail below, the turbo decoder 24 generates internal "soft information" relating to the likelihood of an output bit being a particular state. The turbo decoder 24 makes a final decision about the output based on this internal soft information in the form of weights.
The received systematic data are assumed to be correct
in the first instance, and the decoding iterations provide a weight to that data.
The highest weight codeword becomes the output word. The soft information usually takes the form of a "log-likelihood ratio" for each data bit. The log-likelihood ratio is the ratio of the probability that a given bit is '1' to the probability that it is '0'. If the logarithm of this likelihood is taken, then its sign corresponds to the most probable hard decision on the bit (if it is positive, '1' is most likely; if negative, then '0'). The absolute magnitude is a measure of the certainty about this decision.
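The sign/magnitude reading of the log-likelihood ratio can be illustrated directly. A minimal sketch; the probabilities are arbitrary example values.

```python
import math

def llr(p1):
    """Log-likelihood ratio of a bit: log of P(bit = 1) over P(bit = 0)."""
    return math.log(p1 / (1.0 - p1))

for p1 in (0.9, 0.55, 0.1):
    value = llr(p1)
    hard = 1 if value > 0 else 0        # sign gives the hard decision
    print(f"P(1)={p1}: LLR={value:+.2f}, hard decision {hard}, "
          f"certainty {abs(value):.2f}")  # magnitude gives the certainty
```

A bit with P(1) = 0.55 hard-decodes to '1' just as one with P(1) = 0.9 does, but its much smaller LLR magnitude tells the next decoding stage how little to trust that decision.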
In the turbo decoder 24 there are two decoding units 25 and 27 and the output of one decoding unit provides the input to the next. A subsequent decoding unit can make use of this reliability information from a previous decoding unit. It is likely that decoding errors will result in a smaller reliability measure than correct decoding. This may enable the second decoding unit 27 to correctly decode some of the errors resulting from the incorrect first decoding operation.
If not, it may reduce the likelihood ratio of some, and a subsequent reapplication of the first decoding unit 25 may correct more of the errors, and so on.
The log-likelihood ratio can be regarded as a measure of the total information about a particular bit. This information comes from several separate sources. Some comes from the received data bit itself: this is known as the "intrinsic information".
Information is also extracted by the two decoding units from the other received bits of the first and second codeword. When decoding one of these codes, the information from the other code is regarded as "extrinsic information". It is this information that needs to be passed between decoding units, since the intrinsic information is already available to the next
decoding unit, and to pass it on would only dilute the extrinsic information.
In the turbo decoder 24 the intrinsic information has been separated from the extrinsic, so that the output of each decoding unit contains only extrinsic information to pass on to the next decoding unit. Thus the first decoding unit outputs extrinsic data Z1k which is interleaved into the correct order for the second decoding unit 27 by the interleaving unit 26. The output of the interleaving unit 26 is labelled as Z1P[k] in Fig. 14, where P[k] denotes the permutation (or interleaving) of the k bits. After the second code has been decoded for the first time, extrinsic information Z2P[k] is passed back to the first decoding unit 25, de-interleaved back to the appropriate order Z2k for that decoding unit by the first de-interleaving unit 29, and the whole process iterated again. It is this feedback that has given rise to the term 'turbo-code', since it is likened to a turbo-charged engine, in which part of the power at the output is fed back to the input to boost the performance of the whole system.
Each decoding unit comprises a symbol-by-symbol maximum a posteriori (MAP) decoder, and the decision process made by the symbol-by-symbol MAP decoder is based on the sign of the log a posteriori probability (LAPP) ratio. The decision is made as follows: uk = +1 if P(uk = +1|y) > P(uk = -1|y), and uk = -1 otherwise. For the theory and derivation of the algorithm the reader is referred to the paper of Bahl et al entitled 'Optimal decoding of linear codes for minimizing symbol error rate', IEEE Transactions on Information Theory, March 1974.
Both constituent decoding units 25 and 27 of Fig.
14 use the modified BCJR algorithm (named after its creators Bahl, Cocke, Jelinek & Raviv) proposed by Berrou et al (see above) to perform symbol-by-symbol maximum a posteriori (MAP) decoding. Pseudo-code for
the turbo decoder has been given by Ryan in the paper entitled 'A Turbo Code Tutorial', New Mexico State University, Las Cruces, and is shown here. The relevant equations of the BCJR algorithm are also added as and when required. Each constituent decoding unit 25 and 27 must have full knowledge of the trellis of the corresponding encoding units 15 and 17. Input bits and parity bits for all possible state transitions must be known, and can be stored in an array or matrix, for instance. Also, the interleaving unit 26 and de-interleaving units 29 and 30 must be matched in the encoder and the decoder.
The iterative process can now be described. Turbo decoding using the MAP algorithm essentially works by assigning a weight to each possible state-transition depending on the probability of it happening, given all the previous bits and all the subsequent bits. To achieve this, the decoder performs both a forwards and a backwards pass over the data block.
Firstly, the decoder calculates the γ values.
These assign a weight factor to the possible state transitions (those that cannot happen are assigned a weight of 0). The transition with the higher γ value is considered to be the more likely. Decoding then continues with the forward pass over the data from bit 1 to N, giving the α values, which is equivalent to asking what is the likelihood of the state transition happening given all the previous received bits.
Finally, a backwards pass is performed over the data from bit N to 1, giving the β values. Here, it is equivalent to asking what is the likelihood of this state transition given all the subsequent received bits. Then calculation of the Log Likelihood Ratio (LLR) for the entire sequence can proceed from these three variables. The LLR is then a measure of confidence about the value of the original bit. This
is the extrinsic data that will be passed to the next decoding unit.
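The three passes described above can be sketched on a toy trellis. This is a minimal sketch under assumed parameters (a two-state trellis, block length 6, random positive weights standing in for the γ values, and a known terminated final state); it checks the standard forward-backward invariant that the total trellis weight, the sum over states of αk(s)·βk(s) from which the LLR is formed, is the same at every bit position k.

```python
import random

random.seed(1)
S, N = 2, 6                       # states and block length (toy values)

# gamma[k][sp][s]: weight of transition sp -> s at step k+1; here every
# transition is allowed and given an arbitrary positive weight
gamma = [[[random.uniform(0.1, 1.0) for _ in range(S)] for _ in range(S)]
         for _ in range(N)]

# forward pass: alpha_0 fixed by the known initial (zero) state
alpha = [[1.0, 0.0]]
for k in range(N):
    alpha.append([sum(alpha[k][sp] * gamma[k][sp][s] for sp in range(S))
                  for s in range(S)])

# backward pass: terminated trellis, so beta_N is fixed the same way
beta = [None] * (N + 1)
beta[N] = [1.0, 0.0]
for k in range(N - 1, -1, -1):
    beta[k] = [sum(beta[k + 1][s] * gamma[k][sp][s] for s in range(S))
               for sp in range(S)]

# the total weight sum_s alpha_k(s) * beta_k(s) is the same at every k
totals = [sum(a * b for a, b in zip(alpha[k], beta[k]))
          for k in range(N + 1)]
print(totals)
```

The invariant holds because each α recursion step absorbs exactly the γ factor that the matching β recursion step releases; in a real MAP decoder the same αγβ products, split by the input bit that caused each transition, form the numerator and denominator of the LLR.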
The initial state of the encoding unit is known and can, therefore, be used to initialise the forward pass (i.e. the α values) accordingly. The initialisation is as follows:
α0(i)(s) = 1 for s = 0
α0(i)(s) = 0 for s ≠ 0
where the superscript (i) denotes the encoding unit number, the subscript the bit number and s is the state of the encoding unit. However, there is no such initialisation for the β values if the final state of the encoding unit is not known. Therefore, the backwards pass is initialised based on the results of the forward pass. This is initialised as follows:
βN(i)(s) = αN(i)(s), for all s
That is to say, the β value of the final bit in the block is set to be the same as the α value, for each state s.
For the case where there are two constituent decoding units 25 and 27, the initialisation of the decoding units is as follows, starting with the first
decoding unit 25:
α0(1)(s) = 1 for s = 0; α0(1)(s) = 0 for s ≠ 0
βN(1)(s) = 1 for s = 0; βN(1)(s) = 0 for s ≠ 0
L21(uk) = 0 for k = 1, 2, ..., N
where L21(uk) is the LAPP (extrinsic) ratio passed from the second decoding unit 27. Then the initial state of the second decoding unit 27 is set:
α0(2)(s) = 1 for s = 0; α0(2)(s) = 0 for s ≠ 0
βN(2)(s) = αN(2)(s), for all s (this is set in the first iteration)
L12(uk) is determined after the first half-iteration from the first decoding unit 25.
The iterative process will now be discussed together with the corresponding equations of the BCJR algorithm. The following explanation concerns the nth iteration. Again, the two decoding units 25 and 27 are considered separately, starting with the first decoding unit 25:
for k = 1, 2, ..., N:
• get yk = (yks, yk1p), where yks is the received (perturbed) source (systematic) bit, and yk1p is the received (perturbed) parity bit from the first encoding unit 15
• compute γk(s', s) = exp[½uk(L21(uk) + 4Ec·yks)]·exp[2Ec·yk1p·xk1p]
for all allowed state transitions, where uk is set to the value of the encoding unit input that caused the transition s' → s; L21(uk) is the de-permuted extrinsic information from the previous second decoding unit 27 iteration, and Ec is the energy per channel bit. Here xk1p is the non-perturbed parity output of the first encoding unit 15 and not the systematic data.
• compute αk(s) = Σs' γk(s', s)·αk-1(s') / Σs Σs' γk(s', s)·αk-1(s'), for k = 1, 2, ..., N    (4)
• compute βk-1(s') = Σs γk(s', s)·βk(s) / Σs Σs' γk(s', s)·αk-1(s'), for k = N, N-1, ..., 2    (5)
• compute L12(uk) = log[ΣS+ αk-1(s')·γk(s', s)·βk(s) / ΣS- αk-1(s')·γk(s', s)·βk(s)], for k = 1, 2, ..., N    (6)
where S+ is the set of transitions s' → s caused by input uk = +1 and S- is the set caused by uk = -1.
The iterative process for the second decoding unit 27 is very similar:
for k = 1, 2, ..., N:
• get yk = (yP[k]s, yk2p)
• compute γk(s', s) = exp[½uk(L12(uP[k]) + 4Ec·yP[k]s)]·exp[2Ec·yk2p·xk2p] for all allowed
state transitions, where uk is set to the value of the encoding unit input that caused the transition s' → s, and L12(uP[k]) is the permuted
extrinsic information from the previous first decoding unit 25 iteration
• compute αk(s), βk-1(s') and L21(uk) from (4), (5) and (6)
Finally, after the final iteration, the decoded bits are computed. This is done using the following iteration:
for k = 1, 2, ..., N:
• compute L(uk) = 4Ec·yks + L21(uk) + L12(uk), where L21(uk) has been de-permuted back to the original bit order
• if L(uk) > 0, decide uk = +1; else decide uk = -1
The termination of convolutional encoders, described above with reference to Fig. 4, is achieved by appending an appropriate number of zeros to the end of the input data stream to flush out the contents of the encoder and return it to a known state. It is also desirable to terminate the constituent encoding units of a turbo encoder. The constituent decoding units use maximum likelihood decoding of the received code words.
As such, it is desirable to know both the initial state and the final state of the constituent encoding units, as this gives more information to work from and, therefore, a more reliable decision. The easiest way to know the state of the encoding unit is to force it to the zero-state by terminating it correctly. In this way, all blocks will start and end in the same state.
If the final state is not known, then the situation illustrated in Fig. 15 arises. The trellis diagram of Fig. 15 is for a constituent encoding unit having four possible states. The initial state is known, so the decoding unit can be initialised such that the weights assigned to the first bit will show
only two possible paths. The weights for the other paths will be zero. Unfortunately, as the final state is not known, it is not possible to influence the decoding unit weights towards the end of the trellis.
In this situation the backward pass over the data block that is performed by the MAP decoding units cannot be initialised, and some information about the data is being discarded. In an RSC code, each bit encoded will have an influence over all subsequent bits, to a greater or lesser extent. By not terminating the trellis some of this information is lost.
Terminating the trellis should result in better performance. The problem with terminating an RSC code is that a series of zeros cannot simply be appended to the end of the data, as would be done to terminate non-recursive convolutional codes such as that of Fig. 4.
The recursion means that this is unlikely to drive the encoder to the zero-state. Instead, a series of bits that is dependent on the current state of the encoding unit must be appended.
After termination the backward pass over the data block in the decoding units can be initialised, as the final state is now known. Again this gives rise to some zero weights for particular state transitions.
Fig. 16 shows the terminated trellis. This can significantly reduce the computational complexity in the decoder since many weights are known to be zero.
However, turbo codes commonly use pseudo-random interleaving units. Therefore, if a series of bits is added on to the end of the data to drive the encoder in the first encoding unit to the zero-state it is unlikely to drive the encoder in the second encoding unit to the zero-state as well. This is why turbo codes usually only deal with the termination of the first encoding unit. In this case the initial state and the final state of the first encoding unit are
known, but neither is known for the second encoding unit.
Therefore with turbo codes such as those described above with reference to Fig. 13, due to the nature of the more common interleaving unit designs, it is usually only the trellis of the first encoding unit that is terminated. However, performance improvements can be obtained by terminating both encoding units.
The usual way of terminating both encoding units involves utilising specially designed interleaving units that do not perform as well as others at low SNR.
Therefore, this method is reserved for high SNR situations, where the performance gain from terminating both tails is less significant anyway.
As has already been stated, terminating the trellis of an RSC encoder cannot be achieved simply by appending a series of zeros to the incoming data. For an RSC encoder of constraint length m, a sequence of (m - 1) bits is required to be appended that is dependent on the current state. The penalties incurred for the termination are relatively small. For example, encoding a block of 1020 bits with a constraint length 5 RSC encoder will mean that 1024 coded triplets have to be sent, increasing the number of bits by 12 on 3060, for the non-punctured rate 1/3 code.
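The overhead arithmetic in the example above can be checked directly:

```python
k, m = 1020, 5             # block size and constraint length
triplets = k + (m - 1)     # the m-1 tail bits are encoded as triplets too
bits_sent = 3 * triplets   # non-punctured rate-1/3: three bits per triplet
print(triplets, bits_sent, bits_sent - 3 * k)  # extra bits vs. no tail
```

So the four tail bits cost only 12 extra transmitted bits on a 3060-bit block, an overhead of under 0.4%.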
A simple method of generating the tail for the Fig. 12 RSC encoder 11 is shown in Fig. 17. In this scenario, the connection 21 shown with the dashed line in Fig. 17 is not made whilst encoding the input data dk. To generate the tail, this connection 21 is made.
In this case there are five inputs to the exclusive-OR (XOR) gate 13 that control the input and recursion. If the fifth input is set to be the same as the XOR of the other four, the output will always be zero. Completing this operation four times, in this case, will leave the encoder in the zero-state.
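The tail-generation trick described above can be sketched as follows. This is a minimal sketch: the feedback taps, register length and function names are illustrative assumptions, not the exact connections of Fig. 17, but the principle is the same, namely that choosing each tail bit equal to the XOR of the tapped cells forces the XOR gate output (the bit shifted into the register) to zero.

```python
TAPS = (1, 1, 0, 1)   # assumed feedback taps; not those of any standard code

def feedback(state):
    """XOR of the tapped memory cells (the recursion term)."""
    bit = 0
    for tap, cell in zip(TAPS, state):
        bit ^= tap & cell
    return bit

def rsc_step(state, bit_in):
    """One step of a toy RSC register with 4 memory cells (constraint
    length 5): the XOR of the input and the tapped cells is shifted in."""
    return [bit_in ^ feedback(state)] + state[:-1]

def make_tail(state):
    """Choose each tail bit equal to the feedback XOR, so the bit entering
    the register is always 0; m-1 = 4 such steps reach the zero state."""
    tail = []
    for _ in range(len(state)):
        bit = feedback(state)
        tail.append(bit)
        state = rsc_step(state, bit)
    return tail, state

tail, final = make_tail([1, 0, 1, 1])  # some non-zero state after encoding
print(tail, final)
```

Because each chosen bit cancels the recursion exactly, the register simply shifts in zeros, and after four steps any starting state is driven to the all-zero state.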
Considering the encoder 20 of Fig. 13, having completed termination for the first encoding unit 15, the data sequence dk has the extra (m-1) bits appended.
This is then interleaved and encoded with the second encoding unit 17. However, it is unlikely that the second encoding unit 17 will be properly terminated.
The problem then arises that the second encoding unit 17 is not in the zero-state for the encoding of the next block. This can become a problem at the decoder, so the second encoding unit 17 is always set back to the zero-state after encoding a block, regardless of its actual final state.
The turbo decoder 24 of Fig. 14 knows the initial state of both constituent encoding units 15 and 17 to be the zero-state, as well as the final state of the first encoding unit 15. It does not, however, know anything of the final state of the second encoding unit 17. The backward pass over the data for the second decoding unit 27 is initialised based on the final state of the forward pass, which is only a 'best guess'.
There are interleaver designs that enable termination of both encoders. One such design is the Block Helical Simile interleaver, which is based on the conventional block interleaver. Bits are arranged into a matrix and then read out diagonally rather than by column.
Such an interleaver takes the form of a matrix, where the number of columns is a multiple of m (the constraint length of the code). The number of rows is determined by the size of the block, k. The number of rows and columns must be relatively prime. For example, a code with constraint length m = 3 and a block size of k = 15 gives rise to the following matrix:
d1  d2  d3
d4  d5  d6
d7  d8  d9
d10 d11 d12
d13 d14 d15
The interleaved sequence given by this matrix would then be :
d' = (d1, d5, d9, d10, d14, d3, d4, d8, d12, d13, d2, d6, d7, d11, d15)
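The diagonal read-out can be sketched as follows, assuming the conventional single wrapped diagonal (step down one row and right one column, wrapping in both directions); `helical_interleave` is an illustrative name, not a library function. Because the numbers of rows and columns are relatively prime, the diagonal visits every cell exactly once.

```python
def helical_interleave(bits, rows, cols):
    """Block helical ('simile') interleaver sketch: write row-wise, read
    along a single wrapped diagonal.  Assumes rows and cols are
    relatively prime so the diagonal visits every cell."""
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[i % rows][i % cols] for i in range(rows * cols)]

data = list(range(1, 16))               # d1 ... d15 represented as 1 ... 15
out = helical_interleave(data, rows=5, cols=3)
print(out)
# the 'simile' property: every bit keeps its index modulo m = 3, which is
# what allows one tail to terminate both constituent encoders
assert all(p % 3 == (d - 1) % 3 for p, d in enumerate(out))
```

The assertion checks the key structural property: each bit lands at an output position congruent to its input position modulo m, so the tail bits occupy the same residue classes in both encoding orders.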
This can be made into an odd-even interleaver for puncturing by choosing the number of columns to be even. However, although this type of interleaver would seem better than other interleaver designs as it allows termination of both encoders, it does not actually perform as well as some pseudo-random interleavers.
Better performance would be achieved if both encoders could be terminated while using a code-matched pseudo-random interleaver.
The scheme described above of initialising the β values based on the results of the forwards pass is clearly less reliable than knowing where to start the backwards pass, as this gives only a 'best guess' as to the end state of the encoding unit. It is quite possible that errors could enter the system at this point and be propagated. If the trellis were terminated then it would be possible to initialise the backward pass accordingly.
Fig. 18 shows the structure of such a single-terminated encoder. The encoder 23 of Fig. 18 is similar to the encoder 19 of Fig. 10, and in addition to the like-numbered parts of Fig. 10 the encoder of Fig. 18 comprises a termination unit 22 connected to receive the incoming data stream (d1... dk) having k bits.
The termination unit 22 acts on the data (d1... dk) to produce a tail (t1... tm-1) having m-1 bits which is suitable to terminate the first encoding unit 5. This tail (t1... tm-1) is appended to the input data (d1... dk) and
sent to the first encoding unit 5 and to the interleaving unit 6, as well as to the systematic output of the encoder 23.
The first encoding unit 5 acts directly on the terminated data stream (d1... dk) + (t1... tm-1) to produce an output (e11... e1(k+m-1)) having k+m-1 bits, and is left properly terminated following the encoding procedure.
The second encoding unit 7 acts on the same bits as the first encoding unit 5 but after passing through the interleaving unit 6, which modifies the order of the bits to produce interleaved data (i1... i(k+m-1)) having k+m-1 bits. The second encoding unit 7 produces an output
(e21... e2(k+m-1)) but due to the action of the interleaving unit 6 the tail bits (t1... tm-1) are no longer in the correct order or place to terminate the second encoding unit 7 properly.
Termination of both encoding units is therefore desired, whilst preferably being able to keep a pseudorandom interleaving unit. Therefore, two tails must be appended, one for each encoding unit. This presents a problem. Both decoding units work on the same bits, but in a different order, and then pass extrinsic information to the other. This extrinsic information is a measure of confidence for each bit in the data.
In this situation there is no point in passing the extrinsic information about the tails as they are not the same. Also, a problem of how to deal with the interleaver/deinterleaver arises.
To solve this problem, it is necessary to interleave the data before appending the tails in the encoder. The tails for each encoding unit are then calculated and appended afterwards. Therefore, there will then be k + (m - 1) coded triplets in the block, where k is the interleaver size.
Fig. 19 is a block diagram showing an encoder 40 embodying the present invention. The encoder 40 comprises first and second encoding units 50 and 70,
first and second termination units 80 and 90 and an interleaving unit 60. The first termination unit 80 is connected to receive an input data stream (d1... dk) having k bits. The first termination unit 80 acts on the data (d1... dk) to produce a first tail (t11... t1(m-1)) which is suitable to terminate the first encoding unit 50. This first tail is appended to the input data stream and sent to the first encoding unit 50 as well as the systematic output of the encoder 40. The encoder 40 also comprises initialising means (not shown) for initialising the first and second encoding units 50 and 70 to a predetermined starting state (such as the zero state) before performing an encoding operation.
The un-terminated input data (d1... dk) is also sent to the interleaving unit 60, which interleaves the k data bits to form the interleaved data (i1... ik). The second termination unit 90 acts on this interleaved data to produce a second tail (t21... t2(m-1)) which is designed to terminate the second encoding unit 70 after being fed the interleaved data (i1... ik). This second tail is appended to the interleaved data and sent to the second encoding unit 70.
The first encoding unit 50 performs a first encoding operation to encode the un-terminated input data (d1... dk) to produce an output (e11... e1k) and subsequently encodes the first tail (t11... t1(m-1)) to produce an output (e1(k+1)... e1(k+m-1)). The un-terminated input data can be referred to as a systematic series of data items (bits), or as a basic series of data items (bits). The second encoding unit 70 performs a second encoding operation to encode the interleaved data (i1... ik) to produce an output (e21... e2k) and subsequently encodes the second tail (t21... t2(m-1)) to produce an output (e2(k+1)... e2(k+m-1)).
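The double-terminated encoding flow can be sketched end-to-end with a toy RSC unit. This is a minimal sketch under stated assumptions: the feedback taps, the forward (parity) connection, the block size and the pseudo-random permutation are all illustrative, not the parameters of the encoder 40. The point being demonstrated is that each encoding unit receives its own tail, computed from its own final state, so both units finish in the zero state even with a pseudo-random interleaver.

```python
import random

TAPS = (1, 0, 1)      # assumed feedback taps for a toy RSC unit (m = 4)

def feedback(state):
    return sum(t & c for t, c in zip(TAPS, state)) % 2

def encode(bits, state):
    """Toy RSC unit: returns parity bits and the final register state.
    The forward (parity) connection here is an illustrative assumption."""
    parity = []
    for b in bits:
        reg_in = b ^ feedback(state)
        parity.append(reg_in ^ state[-1])
        state = [reg_in] + state[:-1]
    return parity, state

def make_tail(state):
    """Tail bits that drive the unit from `state` to the zero state."""
    tail = []
    for _ in range(len(state)):
        tail.append(feedback(state))     # chosen bit cancels the recursion
        state = [0] + state[:-1]
    return tail

random.seed(7)
data = [random.randint(0, 1) for _ in range(16)]
perm = list(range(16))
random.shuffle(perm)                     # pseudo-random interleaver
interleaved = [data[p] for p in perm]

# first encoding unit: encode the un-terminated data, then its own tail
p1, s1 = encode(data, [0, 0, 0])
tail1 = make_tail(s1)
p1_tail, s1 = encode(tail1, s1)

# second encoding unit: encode the interleaved data, then a separate tail
p2, s2 = encode(interleaved, [0, 0, 0])
tail2 = make_tail(s2)
p2_tail, s2 = encode(tail2, s2)

print(s1, s2)    # both units finish in the zero state
```

Only tail1 accompanies the systematic output; for the second unit only the encoded tail bits (p2_tail here) would be transmitted, mirroring the description above.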
The structure of the double-terminated turbo code is that the data block of k bits is encoded as normal, but without terminating either code. The first
encoding unit is then terminated, with the required tail bits being appended to the systematic data, and the corresponding encoded bits to the encoded data.
The systematic data is the same as the non-coded initial data. Next, the second encoding unit is terminated and the encoded bits appended to the encoded data from the first encoding unit. No systematic data is sent for the tail of the second encoder, i.e. the non-coded tail bits are not sent, only the encoded tail bits.
The difference between the encoder 23 of Fig. 18 and the encoder embodying the present invention of Fig.
19 is that in the latter the initial data is interleaved before the tail for the first encoding unit 50 is appended. Each encoding unit has its own termination unit which produces a tail designed to force that encoding unit to the zero-state. The tail for the first encoding unit 50 is appended to the systematic data; the tail for the second encoding unit 70 is not.
With the single-terminated encoder 23 of Fig. 18, a single tail (t1... tm-1) is generated and sent in three different forms: firstly at the end of the systematic data; secondly encoded by the first encoding unit 5 and located at the end of that encoded data; and thirdly encoded by the second encoding unit 7 and interleaved throughout that encoded data. If errors occur towards the end of the transmitted block (in the tail region), these errors can potentially be corrected due to the interleaving process as described above.
With the double-terminated encoder 40 of Fig. 19 there are (m-1) encoded bits from the second encoding unit 70 (which form the encoded tail for this encoding unit 70) that have no corresponding systematic data.
Similarly, they are not interleaved. Both tails are therefore encoded and sent at the end of the block without interleaving.
This might at first sight appear to be a drawback because it would seem that if errors occur at the end of the block then the tail information would be badly affected. However, this drawback is more than compensated for by the fact that the error-correcting capability of the code produced by using termination in both encoding units 50 and 70 is much enhanced towards this end of the code. As illustrated in Fig. 16, because it is known that the second encoding unit 70 must end in a particular state, many of the weights associated with the state transitions towards the end of the code can be set to zero in the corresponding second decoding unit (to be described below). For example, if an error occurs in the final triplet in Fig. 16, it will be compensated for by the fact that non-zero weights will be assigned only to the two possible paths that cause the encoder to end in the zero-state. All other paths will have a weight of zero assigned to them. Similarly, zero weights can be set for many state-transitions in the whole tail, combating many errors that may occur. The tail is a small proportion of the whole block. So the rest of the block, as well as the known final state, will counteract errors occurring in the tail.
Although the double-terminated encoder 40 of Fig.
19 is described as having two termination units 80 and 90, and two encoding units 50 and 70, this is not essential. For example, a single termination unit and encoding unit could instead be provided. In this case the termination and encoding of the systematic and interleaved data blocks would be performed one after the other rather than simultaneously and either transmitted one after the other or stored for example in a buffer before transmission in an interleaved fashion as described above.
For a double-terminated turbo code as described above to work with a pseudo-random interleaver,
modifications have to be made to the standard turbo decoder of Fig. 14 as well as to the encoder as described above (the encoder is changed in that two different tails are generated, and the second tail is appended after interleaving).
Fig. 20 is a block diagram showing a decoder 240 embodying the present invention. The decoder 240 comprises first and second decoding units 250 and 270, first and second tail stripping units 310 and 320, an interleaving unit 260, a de-interleaving unit 290, a decision unit 300 and a demultiplexer 280. The input of the decoder 240 is the received binary couple (Xk, Yk). The Yk input data stream (parity data) is the combination of Y1k and Y2k from the encoder 40 discussed above with reference to Fig. 19. The demultiplexer 280 switches the incoming Yk data to the first decoding unit 250 or second decoding unit 270 depending on the constituent encoding unit (50 or 70 of Fig. 19) concerned. When the input is switched to one decoding unit, the input to the other is set to zero. The Xk input data stream (systematic data) is fed to the first decoding unit 250 and to the second decoding unit 270 via the interleaving unit 260.
The first decoding unit 250 produces extrinsic information Z1DT relating to both the data block (d1... dk) and the tail (t11... t1(m-1)) from the first termination unit 80 of the encoder 40 of Fig. 19. This is achieved in the same way as in the first decoding unit 25 of Fig. 14.
However, there is no reason to pass the extrinsic information relating to the first tail to the second decoding unit 270 since that decoding unit will be working on interleaved data terminated by the second termination unit 90 of Fig. 19 with a different tail (t21... t2(m-1)). Therefore the extrinsic information in Z1DT relating to the first tail, Z1T, is stripped from Z1DT by the first tail stripping unit 310 and passed back to the input of the first decoding unit 250. The
extrinsic information Z1D relating to the data is forwarded to the interleaving unit 260 where the order
is permuted to produce data Z1P[D]. This permuted extrinsic data Z1P[D] is then passed to the second decoding unit 270.
Likewise, the second decoding unit 270 produces extrinsic information Z2P[DT] (where the subscript "P[DT]" denotes an order of bits which is permuted with respect to the extrinsic information Z1DT produced by the first decoding unit 250) relating to both the data block (d1... dk) and the tail (t21... t2(m-1)) from the second termination unit 90 of the encoder 40 of Fig. 19. This is achieved in the same way as in the second decoding unit 27 of Fig. 14. However, there is no reason to pass the extrinsic information relating to the second tail to the first decoding unit 250 since that decoding unit will be working on the non-interleaved data terminated by the first termination unit 80 of Fig. 19 with a different tail (t11... t1(m-1)). Therefore the extrinsic information in Z2P[DT] relating to the second tail, Z2P[T], is stripped
from Z2P[DT] by the second tail stripping unit 320 and passed back to the input of the second decoding unit 270. The extrinsic information Z2P[D] relating to the data is forwarded to the de-interleaving unit 290 where the order is permuted back to the order required by the first decoding unit 250 to produce extrinsic data Z2D.
This extrinsic data Z2D is then passed back to the first decoding unit 250. It is also passed to the decision unit 300 where a maximum likelihood decision is made for each of the data bits to produce a hard output series of bits.
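The routing of extrinsic information just described can be sketched with plain lists. This is an illustrative sketch only: the sizes k and m, the permutation and the Z values are placeholder numbers, and the units named in the comments refer to Fig. 20.

```python
# illustrative sizes: k data bits plus m-1 tail bits per decoding unit
k, m = 6, 4
perm = [2, 5, 0, 3, 1, 4]                 # stands in for the interleaver P

# decoder 1 output: data extrinsics Z1D followed by tail extrinsics Z1T
z1_dt = [0.3, -1.2, 0.8, 0.5, -0.4, 1.1, 0.2, -0.6, 0.9]
z1_d, z1_t = z1_dt[:k], z1_dt[k:]         # tail stripped (unit 310)
z1_pd = [z1_d[p] for p in perm]           # interleave Z1D (unit 260)
# z1_pd goes to decoding unit 270; z1_t is fed back to unit 250 itself

# decoder 2 output: Z2P[D] followed by Z2P[T], in the permuted order
z2_pdt = [-0.7, 0.6, 1.3, -0.2, 0.4, -1.0, 0.1, 0.8, -0.3]
z2_pd, z2_pt = z2_pdt[:k], z2_pdt[k:]     # tail stripped (unit 320)
z2_d = [0.0] * k
for out_pos, in_pos in enumerate(perm):   # de-interleave (unit 290)
    z2_d[in_pos] = z2_pd[out_pos]
print(z1_pd, z2_d)
```

Only the k data-bit extrinsics cross between the decoding units; each unit's m-1 tail extrinsics loop back to that unit alone, so they never pass through the interleaver or de-interleaver.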
The decoder 240 is therefore different from the decoder 24 of Fig. 14 in the sense that extrinsic information about the tails is not passed between the first and second decoding units 250 and 270.
Therefore, the tails are not subject to the interleaving/deinterleaving process. It is also
possible to initialise both the forwards and backwards passes of both the first and second decoding units 250 and 270. Therefore, the backwards pass can now be initialised as follows:
βN(i)(s) = 1 for s = 0; βN(i)(s) = 0 for s ≠ 0, for i = 1, 2
With the decoder 24 for the single-terminated encoder 20, extrinsic information is passed between the two decoding units 25 and 27 for the whole block of systematic data, including the tail from the first encoding unit 15.
With the decoder 240 for the double-terminated encoder 40 only extrinsic information for the original data block, i.e. (d1, ..., dk), is passed between decoding units. The extrinsic information is, however, generated in the same way for the tail that the particular decoding unit is dealing with, but this information is held internally ready for the next iteration. Decoding of the data block uses extrinsic information from the other decoding unit, whereas decoding of the tail uses extrinsic information from the same decoding unit. As mentioned above, this does not cause a weakness in the decoding as the final state of the second encoding unit 70 is known to be zero.
Therefore, the weights for the tail can be reset at the start of every decoding iteration.
Although it is described above with reference to Figs. 15 and 16 that it is desirable for an encoding unit to be terminated to the zero state, it will be appreciated that termination to any state is sufficient so long as that state is known.
Many simulations of codes with small pseudo-random interleavers and short constraint lengths have been performed to show the gain achieved when using a double-terminated turbo code. The pseudo-random interleaver used was that defined in the paper by S.
Benedetto and D. Divsalar entitled 'Turbo Codes:
Performance Analysis, Design and Iterative Decoding', Globecom '97, Nov. 4th, Phoenix.
The results show that a performance improvement is achieved when terminating both encoding units, over terminating only the first. The gain actually achieved is dependent on the code parameters. There will now be presented a series of example graphs to illustrate the performance improvements.
Fig. 21 shows the performance graph of a code with a constraint length of four and using six decoder iterations, with a generator matrix of (13, 10) in octal. The two sets of curves show the performance for two different interleaver sizes, namely 1024 bits and 128 bits. Although for many codes the performance at low signal-to-noise ratios (SNR) is very similar to that of the single-terminated code, an improvement in the performance floor is obtained. Turbo codes do have a perceived performance floor, that is to say that they have an inherent bit error rate (BER) that their performance will tend towards. At this floor, increases in SNR will have little effect on the BER.
However, by using a double-terminated code, this floor can be lowered, improving the performance of the code at higher SNR.
The performance gain provided by terminating both encoders is also especially marked when using small interleavers. This property is particularly beneficial since, as mentioned above, it is desirable, especially but not exclusively in real-time applications, to reduce the delay caused by encoding and decoding the data. The use of small data block sizes (interleaver size) is one way of achieving this. For example, in the application of turbo codes to third generation mobile telecommunication transmissions it is desirable to keep the block size small, for example below 512 or 1024 bits in length. It has been shown that an embodiment of the present invention is particularly beneficial to the performance in such real-time applications when using small block sizes. For non-real-time data applications the use of small block sizes is often less important (a block size as large as 128k bits is possible, for example, in some satellite telecommunication systems). However, a performance benefit will be evident using the encoding/decoding scheme embodying the present invention across the whole range of block sizes and applications.
Fig. 22 shows another example where the code has a constraint length of five, using two decoder iterations. The generator matrix is (33, 23)o and the interleaver sizes are 1024 bits and 128 bits. As can be seen from Fig. 22, although the code using the larger interleaver outperforms the other, the code with the smaller interleaver sees a greater performance improvement from single- to double-termination.
It is preferable that the interleaving unit sorts the bits in a manner that lacks any apparent order.
This is unlike normal interleavers, which tend to rearrange the bits in some systematic manner. With previously-considered coding schemes it is often important that the size of the interleaver block N be selected to be quite large, preferably N > 1000.
For such schemes, one pseudo-random interleaver will in general perform about as well as any other, provided N is large. An embodiment of the present invention provides the advantage that the performance is improved for smaller block sizes.
The size of the interleaver can be reduced without having to increase the complexity of the encoder by designing good constituent codes produced by the encoding units 15 and 17. Codes can be optimised by choosing certain characteristics for the generator matrix. For an RSC code the following generator matrix is valid:
G(D) = [1, n(D)/d(D)]
For the optimised code, d(D) must be a primitive polynomial, over GF(2), of the relevant order. For example, d(D) = D^4 + D + 1 is a primitive polynomial over GF(2), of order 4. n(D) is then chosen such that it yields the lowest bit error probability, based on certain minimum weight characteristics. The principle is to maximise Zmin, the lowest weight of the output bits in error events generated by information sequences of length two.
A primitive polynomial over GF(q) is irreducible over GF(q), that is to say that it cannot be factored into non-trivial polynomials. A polynomial of degree m is primitive over GF(q) if it has order q^m - 1. The order of a polynomial f(x), where f(0) ≠ 0, is the smallest integer p for which f(x) divides x^p + 1. The number of primitive polynomials of degree m over GF(q) is:

φ(q^m - 1) / m

where φ(m) is Euler's totient function, which is the number of numbers less than m and relatively prime to m.
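By way of illustration, the primitivity test and the count of primitive polynomials described above can be checked computationally. The following Python sketch is not part of the original disclosure; it represents GF(2) polynomials as integer bitmasks (function names are illustrative):

```python
from math import gcd

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division (bitmask representation)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def poly_order(f: int) -> int:
    """Smallest p such that f(x) divides x^p + 1 over GF(2)."""
    p = 1
    while poly_mod((1 << p) | 1, f) != 0:  # (1 << p) | 1 encodes x^p + 1
        p += 1
    return p

def totient(m: int) -> int:
    """Euler's totient: numbers below m that are relatively prime to m."""
    return sum(1 for k in range(1, m) if gcd(k, m) == 1)

d = 0b10011  # d(D) = D^4 + D + 1
m = 4
print(poly_order(d))            # 15 == 2^4 - 1, so d(D) is primitive
print(totient(2**m - 1) // m)   # 2 primitive polynomials of degree 4 over GF(2)
```

The count confirms that D^4 + D + 1 is one of only two degree-4 primitive polynomials over GF(2).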
These 'well designed' codes have been shown to outperform other codes with the same number of states.
Using codes designed for their performance, it can be shown that as more memory bits are used the performance can be improved. The performance gain over encoders with fewer memory bits is dependent on the SNR.
Interleaver gain is proportional to N^-1 when the interleaver length, N, is significantly larger than the memory of the constituent encoder in the encoding units. For a factor of 10 increase in the interleaver size, the bit error probability is reduced by a factor of 10.
It can also be shown from modelling that at certain SNR, doubling the number of states for the constituent encoders in the encoding units also reduces the bit error probability by a factor of 10 or more (at very low SNR the bit error probability is not reduced by as much as a factor of 10, but is still significant). However, the interleaver has more effect at very low SNR than the constituent encoders.
Therefore, at high SNR, the delay inherent in the system can be reduced by reducing the interleaver size by a factor of 10 and increasing the memory of the constituent encoders by 1. In this case a loss in performance should not be suffered.
Fig. 27 is a table showing the simulated performance of a single-terminated turbo encoder for various encoding unit generators and varying numbers of decoder iterations. The results are shown separately for three different encoding unit constraint lengths: 2, 3 and 4.
In Fig. 27, the first column shows the number of decoder iterations performed. The head of the second column shows the generator employed, and the computation time involved in performing the appropriate number of decoder iterations. The third column shows the average bit error rate (BER) resulting over 128 blocks of data, each block of data having a length of 1024 bits (interleaver size of 1024), and at a signal-to-noise ratio of 1 dB. In the results for constraint lengths 3 and 4, the fourth/fifth, sixth/seventh and eighth/ninth columns respectively show the results for different code generators. The final column gives the average BER over all the different generators.
The results of Fig. 27 for the single-terminated turbo encoder can be compared to equivalent results shown in Fig. 28 for a double-terminated encoding scheme embodying the present invention. As can be seen, the double-terminated encoding scheme performs
better than the single-terminated scheme across the whole range of encoding unit structures and numbers of decoder iterations.
It is possible to employ non-identical constituent encoding units in the encoder. In this way, it is possible to gain a sufficiently different perspective on the data that the interleaver size can be reduced without significantly reducing the performance of the system. The decoding units must be matched to their respective encoding units, as they have to have full knowledge of all possible state transitions in the encoder. However, in order to use two (or more) different constituent encoding units, each encoding unit must have the same constraint length. In order for the extrinsic data to be passed in the way it is in the decoder, both encoding units must have the same encoded space, that is to say that they are of the same order. The encoding units may, however, have different generators.
When designing the constituent encoding units, an attempt is made to maximise the minimum distance between two codewords produced by that encoding unit.
Two constituent codes can be chosen with the same minimum weight characteristics, so that they will have similar performance. The effective free distance of a rate 1/3 parallel-concatenated convolutional code with two identical constituent encoding units is defined as:

d_free,eff = 2 + 2 Zmin
As was discussed above, the idea is to maximise Zmin (the minimum codeword weight) and hence d_free,eff. If non-identical constituent encoding units are used in the encoder then the following effective free distance for the PCCC is obtained:

d_free,eff = 2 + z_1,min + z_2,min

where z_i,min is the Zmin for constituent encoding unit i.
As it is necessary to maximise d_free,eff, it must be true that z_1,min = z_2,min, as if they were not the same it would be better to use two identical codes, choosing the code with the greater Zmin. For some short constraint lengths, there will not be two generators with the same minimum distance. However, for realistic and practical constraint lengths there are often at least two with the same minimum distance. Employing non-identical codes will not increase the processing time of the coding scheme, as both must be of the same order.
If two different constituent encoding units are used then it becomes very difficult, or impossible, to terminate both encoding units by using a well-designed interleaver, as the two codes do not have the same properties. Similarly, if more than two constituent encoding units are to be used, then it is no longer possible to terminate all the encoding units by using a well-designed interleaver. This is because these interleavers work in one specific way, so it would not be possible to have many different interleavers with the same properties.
With the double trellis-termination scheme as proposed in an embodiment of the present invention, two different constituent codes can be used and still terminated, as the tails are specific to that encoding unit and the interleaved data. Similarly, this scheme can be expanded to terminate multiple constituent encoding units, whether they have the same generators or not, and they can then all have different interleavers.
It will be appreciated that when three or more constituent encoding units and three or more corresponding decoding units are employed in an embodiment of the present invention then it is not
essential that all of the constituent codes are terminated. Some benefit of the present invention will still be obtained if only two of the constituent codes are terminated. More benefit will be obtained where more are terminated.
The above-described double trellis-termination scheme works by appending an m-bit tail onto the k-bit data block for each constituent encoding unit. The k-bit block is encoded with the first encoding unit and a tail is appended to drive that encoding unit back to the zero state. This m-bit tail is then appended to the k-bit systematic data. The second encoding unit also encodes the k-bit block and appends its own m-bit tail to drive it back to the zero state (or any other known state). This tail is not sent as systematic data.
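The tail-generation step can be illustrated with a small Python sketch. The encoder structure used here, a toy m = 2 RSC encoder with generator (7, 5)o, and all names are illustrative assumptions, not taken from the patent. At each termination step the input is chosen equal to the feedback value, so a zero enters the shift-register and the encoder reaches the zero state after m steps:

```python
def rsc_encode_and_terminate(bits, fb_taps=(1, 1), ff_taps=(0, 1)):
    """Encode with a toy m=2 RSC encoder, then append the m-bit tail
    that drives the shift-register back to the zero state."""
    state = [0, 0]                       # two memory bits, zero start state
    parity, tail = [], []

    def step(u):
        fb = u ^ (fb_taps[0] & state[0]) ^ (fb_taps[1] & state[1])
        out = fb ^ (ff_taps[0] & state[0]) ^ (ff_taps[1] & state[1])
        state[1], state[0] = state[0], fb
        return out

    for u in bits:                       # encode the k-bit data block
        parity.append(step(u))
    for _ in range(len(state)):          # m termination steps
        fb = (fb_taps[0] & state[0]) ^ (fb_taps[1] & state[1])
        tail.append(fb)                  # choosing u = fb feeds a zero in
        parity.append(step(fb))
    assert state == [0, 0]               # encoder terminated in zero state
    return parity, tail

parity, tail = rsc_encode_and_terminate([1, 0, 1, 1])
print(tail)     # [0, 1]
print(parity)   # [1, 1, 0, 0, 1, 1]
```

Because each encoding unit derives its tail from its own state after encoding its own (possibly interleaved) input, both constituent encoders can be terminated independently, which is the essence of the scheme.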
It would also be possible to interleave the tail of the first encoding unit and then encode this with the second encoding unit, as is done in the conventional turbo encoder. Then it would be possible to add another tail to the second encoding unit to drive it back to the zero-state (or any other known state) and append this to the systematic data. In this way more bits are sent on the channel, thus reducing the data rate slightly.
On the forward pass over the data in the decoder, once the k-bit data block has been decoded, the m-bit data sequence that is required to terminate the encoder is known. Therefore, although errors can occur here, a guess as to what the sequence should be can be made.
The same is not true of the data, which is assumed to be random. Similarly, on the backward pass over the data, the initial state is known to be zero (or some other known state). Therefore, there are not many possible paths back through the trellis, and the probabilities are adjusted accordingly.
There may be some benefit by appending the second encoder tail to the systematic data. Again this will reduce the data rate slightly. The m-bit sequence is known for each state, and the final state of the encoder is known.
The constituent encoding units in the above-described embodiment are chosen to be recursive as, in this way, each bit will have an effect on every following bit in the sequence. By increasing the 'history' of each bit a more reliable decision can be made at the decoder.
It will be appreciated that a non-recursive encoding unit can also be used. In this case, the 'history' will only be as many bits as there are memory states in the encoding unit, whereas with recursion the 'history' is effectively infinite. However, the effect will diminish over the sequence.
The structure of the turbo code allows for different constituent codes to be used. The decoding units must however be matched to their respective encoding units. However, there is usually more than one way to decode the output of an encoder. The advantages of this scheme become apparent when dealing with constituent encoding units that utilise some form of recursion, and are therefore not easily terminated, and a decoding algorithm that uses both a forwards and backwards pass over the data. Suitable constituent codes for the turbo code structure can be represented by a trellis diagram and will benefit from this scheme, provided the decoding algorithm does perform both a forwards and a backwards pass over the data.
As described above, turbo codes were introduced as a new channel coding scheme that approaches the Shannon theoretical limit for communications on a noisy channel. They have been shown to outperform many other, more traditional coding schemes. Unfortunately, this performance increase comes at the expense of an
increase in the computational complexity involved.
Many more computations have to be performed in the turbo decoder than in some other common schemes, such as convolutional coding. The computational requirements of turbo codes will now be addressed, together with how they affect the performance of the coding scheme.
Some pointers are given as to how to design codes that both perform well and have a reduced delay. Selection between codes with the same delay can be made based on performance. A method is also presented that allows the prediction of the delay based on the computational complexities of a particular turbo code.
This is not generally a problem for non-real-time data, where delays for decoding are acceptable.
However, for real-time data there is only a very small time window in which to complete encoding, transmission and decoding, before the user becomes aware of an unacceptable delay. It is this problem that has driven the categorisation of the performance of turbo codes in terms of the computations required. In this way codes that perform well can still be chosen, but can be engineered to have the minimum number of calculations for that performance.
The turbo decoder is an iterative process and, as such, involves a lot of calculations. There are three main factors affecting the number of calculations performed in the decoder. These are the block (or interleaver) size, the encoder constraint length and the number of decoder iterations performed. Although the turbo code uses stream-oriented recursive systematic convolutional (RSC) codes, it takes on a block-like structure due to the interleaver.
As described above with reference to Fig. 12, these RSC codes send the data on the channel (hence the term "systematic") and also employ recursion to increase the weight of each bit. The RSC codes employ a shift-register, with certain bits from the register
being fed back and added to the data input, and others being added together to form the output. All additions are modulo-2 and are, therefore, equivalent to the exclusive-OR (XOR) operation. The constraint length m of a code is the number of connections that can be made to the shift-register, or the number of memory bits plus 1.
The structure of the decoder is such that there are two decoding units employing maximum a posteriori (MAP) decoding, each decoding the data from one corresponding encoding unit. One iteration of the decoder involves both decoding units and both interleaving and de-interleaving the data. The time required for interleaving is negligible. Therefore, this time has not been included in the calculations, as it is not significant. For each iteration, however, all the calculations in a decoding unit have to be performed twice.
It is always arranged for the encoding units to start in the zero (or other known) state. In this way the starting state for each block of data is always known, and the decoding units can be initialised accordingly. This initialisation, again, can be performed very quickly, even before the next block has been received.
An important factor in the delay characteristics of turbo codes is that there are no blocking overheads, meaning that the number of calculations is directly proportional to the number of bits in a block. The same number of calculations are performed to decode one block of 2048 bits as to decode two blocks of 1024 bits. Due to this fact, the block size can be reduced, thereby reducing the blocking delay, without increasing the overall delay. However, this will reduce the performance of the resultant code, particularly its resilience to burst errors.
One problem with turbo codes is that the constraint length does not increase decoder calculations linearly. Each encoder has 2^(m-1) states and 2^m state transitions, where m is the constraint length of the encoder. A weight is assigned to each of these state transitions during the decoding of each bit.
Both a forwards and a backwards pass over the whole block is performed in each decoding unit.
Therefore, the number of weights calculated in each decoding unit for every iteration is given by the following:

weights = 2^(m+1) × k

where k is the number of bits in the block. Each of these weights requires a complex calculation. After they have all been calculated, the next step is to calculate the log-likelihood ratio (LLR) for the sequence, which requires yet another pass over the block.
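Reading the count as two passes (forward and backward) over 2^m transition weights for each of the k bits, the per-unit, per-iteration total can be computed with a one-line helper (illustrative only; the function name is our own):

```python
def weights_per_iteration(m: int, k: int) -> int:
    """Two passes over the block, 2^m transition weights per bit each."""
    return 2 ** (m + 1) * k

print(weights_per_iteration(4, 1024))   # 32768 weights for m=4, k=1024
```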
The decoder calculations have been broken down into their constituent parts, i.e. how many multiplications, additions, etc. are required for each iteration of the decoder. The following are the number of each type of calculation required for each bit in a block, for one decoding unit. The formulae are expressed in terms of the constraint length, m, of the encoding units.
Exponential = 2^m
Logarithm (base 10) = 1
Shift (division by 2) = 2^m
Multiplication = 2^(m+1) + 2^(2m)
Addition = 2^(m+2) + 2^(2m) - 2
Division = 2^(m+1) + 1

Each of these formulae must be multiplied by the block (interleaver) size, k, and by twice the number of decoder iterations performed, i (since there are two
decoding units, both of which are used in each iteration).
The formulae can be simplified into one conservative approximation of the number of calculations required, using the multiplication as a unit of measure. The shift operation is very simple, and requires little processor time. Addition can be performed more quickly than multiplication, whereas division is marginally slower.
First, however, the exponential and logarithm functions must be expressed in terms of multiplications. In order to achieve this, the first step is to produce the power series that describes the function. For example, the exponential function may be expressed as follows:

e^z = 1 + z + z^2/2! + z^3/3! + ...
From this it can be shown that to evaluate e^z to n significant bits on a computer will take approximately O(M(n) log n) steps, where M(n) is the number of steps required to multiply two n-bit numbers. This is similar for the logarithm function.
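A minimal sketch of the power-series evaluation follows (the truncation criterion and names are our own, not from the patent):

```python
import math

def exp_series(z: float, max_terms: int = 50) -> float:
    """Evaluate e^z from its power series, stopping once the next term
    no longer changes the running sum at double precision."""
    total, term = 0.0, 1.0               # term_0 = z^0 / 0! = 1
    for n in range(1, max_terms + 1):
        total += term
        term *= z / n                    # term_n = term_{n-1} * z / n
        if total + term == total:        # converged at working precision
            break
    return total

print(abs(exp_series(1.5) - math.exp(1.5)) < 1e-12)   # True
```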
Combining this with the previous formulae and assumptions, an approximation for the total number of calculations, C, required to decode a block of data can be obtained, in terms of computer multiplications:
where i is the number of decoder iterations performed. This equation is based on optimised arithmetic, i.e. specially written code to perform calculations to a pre-specified accuracy very quickly.
The reader is referred to the many texts of D. E. Knuth for more information.
The performance differences between different codes with similar computational requirements can be seen from the graph of Fig. 23. From this graph it is apparent that a code with a constraint length of four
and six decoder iterations requires a similar number of calculations as a code with a constraint length of five and two decoder iterations. By considering this performance difference, codes can be better tailored to real-time applications.
The comparison of the performance of the two codes just mentioned (one with a constraint length of four and six decoder iterations, the other with a constraint length of five and two decoder iterations) can be seen in the graph in Fig. 24. The generator matrices used are (13, 11)o and (33, 26)o respectively, and the interleaver length is 1024 bits.
The graph of Fig. 24 shows that, although the numbers of calculations are similar, the performances are quite different. The code with six decoder iterations and a constraint length of four outperforms the other. It does, however, take slightly longer to calculate, but only by about 10%.
This discrepancy in the calculation time can be explained by the fact that the simulations were run on a computer using the normal built-in functions.
However, the formula for the number of calculations assumes efficient optimised processing functions.
Clearly the generator matrices used also affect the performance. However, several different generator matrices have been simulated for this comparison and all show similar results.
The results have shown that increasing the decoder iterations and decreasing the constraint length gives the best performance for a particular number of calculations, even though this does reduce the code's ability to correct burst errors. The number of calculations performed directly affects the time taken to calculate the code and, therefore, the coding delay.
From this analysis it is possible to select codes for best performance that have the same inherent delay.
To achieve this, the required maximum delay is
selected, based on the number of calculations, and the best performing code with that number of calculations is then chosen. Similarly, a particular performance level could be chosen based on what is required for the channel in question, and the code with the least coding delay selected. Again the coding delay is based on the number of calculations required.
The processor time required to decode the data can be reduced by reducing the accuracy of the results of the calculations. Although the turbo decoder is a soft-input soft-output algorithm, analogue data is not actually taken in. Instead, as discussed above with reference to Fig. 3, eight levels of quantization are usually used. This reduces the performance, but only by 0.2dB. Using an analogue input instead of a digital input gives rise to a 2.2dB performance gain. A similar technique can be applied within the decoding units to improve speed and reduce delay.
The graph in Fig. 25 shows the performance of a code with generator (13, 11)o in which different accuracies of the extrinsic data sent between decoders are employed. The first is as accurate as the standard algorithms will manage (i.e. near analogue resolution); the second is to 3 decimal places and the last uses only integers. As can be seen, the performance reduction is not great; indeed for a signal-to-noise ratio (SNR) of around 1dB there is a performance drop of less than 0.1dB between the most accurate and the integer calculations.
These results are based on reducing the accuracy of the extrinsic data sent between the decoders. This extrinsic data is the soft output from the decoder that correlates to a probability that a particular bit is either a zero or a one. Reducing the accuracy of the calculations within the decoding units themselves is not so straightforward. The use of integer arithmetic within the decoder units would not be possible due to
the algorithms used. However, accuracy can be reduced to only a few decimal places, thereby reducing the computational overheads.
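The reduced-accuracy handling of the extrinsic data can be sketched as follows (the function and mode names are illustrative, not from the patent):

```python
def quantise_extrinsic(llrs, mode="integer"):
    """Reduce the accuracy of extrinsic values passed between the two
    decoding units: full precision, 3 decimal places, or integers only."""
    if mode == "full":
        return list(llrs)
    if mode == "3dp":
        return [round(x, 3) for x in llrs]
    return [float(round(x)) for x in llrs]     # integer mode

print(quantise_extrinsic([0.4721, -1.2368], "3dp"))      # [0.472, -1.237]
print(quantise_extrinsic([0.4721, -1.2368], "integer"))  # [0.0, -1.0]
```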
Turbo codes are computationally intensive, which introduces coding delay. This delay hinders the use of turbo codes in real-time situations. However, the blocking delay can be reduced by reducing the block size. In this case there is no computational price to pay for splitting the data into more blocks. Doing this does, however, slightly reduce the code's performance, particularly in the case of burst errors.
Furthermore, it has also been demonstrated that it is inherently better for performance to reduce the constraint length of the encoding units rather than the number of decoder iterations. It has also been shown that the accuracy of the decoders can be reduced to improve the speed of computation, without reducing the performance significantly. Performance reductions of only 0.1dB have been observed in many codes for a reduction to integer values for the extrinsic data.
A formula giving an approximation of the number of calculations required, in terms of computer multiplications, has also been provided. From this, codes can now be designed and chosen based on the minimum delay required for a particular output performance.
Although it has been described above that the input data is received as a series of data bits, it will be appreciated that it is not necessary that the data arrive in a serial fashion; they could, for example, be input at a set of parallel inputs.
Both of the constituent encoding units 50 and 70 in Fig. 19 are described above as being formed of a recursive systematic convolutional encoder such as that shown in Fig. 12 in which a single received data bit is encoded to a single encoded data bit. It will be appreciated that an embodiment of the present invention
is applicable more generally to the case where the encoding units are operable to encode a received single data bit into a plurality of encoded bits for transmission, and also to the case where more than one data bit is encoded as a group to one or more corresponding encoded data bits.
Each of the encoding units can therefore be considered generally to perform an encoding operation to encode a received data item (e.g. one or more bits) into an encoded data item (e.g. one or more corresponding bits).
It will also be appreciated that the encoding units are not limited to the recursive systematic convolutional type of encoder; the invention is more generally applicable to other types of encoding unit. One suitable type of encoding unit may be described as being selectable between a finite set of states and being operable to produce an encoded data item in dependence upon both a received data item and upon the state of the encoding unit at the time of encoding the data item, and where the resulting state of the encoding unit is set in dependence upon both the received data item and upon the state of the encoding unit at the time of encoding the data item. For example, as already mentioned above, the encoding unit could be formed of a non-recursive encoder.
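The general form of such an encoding unit, in which both the output item and the next state depend on the received item and the current state, can be sketched as a small state machine. The tables and names below are arbitrary illustrations, not taken from the patent:

```python
def make_encoder(output_table, next_state_table, start_state=0):
    """Generic finite-state encoding unit: output and next state both
    depend on (current state, received item)."""
    state = start_state
    def encode(item):
        nonlocal state
        out = output_table[(state, item)]
        state = next_state_table[(state, item)]
        return out
    return encode

# Two-state example: output = item XOR state, next state = item.
enc = make_encoder(
    output_table={(s, u): s ^ u for s in (0, 1) for u in (0, 1)},
    next_state_table={(s, u): u for s in (0, 1) for u in (0, 1)},
)
print([enc(u) for u in [1, 0, 1, 1]])   # [1, 1, 1, 0]
```

Any encoder expressible in this tabular form has a trellis representation, and so can benefit from the termination scheme described above.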
Fig. 29 is a flow diagram for explaining a method embodying the present invention for producing a code from a systematic (or basic) series of data items.
Reference will be made to the encoder 40 embodying the present invention described above with reference to Fig. 19. In step S1 of Fig. 29, the first and second encoding units are preferably initialised to a predetermined starting state (for example to the zero state) prior to any encoding operations being performed. In step S2 a first encoding operation is performed to encode the systematic (basic) series of
data items into a first series of encoded data items which is included in the code. In step S3 the systematic series of data items is interleaved or reordered to produce an interleaved or reordered series of data items, being the systematic series of data items arranged in a different order. In step S4 a second encoding operation is performed to encode the interleaved series of data items into a second series of encoded data items which is included in the code.
In steps S5 and S6 the first and second encoding units are terminated by generating a first and a second systematic (basic) series of tail items which are encoded by the encoding units and which are suitable to drive the encoding units to a predetermined final state, thereby producing first and second series of encoded tail items which are included in the code. The systematic series of data items is included in the code in step S7 (not necessarily in the same order), while the first or the second systematic series of tail items is included in the code in step S8. Finally, the code is output in step S9. It is not essential to include the systematic series of data items, the first systematic series of tail items or the second systematic series of tail items in the code.
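The steps S1 to S9 just described can be sketched end-to-end in Python. A toy one-bit-state encoder stands in for the real encoding units, and all names are illustrative, not the patent's:

```python
import random

class ToyEncoder:
    """Stand-in encoding unit: one-bit state, parity = input XOR state."""
    def __init__(self):
        self.state = 0                       # S1: initialise to zero state

    def encode(self, bits):
        out = []
        for u in bits:
            out.append(u ^ self.state)
            self.state ^= u
        return out

    def terminate(self):
        tail = [self.state]                  # one tail bit returns state to 0
        return tail, self.encode(tail)

def produce_code(data, seed=0):
    enc1, enc2 = ToyEncoder(), ToyEncoder()  # S1
    first = enc1.encode(data)                # S2: first encoding operation
    perm = list(range(len(data)))
    random.Random(seed).shuffle(perm)        # S3: pseudo-random interleave
    second = enc2.encode([data[p] for p in perm])   # S4: second operation
    tail1, enc_tail1 = enc1.terminate()      # S5: terminate first unit
    tail2, enc_tail2 = enc2.terminate()      # S6: terminate second unit
    return {                                 # S7-S9: assemble and output
        "systematic": list(data),            # S7: systematic data items
        "tails": (tail1, tail2),             # S8: systematic tail items
        "first": first + enc_tail1,
        "second": second + enc_tail2,
    }

code = produce_code([1, 0, 1, 1, 0, 0, 1, 0])
print(len(code["first"]), len(code["second"]))   # 9 9
```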
Fig. 30 is a block diagram showing a code 500 embodying the present invention which is produced from the systematic series of data items 400 by the method described above with reference to Fig. 29. The code 500 comprises a first portion 520 having a first series of encoded data items produced by performing the first encoding operation to encode the systematic series of data items and a second portion 530 having a second series of encoded data items produced by performing a second encoding operation to encode an interleaved series of data items, being the systematic series of data items arranged in a different order. The code 500
also comprises third and fourth portions 550 and 560 respectively having first and second series of encoded tail items produced by terminating the encoding means used to perform the encoding operations following the first and second encoding operations by generating a first and a second systematic series of tail items which are encoded by the encoding means after the first and second encoding operations respectively and which are suitable to drive the encoding means to a predetermined final state following those operations.
The code 500 may optionally comprise a fifth portion 510 having the systematic series of data items and a sixth portion having the first or second systematic series of tail items.

Claims (63)

CLAIMS
1. Data encoding and decoding apparatus comprising: encoding means for performing a first encoding operation to encode a basic series of data items into a first series of encoded data items and for performing a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items; terminating means for generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations; and a turbo decoder having first and second decoding means corresponding respectively to said first and second encoding operations for performing an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items, in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
2. Apparatus as claimed in claim 1, wherein said turbo decoder is operable to pass extrinsic information relating to the data items from one decoding means to another, and to retain extrinsic information relating to the tail items for use only within the decoding means to which the tail items relate without passing it to another decoding means.
3. Apparatus as claimed in claim 1 or 2, wherein said encoding means comprise a single encoding unit to
    perform said first and second encoding operations one after the other.
4. Apparatus as claimed in claim 1, 2 or 3, wherein said encoding means comprise first and second encoding units to perform said first and second encoding operations respectively.
5. Apparatus as claimed in claim 4, wherein said first and second encoding units perform their respective encoding operations in parallel.
6. Apparatus as claimed in any one of claims 3 to 5, wherein the or each said encoding unit is selectable between a finite set of states and is operable to produce an encoded item in dependence upon both the item to be encoded and upon the state of the encoding unit, the resulting state of the encoding unit being dependent upon both the item to be encoded and upon the state of the encoding unit.
7. Apparatus as claimed in any one of claims 3 to 6, wherein the or each said encoding unit is a convolutional encoder.
8. Apparatus as claimed in any one of claims 3 to 7, wherein the or each said encoding unit is a recursive encoder.
9. Apparatus as claimed in any one of claims 4 to 8, wherein said first encoding unit is a recursive systematic encoder.
10. Apparatus as claimed in any one of claims 7 to 9, wherein said first encoding unit is of a different structure to said second encoding unit.
11. Apparatus as claimed in claim 10, wherein the generator of the first encoding unit is different to that of the second encoding unit.

12. Apparatus as claimed in claim 10 or 11, wherein the constraint length and order of the first and second encoding units are the same.

13. Apparatus as claimed in any preceding claim, wherein said predetermined final state is the zero state.

14. Apparatus as claimed in any preceding claim, comprising interleaving means for producing said reordered series of data items by permuting said basic series of data items.

15. Apparatus as claimed in any one of claims 1 to 13, comprising interleaving means for producing said reordered series of data items by first appending said first basic series of tail items to said basic series of data items and then permuting the resulting combined series of items.

16. Apparatus as claimed in claim 14 or 15, wherein said interleaving means permute in a pseudo-random fashion.
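The interleaving means of claims 14 to 16 can be illustrated with a small seeded permutation. The seed value is a hypothetical fixed parameter that encoder and decoder would share; the claim-15 variant would simply append the first basic series of tail items to the input before permuting.

```python
import random

def make_interleaver(length, seed=42):
    # Pseudo-random permutation; the seed is an assumed shared constant.
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(items, perm):
    # Reordered series: output position k takes the item at position perm[k].
    return [items[i] for i in perm]

def deinterleave(items, perm):
    # Inverse permutation, as needed on the extrinsic-information path
    # between the two decoding means (compare claims 23 and 24).
    out = [None] * len(perm)
    for pos, src in enumerate(perm):
        out[src] = items[pos]
    return out
```

A round trip through `interleave` and `deinterleave` recovers the original series, which is what lets the two decoding means exchange information in each other's ordering.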
17. Apparatus as claimed in any preceding claim, further comprising transmission means for transmitting said first and second series of encoded data items from said encoding means to said turbo decoder.
18. Apparatus as claimed in claim 17, wherein said transmission means are further operable to transmit a first and/or a second series of encoded tail items, produced respectively by encoding said first and said second basic series of tail items, from said encoding means to said turbo decoder.
19. Apparatus as claimed in claim 18, wherein said first decoding means are operable to process said first series of encoded data items and said first series of encoded tail items, and said second decoding means are operable to process said second series of encoded data items and said second series of encoded tail items.

20. Apparatus as claimed in claim 17, 18 or 19, wherein said transmission means are further operable to transmit said basic series of data items from said encoding means to said turbo decoder.

21. Apparatus as claimed in claim 20, wherein said basic series of data items is permuted before transmission and de-permuted upon reception.

22. Apparatus as claimed in claim 20 or 21, wherein said first and said second decoding means are operable to process said basic series of data items, said turbo decoder further comprising interleaving means for permuting said basic series of data items into the correct order for said second decoding means.

23. Apparatus as claimed in claim 22, when read as appended to claim 2, wherein said interleaving means are also operable to permute said extrinsic information passed from said first decoding means to said second decoding means into the correct order for said second decoding means.
24. Apparatus as claimed in claim 23, further comprising de-interleaving means for permuting the extrinsic information passed from said second decoding means to said first decoding means into the correct order for the first decoding means.

25. Apparatus as claimed in any one of claims 17 to 24, wherein said transmission means are further operable to transmit said first and/or said second basic series of tail items to said turbo decoder.

26. Apparatus as claimed in any preceding claim, wherein an item to be encoded is a single bit.

27. Apparatus as claimed in any preceding claim, wherein an encoded item is a single bit.

28. Apparatus as claimed in any preceding claim, when read as appended to claim 2, further comprising decision means for producing said series of decoded data items based on said extrinsic information.

29. Apparatus as claimed in any preceding claim, when read as appended to claim 2, wherein said extrinsic information represents the likelihood of a decoded data item in said series of decoded data items being of a particular type.
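Claims 28 and 29 describe a decision stage driven by likelihood information. A common representation (assumed here, not prescribed by the claims) is a per-bit log-likelihood ratio, with a positive value favouring a 1:

```python
def hard_decision(llrs):
    # Map per-bit log-likelihood ratios to decoded data items.
    # Sign convention is an assumption: LLR > 0 means a 1 is more likely.
    return [1 if llr > 0 else 0 for llr in llrs]
```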
30. Apparatus as claimed in any preceding claim, wherein said encoding means are further operable to perform one or more further encoding operations to encode one or more corresponding further reordered series of data items, being said basic series of data items arranged in different respective orders, into one or more corresponding further series of encoded data items.
31. Apparatus as claimed in claim 30, wherein said terminating means are also operable to terminate said encoding means following at least one of said one or more further encoding operations by generating at least one corresponding further basic series of tail items which is or are encoded by said encoding means after said at least one further encoding operation and which is or are suitable to drive said encoding means to a predetermined final state following that operation or those operations.

32. Apparatus as claimed in any preceding claim, wherein the backward passes in the first decoding means are initialised based on said predetermined final state following the first encoding operation.

33. Apparatus as claimed in any preceding claim, wherein said encoding means are initialised to a predetermined starting state prior to said first and said second encoding operations.

34. Apparatus as claimed in claim 33, wherein the forward passes in a decoding means are initialised based on said predetermined starting state following the corresponding encoding operation.

35. Apparatus as claimed in claim 33 or 34, wherein said predetermined starting state is the zero state.
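Claims 32 to 35 concern how the trellis recursions are initialised. In BCJR-style decoding the forward (alpha) pass starts from the known starting state, and termination of both constituent encodings lets the backward (beta) pass of both decoders likewise start from a known final state. The four-state trellis below is an illustrative sketch, not the claimed encoder.

```python
NUM_STATES = 4   # e.g. a memory-2 constituent encoder (illustrative)

def init_forward():
    # Claims 33 to 35: the encoder starts in the zero state, so the
    # forward (alpha) recursion puts all probability mass on state 0.
    alpha = [0.0] * NUM_STATES
    alpha[0] = 1.0
    return alpha

def init_backward(terminated):
    # Claims 1 and 32: a terminated encoding lets the backward (beta)
    # recursion likewise be anchored on the zero state; without
    # termination the decoder must fall back to a uniform start.
    if terminated:
        beta = [0.0] * NUM_STATES
        beta[0] = 1.0
    else:
        beta = [1.0 / NUM_STATES] * NUM_STATES
    return beta
```

The benefit claimed for dual termination is precisely that `init_backward(True)` applies to the second decoding means as well as the first, instead of the uniform fallback.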
36. A data encoding and decoding method comprising the steps of: employing encoding means to perform a first encoding operation to encode a basic series of data items into a first series of encoded data items; employing said encoding means to perform a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items; generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations; and employing first and second decoding means corresponding respectively to said first and second encoding operations to perform an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
37. Data encoding apparatus comprising: encoding means for performing a first encoding operation to encode a basic series of data items into a first series of encoded data items and for performing a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items; and terminating means for generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations.
38. A data encoding method for producing a code comprising the steps of: employing encoding means to perform a first encoding operation to encode a basic series of data items into a first series of encoded data items for inclusion in said code; employing said encoding means to perform a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order, into a second series of encoded data items for inclusion in said code; and generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations, thereby producing first and second series of encoded tail items for inclusion in said code.

39. A method as claimed in claim 38, further comprising the step of including said basic series of data items in said code.

40. A method as claimed in claim 38 or 39, further comprising the step of including said first basic series of tail items in said code.

41. A method as claimed in claim 38, 39 or 40, further comprising the step of including said second basic series of tail items in said code.
42. Data decoding apparatus for decoding codes produced by the method as claimed in any one of claims 38 to 41, said data decoding apparatus comprising a turbo decoder having first and second decoding means corresponding respectively to said first and second encoding operations for performing an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.
43. A data decoding method for decoding codes produced by the method as claimed in any one of claims 38 to 41, said data decoding method comprising the step of employing first and second decoding means corresponding respectively to said first and second encoding operations to perform an iterative turbo decoding operation to produce a series of decoded data items corresponding to said basic series of data items in which the backward passes in the second decoding means are initialised based on said predetermined final state following the second encoding operation.

44. A user equipment, for use in a mobile communications network, comprising data encoding apparatus as claimed in claim 37 and/or data decoding apparatus as claimed in claim 42.

45. A base station, for use in a mobile communications network, comprising data encoding apparatus as claimed in claim 37 and/or data decoding apparatus as claimed in claim 42.

46. A communications network comprising data encoding and decoding apparatus as claimed in any one of claims 1 to 35.
47. A code produced by a data encoding method comprising: a first portion having a first series of encoded data items produced by employing encoding means to perform a first encoding operation to encode a basic series of data items; a second portion having a second series of encoded data items produced by employing said encoding means to perform a second encoding operation to encode a reordered series of data items, being said basic series of data items arranged in a different order; and third and fourth portions respectively having first and second series of encoded tail items produced by generating a first and a second basic series of tail items which are encoded by said encoding means after said first and second encoding operations respectively so as to drive said encoding means to a predetermined final state following each of those operations.
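The portions enumerated in claims 47 to 50 can be pictured as fields of the transmitted code. The field names below are illustrative; the claims do not prescribe any particular ordering or framing of the portions.

```python
def assemble_code(data, parity1, enc_tail1, parity2, enc_tail2,
                  tail1=None, tail2=None):
    # Lay out the claimed portions of the code as a simple record.
    code = {
        "parity1": parity1,      # first portion: first series of encoded data items
        "parity2": parity2,      # second portion: second (reordered) series
        "enc_tail1": enc_tail1,  # third portion: first series of encoded tail items
        "enc_tail2": enc_tail2,  # fourth portion: second series of encoded tail items
        "data": data,            # fifth portion: the basic (systematic) series, claim 48
    }
    if tail1 is not None:
        code["tail1"] = tail1    # further portion: first basic tail series, claim 49
    if tail2 is not None:
        code["tail2"] = tail2    # further portion: second basic tail series, claim 50
    return code
```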
48. A code as claimed in claim 47, further comprising a fifth portion having said basic series of data items.

49. A code as claimed in claim 47 or 48, further comprising a further portion having said first basic series of tail items.

50. A code as claimed in claim 47, 48 or 49, further comprising a further portion having said second basic series of tail items.

51. A code as claimed in any one of claims 47 to 50, carried on a carrier medium.

52. A code as claimed in claim 51, wherein the carrier medium is a transmission medium.

53. A code as claimed in claim 51, wherein the carrier medium is a storage medium.
54. Data encoding and decoding apparatus substantially as hereinbefore described with reference to Figs. 19 to 30.

55. Data encoding apparatus substantially as hereinbefore described with reference to Figs. 19 to 30.
56. Data decoding apparatus substantially as hereinbefore described with reference to Figs. 19 to 30.
57. A data encoding and decoding method substantially as hereinbefore described with reference to Figs. 19 to 30.

58. A data encoding method substantially as hereinbefore described with reference to Figs. 19 to 30.

59. A data decoding method substantially as hereinbefore described with reference to Figs. 19 to 30.

60. A user equipment substantially as hereinbefore described with reference to Figs. 19 to 30.

61. A base station substantially as hereinbefore described with reference to Figs. 19 to 30.

62. A communications network substantially as hereinbefore described with reference to Figs. 19 to 30.
63. A code substantially as hereinbefore described with reference to Figs. 19 to 30.
GB0204910A 2002-03-01 2002-03-01 Data encoding and decoding apparatus and a data encoding and decoding method Expired - Fee Related GB2386039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0204910A GB2386039B (en) 2002-03-01 2002-03-01 Data encoding and decoding apparatus and a data encoding and decoding method

Publications (3)

Publication Number Publication Date
GB0204910D0 GB0204910D0 (en) 2002-04-17
GB2386039A true GB2386039A (en) 2003-09-03
GB2386039B GB2386039B (en) 2005-07-06

Family ID=9932137

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0204910A Expired - Fee Related GB2386039B (en) 2002-03-01 2002-03-01 Data encoding and decoding apparatus and a data encoding and decoding method

Country Status (1)

Country Link
GB (1) GB2386039B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115879515B (en) * 2023-02-20 2023-05-12 江西财经大学 Document network theme modeling method, variation neighborhood encoder, terminal and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2000013323A1 (en) * 1998-08-27 2000-03-09 Hughes Electronics Corporation Method for a general turbo code trellis termination
WO2000022739A1 (en) * 1998-10-13 2000-04-20 Interdigital Technology Corporation Hybrid interleaver for turbo codes

Non-Patent Citations (4)

Title
"Proceedings of the International Conference on Communications (ICC)", 1995, IEEE, pp 54-59 *
Electronics Letters, Vol. 31, No. 1, January 1995, pages 22 and 23 *
Electronics Letters, Vol. 31, No. 24, November 1995, pages 2082 to 2084 *
IEEE Communications Letters, Vol. 1, No. 3, May 1997, pages 77 to 79 *


Similar Documents

Publication Publication Date Title
JP3857320B2 (en) Parallel connected tail biting convolution codes and decoders thereof
US6014411A (en) Repetitive turbo coding communication method
JP3610329B2 (en) Turbo coding method using large minimum distance and system for realizing the same
CA2273418C (en) Tail-biting turbo-code encoder and associated decoder
US6044116A (en) Error-floor mitigated and repetitive turbo coding communication system
Divsalar et al. Hybrid concatenated codes and iterative decoding
Burr Turbo-codes: the ultimate error control codes?
EP1119110A1 (en) Digital transmission method of the error-correcting coding type
WO1998011671A1 (en) An improved system for coding signals
US6028897A (en) Error-floor mitigating turbo code communication method
Riedel MAP decoding of convolutional codes using reciprocal dual codes
Zhu et al. Transmission of nonuniform memoryless sources via nonsystematic turbo codes
JP3674851B2 (en) Scaling feedback turbo decoder
JP2001257601A (en) Method for digital signal transmission of error correction coding type
US6961894B2 (en) Digital transmission method of the error-correcting coding type
JP2001257600A (en) Encoding method, encoding device, decoding method, decoding device and system using them
RU2301492C2 (en) Method and device for transmitting voice information in digital radio communication system
GB2386039A (en) Dual termination of turbo codes
US7225392B2 (en) Error correction trellis coding with periodically inserted known symbols
Calhan et al. Comparative performance analysis of forward error correction techniques used in wireless communications
KR20070112326A (en) High code rate turbo coding method for the high speed data transmission and therefore apparatus
Biradar et al. Design and Implementation of Secure and Encoded Data Transmission Using Turbo Codes
Knickenberg et al. Non-iterative joint channel equalisation and channel decoding
JP3274114B2 (en) Decoder and method for decoding frame oriented turbo code
SINDHU VLSI Implementation of Turbo codes for LTE Systems

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20140301