GB2523586A - Method and a device for decoding a bitstream encoded with an outer convolutional code and an inner block code - Google Patents

Method and a device for decoding a bitstream encoded with an outer convolutional code and an inner block code

Info

Publication number
GB2523586A
GB2523586A GB1403573.7A GB201403573A
Authority
GB
United Kingdom
Prior art keywords
decoding
bits
code
words
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1403573.7A
Other versions
GB2523586B (en)
GB201403573D0 (en)
Inventor
Mounir Achir
Philippe Le Bars
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1403573.7A priority Critical patent/GB2523586B/en
Publication of GB201403573D0 publication Critical patent/GB201403573D0/en
Publication of GB2523586A publication Critical patent/GB2523586A/en
Application granted granted Critical
Publication of GB2523586B publication Critical patent/GB2523586B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/63 Joint error correction and other techniques
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2933 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using a block and a convolutional code
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957 Turbo codes and decoding
    • H03M13/2978 Particular arrangement of the component decoders
    • H03M13/2984 Particular arrangement of the component decoders using less component decoders than component codes, e.g. multiplexed decoders and scheduling thereof
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045 Arrangements at the receiver end
    • H04L1/0054 Maximum-likelihood or sequential decoding, e.g. Viterbi, Fano, ZJ algorithms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0057 Block codes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0059 Convolutional codes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0064 Concatenated codes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0064 Concatenated codes
    • H04L1/0065 Serial concatenated codes
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M5/00 Conversion of the form of the representation of individual digits
    • H03M5/02 Conversion to or from representation by pulses
    • H03M5/04 Conversion to or from representation by pulses the pulses having two levels
    • H03M5/06 Code representation, e.g. transition, for a given bit cell depending only on the information in that bit cell
    • H03M5/12 Biphase level code, e.g. split phase code, Manchester code; Biphase space or mark code, e.g. double frequency code

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Error Detection And Correction (AREA)

Abstract

A method for decoding a bitstream encoded with an outer convolutional code and an inner block code C(N,K) is disclosed. It presents a decoding method for a coded bitstream encoded by successively applying a convolutional code and a DC free line code. The corresponding decoding uses a combined maximum likelihood decoder and DC free line decoding that improves the error correction capabilities of the decoding compared to performing separate DC free line decoding followed by maximum likelihood decoding. In the preferred embodiment, the maximum likelihood decoder is the Viterbi algorithm and the DC free line decoding relates to 8B10B decoding. Accordingly, the correction efficiency of the convolutional code is improved. One embodiment relates to decoding a coded bitstream encoded with an outer convolutional code and an inner block code, comprising: calculating a branch metric between words of N bits in the encoded bitstream and each allowed word of N bits; calculating a cumulative metric for each possible sequence of candidate code words of N bits by summing the corresponding calculated branch metrics; selecting a sequence of candidate code words of N bits based on the cumulative metric; determining a sequence of words of K bits corresponding to the selected sequence of candidate code words; and retrieving the decoded bitstream from the determined sequence of words of K bits.

Description

METHOD AND A DEVICE FOR DECODING A BITSTREAM ENCODED WITH
AN OUTER CONVOLUTIONAL CODE AND AN INNER BLOCK CODE
The present invention concerns a method and a device for decoding a bitstream encoded with an outer convolutional code and an inner block code C(N,K).
A convolutional code is a type of error-correcting code in which each m-bit information symbol to be encoded is transformed into an n-bit symbol, where m/n is called the code rate and n is greater than or equal to m. The transformation is a function of the last k information symbols, where k is the constraint length of the code. Several algorithms exist for decoding convolutional codes. For relatively small values of k, the Viterbi algorithm is widely used as it provides close to maximum likelihood performance and is highly parallelizable. Convolutional encoding with Viterbi decoding is a forward error correction technique that is particularly suited to a channel in which the transmitted signal is corrupted mainly by additive white Gaussian noise (AWGN).
When describing a periodic function in the frequency domain, the DC component is the mean value of the waveform. In contrast, various other frequencies are analogous to superimposed AC voltages or currents, hence called AC components. The term originated in electronics, where it refers to a direct current (DC) voltage, but the concept has been extended to any representation of a waveform. A DC component is usually undesirable when it causes saturation or change in the operating point of an amplifier.
When a digital bitstream is modulated into an analog signal for transmission, a DC component tends to appear in the signal when the numbers of 1s and 0s are not well balanced in the digital bitstream to transmit. A DC free communication system uses a DC free line coding scheme in order to suppress the DC component, which could not travel correctly in the communication channel and/or in the transceivers. The DC free line coding maintains the RDS (Running Digital Sum) value at a bounded value. The RDS value is typically the number of 1s in the bitstream versus the number of 0s; it represents a measure of the balance between 1s and 0s. Some examples of such DC free line codes are 8B10B, 64B66B, etc. These codes are already implemented in some standards (PCI Express, Gigabit Ethernet, DVI, HDMI, etc.). These codes, while adding some information into the signal (for example the 8B10B line code transforms each word of 8 bits into a word of 10 bits), do not add real redundancy. They do not add any error correcting capability into the transmission system.
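The running digital sum bookkeeping described above can be sketched in a few lines of Python. This is an illustration only; the sign convention assumed here (bit 0 contributes +1, bit 1 contributes -1) matches the bipolar mapping described later for the modulation block of Figure 1:

```python
def running_digital_sum(bits):
    """Track the running digital sum (RDS) of a bitstream.

    Assumed convention: each 0 contributes +1 and each 1
    contributes -1 (the bipolar mapping 0 -> +1, 1 -> -1).
    A DC free line code keeps this sum bounded.
    """
    rds = 0
    trace = []
    for b in bits:
        rds += 1 if b == 0 else -1
        trace.append(rds)
    return trace

# A balanced word returns the RDS to 0 at its end:
print(running_digital_sum([0, 1, 0, 1, 1, 0]))  # [1, 0, 1, 0, -1, 0]
```

A bounded trace like this is exactly what a DC free line code guarantees for every encoded word, whereas an arbitrary bitstream lets the sum drift.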
Additionally, a convolutional code is applied to the data in order to perform the error correction; hence a combination of a line coding with an error correction coding is commonly used in order to have both error correction and DC free capabilities. Of course, to be effective, the DC free line coding must be done just prior to the modulation in order to provide a DC free transmission.
Such a combination therefore first operates a convolutional encoding of the bitstream to be transmitted. A DC free line code is then applied to the output of the convolutional encoding before the transmission.
At reception, logically inverse operations are carried out. First, a DC free line decoding is performed. The result of the line decoding is submitted to a Viterbi decoding phase for error correction and retrieval of the transmitted bitstream.
Measuring the bit error rate (BER) at different stages of the decoding shows the following. The demodulated signal presents a certain level of BER due to transmission errors. The BER is then increased by the line decoding. This is not surprising, as DC free line codes have no error correction capability. DC free line coding typically uses a dictionary based encoding: a single bit error in the transmitted bitstream may lead to a totally different decoded word, introducing several erroneous bits in the decoded word and therefore increasing the BER. The measurement made at the output of the Viterbi decoding presents an improved BER, typically close to the BER obtained for the demodulated signal.
The present invention has been devised to address one or more of the foregoing concerns. In one of its embodiments, the invention presents a decoding method for a coded bitstream encoded by successively applying a convolutional code and a DC free line code. The corresponding decoding uses a combined maximum likelihood decoder and DC free line decoding that improves the error correction capabilities of the decoding compared to applying separate DC free line decoding followed by maximum likelihood decoding. In the preferred embodiment, the maximum likelihood decoder is the Viterbi algorithm.
According to a first aspect of the invention there is provided a method of decoding a coded bitstream, said bitstream being encoded in accordance with an outer convolutional code and an inner block code C(N, K), the inner block code encoding each word of K bits into a word of N bits among a plurality of allowed words of N bits, each allowed word of N bits corresponding to the encoding of one and only one word of K bits, the method comprising: calculating a branch metric between words of N bits in the coded bitstream and each allowed word of N bits; and calculating a cumulative metric for each possible sequence of candidate code words of N bits by summing the corresponding calculated branch metrics; and selecting a sequence of candidate code words of N bits based on the cumulative metric; and determining a sequence of words of K bits corresponding to the selected sequence of candidate code words; and retrieving the decoded bitstream from the determined sequence of words of K bits.
Accordingly, the correction efficiency of the convolutional code is improved.
In an embodiment, the outer convolutional code defines a plurality of allowed words of K bits and wherein the allowed words of N bits correspond to the encoding by the inner block code of the allowed words of K bits.
In an embodiment, the branch metrics are calculated for all successive words of N bits in the coded bitstream.
Accordingly the accuracy of the decoding is improved.
In an embodiment, said plurality of allowed words of N bits comprises all possible words of N bits corresponding to the encoding of one and only one word of K bits. Accordingly the accuracy of the decoding is improved.
In an embodiment, decoding is made according to a Viterbi algorithm.
Accordingly the speed of the decoding is improved.
In an embodiment, the branch metrics are calculated for a subset of all successive words of N bits in the coded bitstream. Accordingly, the computation needs are lower.
In an embodiment, said plurality of allowed words of N bits consists of a subset of all possible words of N bits corresponding to the encoding of one and only one word of K bits. Accordingly, the computation needs are lower.
In an embodiment, the inner block code is the 8B10B code.
According to another aspect of the invention there is provided a device for decoding a coded bitstream, said bitstream being encoded in accordance with an outer convolutional code and an inner block code C(N, K), the inner block code encoding each word of K bits into a word of N bits among a plurality of allowed words of N bits, each allowed word of N bits corresponding to the encoding of one and only one word of K bits, the device comprising: a branch metric module for calculating a branch metric between words of N bits in the coded bitstream and each allowed word of N bits; and a cumulative metric module for calculating a cumulative metric for each possible sequence of candidate code words of N bits by summing the corresponding calculated branch metrics; and a selector module for selecting a sequence of candidate code words of N bits based on the cumulative metric; and a sequence determining module for determining a sequence of words of K bits corresponding to the selected sequence of candidate code words; and a retrieving module for retrieving the decoded bitstream from the determined sequence of words of K bits.
According to another aspect of the invention there is provided a terahertz receiver comprising a device according to the invention.
According to another aspect of the invention there is provided a computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to the invention, when loaded into and executed by the programmable apparatus.
According to another aspect of the invention there is provided a computer-readable storage medium storing instructions of a computer program for implementing a method according to the invention.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which: Figure 1 illustrates an example of encoder providing a coded bitstream to be decoded by the invention; Figure 2 illustrates an example of convolutional code encoder; Figure 3 illustrates the associated trellis; Figure 4 illustrates an example of decoder in the prior art; Figure 5 illustrates a decoder according to an embodiment of the invention; Figure 6 illustrates the trellis used in the described embodiment of the invention; Figure 7 illustrates the algorithm used to compute the branch metric in the described embodiment of the invention; Figure 8 illustrates the algorithm used to compute the cumulative branch metric values in the described embodiment of the invention; Figure 9 is a schematic block diagram of a computing device for implementation of one or more embodiments of the invention.
In the present embodiment of the invention, an 8B10B line coding scheme is used to suppress the DC component and a classical binary convolutional code (BCC) is used to perform the error correction. At the receiver side, the classical way to perform the decoding is to apply first the 8B10B decoding and then the Viterbi decoding (or MAP, Maximum A Posteriori criterion). By doing so, the 8B10B decoding inserts additional errors, on top of those inserted by the communication channel and/or the transceiver impairments, since the 8B10B decoding is a DC free line decoding and not an ECC one. The original way that is proposed in this invention is to perform a joint Viterbi 8B10B decoding. The decoding is performed directly on the 8B10B received sequence. The joint 8B10B BCC decoding is applied by computing the branch metrics (and hence the cumulative branch metrics) between the 8B10B received sequence and all the 8B10B encoded sequences obtained from the binary convolutional code trellis.
Figure 1 illustrates an example of encoder providing a coded bitstream to be decoded by the invention. The encoder 100 is composed of a convolutional code 101 concatenated with an 8B10B block code 102. The convolutional code could be any known convolutional code and no restriction is required in this present invention. For example, a table containing optimum constraint length convolutional codes having coding rates equal to 1/2 and 1/3 is available in B. Sklar, "Digital Communications: Fundamentals and Applications", Prentice Hall 2001, ISBN: 0-13-084788-7. In the same way, any DC free line code of the form C(N,K) may be used in place of the 8B10B code of the present embodiment. The block 103 is the bipolar modulation where the bits are converted into positive or negative integers (+1 and -1). The bit "0" is coded with "+1" and the bit "1" is coded with "-1". Here also, any kind of modulation may be used for the physical transmission of the coded bitstream. However, the invention is particularly suited for very high bandwidth transmission using simple modulation. It may be used in particular for terahertz transmission using a transmission band from 100 GHz up to 10 THz or from 0.3 THz up to 3 THz.
Figure 2 illustrates an example of convolutional code encoder. The figure represents the shift register representation of the convolutional code [7 5]. The '7' is represented in binary by [1 1 1]; this means that the first parity bit C0 is computed by applying the XOR operator on the bits [S2 S1 S0], where S0 is the first bit of the uncoded bitstream inputted in the shift register. The '5' is represented in binary by [1 0 1]; it means that the second parity bit C1 is computed by applying the XOR operator on the bits [S2 S0]. After each computation the bits are shifted to the right and the bits C0 and C1 are concatenated to form the coded bitstream. It means that for each input bit entered in the register the convolutional encoder produces two output bits, C0 and C1, to form the output bitstream. A convolutional code always increases the size of the bitstream by adding some redundancy used for the error correction.
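The shift register encoder just described can be sketched as follows. This is a minimal illustration hard-coding the [7 5] taps; the variable names mirror S2, S1, S0, C0 and C1 from the text:

```python
def conv_encode_7_5(bits):
    """Rate-1/2 convolutional encoder for the [7 5] (octal) code.

    7 (octal) = binary 111 -> C0 = S2 XOR S1 XOR S0
    5 (octal) = binary 101 -> C1 = S2 XOR S0
    where S2 is the newest input bit and (S1, S0) is the
    shift register state, initially all zero.
    """
    s1 = s0 = 0
    out = []
    for s2 in bits:
        c0 = s2 ^ s1 ^ s0   # parity from taps 111
        c1 = s2 ^ s0        # parity from taps 101
        out += [c0, c1]
        s1, s0 = s2, s1     # shift the register to the right
    return out

# A single 1 followed by zeros yields the impulse response 11 10 11:
print(conv_encode_7_5([1, 0, 0]))  # [1, 1, 1, 0, 1, 1]
```

Each input bit produces two output bits, so the encoded stream is twice as long as the input, consistent with the rate-1/2 description above.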
Figure 3 illustrates the associated trellis. Plain arrows correspond to an input bit equal to "0" (i.e. S2 = 0) and dashed arrows correspond to an input bit equal to "1" (i.e. S2 = 1). The trellis corresponds to all possible transitions of the encoder considered as a finite state machine (FSM). The state of the FSM is constituted by the values of the last two bits, namely S1 and S0 in our example. Reference 300 represents the input states of the FSM, reference 301 the output states. When in a state 300, considering an input bit, the FSM goes to the output state 301 according to the arrow corresponding to the input bit (plain or dashed) and produces an output of two bits corresponding to the label of the arrow.
The trellis is very helpful in understanding the decoding step. Indeed, the well-known Viterbi algorithm is commonly used to decode convolutional codes by performing two major steps. The first step is the branch metric computation (BM). The branch metric corresponds to the distance between a received sequence of bits and a possible output of a running step of the encoder.
In our example, the encoder produces two output bits for each encoding step.
The possible two bit outputs at each step may be the sequences 00, 01, 10 and 11. The branch metric computation consists in computing a distance between a received sequence of two bits, noted r, and each possible output of the encoder. Several distances may be used here, for example the Euclidean or the Hamming distance. An example of such branch metrics in our example may be:

BM00 = (r(2*t - 1) - 0)^2 + (r(2*t) - 0)^2
BM01 = (r(2*t - 1) - 0)^2 + (r(2*t) - 1)^2    (1)
BM10 = (r(2*t - 1) - 1)^2 + (r(2*t) - 0)^2
BM11 = (r(2*t - 1) - 1)^2 + (r(2*t) - 1)^2

Where r(k) represents the kth bit of the received sequence r, and BMij represents the distance between the received sequence r and the possible encoder output ij. t represents the index of the input bit, or may also be seen as the index of the encoder step. The factor 2 reflects that the coded bitstream contains twice the number of bits of the input bitstream.
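Equation (1) can be sketched directly in code. This minimal illustration assumes hard-decision received bits (0 or 1); with soft values the same formula applies unchanged:

```python
def branch_metrics(r_pair):
    """Euclidean branch metrics of equation (1) for one encoder step.

    r_pair is the received pair (r(2t-1), r(2t)); each candidate
    encoder output ij in {00, 01, 10, 11} gets the squared distance
    to the received pair.
    """
    r1, r2 = r_pair
    return {
        (0, 0): (r1 - 0) ** 2 + (r2 - 0) ** 2,  # BM00
        (0, 1): (r1 - 0) ** 2 + (r2 - 1) ** 2,  # BM01
        (1, 0): (r1 - 1) ** 2 + (r2 - 0) ** 2,  # BM10
        (1, 1): (r1 - 1) ** 2 + (r2 - 1) ** 2,  # BM11
    }

# The candidate matching the received pair gets metric 0:
print(branch_metrics((1, 0))[(1, 0)])  # 0
```

For 0/1 bits the squared Euclidean distance coincides with the Hamming distance, which is why either choice is acceptable here.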
After calculating the BM values, the cumulative branch metric (CBM) of each path in the trellis may be computed by summing the branch metrics of the transitions that construct the said path. The cumulative branch metrics may be expressed as follows:

CBM(1, t + 1) = min(CBM(1, t) + BM00, CBM(2, t) + BM11)
CBM(2, t + 1) = min(CBM(3, t) + BM10, CBM(4, t) + BM01)    (2)
CBM(3, t + 1) = min(CBM(1, t) + BM11, CBM(2, t) + BM00)
CBM(4, t + 1) = min(CBM(3, t) + BM01, CBM(4, t) + BM10)

Where CBM(i, t) represents the cumulative branch metric for the state of the FSM labeled i at time t. Each state of the FSM is labeled in sequence; in our example state 00 is labeled 1, state 01 is labeled 2, state 10 is labeled 3, and state 11 is labeled 4. For example, the first line expresses that the cumulative branch metric at time t + 1 corresponds to the minimum over the two possible paths arriving at state 1. The first path corresponds to being in state 1 with a received sequence of 00 and is measured by the metric CBM(1, t) + BM00, while the second path corresponds to being in state 2 with a received sequence of 11 and is measured by the metric CBM(2, t) + BM11. The minimum of the two paths is kept as the cumulative metric of state 1 at time t + 1.
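The cumulative metric recursion of equation (2) amounts to one dictionary update per encoder step. A minimal sketch, assuming the state labelling given above (1 = 00, 2 = 01, 3 = 10, 4 = 11) and branch metrics supplied as a mapping from candidate output pairs to distances:

```python
import math

def cbm_update(cbm, bm):
    """One step of the cumulative branch metric recursion of
    equation (2). cbm maps each FSM state label to its current
    cumulative metric; bm maps each candidate two-bit output
    to its branch metric for the current received pair.
    """
    return {
        1: min(cbm[1] + bm[(0, 0)], cbm[2] + bm[(1, 1)]),
        2: min(cbm[3] + bm[(1, 0)], cbm[4] + bm[(0, 1)]),
        3: min(cbm[1] + bm[(1, 1)], cbm[2] + bm[(0, 0)]),
        4: min(cbm[3] + bm[(0, 1)], cbm[4] + bm[(1, 0)]),
    }

# Start in the all-zero state; unreachable states get an infinite metric.
cbm0 = {1: 0, 2: math.inf, 3: math.inf, 4: math.inf}
# Example branch metrics for a received pair equal to 00 (Hamming distances):
bm = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}
print(cbm_update(cbm0, bm))
```

Iterating this update over all received pairs and finally taking the state with the smallest cumulative metric is exactly the path selection described in the next paragraph.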
For a given time "t", once the cumulative branch metrics have been computed for each path of the trellis, the path having the smallest CBM value is selected and presented at the output of the convolutional decoder. This method is known as the Viterbi algorithm.
Figure 4 illustrates an example of decoder in the prior art. The decoding is done by performing the 8B10B decoding and the convolutional code decoding using the Viterbi algorithm successively. The decoder 400 comprises a first module 401 to perform the demodulation of the transmitted signal. At the output of the demodulation module a received bitstream is obtained. This received bitstream is subjected to the DC free line decoder 402, namely an 8B10B decoder in our example. Next the output of the DC free line decoder is subjected to the convolutional decoder, namely a Viterbi decoder 403, for the error correction of the signal.
Coming back to the operation mode of a typical DC free line coding, it is worth noting that these encoders are typically based on dictionary type encoding.
This means that the bitstream to encode is divided into words of bits, in the 8B10B example words of 8 bits. Each word of 8 bits is encoded based on some tables into a word of 10 bits. For example, a solution may be found in the document A.X. Widmer, P.A. Franaszek, "A DC-Balanced, Partitioned-Block, 8B/10B Transmission Code", IBM Journal of Research and Development, vol. 27, No. 5, Sep. 1983, pp. 440-451. This solution consists in providing four tables: table_5b_6b_RD_plus, table_5b_6b_RD_minus, table_3b_4b_RD_plus, table_3b_4b_RD_minus. These tables give for every possible word of 3 and 5 bits a word of respectively 4 and 6 bits. These words are well balanced in terms of number of 1s versus number of 0s. Each word of 8 bits is divided into a word of 5 bits and a word of 3 bits. According to the value of the RDS (positive or not) a table is chosen (the table named plus or minus). The word of 3 or 5 bits is replaced by the word in the table having an index corresponding to the word of 3 or 5 bits. In summary, each word of 8 bits is replaced by a word of 10 bits based on a dictionary, the given tables.
Decoding proceeds with the reverse process. It may be understood that, due to the dictionary nature of the encoding, a single bit error in the transmitted encoded word of 10 bits may lead to the decoding of a completely different word of 8 bits. Moreover, if the encoded word of 10 bits does not pertain to the dictionary due to a bit error in the transmission, the decoding is typically based on the closest word of 10 bits pertaining to the dictionary. "Closest" should be understood here with respect to a bit distance, like the Hamming distance for example. But due to the dictionary process, the fact that two 10 bit encoded words are close does not mean that their corresponding decoded 8 bit words would also be close. This aspect explains how the 8B10B decoding may increase the BER of the received bitstream. Measuring the BER at different stages of the decoding shows that the main effect of the convolutional encoding is to correct errors introduced by the DC free line coding, with nearly no effect on the transmission errors. The BER obtained at the output of the Viterbi decoder is roughly the same as the one obtained at the output of the demodulator.
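The dictionary decoding with nearest-codeword fallback described above can be sketched as follows. The table here is a toy, hypothetical dictionary (mapping 4-bit codewords to 2-bit source words); it is NOT the real 8B/10B tables of Widmer and Franaszek, and only illustrates the mechanism:

```python
def dict_decode(word, table):
    """Dictionary line decoding with nearest-codeword fallback.

    table maps codewords (tuples of bits) to source words. If the
    received word is not in the dictionary, the entry closest in
    Hamming distance is used instead; because the mapping is an
    arbitrary dictionary, closeness between codewords says nothing
    about closeness between the decoded source words.
    """
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    if word in table:
        return table[word]
    closest = min(table, key=lambda c: hamming(c, word))
    return table[closest]

# Toy, hypothetical dictionary (NOT the real 8B/10B tables):
toy = {(0, 1, 0, 1): (0, 0),
       (1, 0, 1, 0): (1, 1),
       (0, 1, 1, 0): (0, 1)}

# A received word outside the dictionary falls back to the nearest entry:
print(dict_decode((1, 1, 0, 1), toy))  # (0, 0)
```

With the real 8B/10B tables the codeword space is much denser, so a single channel error can land the received word nearer to an unrelated dictionary entry, which is the error-amplification effect discussed above.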
Figure 5 illustrates a decoder according to an embodiment of the invention. It corresponds to the decoder of Fig. 4 where the DC free line decoder 402 and the Viterbi decoder 403 have been replaced by a joint decoding module 502. By combining the Viterbi decoding and the DC free line decoding it is possible to decrease the number of errors introduced by the DC free line decoding. Namely, it is possible to correct some bit errors in the encoded 10 bit words before applying the dictionary and therefore to decrease the effect of the dictionary decoding on the BER. Details on the functioning of the combined Viterbi decoder will now be explained.
Figure 6 illustrates the trellis used in the described embodiment of the invention. It corresponds to the convolutional code [7 5] combined with the DC free line code 8B10B. Instead of considering the FSM with states defined by the two last bits of the register, we consider for the joint Viterbi decoder the FSM with states defined by four successive states according to possible paths in the trellis, along with the associated sequence of 8 bits produced by the convolutional code. Namely, the states of the FSM correspond to the following 8 bit words produced by the convolutional code: 00000000, 00000011, 00001110, 00001101, ..., 10101010.
Accordingly, the branch metric computation of equation (1) becomes, for the joint Viterbi decoder, when RDS = 1:

BM00000000_RD_plus = SUM((r - 8b10b_plus(00000000))^2)
BM00000011_RD_plus = SUM((r - 8b10b_plus(00000011))^2)
BM00001110_RD_plus = SUM((r - 8b10b_plus(00001110))^2)    (3)
BM00001101_RD_plus = SUM((r - 8b10b_plus(00001101))^2)
BM10101010_RD_plus = SUM((r - 8b10b_plus(10101010))^2)

Where 8b10b_plus(00000000) represents the 8B10B encoding of the word 00000000 with a running digital sum (RDS) value of 1, r is a received sequence of 10 bits, and SUM((r - 8b10b_plus(00000000))^2) represents the bitwise sum of the squares of the bit sequence r - 8b10b_plus(00000000), which is the bitwise difference of the received sequence r and the encoded sequence 8b10b_plus(00000000). Namely, BM00000000_RD_plus represents the number of bits that differ between r and 8b10b_plus(00000000).
The value of the running digital sum should be taken into account because the joint Viterbi decoder works directly on the received 8B10B encoded sequences.
Similarly, if the running digital sum has the value -1, the branch metrics correspond to the following equations:

BM00000000_RD_minus = SUM((r - 8b10b_minus(00000000))^2)
BM00000011_RD_minus = SUM((r - 8b10b_minus(00000011))^2)
BM00001110_RD_minus = SUM((r - 8b10b_minus(00001110))^2)
BM00001101_RD_minus = SUM((r - 8b10b_minus(00001101))^2)
BM10101010_RD_minus = SUM((r - 8b10b_minus(10101010))^2)

In other words, the joint Viterbi decoder works with states corresponding to the possible sequences of bits at the output of the convolutional encoder, the size of the sequence being the size used as input of the DC free line encoder.
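The branch metrics above can be sketched as follows. The lookup table `ENCODE` stands in for the 8b10b_plus / 8b10b_minus encodings; its two entries are hypothetical placeholders, not taken from the real 8B10B tables.

```python
# Sketch of the branch-metric computation of equations (3): the metric for a
# candidate 8-bit state at a given RDS is the sum over bits of (r_i - c_i)^2,
# which for bits equals the Hamming distance between the received 10-bit
# sequence and the codeword. ENCODE entries below are illustrative only.
ENCODE = {
    ("00000000", +1): "0110001011",   # hypothetical 8b10b_plus(00000000)
    ("00000000", -1): "1001110100",   # hypothetical 8b10b_minus(00000000)
}

def branch_metric(received: str, state: str, rds: int) -> int:
    """Sum of squared bitwise differences between the received 10-bit
    sequence and the encoding of `state` at this running digital sum."""
    codeword = ENCODE[(state, rds)]
    return sum((int(r) - int(c)) ** 2 for r, c in zip(received, codeword))

bm_plus = branch_metric("0110001111", "00000000", +1)   # one bit differs -> 1
```

Because the differences are single bits, the squared-difference sum and the Hamming distance coincide, which is why either can be used in the decoder.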
The branch metric computes, for each state, a distance between the received sequence, having the size corresponding to the size used as output of the DC free line encoder, and the result of the DC free line encoding of said state. Any kind of distance may be used in the branch metric computation, like the Euclidean one or the Hamming one.
Then, once the branch metrics have been computed, a cumulative branch metric is computed accordingly. As the branch metrics are computed for the possible 8-bit sequences, the cumulative branch metric is also computed over the possible 8-bit paths from one state to another through the illustrated trellis.
For example, to go from the state 00 back to the same state 00, the possible sequences are: 00000000, 00111011, 11101100, and 11010111. For each sequence we take the minimum of the branch metric obtained for RDS equal to 1 and for RDS equal to -1. In our example, it corresponds to the equations:

CBM(1, t + 8) = min(CBM(1, t) + min(BM00000000_RD_plus, BM00000000_RD_minus), ...)
CBM(2, t + 8) = min(CBM(1, t) + min(BM00001110_RD_plus, BM00001110_RD_minus), ...)
CBM(3, t + 8) = min(CBM(1, t) + min(BM00000011_RD_plus, BM00000011_RD_minus), ...)
CBM(4, t + 8) = min(CBM(1, t) + min(BM11011010_RD_plus, BM11011010_RD_minus), ...)

Figure 7 illustrates the algorithm used to compute the branch metric in the described embodiment of the invention. The step 701 starts the algorithm.
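One cumulative update step can be written compactly. In this sketch, `transitions` and `bm` are hypothetical stand-ins: `transitions[target]` lists the (source state, 8-bit sequence) pairs entering a target state, and `bm[sequence]` holds the (BM_RD_plus, BM_RD_minus) pair for that sequence.

```python
# One cumulative-branch-metric update: for each target state, take the minimum
# over its incoming transitions of the previous CBM plus the better of the two
# RDS branch metrics, mirroring the CBM equations above.
def cbm_update(prev_cbm, transitions, bm):
    """transitions[target] = [(source_state, sequence), ...];
    bm[sequence] = (BM_RD_plus, BM_RD_minus)."""
    return {
        target: min(prev_cbm[src] + min(bm[seq])   # inner min: over RDS +1/-1
                    for src, seq in incoming)
        for target, incoming in transitions.items()
    }

# Illustrative numbers only:
prev = {"00": 0, "01": 2}
trans = {"00": [("00", "00000000"), ("01", "00111011")]}
bm = {"00000000": (1, 3), "00111011": (0, 5)}
updated = cbm_update(prev, trans, bm)   # min(0 + 1, 2 + 0) -> {"00": 1}
```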
Step 702 performs the bipolar demodulation in order to convert the +1 and -1 modulated signals into bits (0 and 1). In step 703, the bitstream is divided into N sequences of 10 bits, each sequence being noted Ci. In step 704, all the 8-bit sequences, noted Lk, belonging to the convolutional code trellis are determined.
Then, the associated 8B10B encoded 10-bit sequences, noted Sk, are determined for each 8-bit sequence Lk. The Sk sequences are determined for all possible values of the running digital sum RDS, typically the values 1 and -1.
In step 705, a metric is computed between each Ci and all the Sk. For example: a metric is computed between C1 and S1, C1 and S2, C1 and S3, ..., C2 and S1, C2 and S2, C2 and S3, ... etc. The metric could be the Hamming distance metric, the Euclidean distance metric, or any other similar metric comparing two bit sequences.
Figure 8 illustrates the algorithm used to compute the cumulative branch metric values in the described embodiment of the invention. The step 801 starts the algorithm. In the step 802, the cumulative branch metric CBM(s, i) for each state at the initial time, corresponding to i = 0, is initialized. Here, s corresponds to the state of the trellis, while i corresponds to the sequence index. After the initialization step, the branch metrics are computed in a step 803 by using the algorithm illustrated on Figure 7. A step 804 updates the cumulative branch metric CBM(s, i) for each received sequence Ci starting from: a) the previous cumulative branch metric CBM(s, i - 1) and b) the branch metrics computed in step 803. After computing the cumulative branch metrics of all the states of the trellis and for all the received sequences Ci, one state, called the final state, is selected in step 805 by determining the state having the lowest cumulative branch metric. This final state is used to perform the classical traceback in order to determine the most likely uncoded sequence corresponding to the sequence encoded by the transmitter.
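The Figure 8 flow can be condensed into an end-to-end sketch on a hypothetical 2-state toy trellis (1 data bit and 2 channel bits per step instead of 8 and 10). The states, transitions and codewords below are invented for illustration; the real decoder uses the [7 5] convolutional trellis and the 8B10B tables.

```python
# End-to-end toy version of the Figure 8 flow: initialise CBMs (step 802),
# update them per received word with a min over RDS values (steps 803-804),
# select the final state with the lowest CBM (step 805), then trace back.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# TRANSITIONS[target] = [(source_state, data_bit), ...];
# ENCODE[(data_bit, rds)] = transmitted 2-bit word for that RDS value.
TRANSITIONS = {"A": [("A", "0"), ("B", "1")],
               "B": [("A", "1"), ("B", "0")]}
ENCODE = {("0", +1): "00", ("0", -1): "11",
          ("1", +1): "01", ("1", -1): "10"}

def joint_decode(received_words):
    cbm = {"A": 0, "B": float("inf")}           # step 802: start in state A
    history = []                                # survivor choices for traceback
    for r in received_words:                    # steps 803 and 804
        step, new_cbm = {}, {}
        for target, incoming in TRANSITIONS.items():
            metric, src, bits = min(
                (cbm[s] + min(hamming(r, ENCODE[(b, rds)]) for rds in (+1, -1)),
                 s, b)
                for s, b in incoming)
            new_cbm[target], step[target] = metric, (src, bits)
        cbm = new_cbm
        history.append(step)
    state = min(cbm, key=cbm.get)               # step 805: lowest CBM
    out = []                                    # classical traceback
    for step in reversed(history):
        src, bits = step[state]
        out.append(bits)
        state = src
    return "".join(reversed(out))
```

For instance, the data sequence "01" sent at RDS = +1 arrives as ["00", "01"], and the same data sent at RDS = -1 arrives as ["11", "10"]; both are decoded back to "01", which is how tracking both RDS hypotheses lets the decoder work on the line-coded stream directly.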
Figure 9 is a schematic block diagram of a computing device 900 for implementation of one or more embodiments of the invention. The computing device 900 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 900 comprises a communication bus connected to: -a central processing unit 901, such as a microprocessor, denoted CPU; -a random access memory 902, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method for encoding or decoding at least part of an image according to embodiments of the invention, the memory capacity thereof can be expanded by an optional RAM connected to an expansion port for example; -a read only memory 903, denoted ROM, for storing computer programs for implementing embodiments of the invention; -a network interface 904 is typically connected to a communication network over which digital data to be processed are transmitted or received.
The network interface 904 can be a single network interface, or composed of a set of different network interfaces (for instance wired and wireless interfaces, or different kinds of wired or wireless interfaces). Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 901; -a user interface 905 may be used for receiving inputs from a user or to display information to a user; -a hard disk 906 denoted HD may be provided as a mass storage device; -an I/O module 907 may be used for receiving/sending data from/to external devices such as a video source or display.
The executable code may be stored either in read only memory 903, on the hard disk 906 or on a removable digital medium such as for example a disk.
According to a variant, the executable code of the programs can be received by means of a communication network, via the network interface 904, in order to be stored in one of the storage means of the computing device 900, such as the hard disk 906, before being executed.
The central processing unit 901 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 901 is capable of executing instructions from main RAM memory 902 relating to a software application after those instructions have been loaded from the program ROM 903 or the hard-disc (HD) 906 for example. Such a software application, when executed by the CPU 901, causes the steps of the flowcharts shown in Figures 7 and 8 to be performed.
Any step of the algorithms shown in Figures 7 and 8 may be implemented in software by execution of a set of instructions or program by a programmable computing machine, such as a PC ("Personal Computer"), a DSP ("Digital Signal Processor") or a microcontroller; or else implemented in hardware by a machine or a dedicated component, such as an FPGA ("Field-Programmable Gate Array") or an ASIC ("Application-Specific Integrated Circuit").
The foregoing embodiment has been described using a Viterbi decoder.
But the person skilled in the art should understand that the invention is not limited to the Viterbi decoder. In particular, any implementation of a maximum likelihood decoder (ML decoder) may be used in the same way, considering that the Viterbi algorithm is a particular implementation of a maximum likelihood decoder. These maximum likelihood decoders still compute branch metrics and cumulative branch metrics to determine the sequence giving the decoded bitstream, but without the early path elimination in the trellis typically associated with the Viterbi algorithm.
Some embodiments may be contemplated with the aim of reducing the computational burden, at the possible cost of a slight degradation of the decoding reliability.
In an embodiment, the received sequence may be compared to only a subset of all the states of the trellis. Namely, the plurality of allowed words of N bits consists of a subset of all possible words of N bits corresponding to the encoding of one and only one word of K bits. Some errors which could otherwise have been corrected will be missed, but the decoding still proves to be effective.
In another embodiment, the branch metric may be computed only on a subset of the received words of N bits. For instance, it may be computed for one word out of two. The cumulative branch metric will be computed by summing the calculated branch metrics. While being less accurate, this method is still effective.
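The subsampled variant above can be sketched as follows. Here `branch_metrics` is a hypothetical callback returning a per-state metric for one received word; the trellis transition structure is omitted for brevity, as the point is only the one-word-out-of-two accumulation.

```python
# Sketch of the reduced-complexity variant: the branch metric is computed for
# only one received word out of two, and the cumulative metric simply sums the
# metrics actually computed, trading some accuracy for fewer computations.
def subsampled_cbm(received_words, states, branch_metrics):
    cbm = {s: 0 for s in states}
    for index, word in enumerate(received_words):
        if index % 2 == 1:                 # skip every other received word
            continue
        bms = branch_metrics(word)         # per-state metrics for this word
        cbm = {s: cbm[s] + bms[s] for s in states}
    return cbm
```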
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.
GB1403573.7A 2014-02-28 2014-02-28 Method and a device for decoding a bitstream encoded with an outer convolutional code and an inner block code Active GB2523586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1403573.7A GB2523586B (en) 2014-02-28 2014-02-28 Method and a device for decoding a bitstream encoded with an outer convolutional code and an inner block code

Publications (3)

Publication Number Publication Date
GB201403573D0 GB201403573D0 (en) 2014-04-16
GB2523586A true GB2523586A (en) 2015-09-02
GB2523586B GB2523586B (en) 2016-06-15

Family

ID=50490581

Country Status (1)

Country Link
GB (1) GB2523586B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2539925A (en) * 2015-07-01 2017-01-04 Canon Kk DC free line coding robust against transmission error
US10826536B1 (en) 2019-10-03 2020-11-03 International Business Machines Corporation Inter-chip data transmission system using single-ended transceivers

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115642924B (en) * 2022-11-01 2024-02-27 杭州海宴科技有限公司 Efficient QR-TPC decoding method and decoder

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5014276A (en) * 1989-02-06 1991-05-07 Scientific Atlanta, Inc. Convolutional encoder and sequential decoder with parallel architecture and block coding properties
US7688902B1 (en) * 2003-04-16 2010-03-30 Marvell International Ltd. Joint space-time block decoding and viterbi decoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IEEE Transactions on Wireless Communications, vol. 5, no. 3, March 2006, "Correction of extrinsic information for iterative decoding in a serially concatenated multiuser DS-CDMA system", Pei Xiao & Erik Strom *
