WO2010064496A1 - Decoding method and decoding device - Google Patents

Decoding method and decoding device

Info

Publication number
WO2010064496A1
WO2010064496A1 (PCT/JP2009/067947)
Authority
WO
WIPO (PCT)
Prior art keywords
branchwords
block
encoder
received
state
Prior art date
Application number
PCT/JP2009/067947
Other languages
French (fr)
Inventor
Dominic Wong
Dobrica Vasic
Original Assignee
Nec Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2008906238A external-priority patent/AU2008906238A0/en
Application filed by Nec Corporation filed Critical Nec Corporation
Priority to US13/131,954 priority Critical patent/US8489972B2/en
Priority to JP2011524036A priority patent/JP5370487B2/en
Priority to EP09830264.9A priority patent/EP2361458A4/en
Priority to CN200980147990.9A priority patent/CN102282771B/en
Publication of WO2010064496A1 publication Critical patent/WO2010064496A1/en
Priority to HK12105209.2A priority patent/HK1165111A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/413Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors tail biting Viterbi decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/4161Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management
    • H03M13/4169Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management using traceback

Definitions

  • the present invention relates to the decoding of codes produced by a convolutional encoder using a tail-biting convolutional code.
  • BACKGROUND ART In recent years, when information signals are communicated from a transmitter to a receiver via a communications channel, the information signals may be corrupted by noise associated with the channel. To help prevent such noise from corrupting the received information, a channel coding technique may be employed. Generally, coding which helps mitigate the effects of channel noise does so by introducing redundancy into the information to be communicated. Because of this redundancy, the likelihood that noise will corrupt communicated information is reduced.
  • Convolutional codes are a class of channel codes used to mitigate the effects of channel noise in the transmission of information. Convolutional codes are well known in the art and have been adopted as standards for certain types of communication systems.
  • One such convolutional code is known in the art as a tail-biting convolutional code. With tail-biting convolutional codes, a frame or block of information is encoded and communicated in a block-wise manner.
  • the term "tail-biting" is used to refer to the fact that the encoder begins and ends in the same encoder state. The decoder is aware that the encoder begins and ends in the same state but is unaware of the value (or identity) of that state.
  • the maximum likelihood decoder for convolutional codes is known in the art as a Viterbi decoder.
  • the Viterbi decoder treats the problem of decoding a sequence of received symbols as a problem of finding the most likely sequence of uncorrupted symbols given an actual corrupted sequence received.
  • the maximum likelihood decoder for tail-biting convolutional codes employs Viterbi decoding, but can place great demands on computational resources. Conversely, if computational resources are minimised, the accuracy of the Viterbi decoding can suffer.
  • the present invention seeks to solve one or more of the above problems, or to improve upon those problems at least in part.
  • a decoding method of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code comprising: storing the N received branchwords in memory; successively performing Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; determining a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; first performing a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; second performing a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting a derived tail-biting path, if the second and third encoder states are identical.
  • the method may further comprise: replacing the second encoder state with the third encoder state; repeating the second performing; and outputting the derived tail biting path.
  • the sequence of branchwords in the successively performing may be formed from a logically circular reading of the N received branchwords stored in the memory.
  • S may equal T.
  • the first block may comprise S consecutive branchwords from the end of the second block of N received branchwords.
  • the third block may comprise T consecutive branchwords from the start of the second block of N received branchwords.
  • Another aspect of the invention provides a decoding device of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code, comprising: a memory storing the N received branchwords; and a data processing unit comprising: successively performing unit which successively performs Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; determining unit which determines a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; first performing unit which performs a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; second performing unit which performs a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting unit which outputs a derived tail-biting path, if the second and third encoder states are identical.
  • FIG 1 depicts a related convolutional encoder.
  • FIG. 2 presents a single state - transition trellis section reflecting the operation of the encoder shown in FIG. 1.
  • FIG. 3 depicts a state transition trellis showing the operation of the encoder of FIG. 1 given a particular starting state and information bits for coding.
  • FIG. 4 depicts an exemplary radio receiver system including a digital signal processor which acts to decode received branchwords produced by the encoder shown in FIG. 1.
  • FIG. 5 depicts the manner in which a block of received branchwords is stored in a memory device forming part of the radio receiver shown in FIG 4.
  • FIG. 6 is a flow chart depicting the sequence of operations performed by the digital signal processor forming part of the radio receiver shown in FIG 4 during decoding of the block of received branchwords produced by the encoder shown in FIG. 1.
  • FIG. 1 depicts an illustrative convolutional encoder having a rate of 1/2, namely for every information bit to be coded, the encoder produces two output bits (i.e. a two-bit branchword).
  • the encoder 10 comprises two single bit memory cells 12 and 14 and two adder circuits 16 and 18.
  • the memory cell 12 and adder circuits 16 and 18 receive a sequence of information bits s(i) to be encoded.
  • the memory cell 12 provides its contents to memory cell 14 with each new information bit received.
  • the encoder may be viewed as comprising an "upper" and "lower" path, each path including an adder circuit and connections to the information bit stream and one or both memory cells 12 and 14.
  • the output of the upper path of the encoder (i.e. the path which includes the adder circuit 16) comprises a first bit of a generated branchword. This output is generated by adding together the current bit and the two previous bits. If the resulting sum is odd, the adder 16 outputs a logical 1; if the resulting sum is even, the adder 16 outputs a logical 0.
  • the output of the "lower" path (the path which includes the adder circuit 18) comprises the second bit of the branchword. This output is generated by adding together the current bit and the bit which is two bits earlier than the current bit. Again, if the resulting sum is odd, the adder 18 outputs a logical 1; if the resulting sum is even, the adder 18 outputs a logical 0.
  • Since only three bits are used to determine an output branchword, this encoder is said to have a constraint length of three. Its memory is two. The more output bits per input bit and the longer the constraint length, the more powerful the code - that is, the more robust the code will be to channel noise. It will be appreciated that the encoder depicted in FIG 1 is exemplary only, and that in practical embodiments of the invention a greater number of memory cells and adder circuits may be used to generate a greater number of output bits for each branchword by the encoder.
  • the operation of the convolutional encoder shown in FIG. 1 may be represented conventionally by a trellis diagram such as that presented in FIG 2.
  • the trellis describes how the states of the encoder can change from one information bit time to the next.
  • the encoder state is simply the contents of the encoder memory cells at any one time read as a state "word".
  • On both the left and right sides of the trellis are the allowable states of the encoder: 00, 01, 10 and 11.
  • the states on the left side of the trellis represent the current state of the encoder.
  • the states on the right side of the trellis represent the next state of the encoder.
  • if the two previous bits are both 0 (such that the contents of the memory cells 12 and 14 are both 0), the encoder is said to be in state 00 (which is the trellis node in the top left hand corner of the trellis).
  • the arrival of the next subsequent bit will mean that the encoder transitions to state 10. That is, with the arrival of the next bit, the bit in the memory cell 14 is replaced by the bit in the memory cell 12 (0) and the bit in the memory cell 12 is replaced by the current bit (1).
  • This transition is indicated by the diagonal line beginning at the current state 00 at the top left of the trellis and extending downwards and across to the next state 10.
  • State 10 is the second state from the bottom of the trellis. With this state transition is an indication (in parentheses) of the output branchword of the encoder - in this case, 11.
  • the trellis diagram indicates all allowable transitions in state by the encoder. According to the diagram shown in FIG 2, for example, the encoder cannot transition from state 00 to state 11 (note the absence of a line connecting 00 on the left with 11 on the right). This may be seen directly from the fact that states change one bit at a time. Multiple trellises of the type shown in FIG. 2 are concatenated together (as is conventional) to form a trellis indicating a sequence of encoder state transitions over time.
  • the trellis shown in FIG 3 represents the encoding of the information bit sequence 101100... by an encoder starting in state 00.
  • the trellis comprises six individual trellis sections of the type shown in FIG. 2.
  • the input bit stream causes the change of states indicated by the solid line, starting with state 00: 10, 01, 10, 11, 01, 00....
  • Discrete time i is indicated across the top of the trellis.
  • the encoder outputs the branchwords shown in parentheses: 11, 01, 00, 10, 10, 11....
  • Each of the state transitions indicated by the solid line traversing a trellis section is an allowed transition corresponding to a given current state and an information bit to encode. Other potential allowed state transitions are shown in dashed lines.
  • Code words generated by the illustrative encoder shown in FIG 1 are communicated through a communication channel to a decoder.
  • the job of the decoder is to determine the sequence of information bits which were coded by the encoder. The determination is based on branchwords received by the decoder. Assuming a perfect communication channel and knowledge of the encoder's starting state, this task is relatively straightforward.
  • the decoder employs a trellis descriptive of the state transitions of the encoder and, knowing the starting state, uses the received branchwords to determine the state transitions taken by the encoder during encoding. Based on these state transitions, the sequence of bits causing such transitions may be determined.
  • a Viterbi decoder selects the most likely path through a coder trellis given branchwords which may contain bit errors. It can do so from any of a number of starting states (assuming the decoder has no knowledge of starting state). The selection of the most likely path is made progressively, one received branchword at a time. As a result of applying the Viterbi technique to each successive, received branchword, a path metric is maintained which reflects a likelihood that a path associated with that metric is the path actually taken by the encoder.
  • a decision vector is generated which reflects for each state (at a given discrete time) which of two possible paths coming into the state is the better path.
  • the vector records the "better path" decision for each state in the trellis. Paths which are not selected as a better path are said to be "pruned". Pruned paths will not have an effect on the final decoding of the branchwords.
  • channel symbols are corrupted by noise and interference.
  • soft received branchwords are used to calculate the branch and path metrics for the path selection. These soft received branchwords are real numbers.
  • the term "branchword(s)" in the following discussion assumes soft branchwords.
  • the decision on which path to maintain and which path to prune may be represented by a single bit, as is conventional.
  • the decision vector of four bits is determined and saved in memory.
  • the Viterbi technique has been applied to the received branchwords
  • the saved decision vectors provide the basis for a related Viterbi traceback procedure. It is this traceback procedure which decodes the received branchwords. Further details of conventional Viterbi decoding are presented in Clark and Cain, Error-Correction Coding for Digital Communications, Chapter 6 (1981), which is hereby incorporated by reference in its entirety.
  • FIG. 4 depicts an illustrative embodiment of a Viterbi decoder 20 forming part of a radio receiver system.
  • the decoder 20 is coupled to an antenna 22 and radio receiver circuitry 24 which receives an analogue radio signal x(t) and provides to the decoder 20 digital branchwords at discrete times c(i).
  • the decoder 20 comprises a Digital Signal Processor (DSP) 26 coupled to a Read Only Memory (ROM) 28 and Random Access Memory (RAM) 30.
  • the RAM 30 stores, inter alia, a buffer of N received branchwords for use by the present invention, as well as the results of Viterbi updates.
  • the decoder 20 operates to decode branchwords c(i) received from the radio communication channel.
  • branchwords are generated by an encoder employing a tail-biting convolutional code.
  • Such an encoder may be that described above with respect to FIGs. 1 and 2. Because the channel is noisy, the branchwords are imperfectly communicated. That is, the branchwords may contain one or more bit errors.
  • the decoding operation carried out by the decoder 20 attempts to extract the communicator's information from these branchwords.
  • the decoder 20 employs related Viterbi decoding to decode a block of N received branchwords produced by a convolutional encoder using a tail-biting convolutional code. However, it performs this decoding by successively performing Viterbi updates on a sequence of branchwords which is longer than the N received branchwords produced by the convolutional encoder.
  • the sequence of branchwords on which the Viterbi updates are successively performed is constructed by adding a block of branchwords to the start of the N received branchwords and another block of branchwords to the end of the N received branchwords. Preferably, this is done in the manner depicted in FIG. 5.
  • the sequence of branchwords may be formed from a logically circular reading of the N received branchwords stored in the RAM 30.
  • a first block consisting of S consecutive branchwords from the N received branchwords may be read from the end of the block of N received branchwords stored in the RAM 30.
  • a block of T consecutive branchwords may be read from the start of the block of N received branchwords stored in the RAM 30.
  • the sequence of branchwords on which the Viterbi updates are successively performed can be constructed in a manner which is computationally simple to perform.
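The circular construction just described can be sketched in a few lines of Python. This is an illustrative sketch only; the names `rx`, `S` and `T` follow the description but are not taken from any implementation in the patent:

```python
def build_sequence(rx, S, T):
    """Form the S + N + T branchword sequence of FIG. 5 by a logically
    circular reading of the N received branchwords: the last S
    branchwords, then all N, then the first T (with S, T < N)."""
    assert 0 < S < len(rx) and 0 < T < len(rx)
    return rx[-S:] + rx + rx[:T]
```

For example, with N = 6 branchwords labelled 0..5 and S = T = 2, the sequence reads 4, 5, 0, 1, 2, 3, 4, 5, 0, 1.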
  • Each Viterbi update generates path metrics and a decision vector based on those metrics as described previously.
  • the decoder 20 makes use of the following principle. If we start accumulating branch metrics along the paths through the trellis shown in FIG. 3, the following observation holds: whenever two paths merge into one state, only the most likely path (the best path or the survivor path) needs to be retained, since for all possible extensions to these paths, the path which is currently better will always be better. For any given extension to the paths, both paths are extended by the same branch metrics. This process is described by the add-compare-select (ACS) recursion: the path with the best path metric leading to each state is determined recursively for every step in the trellis.
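One ACS step for the illustrative 4-state code of FIG. 1 can be sketched as below. Two stated assumptions: hard-decision Hamming branch metrics are used for brevity (the text uses soft branchwords, which would change only the branch-metric computation), and the branchword bit order is chosen to reproduce the example of FIG. 3:

```python
def viterbi_update(metrics, rx):
    """One add-compare-select step over the 4-state trellis.

    metrics[s]: accumulated Hamming cost of the survivor into state s
    (a state integer packs cell 12 as the high bit, cell 14 as the low).
    rx: received branchword as a pair of hard bits.  Returns the updated
    metrics and a 4-bit decision vector whose bit s records which of
    the two paths merging into state s won.
    """
    new = [0] * 4
    decisions = 0
    for ns in range(4):
        bit, prev1 = ns >> 1, ns & 1      # input bit and old cell 12
        cands = []
        for prev2 in (0, 1):              # the two merging predecessors
            s = (prev1 << 1) | prev2
            b1, b2 = bit ^ prev2, bit ^ prev1 ^ prev2   # expected branchword
            cands.append((metrics[s] + (b1 != rx[0]) + (b2 != rx[1]), prev2))
        new[ns], win = min(cands)         # compare-select (lower cost wins)
        decisions |= win << ns            # one decision bit per state
    return new, decisions
```

Saving one such decision vector per update, as the text describes, is what later enables the traceback.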
  • the decoder 20 successively performs Viterbi updates on the sequence of N+S+T branchwords which have been read from the RAM 30 in the manner depicted in FIG 5.
  • the Viterbi updates generate path metrics which are updated for each branchword until the end of the sequence of N+S+T branchwords is reached.
  • the decoder 20 determines a first encoder state most likely to have generated the final branchword in the sequence from the best path metric.
  • a Viterbi traceback procedure is then performed from that first encoder state to determine a second encoder state at the start of the third block of branchwords 44.
  • a second Viterbi traceback procedure is then performed from the end of the second block 42 of branchwords to the start of the second block 42 of branchwords in order to determine a third encoder state. If the second and third encoder states are found to be identical (i.e. if the starting state and the ending state of the Viterbi traceback procedure performed on the second block 42 of branchwords are found to be identical), then the best tail-biting path has been found by the decoder 20.
  • the decoder 20 can repeat the Viterbi traceback procedure performed on the second block 42 of branchwords by replacing the second encoder state with the third encoder state, and repeating the traceback procedure.
  • the derived tail-biting path is then output. It has been found that further iterations of the Viterbi traceback procedure are generally not required.
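Taken together, the update, traceback and state-agreement steps above can be sketched as follows. This is a minimal hard-decision sketch of the described procedure for the 4-state code of FIG. 1, not the patent's implementation; soft metrics would replace the Hamming costs, and the branchword bit order matching FIG. 3 is an assumption:

```python
def tb_decode(rx, S, T):
    """Decode N received branchwords from the tail-biting code of FIG. 1:
    build the S+N+T wrap-around sequence, run successive Viterbi updates,
    trace back over the third block to find the state at its start, then
    trace back over the second block and check start/end state agreement."""
    N = len(rx)
    seq = rx[-S:] + rx + rx[:T]            # logically circular reading (FIG. 5)
    metrics = [0] * 4                      # starting state is unknown
    history = []                           # one decision vector per update
    for bw in seq:                         # successive Viterbi updates
        new, dec = [0] * 4, [0] * 4
        for ns in range(4):
            bit, prev1 = ns >> 1, ns & 1
            cands = []
            for prev2 in (0, 1):           # two paths merge into each state
                s = (prev1 << 1) | prev2
                b1, b2 = bit ^ prev2, bit ^ prev1 ^ prev2
                cands.append((metrics[s] + (b1 != bw[0]) + (b2 != bw[1]), prev2))
            new[ns], dec[ns] = min(cands)  # add-compare-select
        metrics = new
        history.append(dec)

    def traceback(state, hi, lo):
        """Walk the survivor into `state` from update hi-1 back to update lo;
        return the state at update lo and the decoded bits in time order."""
        bits = []
        for i in range(hi - 1, lo - 1, -1):
            bits.append(state >> 1)        # input bit that entered this state
            state = ((state & 1) << 1) | history[i][state]
        return state, bits[::-1]

    first = min(range(4), key=lambda s: metrics[s])   # best final path metric
    second, _ = traceback(first, S + N + T, S + N)    # over the third block
    for _ in range(2):        # one repeat generally suffices, per the text
        third, bits = traceback(second, S + N, S)     # over the second block
        if third == second:                           # tail-biting path found
            return bits
        second = third
    return bits
```

On a noiseless channel, decoding the six FIG. 3 branchwords with S = T = 2 recovers the information bits 101100.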
  • the values of S and T are identical, that is, the first and third blocks consist of the same number of branchwords forming a subset of the N received branchwords stored in the RAM 30. In other embodiments of the invention though, the first and third block of branchwords may include differing numbers of branchwords.
  • the above described method of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code advantageously provides more reliable path metrics for use during a Viterbi traceback procedure by lengthening the sequence of branchwords on which Viterbi updates are performed. It has been found that the best tail-biting path is found using this method by performing only one traceback procedure on the second block of N received branchwords, or at most two traceback procedures. Moreover, the manner in which the sequence of branchwords is constructed as depicted in FIG 5 is computationally very simple to perform, so that the improved accuracy of the above described method is achieved with minimal additional computational resources.
  • the present invention is implemented primarily using digital signal processing, in other embodiments the present invention may be implemented primarily in hardware using, for example, hardware components such as an application specific integrated circuit.
  • the present invention may also be implemented primarily using computer software or a combination of both hardware and software. While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the present invention is not limited to these embodiments. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims. This application is based upon and claims the benefit of priority from Australian provisional patent application No. 2008906238, filed on 2 December, 2008, the disclosure of which is incorporated herein in its entirety by reference.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

The decoding method comprises storing, successively performing, determining, first performing, second performing, and outputting. The storing stores the N received branchwords in memory. The successively performing performs Viterbi updates on a sequence of branchwords. The determining determines a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric. The first performing performs a Viterbi traceback procedure from the first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords. The second performing performs a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords. The outputting outputs a derived tail-biting path, if the second and third encoder states are identical.

Description

DESCRIPTION DECODING METHOD AND DECODING DEVICE
TECHNICAL FIELD The present invention relates to the decoding of codes produced by a convolutional encoder using a tail-biting convolutional code.
BACKGROUND ART In recent years, when information signals are communicated from a transmitter to a receiver via a communications channel, the information signals may be corrupted by noise associated with the channel. To help prevent such noise from corrupting the received information, a channel coding technique may be employed. Generally, coding which helps mitigate the effects of channel noise does so by introducing redundancy into the information to be communicated. Because of this redundancy, the likelihood that noise will corrupt communicated information is reduced.
Convolutional codes are a class of channel codes used to mitigate the effects of channel noise in the transmission of information. Convolutional codes are well known in the art and have been adopted as standards for certain types of communication systems. One such convolutional code is known in the art as a tail-biting convolutional code. With tail-biting convolutional codes, a frame or block of information is encoded and communicated in a block-wise manner. The term "tail-biting" is used to refer to the fact that the encoder begins and ends in the same encoder state. The decoder is aware that the encoder begins and ends in the same state but is unaware of the value (or identity) of that state. The maximum likelihood decoder for convolutional codes is known in the art as a Viterbi decoder. As is well known, the Viterbi decoder treats the problem of decoding a sequence of received symbols as a problem of finding the most likely sequence of uncorrupted symbols given an actual corrupted sequence received. The maximum likelihood decoder for tail-biting convolutional codes employs Viterbi decoding, but can place great demands on computational resources. Conversely, if computational resources are minimised, the accuracy of the Viterbi decoding can suffer.
DISCLOSURE OF INVENTION The present invention seeks to solve one or more of the above problems, or to improve upon those problems at least in part.
In one embodiment of the invention, there is provided a decoding method of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code, comprising: storing the N received branchwords in memory; successively performing Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; determining a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; first performing a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; second performing a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting a derived tail-biting path, if the second and third encoder states are identical.
In another embodiment of the invention, if the second and third encoder states are not identical, then the method may further comprise: replacing the second encoder state with the third encoder state; repeating the second performing; and outputting the derived tail biting path.
Conveniently, the sequence of branchwords in the successively performing may be formed from a logically circular reading of the N received branchwords stored in the memory.
In still another embodiment of the invention, S may equal T. Conveniently, the first block may comprise S consecutive branchwords from the end of the second block of N received branchwords.
Moreover, the third block may comprise T consecutive branchwords from the start of the second block of N received branchwords.
Another aspect of the invention provides a decoding device of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code, comprising: a memory storing the N received branchwords; and a data processing unit comprising: successively performing unit which successively performs Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; determining unit which determines a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; first performing unit which performs a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; second performing unit which performs a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting unit which outputs a derived tail-biting path, if the second and third encoder states are identical.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the present invention will appear from the following description taken as a non-limiting example with reference to the following drawings, in which:
FIG 1 depicts a related convolutional encoder.
FIG. 2 presents a single state - transition trellis section reflecting the operation of the encoder shown in FIG. 1.
FIG. 3 depicts a state transition trellis showing the operation of the encoder of FIG. 1 given a particular starting state and information bits for coding.
FIG. 4 depicts an exemplary radio receiver system including a digital signal processor which acts to decode received branchwords produced by the encoder shown in FIG. 1.
FIG. 5 depicts the manner in which a block of received branchwords is stored in a memory device forming part of the radio receiver shown in FIG 4.
FIG. 6 is a flow chart depicting the sequence of operations performed by the digital signal processor forming part of the radio receiver shown in FIG 4 during decoding of the block of received branchwords produced by the encoder shown in FIG. 1.
BEST MODE FOR CARRYING OUT THE INVENTION For the purpose of clarity of the following description, identical features and steps in the drawings illustrating the related art and those illustrating the present invention will be given the same reference numbers.
FIG. 1 depicts an illustrative convolutional encoder having a rate of 1/2, namely for every information bit to be coded, the encoder produces two output bits (i.e. a two-bit branchword). The encoder 10 comprises two single bit memory cells 12 and 14 and two adder circuits 16 and 18. The memory cell 12 and adder circuits 16 and 18 receive a sequence of information bits s(i) to be encoded. The memory cell 12 provides its contents to memory cell 14 with each new information bit received. The encoder may be viewed as comprising an "upper" and "lower" path, each path including an adder circuit and connections to the information bit stream and one or both memory cells 12 and 14.
The output of the upper path of the encoder (i.e. the path which includes the adder circuit 16) comprises a first bit of a generated branchword. This output is generated by adding together the current bit and the two previous bits. If the resulting sum is odd, the adder 16 outputs a logical 1; if the resulting sum is even, the adder 16 outputs a logical 0. The output of the "lower" path (the path which includes the adder circuit 18) comprises the second bit of the branchword. This output is generated by adding together the current bit and the bit which is two bits earlier than the current bit. Again, if the resulting sum is odd, the adder 18 outputs a logical 1; if the resulting sum is even, the adder 18 outputs a logical 0. Since only three bits are used to determine an output branchword, this encoder is said to have a constraint length of three. Its memory is two. The more output bits per input bit and the longer the constraint length, the more powerful the code - that is, the more robust the code will be to channel noise. It will be appreciated that the encoder depicted in FIG. 1 is exemplary only, and that in practical embodiments of the invention a greater number of memory cells and adder circuits may be used to generate a greater number of output bits in each branchword.
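Purely as a non-limiting illustration, the encoder behaviour just described can be sketched in a few lines of Python. The function name is arbitrary, and the bit ordering within each emitted branchword string is an assumption: the lower-path bit is listed first, which reproduces the branchword sequence accompanying FIG. 3.

```python
def encode(bits):
    """Sketch of the FIG. 1 rate-1/2 encoder (constraint length 3, memory 2).

    Upper path (adder 16): current bit plus the two previous bits, modulo 2.
    Lower path (adder 18): current bit plus the bit two places earlier, modulo 2.
    Both memory cells start at 0, i.e. encoder state 00.
    """
    m1, m2 = 0, 0                  # memory cells 12 and 14
    branchwords = []
    for s in bits:
        upper = (s + m1 + m2) % 2  # adder 16: 1 if the sum is odd, else 0
        lower = (s + m2) % 2       # adder 18: 1 if the sum is odd, else 0
        # Assumed bit order: lower-path bit first (matches the FIG. 3 labels).
        branchwords.append(f"{lower}{upper}")
        m1, m2 = s, m1             # cell 12 shifts its old content into cell 14
    return branchwords

print(encode([1, 0, 1, 1, 0, 0]))  # the FIG. 3 example input
# → ['11', '01', '00', '10', '10', '11']
```

Feeding in the FIG. 3 example input 101100 from state 00 yields the branchword sequence 11, 01, 00, 10, 10, 11 given in the description of FIG. 3.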
The operation of the convolutional encoder shown in FIG. 1 may be represented conventionally by a trellis diagram such as that presented in FIG. 2. The trellis describes how the states of the encoder can change from one information bit time to the next. The encoder state is simply the contents of the encoder memory cells at any one time, read as a state "word". On both the left and right sides of the trellis are the allowable states of the encoder: 00, 01, 10 and 11. The states on the left side of the trellis represent the current state of the encoder. The states on the right side of the trellis represent the next state of the encoder.
For example, regardless of the value of the current bit, if the two previous bits are both 0 (such that the contents of the memory cells 12 and 14 are both 0), the encoder is said to be in state 00 (which is the trellis node in the top left hand corner of the trellis). If the current bit is a 1, the arrival of the next subsequent bit will mean that the encoder transitions to state 10. That is, with the arrival of the next bit, the bit in the memory cell 14 is replaced by the bit in the memory cell 12 (0) and the bit in the memory cell 12 is replaced by the current bit (1). This transition is indicated by the diagonal line beginning at the current state 00 at the top left of the trellis and extending downwards and across to the next state 10, the second state from the bottom on the right side of the trellis. Associated with this state transition is an indication (in parentheses) of the output branchword of the encoder - in this case, 11.
If the current bit were 0 instead of 1, the arrival of the next subsequent bit would mean that the encoder transitions to the same state, 00 (as shown by the horizontal line across the top of the trellis). The trellis diagram indicates all allowable transitions in state by the encoder. According to the diagram shown in FIG. 2, for example, the encoder cannot transition from state 00 to state 11 (note the absence of a line connecting 00 on the left with 11 on the right). This may be seen directly from the fact that states change one bit at a time. Multiple trellises of the type shown in FIG. 2 are concatenated together (as is conventional) to form a trellis indicating a sequence of encoder state transitions over time. The trellis shown in FIG. 3 represents the encoding of the information bit sequence 101100... by an encoder starting in state 00. The trellis comprises six individual trellis sections of the type shown in FIG. 2. In the example shown in FIG. 3, the input bit stream causes the sequence of states indicated by the solid line, starting with state 00: 10, 01, 10, 11, 01, 00.... Discrete time i is indicated across the top of the trellis. The encoder outputs the branchwords shown in parentheses: 11, 01, 00, 10, 10, 11.... Each of the state transitions indicated by the solid line traversing a trellis section is an allowed transition corresponding to a given current state and an information bit to encode. Other potential allowed state transitions are shown in dashed lines.
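The allowed transitions of a FIG. 2 trellis section can be enumerated mechanically from the encoder description. The following sketch is illustrative only; the function name and the state packing (the state word read as cell 12 then cell 14) are assumptions.

```python
def step(state, bit):
    """One trellis section: map (current state, input bit) to
    (next state, (upper, lower) output bits).  The 2-bit state packs
    memory cell 12 into the high bit and cell 14 into the low bit."""
    m1, m2 = state >> 1, state & 1
    next_state = (bit << 1) | m1              # the new bit enters cell 12
    return next_state, ((bit + m1 + m2) % 2, (bit + m2) % 2)

# Enumerate every allowed transition of the trellis section.
for state in range(4):
    for bit in (0, 1):
        nxt, (u, l) = step(state, bit)
        print(f"{state:02b} --{bit}--> {nxt:02b}   branchword ({u}, {l})")
```

Running the loop confirms the two properties relied on below: each state has exactly two successors and exactly two predecessors, and state 11 is not reachable from state 00 in a single step.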
As can be seen from FIG. 3, for any given state in a trellis at a particular moment in time, there are two predecessor states from which a transition to the given state can possibly occur. This can be seen from either FIG. 2 or FIG. 3, where the state on the right side of a trellis section is associated with two states on the left side of the section by two transition paths. Moreover, given a particular starting state, any particular stream of information bits to be encoded will result in a unique path through the trellis. These two points provide the basis for the application of Viterbi decoding to branchwords produced by a convolutional encoder.
Code words generated by the illustrative encoder shown in FIG. 1 are communicated through a communication channel to a decoder. The job of the decoder is to determine the sequence of information bits which were coded by the encoder. This determination is based on the branchwords received by the decoder. Assuming a perfect communication channel and knowledge of the encoder's starting state, this task is relatively straightforward. The decoder employs a trellis descriptive of the state transitions of the encoder and, knowing the starting state, uses the received branchwords to trace the state transitions taken by the encoder when encoding. Based on these state transitions, the sequence of bits causing such transitions may be determined.
As a general matter, perfect communication channels are not encountered in the real world. Therefore, real decoders must be able to cope with the fact that some of the received branchwords contain bit errors. For example, while the encoder may generate the branchword 00, the decoder may receive the branchword 01. Thus, the decoder can be misled in its knowledge of the sequence of states taken by the encoder. Unlike a conventional Viterbi decoder, for which the encoder's starting and ending states are always equal to zero, a tail-biting Viterbi decoder is not aware of the encoder's starting and ending states. The only knowledge that a tail-biting Viterbi decoder has is that the encoder's starting and ending states should, ideally, be identical. With imperfect knowledge of the encoder's starting state and the sequence of subsequent states, however, the decoder may make errors in determining the encoder's information bits.
As is well known in the art, the problem of channel errors is mitigated with the use of a Viterbi decoder. A Viterbi decoder selects the most likely path through a coder trellis given branchwords which may contain bit errors. It can do so from any of a number of starting states (assuming the decoder has no knowledge of the starting state). The selection of the most likely path is made progressively, one received branchword at a time. As a result of applying the Viterbi technique to each successive received branchword, a path metric is maintained which reflects the likelihood that the path associated with that metric is the path actually taken by the encoder.
As part of the determination of the best estimate of the path taken by the encoder, a decision vector is generated which reflects, for each state (at a given discrete time), which of the two possible paths coming into the state is the better path. The vector records the "better path" decision for each state in the trellis. Paths which are not selected as a better path are said to be "pruned". Pruned paths will not have an effect on the final decoding of the branchwords. In a real world environment, channel symbols are corrupted by noise and interference. In order to provide more decoding information to the Viterbi decoder, soft received branchwords are used to calculate the branch and path metrics for the path selection. These soft received branchwords are real numbers. The term branchword(s) in the following discussion assumes soft branchwords.
There are at most two paths which may enter a state. Therefore, the decision on which path to maintain and which path to prune may be represented by a single bit, as is conventional. In the illustrative embodiment of the encoder shown in FIGs. 1 and 2, there are four states at each discrete point in time. Thus, at each such time, a decision vector of four bits is determined and saved in memory. Once the Viterbi technique has been applied to the received branchwords, the saved decision vectors provide the basis for a related Viterbi traceback procedure. It is this traceback procedure which decodes the received branchwords. Further details of conventional Viterbi decoding are presented in Clark and Cain, Error-Correction Coding for Digital Communications, Chapter 6 (1981), which is hereby incorporated by reference in its entirety.
FIG. 4 depicts an illustrative embodiment of a Viterbi decoder 20 forming part of a radio receiver system. The decoder 20 is coupled to an antenna 22 and radio receiver circuitry 24 which receives an analogue radio signal x(t) and provides digital branchwords c(i) to the decoder 20 at discrete times. The decoder 20 comprises a Digital Signal Processor (DSP) 26 coupled to a Read Only Memory (ROM) 28 and Random Access Memory (RAM) 30. The RAM 30 stores, inter alia, a buffer of N received branchwords for use by the present invention, as well as the results of Viterbi updates. The decoder 20 operates to decode branchwords c(i) received from the radio communication channel. These branchwords are generated by an encoder employing a tail-biting convolutional code. Such an encoder may be that described above with respect to FIGs. 1 and 2. Because the channel is noisy, the branchwords are imperfectly communicated. That is, the branchwords may contain one or more bit errors. The decoding operation carried out by the decoder 20 attempts to extract the communicator's information from these branchwords.
The decoder 20 employs related Viterbi decoding to decode a block of N received branchwords produced by a convolutional encoder using a tail-biting convolutional code. However, it performs this decoding by successively performing Viterbi updates on a sequence of branchwords which is longer than the N received branchwords produced by the convolutional encoder. The sequence of branchwords on which the Viterbi updates are successively performed is constructed by adding a block of branchwords to the start of the N received branchwords and another block of branchwords to the end of the N received branchwords. Preferably, this is done in the manner depicted in FIG. 5. As can be seen in this figure, the sequence of branchwords may be formed from a logically circular reading of the N received branchwords stored in the RAM 30. A first block consisting of S consecutive branchwords may be read from the end of the block of N received branchwords stored in the RAM 30. Similarly, a block of T consecutive branchwords may be read from the start of the block of N received branchwords stored in the RAM 30. By first reading a first block 40 of S consecutive branchwords from the end of the N received branchwords, then reading a second block 42 consisting of the N received branchwords and finally reading a third block 44 consisting of T consecutive branchwords from the start of the second block of N received branchwords, the sequence of branchwords on which the Viterbi updates are successively performed can be constructed in a manner which is computationally simple to perform. Each Viterbi update generates path metrics and a decision vector based on those metrics, as described previously.
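In software, the reading order of FIG. 5 reduces to three slice reads of the stored buffer of N branchwords. A minimal sketch follows; the function name is an assumption, and branchwords are treated as opaque list items.

```python
def circular_extend(received, S, T):
    """Build the N+S+T sequence of FIG. 5: the last S stored branchwords,
    then all N received branchwords, then the first T - i.e. a logically
    circular reading of the buffer."""
    assert 0 < S < len(received) and 0 < T < len(received)
    return received[-S:] + received + received[:T]
```

For example, for N = 8 branchwords labelled 0 to 7 with S = 2 and T = 3, the constructed sequence is 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2.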
In the present context of Viterbi decoding, the decoder 20 makes use of the following principle. If we start accumulating branch metrics along the paths through the trellis shown in FIG. 3, the following observation holds: whenever two paths merge into one state, only the most likely path (the best path or the survivor path) needs to be retained, since for all possible extensions to these paths, the path which is currently better will always be better. For any given extension to the paths, both paths are extended by the same branch metrics. This process is described by the add-compare-select (ACS) recursion: the path with the best path metric leading to each state is determined recursively for every step in the trellis.
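One ACS step for the four-state trellis of FIG. 2 might be sketched as below. The soft-bit sign convention (positive values leaning towards a transmitted 0), the correlation branch metric and the function name are illustrative assumptions, not the patent's prescribed implementation.

```python
import math

def viterbi_update(metrics, soft_bw):
    """Add-compare-select over the 4-state trellis of the FIG. 1 encoder.

    `metrics` holds one path metric per state (higher is better); `soft_bw`
    holds the two received soft bits as real numbers.  Returns the new
    metrics and the decision vector: for each state, the surviving
    (previous state, input bit) pair, or None if the state is unreachable."""
    new = [-math.inf] * 4
    decision = [None] * 4
    for prev in range(4):
        m1, m2 = prev >> 1, prev & 1           # state packs cells 12 and 14
        for bit in (0, 1):
            upper, lower = (bit + m1 + m2) % 2, (bit + m2) % 2
            nxt = (bit << 1) | m1
            # Branch metric: map expected bit 0 -> +1, 1 -> -1 and correlate
            # with the received soft values.
            bm = (1 - 2 * upper) * soft_bw[0] + (1 - 2 * lower) * soft_bw[1]
            if metrics[prev] + bm > new[nxt]:  # compare-select: keep the best
                new[nxt] = metrics[prev] + bm
                decision[nxt] = (prev, bit)
    return new, decision
```

Starting from a known state 00 and feeding the noiseless soft version of branchword 11, the update correctly promotes the transition 00 to 10 caused by an input bit of 1.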
Accordingly, as shown in FIG. 6, the decoder 20 successively performs Viterbi updates on the sequence of N+S+T branchwords which have been read from the RAM 30 in the manner depicted in FIG 5. The Viterbi updates generate path metrics which are updated for each branchword until the end of the sequence of N+S+T branchwords is reached.
At this point, the decoder 20 determines, from the best path metric, a first encoder state most likely to have generated the final branchword in the sequence. A Viterbi traceback procedure is then performed from that first encoder state to determine a second encoder state at the start of the third block of branchwords 44. Starting from that second encoder state, a second Viterbi traceback procedure is then performed from the end of the second block 42 of branchwords to the start of the second block 42 of branchwords in order to determine a third encoder state. If the second and third encoder states are found to be identical (i.e. if the starting state and the ending state of the Viterbi traceback procedure performed on the second block 42 of branchwords are found to be identical), then the best tail-biting path has been found by the decoder 20.
If the second and third encoder states are not identical, then the decoder 20 can optionally repeat the Viterbi traceback procedure performed on the second block 42 of branchwords, replacing the second encoder state with the third encoder state. The derived tail-biting path is then output. It has been found that further iterations of the Viterbi traceback procedure are generally not required. Conveniently, the values of S and T are identical; that is, the first and third blocks consist of the same number of branchwords forming a subset of the N received branchwords stored in the RAM 30. In other embodiments of the invention, though, the first and third blocks of branchwords may include differing numbers of branchwords.
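Putting the steps above together, the whole procedure - circular extension, successive updates, the two tracebacks and the tail-biting check with its single optional repeat - may be sketched end-to-end. This is a hedged illustration: it uses a hard-decision agreement metric for brevity (the text contemplates soft branchwords), and all names are assumptions.

```python
import math

N_STATES = 4  # encoder state packs memory cells as 2*m1 + m2 (FIG. 1 encoder)

def branch(state, bit):
    """Next state and expected (upper, lower) branchword bits for one input."""
    m1, m2 = state >> 1, state & 1
    return (bit << 1) | m1, ((bit + m1 + m2) % 2, (bit + m2) % 2)

def update(metrics, bw):
    """Add-compare-select step with a hard-decision agreement metric."""
    new, dec = [-math.inf] * N_STATES, [None] * N_STATES
    for prev in range(N_STATES):
        for bit in (0, 1):
            nxt, (u, l) = branch(prev, bit)
            m = metrics[prev] + (u == bw[0]) + (l == bw[1])
            if m > new[nxt]:
                new[nxt], dec[nxt] = m, (prev, bit)
    return new, dec

def traceback(decs, end_state):
    """Walk decision vectors backwards; return (start state, decoded bits)."""
    state, bits = end_state, []
    for d in reversed(decs):
        prev, bit = d[state]
        bits.append(bit)
        state = prev
    return state, bits[::-1]

def decode_tailbiting(received, S, T):
    """Decode N tail-biting branchwords via the extended S+N+T sequence."""
    N = len(received)
    assert 0 < S < N and 0 < T < N
    seq = received[-S:] + received + received[:T]    # FIG. 5 reading order
    metrics, decs = [0.0] * N_STATES, []
    for bw in seq:                                   # successive Viterbi updates
        metrics, d = update(metrics, bw)
        decs.append(d)
    # First encoder state: best path metric at the end of the whole sequence.
    first = max(range(N_STATES), key=metrics.__getitem__)
    # Traceback over the third block (last T updates) gives the second state.
    second, _ = traceback(decs[S + N:], first)
    # Traceback over the second block (middle N updates) gives the third state.
    third, bits = traceback(decs[S:S + N], second)
    if third != second:          # optionally repeat once from the third state
        _, bits = traceback(decs[S:S + N], third)
    return bits
```

Encoding the FIG. 3 input 101100, whose first and last encoder states are both 00 and which is therefore a valid tail-biting block, and decoding with S = T = 2 recovers the original six bits.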
The above described method of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code advantageously provides more reliable path metrics for use during a Viterbi traceback procedure by lengthening the sequence of branchwords on which Viterbi updates are performed. It has been found that the best tail-biting path is found using this method by performing only one traceback procedure on the second block of N received branchwords, or at most two traceback procedures. Moreover, the manner in which the sequence of branchwords is constructed as depicted in FIG. 5 is computationally very simple to perform, so that the improved accuracy of the above described method is achieved with minimal additional computational resources.
Although in the above described embodiments the invention is implemented primarily using digital signal processing, in other embodiments the present invention may be implemented primarily in hardware using, for example, hardware components such as an application specific integrated circuit. The present invention may also be implemented primarily using computer software or a combination of both hardware and software. While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the present invention is not limited to these embodiments. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims. This application is based upon and claims the benefit of priority from Australian provisional patent application No. 2008906238, filed on 2 December, 2008, the disclosure of which is incorporated herein in its entirety by reference.
INDUSTRIAL APPLICABILITY
According to the present invention, it is possible to provide a method of decoding tail-biting convolutional codes employing Viterbi decoding which minimises the demands placed upon memory storage and computational resources, while optimising the accuracy of such decoding.

Claims

1. A decoding method of decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code, comprising: storing the N received branchwords in memory; successively performing Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords, and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; determining a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; first performing a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; second performing a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and outputting a derived tail-biting path, if the second and third encoder states are identical.
2. The decoding method according to claim 1, wherein if the second and third encoder states are not identical, then the method further comprises: replacing the second encoder state with the third encoder state; repeating the second performing; and outputting the derived tail biting path.
3. The decoding method according to claim 1, wherein the sequence of branchwords in the successively performing is formed from a logically circular reading of the N received branchwords stored in the memory.
4. The decoding method according to claim 1, wherein S equals T.
5. The decoding method according to claim 1 , wherein the first block comprises S consecutive branchwords from the end of the second block of N received branchwords.
6. The decoding method according to claim 1, wherein the third block comprises T consecutive branchwords from the start of the second block of N received branchwords.
7. A decoding device for decoding N received branchwords produced by a convolutional encoder using a tail-biting convolutional code, comprising: a memory storing the N received branchwords; and a data processing unit comprising: a successive performing unit which successively performs Viterbi updates on a sequence of branchwords, the sequence comprising a first block comprising S consecutive branchwords from the N received branchwords, a second block comprising the N received branchwords and a third block comprising T consecutive branchwords from the N received branchwords, where S and T are less than N, and where the Viterbi updates generate updated path metrics; a determining unit which determines a first encoder state at the end of the third block most likely to have generated the final branchword in the sequence from the best path metric; a first performing unit which performs a Viterbi traceback procedure from that first encoder state at the end of the third block to determine a second encoder state at the start of the third block of branchwords; a second performing unit which performs a Viterbi traceback procedure from that second encoder state at the start of the third block to determine a third encoder state at the start of the second block of branchwords; and an outputting unit which outputs a derived tail-biting path, if the second and third encoder states are identical.


Non-Patent Citations

Clark, G. C. and Cain, J. B., "Error-Correction Coding for Digital Communications", Chapter 6, 1981.
Cox, R. and Sundberg, C., "A Circular Viterbi Algorithm for Decoding Tailbiting Convolutional Codes", Proceedings of the 43rd IEEE Vehicular Technology Conference, 18 May 1993, pages 104-107.
Shao, R. Y. et al., "Two Decoding Algorithms for Tailbiting Codes", IEEE Transactions on Communications, vol. 51, no. 10, October 2003, pages 1658-1665.
Sato, T. et al., "Complexity Reduction Method of Suboptimal Maximum Likelihood Decoding Algorithms for Tail-Biting Convolutional Codes", IEICE Technical Report, Information Theory, vol. 104, no. 229, 22 July 2004, pages 41-46.
Wang, Y.-P. et al., "To Bite or Not to Bite - A Study of Tail Bits Versus Tailbiting", Seventh IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'96), Taipei, Taiwan, vol. 2, 15 October 1996, pages 317-321.


