WO2007000708A1 - Viterbi decoder and decoding method thereof - Google Patents

Info

Publication number
WO2007000708A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
memory cells
path metric
state node
previous
Prior art date
Application number
PCT/IB2006/052071
Other languages
French (fr)
Inventor
Xia Zhu
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2007000708A1 publication Critical patent/WO2007000708A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/4107Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing add, compare, select [ACS] operations

Definitions

  • the present invention relates to a decoder and the decoding method thereof, specifically to a Viterbi decoder and the decoding method thereof, and more particularly to a Viterbi decoder capable of saving memory cells and the decoding method thereof.
  • the Viterbi algorithm first became widely known as a decoding method for convolutional coding. Since then, it has been widely applied to data communication, data recording, digital signal processing and so on. This algorithm may be used, for example, to reduce recording errors in storage media, to eliminate inter-symbol interference, and to improve character recognition ability. In particular, the Viterbi algorithm is especially suitable for performing ML (Maximum Likelihood) estimation over convolutionally coded data transmitted via band-limited channels.
  • the convolutional coder outputs a set of coded bits G0, G1, G2 on the basis of the input bits and the current state of the shift register D.
  • Fig.2 illustrates the trellis diagram corresponding to the convolutional coder shown in Fig.1.
  • circles (state nodes) in each column represent the likely states 00, 01, 10 and 11 of the shift register, denoted as S0-S3 respectively, and the time increases from the left column to the right column.
  • a branch, formed by the transfer from one state to another, depends on the input bit values and the shifting direction of the shift register in Fig.1, and corresponds to the output coded bits.
  • when S0 is the current state of the register, the shift register shown in Fig.1 shifts rightward by one bit and the state transits to S2 (10), as indicated by the arrow directed to S2; the corresponding output bits are 111.
  • the trellis diagram as shown in Fig.2 covers all likely state transitions of the shift register in Fig.1.
  • the Viterbi algorithm searches the trellis diagram of Fig.2 according to the convolutionally coded data received, to find a state transition sequence having the smallest error relative to the received coded data, that is, the minimum error path.
  • the decoded data obtained along the minimum error path is the ML estimate over the convolutionally coded data.
  • Viterbi algorithm estimates the occurrence probability of each path, i.e. calculates the metric of each path, during search in the trellis diagram of Fig.2.
  • BM: Branch Metric
  • PM: Path Metric
  • a BM value represents the occurrence probability of a branch and it is obtained by comparing the received signal and the expected value. Its metric includes Hamming distance, Euclidean distance, Manhattan distance or the like between code words. Apparently, the smaller the metric is, the higher the occurrence probability of the branch is.
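As a hedged illustration of the Hamming-distance metric mentioned above (the function name is illustrative, not from the patent), a hard-decision branch metric can be sketched as:

```python
def branch_metric(received, expected):
    """Hamming distance between a received hard-decision code word and
    the expected output bits of a branch; smaller means more likely."""
    return sum(r != e for r, e in zip(received, expected))

# A branch expecting output 111 against received 110 costs one bit error.
print(branch_metric((1, 1, 0), (1, 1, 1)))  # → 1
```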
  • a PM value represents the probability for a set of symbols in the received signals to transit to the current state node. The PM value of the current state node can be calculated by using the PM value of the previous state node and the BM value between the state nodes.
  • Fig.3 illustrates an ACS (Add, Compare, Select) unit 100 for computing the current PM value of each state node in Viterbi algorithm.
  • PM_IN1 and PM_IN2, the previous PM values of two state nodes (for example, S0 and S1), are fed into two adding units 110 respectively, to be added with BM_1 and BM_2, the BM values of their respective state transitions (S0->S2 and S1->S2).
  • in the comparing unit 120, the two sums of the respective BM and PM values are compared.
  • selecting unit 130 selects the smaller one as the current PM value of the state node (S2), that is,
  • PM_OUT = min(PM_IN1 + BM_1, PM_IN2 + BM_2).
  • the previous PM values of two state nodes can be used together to calculate the current PM values of the two state nodes.
  • the previous PM values of S0 and S1 can be used to calculate the current PM values of S0 and S2. Therefore, two ACS units 100 sharing the same inputs can be grouped to form a butterfly unit, and thus each butterfly unit may compute the PM values of two state nodes simultaneously.
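A minimal sketch of the ACS operation and of a butterfly built from two ACS units sharing the same pair of inputs (the function names and BM arguments are illustrative assumptions, not patent text):

```python
def acs(pm_in1, pm_in2, bm1, bm2):
    """Add-Compare-Select: add each previous PM to its branch metric,
    compare the two sums, and select the smaller as the new PM."""
    return min(pm_in1 + bm1, pm_in2 + bm2)

def butterfly(pm_a, pm_b, bms_upper, bms_lower):
    """Two ACS units fed by the same pair of previous PM values,
    producing the current PM values of two state nodes at once."""
    return (acs(pm_a, pm_b, *bms_upper), acs(pm_a, pm_b, *bms_lower))

# Previous PMs 2 and 5 with branch metric pairs (1, 0) and (0, 2):
print(butterfly(2, 5, (1, 0), (0, 2)))  # → (3, 2)
```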
  • Viterbi algorithm calculates the current PM value of each state node based on the previous PM value of each state node by using butterfly units.
  • the state node with the smallest PM value is selected from the state nodes and the state transition sequence formed by the group of state nodes is the minimum error path.
  • One prior art implementation method of Viterbi decoding is to employ a first memory array and a second memory array, each comprising memory cells having the same number as the states (2^(K-1)) of convolutional coding; for example, each memory array comprises 4 memory cells in the case shown in Fig.2.
  • the first memory array is used to store the previous value of each state node and the second memory array to store the calculated current PM value of each state node.
  • the current PM value of each state node in the second memory array is copied into the first memory array, to calculate the PM value of each state node in the next stage.
  • the deficiency of this Viterbi decoding implementation method resides in that two full-size memory arrays (each comprising 2^(K-1) memory cells) are required. This occupies valuable physical space, especially when the number of the coding states is large. In addition, after the current PM value of each state node is calculated, all these PM values need to be copied into the first memory array, which requires a large number of read and write transactions. To reduce read and write operations, the above two memory arrays are used in an alternate way in another prior art implementation method of Viterbi decoding. Specifically, in the first stage, the first memory array is used to store the previous PM value of each state node and the second memory array is used to store the current PM value of each state node.
  • the functions for the two memory arrays are exchanged, that is, the second memory array is used to store the previous PM value of each state node and the first memory array is used to store the current PM value of each state node.
  • This approach can omit read and write operations for copying the PM values, but two full-size memory arrays are still required, hence the problem of occupying valuable physical space still exists. It is, therefore, necessary to provide a novel method and apparatus for reducing the number of the occupied memory cells and read/write operations during Viterbi decoding process by arranging the memory cells efficiently and storing the PM value of each state node rationally.
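The alternating (ping-pong) use of the two arrays described above can be sketched as follows; `run_stages` and `update_stage` are illustrative placeholders standing in for the butterfly computations, not patent terminology:

```python
def run_stages(initial_pm, num_stages, update_stage):
    """Prior-art ping-pong scheme: two full-size arrays swap roles each
    stage, so no copy-back of the current PM values is needed."""
    prev, cur = list(initial_pm), [0] * len(initial_pm)
    for _ in range(num_stages):
        update_stage(prev, cur)   # read previous PMs, write current PMs
        prev, cur = cur, prev     # exchange the roles of the two arrays
    return prev

# Placeholder update: each "stage" just adds 1 to every metric.
def dummy_update(prev, cur):
    for i, pm in enumerate(prev):
        cur[i] = pm + 1

print(run_stages([0, 0, 0, 0], 3, dummy_update))  # → [3, 3, 3, 3]
```

The cost the invention targets remains: both arrays must be full size, 2^(K-1) cells each.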
  • An object of the present invention is to provide a Viterbi decoding method and apparatus which reduces the number of the memory cells for storing the PM value of each state node during the Viterbi decoding process, and reduces unnecessary read and write operations for the PM values, while keeping the data read/write logic simple during the decoding process.
  • the present invention proposes a decoding method, comprising the steps of: receiving data to be decoded; determining, according to a state transition diagram for a corresponding decoding algorithm, a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the diagram; calculating and storing the current path metric of each state node according to the calculation order and the received data, wherein if the stored previous path metric of a state node is no longer useful for the calculation, the previous path metric is replaced with the current path metric of the state node; and searching for an optimal path according to the calculated path metric of each state node, so as to implement ML (Maximum Likelihood) decoding over the received data.
  • the invention proposes a decoder, comprising: an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node, so as to implement ML decoding over the received data; wherein the addressing unit addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for calculation of the current path metrics of other state nodes, the memory cell storing the previous path metric is reused to store the current path metric of that state node.
  • the invention proposes a UE (User Equipment), comprising: a transmitting unit, for transmitting radio signals; a receiving unit, for receiving data; a decoder, comprising an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node stored in the memory cells, so as to implement ML decoding over the received data.
  • Fig.2 illustrates a trellis diagram for the convolutional coder of Fig.1;
  • Fig.3 illustrates the structure of an ACS unit used in a Viterbi decoder;
  • Fig.5 illustrates the structure of a butterfly unit for calculating the PM value of each state node in the state diagram shown in Fig.4;
  • Fig.6 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a first embodiment of the invention;
  • Fig.7 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a second embodiment of the invention;
  • Fig.9 illustrates the structure of a butterfly unit for calculating the PM value of each state node in the state diagram shown in Fig.8;
  • Fig.10 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a third embodiment of the invention;
  • Fig.11 illustrates the structure of a Viterbi decoder according to the present invention.
  • the previous PM value of each state node may be used to calculate the current PM values of two state nodes.
  • the previous PM value of S0 is only used to calculate the current PM values of S0 and S2, and will not be used to calculate the current PM values of other state nodes.
  • if the current PM value of S2 is calculated first, then once the current PM value of S0 is calculated, the previous PM value of S0 is no longer useful, so the current PM value of S0 can be stored in the memory cell holding its previous PM value, thereby saving a memory cell.
  • the previous PM value of S2 is still useful for calculating the current PM values of S1 and S3, so the calculated current PM value of S2 needs to be stored in a new memory cell, to avoid overwriting the previous PM value of S2.
  • if the calculation order for the current PM value of each state node is arranged rationally according to the trellis diagram before the PM values are calculated, the number of memory cells occupied during the Viterbi decoding process can be reduced greatly: a memory cell holding the previous PM value of a state node that is not useful for subsequent calculation is reused, as much as possible, to store the calculated current PM value of that state node, and the current PM value of each state node is then calculated and stored according to this calculation order.
  • the Viterbi decoding implementation method as proposed in the present invention will be introduced as follows, exemplified by decoding the convolutional codes in the state diagram of Fig.4.
  • four butterfly units are required to calculate the current PM value of each state node respectively.
  • the specific structure for each butterfly unit is shown in Fig.5.
  • the inputs for both of the two ACS units are the previous PM values of S(2j) and S(2j+1), and their respective outputs are the current PM values of S(j) and S(j+4).
  • the ACS unit 100A for calculating the PM value of S(j) is denoted as the ACS upper branch, and the ACS unit 100B for calculating the PM value of S(j+4) is denoted as the ACS lower branch.
  • Fig.6 illustrates the distribution and usage of the memory cells during decoding the convolutional codes shown in Fig.4 when employing the Viterbi decoding implementation method according to the first embodiment of the invention.
  • each column represents the state of each memory cell in a stage (or at an instant) during the Viterbi decoding process.
  • Each circle in a column represents a memory cell, and the reference number Sx in the circle indicates the PM value of which state node is stored in the memory cell. In this embodiment, the reference number Sx also identifies each memory cell.
  • Different columns in Fig.6 indicate the states of the memory cells in different stages during the Viterbi decoding process, and the time increases from the left column to the right column. Only four stages are shown in Fig.6, wherein calculation of the current PM value of each state node in each stage needs the PM value of each state node in the previous stage.
  • In Fig.6, eight memory cells with addresses 0-7 are referred to as the basic memory cells.
  • the initial PM values of the eight states S0-S7 shown in Fig.4 are stored in the eight basic memory cells.
  • Three memory cells with addresses 8-10 are referred to as the extended memory cells.
  • when the previous PM value of a state node is still useful for calculation of the current PM values of other state nodes, the calculated current PM value of that state node is stored in an extended memory cell.
  • the initial PM values for states S0-S7 are stored in the eight basic memory cells respectively.
  • the three extended cells denoted as S4', S5' and S6' are empty.
  • the above four butterfly units need the previous PM values of the eight states stored in the basic memory cells in the initial stage, to calculate the PM values in the current stage.
  • the specific procedure is as follows.
  • the lower branch of the first butterfly unit calculates the current PM value of S4, by using the previous PM values of S0 and S1 in the memory cells.
  • the previous PM value of S4 is still useful for calculating the current PM values of S2 and S6, so the current PM value of S4 calculated by the butterfly unit is stored into an extended memory cell with address 8, identified with S4' as shown by the dashed line pointing to S4' in Fig.6.
  • the upper branch of the first butterfly unit then calculates the current PM value of S0, by using the previous PM values of S0 and S1 in the memory cells.
  • the previous PM value of S0 is no longer useful for calculating the current PM values of other state nodes, thus the current PM value of S0 calculated by the butterfly unit may be stored into the memory cell with address 0, that is, the current PM value of S0 will overwrite its previous PM value, as indicated by the solid line pointing to S0 in Fig.6.
  • the lower branch of the second butterfly unit first calculates the current PM value of S5. Since the previous PM value of S5 is still useful for calculating the current PM values of S2 and S6, the calculated current PM value of S5 is stored into an extended memory cell with address 9, identified with S5' as shown by the dashed line pointing to S5' in Fig.6. Then, the upper branch of the second butterfly unit calculates the current PM value of S1. The previous PM value of S1 is no longer useful, so the calculated current PM value of S1 may be stored into the memory cell with address 1, that is, the current PM value of S1 will overwrite its previous PM value, as indicated by the solid line pointing to S1 in Fig.6.
  • the lower branch of the third butterfly unit first calculates the current PM value of S6 and stores it into an extended memory cell with address 10, identified with S6', as indicated by the dashed line pointing to S6' in Fig.6. Then, the upper branch of the third butterfly unit calculates the current PM value of S2 and stores it into the memory cell with address 2, that is, the current PM value of S2 will overwrite its previous PM value, as indicated by the solid line pointing to S2 in Fig.6. Finally, the upper branch of the fourth butterfly unit first calculates the current PM value of S3.
  • the calculated current PM value of S3 is stored directly into a basic memory cell with address 3, as indicated by the dashed line pointing to S3 in Fig.6. Then, the lower branch of the fourth butterfly unit calculates the current PM value of S7, and stores it directly into a memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.6.
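Under the assumptions of this first embodiment (K = 4, basic cells 0-7, extended cells 8-10), the odd-stage write destinations walked through above can be summarized in a short sketch; the helper name and offset constant are illustrative, not patent text:

```python
# First embodiment, odd-index stage: S4-S6 go to the extended cells
# 8-10 (their previous values are still needed for later butterflies),
# while S0-S3 and S7 overwrite their own basic cells, whose previous
# values have already been consumed.
EXTENDED_STATES = (4, 5, 6)
OFFSET = 4  # extended cell address = basic address + 4 for S4-S6

def odd_stage_destination(state):
    """Memory cell that receives the freshly computed PM of `state`."""
    return state + OFFSET if state in EXTENDED_STATES else state

print([odd_stage_destination(s) for s in range(8)])
# → [0, 1, 2, 3, 8, 9, 10, 7]
```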
  • the above four butterfly units need the previous PM value of each state node calculated and stored in the memory cells in the first stage.
  • the specific procedure is as follows. First, the lower branch of the first butterfly unit calculates the current PM value of S4 and stores it into the empty basic memory cell with address 4; then, the upper branch of the first butterfly unit calculates the current PM value of S0 and stores it into the memory cell with address 0.
  • the lower branch of the second butterfly unit first calculates the current PM value of S5 and stores it into an empty basic memory cell with address 5. Then, the upper branch of the second butterfly unit calculates the current PM value of Sl and stores it directly into the memory cell with address 1.
  • the lower branch of the third butterfly unit first calculates the current PM value of S6, by using the previous PM values of S4 and S5 stored in the extended memory cells S4' and S5', and stores it into an empty basic memory cell S6 with address 6.
  • the upper branch of the third butterfly unit calculates the current PM value of S2 and stores it into the memory cell with address 2, that is, the current PM value of S2 will overwrite its previous PM value.
  • the upper branch of the fourth butterfly unit first calculates the current PM value of S3, by using the previous PM value of S6' stored in an extended memory cell and the previous PM value of S7 stored in a basic memory cell, and stores it directly into a basic memory cell with address 3.
  • the lower branch of the fourth butterfly unit then calculates the current PM value of S7 by using the previous PM value of S6' stored in an extended memory cell and the previous PM value of S7 stored in a basic memory cell, and stores it into the memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.6.
  • the current PM values of the eight states are again stored in the basic memory cells with addresses 0-7 and the PM values in the extended memory cells are invalid now and they can be considered empty.
  • the calculation order for each butterfly unit in the second stage is substantially the same as that in the first stage, with the difference that the current PM values of S4-S6 are stored into the corresponding empty cells among the basic memory cells in the second stage, and the previous PM values of S4-S6 are read from the extended memory cells.
  • the calculation procedure for the third stage and the odd-index stages thereafter is exactly the same as that for the first stage, and the calculation procedure for the fourth stage and the even-index stages thereafter is exactly the same as that for the second stage.
  • since the extended memory cells are arranged after the basic memory cells and are used to store the PM values of S4-S6, the extended memory cells may be addressed by adding a fixed offset.
  • the memory cell with address 8 for storing S4' in Fig.6 may be addressed by adding a fixed offset 4 to the address 4 of the basic memory cell S4.
  • the extended memory cells may be arranged before the basic memory cells, and in that case the address of an extended memory cell may be obtained by subtracting a fixed offset from the address of the corresponding basic memory cell.
(Embodiment 2)
  • Fig.7 illustrates the distribution and usage of the memory cells during decoding the convolutional codes shown in Fig.4 according to the Viterbi decoding implementation method of the second embodiment of the present invention.
  • like the first embodiment, the second embodiment in Fig.7 uses 11 memory cells, with the difference that the calculation orders for the four butterfly units differ, causing the PM values of S1-S3 to be stored in the extended memory cells.
  • the above four butterfly units need the previous PM values of the eight states stored in the basic memory cells in the initial stage, to calculate the PM values in the current stage.
  • the specific procedure is as follows.
  • the upper branch of the fourth butterfly unit calculates the current PM value of S3, by using the previous PM values of S6 and S7 in the basic memory cells.
  • the previous PM value of S3 is still useful for calculating the current PM values of S1 and S5, so the calculated current PM value of S3 is stored into the extended memory cell with address 10, identified with S3' as shown by the dashed line pointing to S3' in Fig.7.
  • the lower branch of the fourth butterfly unit calculates the current PM value of S7, also by using the previous PM values of S6 and S7 in the basic memory cells.
  • the previous PM value of S7 is no longer useful for calculating the current PM values of other state nodes, so the calculated current PM value of S7 may be stored into a memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.7.
  • the upper branch of the third butterfly unit first calculates the current PM value of S2 and stores it into an extended memory cell with address 9, as indicated by S2'. Then, the lower branch of the third butterfly unit calculates the current PM value of S6 and stores it into a basic memory cell with address 6, that is, the current PM value of S6 will overwrite its previous PM value.
  • the upper branch of the second butterfly unit first calculates the current PM value of S1 and stores it into an extended memory cell with address 8, identified with S1', as indicated by the dashed line pointing to S1' in Fig.7. Then, the lower branch of the second butterfly unit calculates the current PM value of S5 and stores it into a basic memory cell with address 5, that is, the current PM value of S5 will overwrite its previous PM value.
  • the lower branch of the first butterfly unit calculates the current PM value of S4 and stores it directly into a basic memory cell with address 4. Then, the upper branch of the first butterfly unit calculates the current PM value of SO and stores it directly into a basic memory cell with address 0.
  • the current PM values of the eight states are stored respectively in the memory cells identified with S0, S4-S7 and S1'-S3', and the PM values in S1-S3 are invalid now, so these cells can be considered empty.
  • the above four butterfly units need the previous PM value of each state node calculated and stored in the memory cells in the first stage, to calculate the PM values at the current stage.
  • the calculation orders for the four butterfly units are substantially the same as those in the first stage, with the difference that the current PM values of S1, S2 and S3 are stored into the empty cells S1, S2 and S3 among the basic memory cells in the second stage, and the butterfly units read the previous PM values of S1-S3 from the extended memory cells S1'-S3'.
  • the calculation procedure for the odd-index stages is the same as that for the first stage, and the calculation procedure for the even-index stages thereafter is the same as that for the second stage.
  • the extended memory cells may be addressed by adding a fixed offset.
  • the memory cell with address 8 for storing S1' in Fig.7 may be addressed by adding a fixed offset 7 to the address of the basic memory cell S1.
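The fixed-offset addressing of this second embodiment can be sketched as follows, using the offset 7 stated above (the helper name is an illustrative assumption):

```python
OFFSET = 7  # extended cells at addresses 8-10 shadow basic cells 1-3 (S1-S3)

def extended_address(basic_address):
    """Address of the extended cell paired with a basic cell."""
    return basic_address + OFFSET

print(extended_address(1))  # → 8, the cell S1' of Fig.7
```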
  • the extended memory cells may be arranged before the basic memory cells, and in that case the address of an extended memory cell may be obtained by subtracting a fixed offset from the address of the corresponding basic memory cell.
  • Fig.10 illustrates the distribution and usage of the memory cells during the Viterbi decoding implementation process according to the third embodiment of the present invention.
  • the third embodiment shown in Fig.10 also uses eleven memory cells and the calculation orders for the four butterfly units are the same, with the difference that a different state transition structure causes the PM values of S1-S3 to be stored into the extended memory cells.
  • the lower branch of the first butterfly unit calculates the current PM value of S1, by using the previous PM values of S0 and S4 in the basic memory cells. Since the previous PM value of S1 is still useful for calculating the current PM values of other state nodes, the calculated current PM value of S1 is stored into an extended memory cell with address 8, identified with S1', as shown by the dashed line pointing to S1' in Fig.10. Then, the upper branch of the first butterfly unit calculates the current PM value of S0 and stores it directly into the memory cell with address 0, that is, the current PM value of S0 will overwrite its previous PM value, as indicated by the solid line pointing to S0 in Fig.10.
  • the lower branch of the second butterfly unit calculates the current PM value of S3 and the upper branch calculates the current PM value of S2.
  • the two output results of the second butterfly unit are respectively stored into the extended memory cells with addresses 10 and 9.
  • the lower branch of the third butterfly unit calculates the current PM value of S5 and the upper branch calculates the current PM value of S4.
  • the two output results of the third butterfly unit are respectively stored into the basic memory cells with addresses 5 and 4.
  • the upper branch of the fourth butterfly unit calculates the current PM value of S6 and then the lower branch calculates the current PM value of S7.
  • the two output results of the fourth butterfly unit are respectively stored into the basic memory cells with addresses 6 and 7.
  • the above four butterfly units need the previous PM value of each state node calculated in the previous stage and stored in the memory cells, to calculate the PM value of the current stage.
  • the calculation orders for the four butterfly units are the same as those in the odd-index stages, with the difference that the current PM values of S1, S2 and S3 calculated by the butterfly units are stored into the empty cells S1, S2 and S3 among the basic memory cells, and the butterfly units read the previous PM values of S1-S3 from the extended memory cells S1'-S3'.
  • the extended memory cells are consecutively arranged after the basic memory cells and are used for storing the PM values of S1-S3, so the extended memory cells may be addressed by adding a fixed offset.
  • the memory cell with address 8 for storing S1' may be addressed by adding a fixed offset 7 to the address 1 of the basic memory cell S1.
  • decoding for the state diagram shown in Fig.8 may also be implemented by the same method as that performed by the butterfly units in the embodiment shown in Fig.7; in that case, the PM values of S4-S6 are stored in the extended memory cells.
  • the Viterbi decoding implementation method proposed in the present invention has been described above in connection with three embodiments.
  • the constraint length of the codes is K;
  • the number of the basic memory cells is 2^(K-1);
  • the number of the extended memory cells is 2^(K-2) - 1;
  • the state stored in an extended memory cell is Sj, where j ∈ [2^(K-2), 2^(K-1) - 2] or j ∈ [1, 2^(K-2) - 1].
  • the extended memory cells may be arranged sequentially before or after the basic memory cells and may be addressed by adding an appropriate fixed offset, but the location of the extended memory cells is not limited herein and they may be provided together with the basic memory cells as well. Furthermore, in the above three embodiments, if the calculation orders for the upper and lower branches in the last butterfly unit are reversed, the number of the extended memory cells increases by 1. In that case, the number of the extended memory cells is 2^(K-2), and the states stored in the extended memory cells are those with j ∈ [2^(K-2), 2^(K-1) - 1] or j ∈ [1, 2^(K-2)].
  • the Viterbi decoding implementation method as proposed above in the present invention may be implemented in software or hardware, or in combination of both.
  • the Viterbi decoder 400 comprises: an input unit 410, for receiving data to be decoded, the data having been processed with the predetermined coding and transmitted via a channel; a plurality of memory cells 430, for storing the path metric of each state node in the trellis diagram for the predetermined coding, wherein, when the constraint length of the predetermined codes is K, the number n of the memory cells satisfies 2^(K-1) ≤ n < 2 × 2^(K-1), and the plurality of memory cells can be classified into basic and extended memory cells; an addressing unit 420, for determining the calculation order for calculating the current path metric of each state node by using the previous path metric of each state node according to the trellis diagram of the predetermined coding, so as to address the plurality of memory cells 430 according to the calculation order; a calculating unit 300, for calculating the current path metric of each state node with the previous path metrics read from the memory cells addressed by the addressing unit 420, and storing the calculation result into a memory cell designated by the addressing unit 420; and a searching unit, for searching for an optimal path according to the calculated path metrics, so as to implement ML decoding over the received data.
  • the proposed Viterbi decoding implementation method reduces the number of the occupied memory cells by arranging the calculation order for the current PM value of each state node rationally.
  • the previous PM value of a state node that is no longer useful for subsequent calculation is overwritten by the calculated current PM value of that state node wherever possible, so as to reduce the number of the memory cells needed for storing the PM values during the decoding process.
  • the constraint length of the convolutional code is K
  • the number of the memory cells required during the decoding process is 1.5×2^(K-1)-1, where there are 2^(K-1) basic memory cells and 2^(K-2)-1 extended memory cells.
  • some basic memory cells and some extended memory cells are used to store the previous PM values and current PM values of the state nodes in an alternate way, thus copying of the PM values between memory arrays is avoided.
  • the extended memory cells are arranged consecutively before or after the basic memory cells and may be addressed by adding a fixed offset, thus leading to easy addressing and simple logic.

Abstract

The invention provides a decoding method and apparatus, the method comprising: receiving data to be decoded; determining, according to a state transition diagram for a corresponding decoding algorithm, a calculation order for calculating the current path metric of each state node by using the previous path metric of each state node in the diagram; calculating and storing the current path metric of each state node according to the calculation order and the received data, wherein if the stored previous path metric of a state node is no longer useful for the calculation, the previous path metric is replaced with the current path metric of the state node; and searching for an optimal path according to the calculated path metric of each state node, so as to implement ML (Maximum Likelihood) decoding over the received data. The method and apparatus as proposed in the invention make it possible to reduce the number of the memory cells for storing the PM (Path Metric) value of each state node during the Viterbi decoding process and to reduce unnecessary read and write operations for the PM values.

Description

VITERBI DECODER AND DECODING METHOD THEREOF
FIELD OF THE INVENTION
The present invention relates to a decoder and the decoding method thereof, specifically to a Viterbi decoder and the decoding method thereof, and more particularly to a Viterbi decoder capable of saving memory cells and the decoding method thereof.
BACKGROUND OF THE INVENTION
In 1967, the Viterbi algorithm first became well known as a decoding method for convolutional coding. Since then, it has been widely applied to data communication, data recording, digital signal processing, etc. This algorithm, for example, may be used to reduce recording errors in storage media, to eliminate inter-symbol interference, to improve character recognition ability and so on. Among these applications, the Viterbi algorithm is especially suitable for performing ML (Maximum Likelihood) estimation over convolutionally coded data transmitted via band-limited channels.
Fig.1 illustrates the structure of a convolutional coder with constraint length K=3 and coding rate 1/3. As shown in Fig.1, the convolutional coder outputs a set of coded bits G0 G1 G2 on the basis of the input bits and the current state of the shift register D. Fig.2 illustrates the trellis diagram corresponding to the convolutional coder shown in Fig.1. In Fig.2, circles (state nodes) in each column represent the possible states 00, 01, 10 and 11 of the shift register, denoted as S0-S3 respectively, and time increases from the left column to the right column. In Fig.2, a branch, formed by the transfer from one state to another, depends on the input bit values and the shifting direction of the shift register in Fig.1, and corresponds to the output coded bits. For example, in the case where the current state of the register is S0 (00), if the input bit is 1, the shift register shown in Fig.1 shifts rightward by one bit and the state transits to S2 (10), as indicated by the arrow pointing to S2, and the corresponding output bits are 111. Thereby, the trellis diagram shown in Fig.2 covers all possible state transitions of the shift register in Fig.1.
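The rightward-shift state transition described above can be sketched in a few lines of Python (an illustrative model, not part of the patent text): the input bit enters at the most significant position of the (K-1)-bit register, so for K=3 state S0 (00) with input 1 moves to S2 (10).

```python
def next_state_right(state, bit, K=3):
    """Next register state when the shift register shifts rightward:
    the old contents shift down one bit and the input bit enters
    at the top of the (K-1)-bit state."""
    return (state >> 1) | (bit << (K - 2))
```

For example, `next_state_right(0, 1)` gives 2, matching the S0-to-S2 transition of Fig.2.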
For the convolutional coding shown in Fig.1, the Viterbi algorithm searches in the trellis diagram of Fig.2 according to the convolutionally coded data received, to find a state transition sequence having the fewest errors relative to the received coded data, that is, the minimum error path. The decoded data obtained along the minimum error path is the ML estimate of the convolutionally coded data.
To find the minimum error path, the Viterbi algorithm estimates the occurrence probability of each path, i.e. calculates the metric of each path, during the search in the trellis diagram of Fig.2. Generally, two types of metrics are adopted in the Viterbi algorithm, namely BM (Branch Metrics) and PM (Path Metrics). A BM value represents the occurrence probability of a branch and is obtained by comparing the received signal with the expected value. Its metric may be the Hamming distance, Euclidean distance, Manhattan distance or the like between code words. Clearly, the smaller the metric is, the higher the occurrence probability of the branch is. A PM value represents the probability for a set of symbols in the received signal to transit to the current state node. The PM value of the current state node can be calculated by using the PM value of the previous state node and the BM value between the state nodes.
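As an illustration of one of the metrics named above, a Hamming-distance branch metric can be sketched as follows (assuming hard-decision bits; the function name is illustrative):

```python
def hamming_bm(received, expected):
    """Branch metric as the Hamming distance: the number of bit
    positions in which the received code word differs from the
    branch's expected output bits."""
    return sum(r != e for r, e in zip(received, expected))
```

For the Fig.1 coder, comparing a received word 1 0 1 with the expected branch output 1 1 1 gives a branch metric of 1.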
Fig.3 illustrates an ACS (Add, Compare, Select) unit 100 for computing the current PM value of each state node in the Viterbi algorithm. As shown in Fig.3, PM_in1 and PM_in2, the previous PM values of two state nodes (for example, S0 and S1), are fed into two adding units 110 respectively, to be added with BM1 and BM2, the BM values of their respective state transitions (S0->S2 and S1->S2). In comparing unit 120, the two sums of the respective BM and PM values are compared. Finally, based on the comparison result, selecting unit 130 selects the smaller one as the current PM value of the state node (S2), that is,
PM_out = min(PM_in1 + BM1, PM_in2 + BM2).
It can be seen from the trellis diagram of Fig.2 that the previous PM values of two state nodes can be used together to calculate the current PM values of two state nodes. For example, the previous PM values of S0 and S1 can be used to calculate the current PM values of S0 and S2. Therefore, two ACS units 100 sharing the same inputs can be grouped to form a butterfly unit, and thus each butterfly unit may compute the PM values of two state nodes simultaneously.
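The ACS operation of Fig.3, and a butterfly built from two ACS units sharing the same inputs, can be sketched as follows (function names are illustrative; the assignment of the two branch metrics to the two outputs depends on the particular code, so a symmetric assignment is assumed here purely for illustration):

```python
def acs(pm_in1, bm1, pm_in2, bm2):
    """Add-Compare-Select: PM_out = min(PM_in1 + BM1, PM_in2 + BM2)."""
    return min(pm_in1 + bm1, pm_in2 + bm2)

def butterfly(pm_a, pm_b, bm_a, bm_b):
    """Two ACS units on the same pair of previous PMs produce the
    current PMs of two state nodes at once."""
    return acs(pm_a, bm_a, pm_b, bm_b), acs(pm_a, bm_b, pm_b, bm_a)
```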
In this way, Viterbi algorithm calculates the current PM value of each state node based on the previous PM value of each state node by using butterfly units. The state node with the smallest PM value is selected from the state nodes and the state transition sequence formed by the group of state nodes is the minimum error path.
From the above introduction to the Viterbi algorithm, it can be seen that the previous PM value of each state node is always required for calculating the current PM value of each state node during the Viterbi decoding process, and thus the previous PM values of all state nodes need to be stored until the current PM values of all state nodes are calculated. As to the problem of storing the PM value of each state node during the Viterbi decoding process, several solutions have been proposed in the prior art.
One prior art implementation method of Viterbi decoding is to employ a first memory array and a second memory array, each comprising as many memory cells as there are states (2^(K-1)) of the convolutional coding; for example, each memory array comprises 4 memory cells in the case shown in Fig.2. The first memory array is used to store the previous PM value of each state node and the second memory array to store the calculated current PM value of each state node. When the PM values of all state nodes are calculated, the current PM value of each state node in the second memory array is copied into the first memory array, to calculate the PM value of each state node in the next stage.
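The two-array prior-art scheme described above can be sketched as follows (an illustrative model with the ACS computation elided; variable names are not from the patent):

```python
K = 3
first = [0.0] * (2 ** (K - 1))   # previous PMs
second = [0.0] * (2 ** (K - 1))  # current PMs

for stage in range(4):
    for s in range(2 ** (K - 1)):
        # the ACS computation of the current PM would go here
        second[s] = first[s]
    # full copy back into the first array every stage:
    # 2^(K-1) extra write operations
    first[:] = second
```

The copy in the last line is exactly the overhead that the alternating-array scheme below, and the invention itself, set out to remove.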
The deficiency of this Viterbi decoding implementation method resides in that two full-size memory arrays (each comprising 2^(K-1) memory cells) are required. This occupies valuable physical space, especially when the number of the coding states is large. In addition, after the current PM value of each state node is calculated, all these PM values need to be copied into the first memory array, which requires a large number of read and write transactions.

To reduce read and write operations, the above two memory arrays are used in an alternate way in another prior art implementation method of Viterbi decoding. Specifically, in the first stage, the first memory array is used to store the previous PM value of each state node and the second memory array is used to store the current PM value of each state node. In the second stage, the functions of the two memory arrays are exchanged, that is, the second memory array is used to store the previous PM value of each state node and the first memory array is used to store the current PM value of each state node. This approach can omit the read and write operations for copying the PM values, but two full-size memory arrays are still required, hence the problem of occupying valuable physical space still exists.

It is, therefore, necessary to provide a novel method and apparatus for reducing the number of the occupied memory cells and read/write operations during the Viterbi decoding process by arranging the memory cells efficiently and storing the PM value of each state node rationally.

OBJECT AND SUMMARY OF THE INVENTION
An object of the present invention is to provide a Viterbi decoding method and apparatus which make it possible to reduce the number of the memory cells for storing the PM value of each state node during the Viterbi decoding process, and to reduce unnecessary read and write operations for the PM values, while keeping the data read/write logic simple during the decoding process.
To fulfill the above object, the present invention proposes a decoding method, comprising the steps of: receiving data to be decoded; determining, according to a state transition diagram for a corresponding decoding algorithm, a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the diagram; calculating and storing the current path metric of each state node according to the calculation order and the received data, wherein if the stored previous path metric of a state node is no longer useful for the calculation, the previous path metric is replaced with the current path metric of the state node; and searching for an optimal path according to the calculated path metric of each state node, so as to implement ML (Maximum Likelihood) decoding over the received data.
In one aspect of the present invention, the invention proposes a decoder, comprising: an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node, so as to implement ML decoding over the received data; wherein the addressing unit addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for calculation of the current path metrics of other state nodes, the current path metric of the state node is stored in the memory cell storing the previous path metric of the state node; otherwise, it is stored in a reserved one of said plurality of memory cells.
In still another aspect of the invention, the invention proposes a UE (User Equipment), comprising: a transmitting unit, for transmitting radio signals; a receiving unit, for receiving data; a decoder, comprising an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node with the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node stored in the memory cells, so as to implement ML decoding over the received data; wherein the addressing unit addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for calculation of the current path metrics of other state nodes, the current path metric of the state node is stored in the memory cell storing the previous path metric of the state node; otherwise, it is stored in a reserved one of said plurality of memory cells.
Other objects and attainments together with a fuller understanding of the invention will become apparent and appreciated by referring to the following descriptions and claims taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Detailed descriptions will be given below to specific embodiments of the invention in conjunction with the accompanying drawings, in which: Fig.1 illustrates the structure of a convolutional coder with constraint length K=3;
Fig.2 illustrates a trellis diagram for the convolutional coder of Fig.1 ;
Fig.3 illustrates the structure of an ACS unit used in a Viterbi decoder;
Fig.4 illustrates the state diagram for the convolutional code with constraint length K=4 when the shift register of Fig.1 shifts rightward; Fig.5 illustrates the structure of a butterfly unit for calculating the PM value of each state node in the state diagram shown in Fig.4;
Fig.6 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a first embodiment of the invention;
Fig.7 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a second embodiment of the invention;
Fig.8 shows the state diagram for the convolutional code with constraint length K=4 when the shift register shifts leftward;
Fig.9 illustrates the structure of a butterfly unit for calculating the PM value of each state node in the state diagram shown in Fig.8;
Fig.10 illustrates the distribution and usage of the memory cells for Viterbi decoding according to a third embodiment of the invention; Fig.11 illustrates the structure of a Viterbi decoder according to the present invention.
Throughout all the above drawings, like reference numerals will be understood to refer to like, similar or corresponding features or functions.
DETAILED DESCRIPTION OF THE INVENTION

In the trellis diagram shown in Fig.2, the previous PM value of each state node may be used to calculate the current PM values of two state nodes. For example, the previous PM value of S0 is only used to calculate the current PM values of S0 and S2, and won't be used to calculate the current PM values of other state nodes. In this way, if the current PM value of S2 is calculated first, then after the current PM value of S0 is calculated, the previous PM value of S0 is no longer useful, so the current PM value of S0 can be stored in the memory cell storing its previous PM value, thereby saving a memory cell. However, the previous PM value of S2 is still useful for calculating the current PM values of S1 and S3, so the calculated current PM value of S2 needs to be stored in a new memory cell, to avoid overwriting the previous PM value of S2.

According to the above features, if the calculation order for the current PM value of each state node is arranged rationally according to the trellis diagram before calculating the PM values, the number of the memory cells occupied during the Viterbi decoding process can be reduced greatly: a memory cell holding the previous PM value of a state node that is not useful for subsequent calculation is used as much as possible to store the calculated current PM value of that state node, and the current PM value of each state node is then calculated and stored according to this calculation order. The following description presents three specific embodiments of the invention in conjunction with the accompanying drawings, to explain the procedure of the Viterbi decoding implementation method of the present invention based on the above idea.
(Embodiment 1)
Fig.4 shows the state diagram for the convolutional code with constraint length K=4 when the shift register of Fig.1 shifts rightward. The Viterbi decoding implementation method as proposed in the present invention will be introduced as follows, exemplified by decoding the convolutional codes in the state diagram of Fig.4. In the state diagram of Fig.4, the number of states for the convolutional codes with constraint length K=4 is 2^(K-1)=8, that is, eight states S0-S7. Accordingly, four butterfly units are required to calculate the current PM value of each state node. The specific structure of each butterfly unit is shown in Fig.5. There are two ACS units 100A and 100B in the butterfly unit 200 of Fig.5. The inputs for both ACS units are the previous PM values of S2j and S2j+1, and their respective outputs are the current PM values of Sj and Sn, where n = j + 2^(K-2), j being an integer with j=0~3 in this embodiment. The ACS unit 100A for calculating the PM value of Sj is denoted as the ACS upper branch, and the ACS unit 100B for calculating the PM value of Sn is denoted as the ACS lower branch. It is assumed herein that the required first to fourth butterfly units correspond to the four butterfly units j=0, 1, 2 and 3 in Fig.4 respectively. For example, as to the first butterfly unit j=0, its inputs are the previous PM values of S0 and S1, and its outputs are the current PM values of S0 and S4.
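The index relations of butterfly unit 200 can be sketched as follows (an illustrative helper, not from the patent): inputs are the previous PMs of S2j and S2j+1, outputs the current PMs of Sj and Sn with n = j + 2^(K-2).

```python
def right_butterfly_states(j, K=4):
    """Returns ((input state indices), (output state indices)) of
    butterfly j for the rightward-shift trellis."""
    n = j + 2 ** (K - 2)
    return (2 * j, 2 * j + 1), (j, n)
```

For instance, `right_butterfly_states(0)` gives `((0, 1), (0, 4))`: the first butterfly reads S0, S1 and writes S0, S4, as stated above.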
Fig.6 illustrates the distribution and usage of the memory cells during decoding the convolutional codes shown in Fig.4 when employing the Viterbi decoding implementation method according to the first embodiment of the invention.
In Fig.6, each column represents the state of each memory cell in a stage (or at an instant) during the Viterbi decoding process. Each circle in a column represents a memory cell, and the reference number Sx in the circle indicates which state node's PM value is stored in the memory cell. In this embodiment, the reference number Sx also identifies each memory cell. Different columns in Fig.6 indicate the states of the memory cells in different stages during the Viterbi decoding process, and time increases from the left column to the right column. Only four stages are shown in Fig.6, wherein calculation of the current PM value of each state node in each stage needs the PM value of each state node in the previous stage.
As shown in Fig.6, eight memory cells with addresses 0-7 are referred to as the basic memory cells. In the initial stage, the initial PM values of the eight states S0-S7 shown in Fig.4 are stored in the eight basic memory cells. Three memory cells with addresses 8-10 are referred to as the extended memory cells. When the previous PM value of a state node is still useful for calculation of the current PM values of other state nodes, the calculated current PM value of the state node is stored in an extended memory cell. Hence, 8+3=11 memory cells are required for Viterbi decoding in this embodiment.
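The cell counts above can be written as a direct transcription of the figures given in the text, for comparison with the two-array prior art:

```python
def cells_proposed(K):
    """Basic plus extended cells: 2^(K-1) + 2^(K-2) - 1 = 1.5*2^(K-1) - 1."""
    return 2 ** (K - 1) + 2 ** (K - 2) - 1

def cells_two_arrays(K):
    """Two full-size arrays of the prior art."""
    return 2 * 2 ** (K - 1)
```

For K=4 this gives 11 versus 16 cells; the saving grows with the constraint length.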
A description will be given to the states and usage of the eleven memory cells in each stage during Viterbi decoding, taken in conjunction with Fig.6.
First, in the initial stage, the initial PM values for states S0-S7 are stored in the eight basic memory cells respectively. At this time, the three extended cells denoted as S4', S5' and S6' are empty.
In the first stage, the above four butterfly units need the previous PM values of the eight states stored in the basic memory cells in the initial stage, to calculate the PM values in the current stage. The specific procedure is as follows.
First, the lower branch of the first butterfly unit calculates the current PM value of S4, by using the previous PM values of S0 and S1 in the memory cells. The previous PM value of S4 is still useful for calculating the current PM values of S2 and S6, so the current PM value of S4 calculated by the butterfly unit is stored into the extended memory cell with address 8, identified with S4', as shown by the dashed line pointing to S4' in Fig.6. Similarly, the upper branch of the first butterfly unit then calculates the current PM value of S0, by using the previous PM values of S0 and S1 in the memory cells. The previous PM value of S0 is no longer useful for calculating the current PM values of other state nodes, thus the current PM value of S0 calculated by the butterfly unit may be stored into the memory cell with address 0, that is, the current PM value of S0 will overwrite its previous PM value, as indicated by the solid line pointing to S0 in Fig.6.
Then, the lower branch of the second butterfly unit first calculates the current PM value of S5. Since the previous PM value of S5 is still useful for calculating the current PM values of S2 and S6, the calculated current PM value of S5 is stored into the extended memory cell with address 9, identified with S5', as shown by the dashed line pointing to S5' in Fig.6. Then, the upper branch of the second butterfly unit calculates the current PM value of S1. The previous PM value of S1 is no longer useful, so the calculated current PM value of S1 may be stored into the memory cell with address 1, that is, the current PM value of S1 will overwrite its previous PM value, as indicated by the solid line pointing to S1 in Fig.6.
It follows similarly that the lower branch of the third butterfly unit first calculates the current PM value of S6 and stores it into the extended memory cell with address 10, identified with S6', as indicated by the dashed line pointing to S6' in Fig.6. Then, the upper branch of the third butterfly unit calculates the current PM value of S2 and stores it into the memory cell with address 2, that is, the current PM value of S2 will overwrite its previous PM value, as indicated by the solid line pointing to S2 in Fig.6. Finally, the upper branch of the fourth butterfly unit first calculates the current PM value of S3. Since the previous PM value of S3 is no longer useful, the calculated current PM value of S3 is stored directly into the basic memory cell with address 3, as indicated by the dashed line pointing to S3 in Fig.6. Then, the lower branch of the fourth butterfly unit calculates the current PM value of S7, and stores it directly into the memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.6.
In this way, after calculation in the first stage, the current PM values of the eight states are stored respectively into the memory cells identified with S0-S3, S7 and S4'~S6'. The PM values in S4-S6 are invalid now, and thus these memory cells are considered empty.
In the second stage, to calculate the PM values in the current stage, the above four butterfly units need the previous PM value of each state node calculated and stored in the memory cells in the first stage. The specific procedure is as follows.
First, the lower branch of the first butterfly unit calculates the current PM value of S4, by using the previous PM values of S0 and S1 in the basic memory cells. Since the previous PM value of S4 is still useful, the calculated current PM value of S4 will be stored into the empty basic memory cell with address 4, as indicated by the dashed line pointing to S4 in Fig.6. Then, the upper branch of the first butterfly unit calculates the current PM value of S0 and stores it directly into the memory cell with address 0, as indicated by the solid line pointing to S0 in Fig.6.
Next, similar to the first butterfly unit, the lower branch of the second butterfly unit first calculates the current PM value of S5 and stores it into an empty basic memory cell with address 5. Then, the upper branch of the second butterfly unit calculates the current PM value of Sl and stores it directly into the memory cell with address 1.
Subsequently, the lower branch of the third butterfly unit first calculates the current PM value of S6, by using the previous PM values of S4 and S5 stored in the extended memory cells S4' and S5', and stores it into an empty basic memory cell S6 with address 6.
Then, the upper branch of the third butterfly unit calculates the current PM value of S2 and stores it into the memory cell with address 2, that is, the current PM value of S2 will overwrite its previous PM value.
Finally, the upper branch of the fourth butterfly unit first calculates the current PM value of S3, by using the previous PM value of S6 stored in the extended memory cell S6' and the previous PM value of S7 stored in a basic memory cell, and stores it directly into the basic memory cell with address 3. Similarly, the lower branch of the fourth butterfly unit then calculates the current PM value of S7 by using the same previous PM values, and stores it into the memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.6.
In this way, after the calculation of the second stage, the current PM values of the eight states are again stored in the basic memory cells with addresses 0-7, and the PM values in the extended memory cells are now invalid, so these cells can be considered empty. The calculation order for each butterfly unit in the second stage is substantially the same as that in the first stage, with the difference that the current PM values of S4-S6 are stored into the corresponding empty cells among the basic memory cells in the second stage, and the previous PM values of S4-S6 are read from the extended memory cells.
The calculation procedure for the third stage and the odd-index stages thereafter is exactly the same as that for the first stage, and the calculation procedure for the fourth stage and the even-index stages thereafter is exactly the same as that for the second stage.
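The Fig.6 schedule can be sketched as an executable model (illustrative only, not from the patent): 8 basic cells (addresses 0-7) plus 3 extended cells (8-10), branch metrics stubbed with fixed values, and the within-butterfly ordering of the text collapsed into tuple assignments, which read both inputs before writing either output.

```python
def acs_pair(pm_a, pm_b, bm):
    """One butterfly: both outputs from the same pair of previous PMs.
    The BM assignment is a stand-in; the read/write pattern is the point."""
    return (min(pm_a + bm[0], pm_b + bm[1]),   # upper branch (Sj)
            min(pm_a + bm[1], pm_b + bm[0]))   # lower branch (Sn)

def odd_stage(c, bms):
    c[0], c[8] = acs_pair(c[0], c[1], bms[0])    # BF1: S0 -> 0, S4 -> 8
    c[1], c[9] = acs_pair(c[2], c[3], bms[1])    # BF2: S1 -> 1, S5 -> 9
    c[2], c[10] = acs_pair(c[4], c[5], bms[2])   # BF3: S2 -> 2, S6 -> 10
    c[3], c[7] = acs_pair(c[6], c[7], bms[3])    # BF4: S3 -> 3, S7 -> 7

def even_stage(c, bms):
    c[0], c[4] = acs_pair(c[0], c[1], bms[0])    # S4 returns to cell 4
    c[1], c[5] = acs_pair(c[2], c[3], bms[1])    # S5 returns to cell 5
    c[2], c[6] = acs_pair(c[8], c[9], bms[2])    # S4', S5' read from 8, 9
    c[3], c[7] = acs_pair(c[10], c[7], bms[3])   # S6' read from 10

cells = [0.0] * 11                 # 8 basic + 3 extended cells
bms = [(1.0, 2.0)] * 4             # stubbed branch metrics
odd_stage(cells, bms)
even_stage(cells, bms)
```

After one odd and one even stage, the current PMs of all eight states are back in the basic cells 0-7, exactly as Fig.6 shows, and no copy between arrays ever occurs.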
In this embodiment, since the extended memory cells are arranged after the basic memory cells and are used to store the PM values of S4-S6, the extended memory cells may be addressed by adding a fixed offset. For example, the memory cell with address 8 for storing S4' in Fig.6 may be addressed by adding a fixed offset 4 to the address of the basic memory cell S4. Alternatively, the extended memory cells may be arranged before the basic memory cells, and in that case an extended memory cell may be addressed by subtracting a fixed offset from the address of a basic memory cell.

(Embodiment 2)
Fig.7 illustrates the distribution and usage of the memory cells during decoding the convolutional codes shown in Fig.4 according to the Viterbi decoding implementation method of the second embodiment of the present invention.
As in the first embodiment shown in Fig.6, the second embodiment in Fig.7 uses 11 memory cells, with the difference that the calculation orders for the four butterfly units are different, causing the PM values of S1-S3 to be stored in the extended memory cells.
In the first stage, the above four butterfly units need the previous PM values of the eight states stored in the basic memory cells in the initial stage, to calculate the PM values in the current stage. The specific procedure is as follows.
First, the upper branch of the fourth butterfly unit calculates the current PM value of S3, by using the previous PM values of S6 and S7 in the basic memory cells. The previous PM value of S3 is still useful for calculating the current PM values of S1 and S5, so the calculated current PM value of S3 is stored into the extended memory cell with address 10, identified with S3', as shown by the dashed line pointing to S3' in Fig.7. Then, the lower branch of the fourth butterfly unit calculates the current PM value of S7, also by using the previous PM values of S6 and S7 in the basic memory cells. The previous PM value of S7 is no longer useful for calculating the current PM values of other state nodes, so the calculated current PM value of S7 may be stored into the memory cell with address 7, that is, the current PM value of S7 will overwrite its previous PM value, as indicated by the solid line pointing to S7 in Fig.7.
Next, the upper branch of the third butterfly unit first calculates the current PM value of S2 and stores it into the extended memory cell with address 9, identified with S2'. Then, the lower branch of the third butterfly unit calculates the current PM value of S6 and stores it into the basic memory cell with address 6, that is, the current PM value of S6 will overwrite its previous PM value.
It follows similarly that the upper branch of the second butterfly unit first calculates the current PM value of S1 and stores it into the extended memory cell with address 8, identified with S1', as indicated by the dashed line pointing to S1' in Fig.7. Then, the lower branch of the second butterfly unit calculates the current PM value of S5 and stores it into the basic memory cell with address 5, that is, the current PM value of S5 will overwrite its previous PM value.
Finally, the lower branch of the first butterfly unit calculates the current PM value of S4 and stores it directly into the basic memory cell with address 4. Then, the upper branch of the first butterfly unit calculates the current PM value of S0 and stores it directly into the basic memory cell with address 0.
In this way, after the calculation in the first stage, the current PM values of the eight states are stored respectively in the memory cells identified with S0, S4-S7 and S1'-S3', and the PM values in S1-S3 are now invalid, so these cells can be considered empty.
In the second stage, the above four butterfly units need the previous PM value of each state node calculated and stored in the memory cells in the first stage, to calculate the PM values in the current stage. The calculation orders for the four butterfly units are substantially the same as those in the first stage, with the difference that the current PM values of S1, S2 and S3 are stored into the empty cells S1, S2 and S3 among the basic memory cells in the second stage, and the butterfly units read the previous PM values of S1-S3 from the extended memory cells S1'-S3'.
In the Viterbi decoding shown in Fig.7, the calculation procedure for the odd-index stages is the same as that for the first stage, and the calculation procedure for the even-index stages is the same as that for the second stage.
Similar to the first embodiment, in this embodiment, since the extended memory cells are arranged after the basic memory cells and are used to store the PM values of S1-S3, the extended memory cells may be addressed by adding a fixed offset. For example, the memory cell with address 8 for storing S1' in Fig.7 may be addressed by adding a fixed offset 7 to the address of the basic memory cell S1. Alternatively, the extended memory cells may be arranged before the basic memory cells, and in that case an extended memory cell may be addressed by subtracting a fixed offset from the address of a basic memory cell.
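The fixed-offset addressing of both embodiments can be sketched in one helper (illustrative; offsets taken directly from the text: 4 in the first embodiment, 7 in the second):

```python
def ext_addr(basic_addr, offset):
    """Address of the extended cell paired with a basic cell,
    when the extended cells follow the basic cells."""
    return basic_addr + offset

# Embodiment 1: S4..S6 map to extended addresses 8..10 (offset 4)
# Embodiment 2: S1..S3 map to extended addresses 8..10 (offset 7)
```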
(Embodiment 3)
Taken in conjunction with Fig.6 and Fig.7, the above description has explained the procedure of Viterbi decoding the convolutional codes shown in Fig.4 according to the idea proposed in the present invention. The idea proposed herein is suitable not only for the type of codes where the shift register shifts rightward, but also for the type of codes where the shift register shifts leftward. The following explains the procedure of Viterbi decoding leftward-shifted convolutional codes in the present invention, in conjunction with the accompanying drawings. Fig.8 illustrates the state diagram for the convolutional codes with K=4 where the shift register shifts leftward. Since the shift register shifts leftward, the state transitions are different from those shown in Fig.4. For example, if the current state of the shift register is S0 (000) and the input bit is 1, the shift register shifts left and transits to the next state S1 (001). This embodiment explains the Viterbi decoding implementation method proposed in the present invention by decoding the convolutional codes having the state diagram shown in Fig.8.
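The left-shift state transition described above can be expressed compactly; the sketch below (illustrative only) assumes the K-1 state bits are kept and the input bit enters at the least significant position:

```python
def next_state_left(state, bit, K=4):
    """Left-shift state transition: the register shifts left by one and the
    input bit enters at the LSB; only the K-1 state bits are retained."""
    mask = (1 << (K - 1)) - 1        # 0b111 for K = 4 (eight states S0-S7)
    return ((state << 1) | bit) & mask
```

For instance, from S0 with input bit 1 the next state is S1, matching the example above.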
With regard to the state diagram shown in Fig.8, four butterfly units are still required to calculate the PM value of each state respectively. The specific structure of each butterfly unit is shown in Fig.9. The butterfly architecture 300 of Fig.9 differs from the butterfly architecture 200 of Fig.5 in that its inputs are the PM values of states Sj and Sn, where n = j + 2^(K-2), and its outputs are the PM values of S(2j) and S(2j+1), j = 0, 1, 2, 3. It is assumed herein that the first to fourth butterfly units respectively represent the butterfly units j = 0, 1, 2, 3 shown in Fig.9. For example, with regard to the second butterfly unit (j = 1), its inputs are the previous PM values of S1 and S5 and its outputs are the current PM values of S2 and S3.
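The butterfly computation described above can be sketched as follows (a sketch under assumed branch metrics bm, indexed by (source, destination) state pairs; the original discloses hardware ACS units, and a minimum-metric convention is assumed here):

```python
def butterfly_left(pm_prev, j, bm, K=4):
    """Butterfly unit j for left-shift codes: reads the previous PM values of
    S(j) and S(j + 2^(K-2)) and returns the current PM values of S(2j) (upper
    branch) and S(2j+1) (lower branch) by add-compare-select."""
    a, b = j, j + (1 << (K - 2))     # the two source states of this butterfly
    upper = min(pm_prev[a] + bm[(a, 2 * j)],     pm_prev[b] + bm[(b, 2 * j)])
    lower = min(pm_prev[a] + bm[(a, 2 * j + 1)], pm_prev[b] + bm[(b, 2 * j + 1)])
    return upper, lower
```

For j = 1 this reads the previous PM values of S1 and S5 and yields the current PM values of S2 and S3, as in the example above.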
Fig.10 illustrates the distribution and usage of the memory cells during the Viterbi decoding implementation process according to the third embodiment of the present invention.
As in the embodiment shown in Fig.6, the third embodiment shown in Fig.10 uses eleven memory cells and the calculation orders for the four butterfly units are the same, except that the different state transitions cause the PM values of S1-S3 to be stored into the extended memory cells.
In an odd-index stage, the lower branch of the first butterfly unit calculates the current PM value of S1, by using the previous PM values of S0 and S4 in the basic memory cells. Since the previous PM value of S1 is still useful for calculating the current PM values of states S2 and S3, the calculated current PM value of S1 is stored into an extended memory cell with address 8, identified with S1', as shown by the dashed line pointing to S1' in Fig.10. Then, the upper branch of the first butterfly unit calculates the current PM value of S0 and stores it directly into the memory cell with address 0; that is, the current PM value of S0 overwrites its previous PM value, as indicated by the solid line pointing to S0 in Fig.10.
Next, the lower branch of the second butterfly unit calculates the current PM values of S3 and the upper branch calculates the current PM value of S2. The two output results of the second butterfly unit are respectively stored into the extended memory cells with addresses 10 and 9.
Similarly, the lower branch of the third butterfly unit calculates the current PM value of S5 and the upper branch calculates the current PM value of S4. The two output results of the third butterfly unit are respectively stored into the basic memory cells with addresses 5 and 4.
Finally, the upper branch of the fourth butterfly unit calculates the current PM value of S6 and then the lower branch calculates the current PM value of S7. The two output results of the fourth butterfly unit are respectively stored into the basic memory cells with addresses 6 and 7.
In this way, after the calculation at an odd-index stage, the current PM values of the eight states are respectively stored in the memory cells identified with S0, S4-S7 and S1'-S3', while the PM values in cells S1-S3 are now invalid, so these cells can be considered empty.
In an even-index stage, the above four butterfly units need the previous PM value of each state node, calculated in the previous stage and stored in the memory cells, to calculate the PM values of the current stage. The calculation orders for the four butterfly units are the same as those in the odd-index stage, except that the current PM values of S1, S2 and S3 calculated by the butterfly units are stored into the empty basic memory cells S1, S2 and S3, and the butterfly units read the previous PM values of S1-S3 from the extended memory cells S1'-S3'.
In this embodiment, the extended memory cells are arranged consecutively after the basic memory cells and are used for storing the PM values of S1-S3, so the extended memory cells may be addressed by adding a fixed offset. For example, the memory cell with address 8 for storing S1' may be addressed by adding a fixed offset of 7 to the address 1 of the basic memory cell S1.
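The odd/even alternation between the basic cells S1-S3 and the extended cells S1'-S3' can be sketched as follows (an illustrative sketch for the left-shift K=4 code; branch metrics bm are assumed inputs, and for clarity the sketch reads all previous PM values before writing, whereas the embodiment orders the butterfly computations so that no extra buffering is needed):

```python
OFFSET = 7                           # extended address = basic address + 7

def stage(mem, bm, odd):
    """One trellis stage over eleven cells: basic cells 0-7 plus extended
    cells 8-10 (S1'-S3').  In an odd-index stage S1-S3 are read from the
    basic cells and the new values are written to the extended cells; in an
    even-index stage the roles are swapped."""
    def read(s):                     # where the previous PM of state s lives
        if 1 <= s <= 3 and not odd:
            return mem[s + OFFSET]
        return mem[s]
    cur = {}
    for j in range(4):               # the four butterfly units
        a, b = j, j + 4
        cur[2 * j]     = min(read(a) + bm[(a, 2 * j)],     read(b) + bm[(b, 2 * j)])
        cur[2 * j + 1] = min(read(a) + bm[(a, 2 * j + 1)], read(b) + bm[(b, 2 * j + 1)])
    for s, v in cur.items():         # S1-S3 go to the currently empty cells
        mem[s + OFFSET if (1 <= s <= 3 and odd) else s] = v
```

After an odd-index stage the cells S1-S3 hold stale values and S1'-S3' hold the current PM values; the next (even-index) stage reads them back from S1'-S3' and refills S1-S3, exactly as described above.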
Alternatively, the state diagram shown in Fig.8 may be implemented by the same method as that performed by the butterfly units in the embodiment shown in Fig.7. In that case, the PM values of S4-S6 are stored in the extended memory cells.
Detailed descriptions have been given above to the Viterbi decoding implementation method proposed in the present invention, in connection with three embodiments. The three embodiments all exemplify the coding case with K=4, but the Viterbi decoding implementation as proposed in the present invention may be applied to codes with an arbitrary constraint length. When the constraint length of the codes is K, based on the Viterbi decoding implementation of the present invention, the number of the basic memory cells is 2^(K-1), the number of the extended memory cells is 2^(K-2)-1, and the state stored in an extended memory cell is Sj, where j∈[2^(K-2), 2^(K-1)-2] or j∈[1, 2^(K-2)-1]. The extended memory cells may be arranged sequentially before or after the basic memory cells and may be addressed by adding an appropriate fixed offset, but the location of the extended memory cells is not limited herein and they may be provided together with the basic memory cells as well. Furthermore, in the above three embodiments, if the calculation orders for the upper and lower branches in the last butterfly unit are reversed, the number of the extended memory cells is increased by 1. In that case the number of the extended memory cells is 2^(K-2), and the states stored in the extended memory cells are Sj, where j∈[2^(K-2), 2^(K-1)-1] or j∈[1, 2^(K-2)].
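The cell counts above can be checked for an arbitrary constraint length K with a small illustrative helper:

```python
def memory_cells(K, reversed_last_butterfly=False):
    """Total number of memory cells used by the proposed in-place scheme:
    2^(K-1) basic cells plus 2^(K-2) - 1 extended cells, i.e. 1.5*2^(K-1) - 1
    in total (one more extended cell when the branch order of the last
    butterfly unit is reversed)."""
    basic = 1 << (K - 1)
    extended = (1 << (K - 2)) - (0 if reversed_last_butterfly else 1)
    return basic + extended
```

For K = 4 this gives 11 cells, matching the eleven memory cells of Fig.6 and Fig.10.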
The Viterbi decoding implementation method as proposed above in the present invention may be implemented in software or hardware, or in combination of both.
Fig.11 illustrates the structure of a Viterbi decoder according to the present invention. As shown in Fig.11, the Viterbi decoder 400 comprises: an input unit 410, for receiving data to be decoded, which has been processed with the predetermined coding and transmitted via a channel; a plurality of memory cells 430, for storing the path metric of each state node in the trellis diagram for the predetermined coding, wherein when the constraint length of the predetermined codes is K, the number n of the memory cells satisfies 2^(K-1) < n < 2×2^(K-1), and the plurality of memory cells can be classified into basic and extended memory cells; an addressing unit 420, for determining the calculation order for calculating the current path metric of each state node by using the previous path metric of each state node according to the trellis diagram of the predetermined coding, so as to address the plurality of memory cells 430 according to the calculation order; a calculating unit 300, for calculating the current path metric of a state node, according to the data received by the input unit 410, by using the previous path metrics of state nodes read from the memory cells 430 addressed by the addressing unit 420, and storing the calculation result into the memory cell 430 addressed by the addressing unit, wherein the structure of the calculating unit is shown in Fig.5 or Fig.9; and a searching unit 440, for searching for an optimal path according to the path metric of each state node stored in the memory cells 430, to obtain the maximum likelihood decoded data of the received data; wherein the addressing unit 420 addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for the calculation of the current path metric of a subsequent state node, the current path metric of the state node is stored in the memory cell 430 storing that previous path metric; otherwise, it is stored in an empty memory cell of said plurality of memory cells.
Advantageous Effects of the Invention
Detailed descriptions have been given above to the Viterbi decoding implementation method of the present invention with reference to the accompanying drawings and three embodiments. As described above, the proposed Viterbi decoding implementation method reduces the number of occupied memory cells by rationally arranging the calculation order for the current PM value of each state node.
Specifically, with the proposed Viterbi decoding implementation, the previous PM value of a state node that is no longer useful for subsequent calculation is overwritten by the calculated current PM value of the state node wherever possible, so as to reduce the number of the memory cells for storing the PM values during the decoding process. If the constraint length of the convolutional code is K, the number of the memory cells required during the decoding process is 1.5×2^(K-1)-1, comprising 2^(K-1) basic memory cells and 2^(K-2)-1 extended memory cells.
In the Viterbi decoding implementation method proposed in the present invention, some basic memory cells and some extended memory cells are used to store the previous PM values and the current PM values of the state nodes in an alternate way, so that copying of the PM values is avoided and unnecessary write and read operations are saved. Furthermore, in the Viterbi decoding implementation method proposed in the present invention, the extended memory cells are arranged consecutively before or after the basic memory cells and may be addressed by adding a fixed offset, thus leading to easy addressing and simple logic.
It is to be understood by those skilled in the art that various improvements and modifications can be made to the Viterbi decoder and the decoding method thereof as disclosed in the present invention without departing from the scope of the appended claims.


CLAIMS:
1. A decoding method, comprising the steps of:
(a) receiving data to be decoded; (b) determining, according to a state transition diagram for a corresponding decoding algorithm, a calculation order for calculating current path metric of each state node with previous path metric of each state node in the diagram;
(c) calculating and storing the current path metric of each state node according to the calculation order and the received data, wherein when the stored previous path metric of a state node is no longer useful for the calculation, the previous path metric is replaced with the current path metric of the state node; and
(d) searching for an optimal path according to the calculated path metric of each state node, so as to implement ML (Maximum Likelihood) decoding over the received data.
2. The decoding method according to claim 1, wherein step (c) further comprises: storing the calculated current path metric of each state node into a corresponding one of a plurality of memory cells, wherein the number n of the memory cells satisfies 2^(K-1) < n < 2×2^(K-1), wherein K is a constraint length of the predetermined code.
3. The decoding method according to claim 2, wherein at step (c), when the previous path metric of a state node is no longer useful for calculating the current path metrics of subsequent state nodes, the current path metric of the state node is stored in the memory cell storing the previous path metric of the state node; otherwise, it is stored in a reserved one of the plurality of memory cells.
4. The decoding method according to claim 3, wherein step (c) adopts butterfly computation to calculate the current path metrics of two state nodes by using the previous path metrics of two corresponding state nodes.
5. The decoding method according to claim 4, wherein, when one of the two state nodes is the same as one of the two corresponding state nodes having the previous path metrics, the current path metric of the other one of the two state nodes is calculated first.
6. The decoding method according to claim 3, wherein the reserved memory cells are used to store the current path metrics of some state nodes in an alternate way.
7. The decoding method according to claim 3, wherein the reserved memory cells are arranged consecutively and adjacent to the memory cells for storing the path metric of each state node.
8. The decoding method according to claim 5, wherein the number of the memory cells is n = 1.5×2^(K-1)-1.
9. The decoding method according to any one of claims 1 to 8, wherein the decoding method is a Viterbi decoding method.
10. A decoder, comprising: an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node, wherein the calculation order is determined according to the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node, so as to obtain maximum likelihood data of the received data; wherein the addressing unit addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for the calculation of the current path metrics of other state nodes, the current path metric of the state node is stored in the memory cell storing the previous path metric of the state node; otherwise, it is stored in a reserved one of the plurality of memory cells.
11. The decoder according to claim 10, wherein the calculating unit comprises: two ACS (Add, Compare, Select) units, for reading previous path metrics of two state nodes and calculating current path metrics of the two corresponding state nodes by using butterfly computation.
12. The decoder according to claim 11, wherein if one of the two corresponding state nodes is the same as one of the two state nodes having the previous path metrics, the two ACS units first calculate the current path metric of the other one of the two corresponding state nodes.
13. The decoder according to claim 12, wherein the reserved memory cells are used to store the current path metrics of some state nodes in an alternate way.
14. The decoder according to claim 12, wherein the reserved memory cells are arranged consecutively and adjacent to the memory cells for storing the path metric of each state node.
15. The decoder according to claim 12, wherein the number of the memory cells is n = 1.5×2^(K-1)-1.
16. A user equipment (UE), comprising: a transmitting unit, for transmitting radio signals; a receiving unit, for receiving data; a Viterbi decoder, comprising: an input unit, for receiving data to be decoded; a plurality of memory cells, for storing the path metric of each state node in a state transition diagram of a corresponding decoding algorithm; an addressing unit, for addressing the plurality of memory cells according to a calculation order for calculating the current path metric of each state node, wherein the calculation order is determined according to the previous path metric of each state node in the state transition diagram; a calculating unit, for calculating the current path metric of a state node with the previous path metrics of the corresponding state nodes read from the memory cells addressed by the addressing unit, and storing the calculation result into a memory cell designated by the addressing unit; and a searching unit, for searching for an optimal path according to the calculated path metric of each state node stored in the memory cells, so as to obtain maximum likelihood data of the received data; wherein the addressing unit addresses the memory cells in such a way that if the previous path metric of a state node is no longer useful for the calculation of the current path metrics of other state nodes, the current path metric of the state node is stored in the memory cell storing the previous path metric of the state node; otherwise, it is stored in a reserved one of the plurality of memory cells.
17. The UE according to claim 16, wherein the calculating unit comprises: two ACS (Add, Compare, Select) units, for reading the previous path metrics of two state nodes and calculating the current path metrics of the two corresponding state nodes by using butterfly computation respectively.
18. The UE according to claim 17, wherein if one of the two corresponding state nodes is the same as one of the two state nodes having the previous path metrics, the two ACS units first calculate the current path metric of the other one of the two corresponding state nodes.
PCT/IB2006/052071 2005-06-28 2006-06-26 Viterbi decoder and decoding method thereof WO2007000708A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200510079180.8 2005-06-28
CN200510079180 2005-06-28

Publications (1)

Publication Number Publication Date
WO2007000708A1 true WO2007000708A1 (en) 2007-01-04

Family

ID=37114468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/052071 WO2007000708A1 (en) 2005-06-28 2006-06-26 Viterbi decoder and decoding method thereof

Country Status (1)

Country Link
WO (1) WO2007000708A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0660534A2 (en) * 1993-12-22 1995-06-28 AT&T Corp. Error correction systems with modified viterbi decoding
EP0945989A1 (en) * 1998-03-12 1999-09-29 Hitachi Micro Systems Europe Limited Viterbi decoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHIEN-MING WU ET AL.: "VLSI architecture of extended in-place path metric update for viterbi decoders", PROC., IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2001, SYDNEY, AUSTRALIA, vol. VOL. 1 OF 5, 6 May 2001 (2001-05-06), pages 206 - 209, XP010541829, ISBN: 0-7803-6685-9 *
RADER C. M.: "Memory management in a Viterbi decoder", IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. COM-29, no. 9, September 1981 (1981-09-01), pages 1399 - 1401, XP000877057, ISSN: 0090-6778 *

Similar Documents

Publication Publication Date Title
US5502735A (en) Maximum likelihood sequence detector
US5432803A (en) Maximum likelihood convolutional decoder
US6324226B1 (en) Viterbi decoder
US20070266303A1 (en) Viterbi decoding apparatus and techniques
JPH10107651A (en) Viterbi decoder
US6333954B1 (en) High-speed ACS for Viterbi decoder implementations
US5781569A (en) Differential trellis decoding for convolutional codes
US20050157823A1 (en) Technique for improving viterbi decoder performance
US7277507B2 (en) Viterbi decoder
EP3996285A1 (en) Parallel backtracking in viterbi decoder
US5996112A (en) Area-efficient surviving paths unit for Viterbi decoders
KR100785671B1 (en) Method and apparatus for efficiently reading and storing state metrics in memory for high-speed acs viterbi decoder implementations
US7035356B1 (en) Efficient method for traceback decoding of trellis (Viterbi) codes
US20070168846A1 (en) Data decoding apparatus and method in a communication system
US20070201586A1 (en) Multi-rate viterbi decoder
US20020199154A1 (en) Super high speed viterbi decoder and decoding method using circularly connected 2-dimensional analog processing cell array
JP2010206570A (en) Decoding apparatus and decoding method
WO2007000708A1 (en) Viterbi decoder and decoding method thereof
EP1192719A1 (en) Viterbi decoder
KR100491016B1 (en) Trace-Back Viterbi Decoder with Consecutive Control of Backward State Transition and Method thereof
JP3343217B2 (en) Viterbi comparison / selection operation for 2-bit traceback coding
KR20040031323A (en) Recording apparatus and method for path metrics of vitervi decoder
JP2002198827A (en) Maximum likelihood decoding method and decoder thereof
JP2004120791A (en) Viterbi decoder
JP2001144631A (en) Viterbi decoder

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06765855

Country of ref document: EP

Kind code of ref document: A1