WO2011048997A1 - Soft output decoder - Google Patents


Info

Publication number
WO2011048997A1
Authority
WO
WIPO (PCT)
Prior art keywords
soft output
processing unit
backward
code
memory
Prior art date
Application number
PCT/JP2010/068021
Other languages
French (fr)
Japanese (ja)
Inventor
Toshihiko Okamura (岡村 利彦)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Publication of WO2011048997A1

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding

Definitions

  • the present invention relates to soft output decoding, and more particularly, to reduction of load due to processing related to error correction processing at the time of decoding.
  • error correction coding is a technique for protecting data, through encoding and decoding operations, from errors such as bit inversions that occur on the communication channel during transmission.
  • Encoding is an operation of converting information to be transmitted into a codeword with redundancy added.
  • Decoding is an operation of estimating the original codeword (information) from the codeword (received word) in which an error is mixed using the redundancy.
  • This technology is already widely used in communications and storage media, and is indispensable in speeding up communications such as wireless links and ADSL (Asymmetric Digital Subscriber Line), and in increasing the capacity of storage media such as optical discs.
  • decoding is a process of estimating an original codeword, and a plurality of methods can be considered even if the same code is used.
  • the decoding result is usually given as a codeword or as the information bit string that generates it, but a decoding method that estimates each information bit together with a weight (reliability) with respect to the received sequence is also known. This is called soft output decoding.
  • Optimal soft output decoding outputs, for each information symbol, the conditional probability of that bit or symbol given the received sequence, under the constraint that the bit string constitutes a codeword. This is called a posteriori probability decoding.
  • a posterior probability decoding method with a relatively small amount of calculation is known in decoding processing, particularly in decoding of a code that can be represented by a code trellis with a small number of states.
  • This method can be logically realized as a process on a code trellis, similarly to Viterbi decoding known as a maximum likelihood decoding method of a convolutional code.
  • This method is called BCJR (Bahl, Cocke, Jelinek, Raviv) algorithm or MAP (Maximum A Posteriori probability) algorithm described in Non-Patent Document 1.
  • the input received word Y need not be a hard decision result in which each bit is determined to be 0 or 1; it can be a value with a reliability attached to each bit decision, that is, a soft decision result (soft input value). For this reason, this method is also called soft input / soft output decoding.
  • This technique is an important element in the decoding of the turbo code described in Non-Patent Document 2.
  • For ease of explanation, the case of a binary code is described below; however, the present invention can be extended to the case where symbols are multi-level.
  • FIG. 22 is an explanatory diagram showing the structure of a general turbo encoder 900 and decoder 1000.
  • a turbo encoder 900 shown in FIG. 22A is configured by connecting two systematic convolutional encoders 901 and 902 having feedback via an interleaver 903 in parallel.
  • This convolutional code is called an element code of a turbo code, and a code having 4 or less memories is usually used.
  • FIG. 22 shows an example in the case of 2 memories.
  • the convolutional encoder 901 includes two memories 901a and 901b and three adders 901c to 901e.
  • the convolutional encoder 902 is composed of two memories 902a and 902b and three adders 902c to 902e.
  • the convolutional encoder 901 is referred to as element code 1 and 902 as element code 2, and the parity sequences generated by them are referred to as parity 1 and parity 2, respectively.
  • Interleaver 903 rearranges the output bits from convolutional encoders 901 and 902. The size and design of the interleaver 903 greatly affects the encoding performance.
  • a turbo decoder 1000 shown in FIG. 22(b) includes soft input / soft output decoding means 1001 corresponding to the element codes output from the turbo encoder 900, and memories 1012, 1013, and 1014 that hold the received values corresponding to the information sequence, parity 1, and parity 2, respectively.
  • the turbo code decoding method uses the soft output value obtained by soft output decoding of one element code as a soft input value (prior information) for the other element code, and repeats this exchange.
  • the soft output value exchanged in this iteration is not the value L(t) of Equation 1 but a value Le(t), called extrinsic (external) information, represented by Equation 2 and calculated from L(t).
  • y (t) is a received value for information bit u (t)
  • La (t) is a soft output value obtained by soft output decoding of the other element code used as prior information of u (t)
  • C is a constant determined according to the SN ratio of the communication channel.
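  • The extrinsic-information exchange above can be sketched in a few lines. Since Equations 1 and 2 are not reproduced in this text, the formula Le(t) = L(t) − C·y(t) − La(t) and every name below are assumptions inferred from the surrounding definitions, not the patent's own listing.

```python
import numpy as np

def extrinsic_information(L, y, La, C):
    # Hypothetical sketch of Equation 2: the extrinsic information Le(t)
    # handed to the other element decoder is the full soft output L(t)
    # minus the channel contribution C*y(t) and minus the prior La(t)
    # that the other decoder supplied, so only "new" information is passed.
    L, y, La = (np.asarray(v, dtype=float) for v in (L, y, La))
    return L - C * y - La
```

In a turbo iteration, decoder 1's Le values are interleaved and used as decoder 2's La values, and vice versa.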
  • the processing of the interleaver 1003 and the deinterleaver 1016 is realized as memory address generation processing.
  • the above MAP algorithm (BCJR algorithm) will be described in detail.
  • the MAP algorithm is an algorithm using a code trellis as in the Viterbi algorithm well known as the maximum likelihood decoding method.
  • the code word for the input information changes according to the value of the memory in the encoder. This memory value in the encoder is called the “state” of the encoder. Coding using a convolutional code is performed while changing the state according to the information sequence.
  • the code trellis is a graph representing a combination of transitions of this state, and is configured by assigning an edge to a pair of states where a transition exists, with each state of the encoder at each point as a node.
  • the edge is determined by input information in each state, and the code word at that time is assigned as a label.
  • the connection of edges is called a path, and a codeword sequence corresponds to the path.
  • every transmitted symbol is thus related to the states before and after it. For this reason, by using a code trellis, errors occurring on the transmission channel can be corrected with higher accuracy.
  • a time-invariant convolutional code has a trellis structure that does not change with time. Therefore, if the number of states is small, an algorithm using the code trellis is easy to execute.
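  • The notions of state, edge, and label can be made concrete with a small sketch. The taps below assume the common (1, 5/7) two-memory recursive systematic code; the exact connections of the encoder in FIG. 23 may differ, so treat this as an illustration of trellis construction rather than the patent's code.

```python
def rsc_step(state, u):
    # One step of a 2-memory recursive systematic convolutional encoder
    # (assumed (1, 5/7) taps). state = (m1, m2); u = information bit.
    m1, m2 = state
    d = u ^ m1 ^ m2        # feedback bit shifted into the register
    parity = d ^ m2        # parity output; the systematic output is u itself
    return (d, m1), parity

def build_trellis():
    # Enumerate every edge of the code trellis: an edge exists for each
    # (state, input bit) pair, and its label is the codeword (u, parity).
    edges = {}
    for m1 in (0, 1):
        for m2 in (0, 1):
            for u in (0, 1):
                nxt, p = rsc_step((m1, m2), u)
                edges[((m1, m2), u)] = (nxt, (u, p))
    return edges
```

With 2 memories there are 4 states and 8 edges; a connection of consecutive edges is a path, and a codeword sequence corresponds to a path, as stated above.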
  • FIG. 23 is an explanatory diagram showing a more detailed configuration of the turbo encoder 900 shown in FIG.
  • FIG. 23A shows a convolutional code (number of memories 2) of the element code shown in FIG. 22, and FIG. 23B shows a corresponding code trellis.
  • the initial state is the state in which all the memory values in the encoder are 0, and the encoder state is expressed by the sequence of memory bits.
  • the convolutional encoder 901 shown in FIG. 23A is composed of the two memories 901a and 901b and the three adders 901c to 901e as described above.
  • the MAP algorithm is roughly divided into three types of processing: forward processing, which calculates a path metric reaching each node from the head of the code trellis; backward processing, which calculates a path metric reaching each node from the end of the code trellis; and soft output generation processing, which calculates the soft output (a posteriori probability ratio) of the information symbol at each time point from the results of both.
  • the path metric in forward processing relatively represents the logarithmic value of the probability of reaching each node from the top of the trellis under the received sequence and prior information.
  • the path metric in backward processing relatively represents the logarithmic value of the probability of reaching each node from the end of the trellis.
  • path metrics calculated by forward processing and backward processing at the node at time t and state s are represented by α(t, s) and β(t, s), respectively.
  • γ(t, s, s′) represents a branch metric corresponding to the probability of the transition at time t from state s to state s′, calculated from the received value and prior information (the soft output of the other element code in the case of a turbo code).
  • This can be calculated as the likelihood between the codeword corresponding to the transition from s to s′ and the received value; on a white Gaussian channel, γ(t, s, s′) can easily be calculated from the Euclidean distance between the modulated value of the codeword output on the transition from s to s′ and the received value, together with the prior information on the information bits.
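  • For a white Gaussian channel, the branch metric described above can be sketched as follows. The BPSK mapping, the correlation form (equivalent to the negative squared Euclidean distance up to terms common to all edges), and the prior term are standard choices assumed here, not taken from the patent's equations.

```python
import numpy as np

def branch_metric(y, edge_bits, la_u, u):
    # Log-domain branch metric gamma for one trellis edge (a sketch).
    # y: received soft values for the edge's output bits,
    # edge_bits: the codeword label of the edge (0/1 bits),
    # la_u: prior log-likelihood ratio of the information bit,
    # u: the information bit driving this edge.
    x = 1.0 - 2.0 * np.asarray(edge_bits, dtype=float)   # BPSK: 0->+1, 1->-1
    corr = float(np.dot(np.asarray(y, dtype=float), x))  # channel likelihood
    prior = 0.5 * la_u if u == 0 else -0.5 * la_u        # prior information
    return corr + prior
```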
  • Equation 3 represents forward processing
  • Equation 4 represents backward processing
  • Equation 5 represents soft output generation processing
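  • Equations 3 to 5 are not reproduced in this text, but the soft output generation step of Equation 5 can be sketched in its widely used max form (the data layout with (state, next state, information bit) triples is an assumption):

```python
def soft_output(alpha, beta_next, gamma, edges):
    # A posteriori probability ratio at one time point, max approximation:
    # best path metric alpha(s) + gamma(s, s') + beta(s') over edges whose
    # information bit is 0, minus the best over edges whose bit is 1.
    best = {0: float("-inf"), 1: float("-inf")}
    for s, s2, u in edges:
        m = alpha[s] + gamma[(s, s2)] + beta_next[s2]
        best[u] = max(best[u], m)
    return best[0] - best[1]
```

A positive value favors information bit 0; its magnitude is the reliability carried by the soft output.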
  • An algorithm that simplifies the MAP algorithm by replacing the sum calculations in Equations 3 to 5 with an operation that takes the maximum value is referred to as the Max-Log-MAP algorithm.
  • Equation 3 is simplified as shown in Equation 6 in the Max-Log-MAP algorithm.
  • An algorithm that obtains a correction value for the result of Equation 6 by table lookup from the difference between the path metrics compared in Equation 6 is described in Non-Patent Document 3; this is called the Log-MAP algorithm.
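  • The difference between Max-Log-MAP and Log-MAP is exactly the correction term below; the Jacobian-logarithm form is a well-known identity and is assumed here since the patent's Equation 6 is not reproduced.

```python
import math

def max_log(a, b):
    # Max-Log-MAP: the log of a sum of exponentials is approximated
    # by a plain maximum.
    return max(a, b)

def max_star(a, b):
    # Log-MAP: exact Jacobian logarithm. The correction term depends only
    # on |a - b|, which is why hardware reads it from a small lookup table.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

The approximation error of max_log is at most log 2, reached when the two metrics are equal.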
  • FIG. 24 is an explanatory diagram showing the order of forward processing, backward processing, and soft output generation processing performed by the MAP algorithm (or Max-Log-MAP algorithm). Unless the forward processing result α and the backward processing result β are available at each time point, soft output generation processing is impossible.
  • FIG. 24(a) shows an example of simple scheduling: backward processing is first performed from the end of the code trellis to generate β at each time point (time T1), and then forward processing and soft output generation processing are performed from the beginning of the trellis (time T2). This schedule requires holding β for the entire trellis until the forward pass reaches it.
  • FIG. 24(b) shows an example of scheduling that improves this defect: the trellis is divided into blocks of W time points, and backward processing, forward processing, and soft output generation processing are performed block by block.
  • backward processing is performed on the block W1.
  • backward processing is performed on the block W2 while performing forward processing and soft output generation processing on the block W1.
  • the backward process is performed on the block W3 while the forward process and the soft output generation process are performed on the block W2. Thereafter, the processing is continued in the same manner.
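  • The block pipeline of FIG. 24(b) can be sketched as a schedule; the block indices and the idle slots at the pipeline boundaries are modelled here under the simplifying assumption of one block per step.

```python
def windowed_schedule(T, W):
    # Trellis of length T split into blocks of W time points.
    # At each step, backward processing runs on block k while forward
    # processing and soft output generation run on block k - 1.
    # Returns (backward_block, forward_block) per step; None means idle.
    n_blocks = (T + W - 1) // W
    steps = []
    for k in range(n_blocks + 1):
        backward = k if k < n_blocks else None
        forward = k - 1 if k >= 1 else None
        steps.append((backward, forward))
    return steps
```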
  • Non-Patent Document 3 describes a technique called radix-2^n that improves this drawback.
  • radix-2^n is a technique that reduces the number of cycles, and thus increases speed, by performing the forward processing, backward processing, and soft output generation processing for n time points in one cycle.
  • n is an integer of 2 or more.
  • Radix-2^n not only increases speed: if the throughput is kept the same, the reduced cycle count allows the clock frequency to be lowered, which also reduces the power consumption of the decoding device.
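  • The radix-4 (n = 2) idea can be illustrated by chaining two one-time-point backward recursions so that both complete in what the hardware treats as a single cycle. The max-form recursion and the successor-list layout are assumptions; the patent's Equation 8 is the corresponding two-time-point update.

```python
def backward_step(beta_next, gamma, edges):
    # One time point of backward processing in max form: for each state s,
    # beta(t, s) = max over successors s' of gamma(t, s, s') + beta(t+1, s').
    return {s: max(gamma[(s, s2)] + beta_next[s2] for s2 in succ)
            for s, succ in edges.items()}

def radix4_backward_step(beta_next, gamma_odd, gamma_even, edges):
    # Radix-4 sketch: consume the branch metrics of times 2t+1 and 2t in
    # one call, halving the cycle count relative to one step per cycle.
    beta_mid = backward_step(beta_next, gamma_odd, edges)   # beta(2t+1)
    return backward_step(beta_mid, gamma_even, edges)       # beta(2t)
```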
  • Non-Patent Document 3 describes simplification by approximation processing based on the Log-MAP algorithm using table references in equations 7-8.
  • FIG. 25 is an explanatory diagram showing a trellis representing the transitions over the two time points 2t and 2(t+1) in the two-memory convolutional encoder shown in FIG. 23. Time points 2t and 2(t+1) are drawn with one node marker, and the intermediate time point (2t+1) with another.
  • FIG. 26 is an explanatory diagram showing the backward processing unit 1100 in soft output decoding by radix-4 shown in equations 7-9.
  • the backward processing unit 1100 performs scheduling for storing the backward processing path metric as shown in FIG. As many path metrics as the number of states are generated for even time points.
  • the backward processing unit 1100 includes a two-time-point backward processor 1102 that executes the processing shown in Equation 8, and a register 1103 that temporarily stores β at every second time point; it also has a branch metric memory 1101 that stores the branch metrics γ and a path metric memory 1104.
  • the branch metric memory 1101 supplies the branch metrics γ for two time points in each cycle of processing. The β values at every second time point stored in the register 1103 are also held in the path metric memory 1104 until soft output generation is completed.
  • the register 1103 holds as many path metrics as there are states.
  • FIG. 27 is an explanatory diagram showing a forward processing / soft output generation unit 1200 in soft output decoding by radix-4 shown in equations 7 to 9.
  • the forward processing / soft output generation unit 1200 performs the forward processing and soft output generation processing of FIG. 24(a) in radix-4, executing both simultaneously.
  • the forward processing / soft output generation unit 1200 includes a two-time-point forward processor 1202 that executes the processing shown in Equation 7, a register 1203 that temporarily stores α at every second time point, and a two-time-point soft output generation processor 1204 that executes the processing shown in Equation 9.
  • the branch metric memory 1101 for storing the branch metric ⁇ and the path metric memory 1104 are the same as those shown in FIG.
  • the branch metric memory 1101 supplies the branch metrics γ for two time points in one cycle of processing.
  • the two-time-point soft output generation processor 1204 generates the soft outputs from α and β at time t and from γ at times t and t+1.
  • Patent Document 1 describes a soft output decoding device that reduces the amount of calculation by performing in parallel the processing within the truncation length and processing that goes back beyond the truncation length.
  • Patent Document 2 describes a Viterbi decoding apparatus that reads out surviving path information only from the bank corresponding to the state having the maximum likelihood metric, performs traceback, and does not read out the other, unnecessary surviving path information.
  • Patent Document 3 describes a soft output decoding apparatus that can reduce the number of repetitions of decoding processing by including local soft input / soft output decoding means capable of local input / output of element codes.
  • Patent Document 4 describes a soft output decoding device in which a circuit scale is reduced by providing a memory capable of simultaneously writing and reading data.
  • However, the techniques of Non-Patent Documents 1 to 4 and Patent Documents 1 to 4 cannot be said to be sufficient in this respect and cannot solve the above-described problems.
  • An object of the present invention is to provide a soft output decoder capable of performing the soft output generation processing in radix-2^n soft output decoding at high speed with a simple circuit configuration.
  • a soft output decoder according to the present invention receives an encoded and transmitted bit stream, decodes it, and outputs the result to the outside. It includes: a received value memory for storing the bit stream; branch metric generating means for generating a branch metric for each time point in the trellis diagram based on the contents stored in the received value memory; soft output decoding means for decoding the branch metrics and outputting a soft output code including the a posteriori probability ratio; and hard output determining means for converting the soft output code into a hard output code not including the a posteriori probability ratio and outputting it to an upper layer.
  • the soft output decoding means includes: a backward processing unit that estimates the path metric reaching each node from the end of the code trellis of the branch metrics stored in the branch metric memory; a forward processing unit that estimates the path metric reaching each node from the head of that code trellis; a path metric memory that stores the estimation results of the backward processing unit; and a soft output generation processing unit that calculates the a posteriori probability ratio of the information symbol at each time point from the estimation results of the backward processing unit and the forward processing unit and outputs it as a soft output code.
  • the backward processing unit includes a plurality of mutually connected backward processors, each corresponding to the estimation processing for one time point of the code trellis, and the path metric memory includes a plurality of storage areas, one corresponding to each backward processor.
  • the path metric for each time point is stored in the path metric memory and the decoding process is performed on that basis, so the decoding device does not require a complicated circuit configuration and is easy to speed up. It is thus possible to provide a soft output decoder with the excellent feature that decoding based on a code trellis can be performed at high speed with a simple circuit configuration.
  • FIG. 5 is an explanatory diagram showing a configuration of a backward processing unit in which the configuration of the backward processing unit shown in FIG. 4 is expanded to general radix-2^n (n is an integer of 2 or more).
  • the configurations of the forward processing unit and the soft output generation processing unit shown in FIG. 5, expanded to general radix-2^n (n is an integer of 2 or more).
  • FIG. 18 is an explanatory diagram showing a more detailed configuration of the backward processing unit shown in FIG.
  • FIG. 19 is an explanatory diagram showing in more detail the configuration for the state “00” of the configuration of the first one-time backward processor shown in FIG. 18;
  • FIG. 20 is an explanatory diagram showing a more detailed configuration of the forward processing unit and the soft output generation unit shown in FIG.
  • FIG. 20 is an explanatory diagram showing the configuration of the first soft output generation processor shown in FIG. 19 in more detail.
  • FIG. 22A shows a turbo encoder
  • FIG. 22B shows a turbo decoder
  • FIG. 23A shows a convolutional encoder (number of memories 2) of the element code of FIG. 22, and
  • FIG. 23B shows a corresponding code trellis.
  • FIG. 24 (a) shows an example of a simple scheduling process
  • FIG. 24 (b) shows an example of a scheduling process in which the drawbacks are improved.
  • FIG. 25 is an explanatory diagram showing a trellis representing transitions for two time points of time points 2t and 2 (t + 1) in the convolutional encoder with two memories shown in FIG.
  • FIG. 10 is an explanatory diagram showing a backward processing unit in soft output decoding by radix-4 shown in equations 7-9.
  • FIG. 10 is a block diagram of a forward processing / soft output generation unit in soft output decoding by radix-4 shown in equations 7-9.
  • the soft output decoder 10 is a soft output decoder that decodes a bit stream and outputs the decoded bit stream to the outside.
  • This soft output decoder 10 includes received value memories 101 to 103 for storing a bit stream and branch metric generating means 105 for generating a branch metric for each time point in the trellis diagram based on the contents stored in the received value memory.
  • a soft output decoding unit 110 that decodes the branch metrics and outputs a soft output code including the a posteriori probability ratio, and hard output determination means 106 that converts the soft output code into a hard output code not including the a posteriori probability ratio and outputs it to the outside.
  • the soft output decoding unit 110 includes: a branch metric memory 150 that temporarily stores the branch metrics; a backward processing unit 120 that estimates the path metric reaching each node from the end of the code trellis of the stored branch metrics; a forward processing unit 130 that estimates the path metric reaching each node from the head of that code trellis; a path metric memory 160 that stores the estimation results of the backward processing unit; and a soft output generation processing unit 140 that calculates the a posteriori probability ratio of the information symbol at each time point from the estimation results of the backward processing unit and the forward processing unit and outputs it as a soft output code.
  • the backward processing unit includes a plurality of mutually connected backward processors 121 and 122, each corresponding to the estimation processing for one time point of the code trellis, and the path metric memory 160 includes storage areas 161 and 162 corresponding to the respective backward processors.
  • this backward processing unit and forward processing unit perform path metric estimation processing for a plurality of time points in one cycle.
  • the forward processing unit includes a plurality of forward processors 131 to 132 that are connected to each other and correspond to estimation processing for each time point of the code trellis.
  • the path metric memory 160 includes a plurality of storage areas 161 to 162 corresponding to the respective forward processors, and the soft output generation processing unit calculates the a posteriori probability ratio of the information symbols at each time point based on the estimation results of the backward processing unit and the forward processing unit stored in the path metric memory.
  • the soft output decoder 10 can perform the decoding process based on the code trellis at high speed with a simple circuit configuration. Hereinafter, this will be described in more detail.
  • FIG. 2 is an explanatory diagram showing a configuration of a transmission system 1 including a turbo code decoder 10 which is a soft output decoder according to the first embodiment of the present invention.
  • the transmission system 1 shown in FIG. 2A includes an antenna 20 that receives radio signals by wireless communication, a demodulator 30 that extracts a bit stream from the radio signals received by the antenna 20, and the turbo code decoder 10, which performs error correction decoding on the extracted bit stream.
  • the antenna 20 shown in FIG. 2A may be replaced with a wired transmission line such as ADSL. Further, as shown in FIG. 2B, the antenna 20 may be replaced with an optical pickup 20a that reads recorded data from a storage medium such as an optical disc. That is, the turbo code decoder 10 can be widely applied to any use in which a bit stream is decoded.
  • the bit stream output from the demodulator 30 to the turbo code decoder 10 includes an information reception value and first and second parity reception values.
  • FIG. 1 is an explanatory diagram showing the configuration of the turbo code decoder 10 according to the present embodiment shown in FIG.
  • the turbo code decoder 10 includes an information received value memory 101 that stores the received values from the demodulator 30, first and second parity received value memories 102 and 103, an interleaver 104 that generates an interleaved output from the information received values, a branch metric generation unit 105 that generates branch metrics from the information received values and their interleaved output and from the first and second parity received values, and a soft output decoding unit 110 that decodes the branch metrics and outputs a soft output code.
  • the turbo code decoder 10 also converts the soft output code output from the soft output decoding unit 110 into a hard output code and outputs it to an upper layer, and generates a deinterleaved output (external information) from the soft output code.
  • the branch metric generation means 105 also uses this external information and its interleaved output to generate branch metrics.
  • FIG. 3 is an explanatory diagram showing a more detailed configuration of the soft output decoding unit 110 described in FIG.
  • the soft output decoding unit 110 includes a backward processing unit 120, a forward processing unit 130, a soft output generation processing unit 140, a branch metric memory 150, and a path metric memory 160.
  • the branch metric output from the branch metric generation unit 105 is stored in the branch metric memory 150 and then input to each of the backward processing unit 120, the forward processing unit 130, and the soft output generation processing unit 140.
  • the processing contents of these units will be described later.
  • the content processed by the backward processing unit 120 is stored in the path metric memory 160 and then input to the soft output generation processing unit 140.
  • the soft output generation processing unit 140 generates content that is finally output by the soft output decoding unit 110.
  • the soft output decoding unit 110 is configured to perform processing in the order of backward processing, forward processing, and soft output generation processing as described with reference to FIG. 24.
  • alternatively, the path metric α may be generated first by forward processing and stored in the path metric memory 160, and then the backward processing and soft output generation processing may be performed.
  • the path metric memory 160 has a plurality of storage areas according to the processing stage of the backward processing.
  • α(t) represents the set of α(t, s) over all states s at time t; the same applies to β(t).
  • γ(t) represents the set of all branch metrics at time t.
  • FIG. 4 is an explanatory diagram showing a more detailed configuration of the backward processing unit 120 shown in FIG.
  • the backward processing unit 120 performs two-time-point backward processing when the present embodiment is applied to radix-4, and includes first and second one-time-point backward processors 121 and 122 and a register 123 that temporarily stores β at every second time point.
  • the path metric memory 160 has first and second path metric storage areas 161 and 162 that hold path metrics generated by the respective one-time backward processors 121 and 122.
  • the one-time-point backward processors 121 and 122 may perform the same processing or different processing.
  • the branch metric memory 150 supplies the branch metrics γ for two time points in one cycle of processing. When the register 123 holds β(2(t+1)), the backward processing unit 120 completes the following processing in one cycle.
  • first, the first one-time-point backward processor 121 generates an intermediate path metric β(2t+1) by the processing shown in Equation 4 above, using β(2(t+1)) stored in the register 123 and γ(2t+1) stored in the branch metric memory 150, and stores it in the first path metric storage area 161.
  • next, the second one-time-point backward processor 122 generates β(2t) by the processing shown in Equation 4, using the intermediate path metric β(2t+1) generated by the first one-time-point backward processor 121 and γ(2t) stored in the branch metric memory 150, then updates the register 123 and stores the result in the second path metric storage area 162.
  • FIG. 5 is an explanatory diagram showing a more detailed configuration of the forward processing unit 130 and the soft output generation processing unit 140 shown in FIG.
  • the forward processing unit 130 performs two-time-point forward processing when the present embodiment is applied to radix-4, and includes first and second one-time-point forward processors 131 and 132 and first and second registers 133 and 134 that temporarily store the intermediate path metrics generated by these one-time-point forward processors.
  • the soft output generation processing unit 140 performs the soft output generation processing when the present embodiment is applied to radix-4, and includes a register 141 that temporarily stores the output of the branch metric memory 150, and first and second soft output generation processors 142 and 143 that perform the first and second soft output generation processing from the contents stored in the first and second path metric storage areas 161 and 162 of the path metric memory 160 and the contents stored in the registers 133, 134, and 141.
  • the first one-time-point forward processor 131 generates α(2t+1) by the processing shown in Equation 10 above, using α(2t) held in the register 133 and γ(2t) input from the branch metric memory 150, and stores it in the register 134.
  • the second one-time-point forward processor 132 generates α(2(t+1)) by the processing shown in Equation 10 above, using the intermediate path metric α(2t+1) stored in the register 134 and γ(2t+1) input from the branch metric memory 150, and updates the register 133.
  • the first and second soft output generation processors 142 and 143 generate the soft outputs at time points 2t and 2t−1, respectively.
  • the first soft output generation processor 142 generates the soft output at time 2t by the processing shown in Equation 11, from α(2t) in the register 133, β(2t+1) in the path metric storage area 161, and γ(2t) input from the branch metric memory 150.
  • the second soft output generation processor 143 generates the soft output at time 2t−1 by the processing shown in Equation 11, from α(2t−1) in the register 134, β(2t) in the path metric memory 160, and γ(2t−1) stored in the register 141.
  • The register 141 is then updated to γ(2t+1).
  • The decoding method according to the present embodiment is a turbo code decoding method that decodes a bit stream encoded with a turbo code and outputs the result to the outside. The bit stream is stored in a reception value memory provided in advance; a branch metric for each time point in the trellis diagram is generated based on the contents stored in the reception value memory; the path metric reaching each node at each time point is estimated from the end of the code trellis of the branch metrics; the result of this estimation process from the end of the code trellis is stored in a path metric memory prepared for each time point; the path metric reaching each node at each time point is likewise estimated from the head of the code trellis; the posterior probability ratio of the information symbol at each time point is calculated from the results of the estimation processes from the end and the head of the code trellis and output as a soft output code including the posterior probability ratio; and the soft output code is converted into a hard output code not including the posterior probability ratio and output to the outside. With this configuration and operation, the present embodiment has the following effects.
  • The present embodiment can realize the soft output decoding algorithm on the code trellis based on radix-2^n without increasing the complexity of the soft output generation processing. Therefore, the operation speed of the decoder can be increased and the device scale can be reduced.
  • Specifically, the logical depth of the soft output generation processing shown in Equation 5 is two stages of arithmetic addition and two stages of max circuits (selecting one from four metrics for each of information bits 0 and 1), four stages in total, whereas the radix-4 soft output generation processing shown in Equation 9 requires three stages of arithmetic addition and three stages of max circuits.
  • In the present embodiment, the radix-4 soft output generation processing remains at two stages of arithmetic addition and two stages of max circuits, and the forward processing and the backward processing also consist of two stages of arithmetic addition and two stages of max circuits. That is, according to the present embodiment, the maximum latency can be suppressed to about 2/3 of that of normal radix-4; accordingly, the maximum clock frequency can be raised and the operation speed can be increased.
  • FIG. 6 is an explanatory diagram showing the configuration of a backward processing unit 120n obtained by extending the configuration of the backward processing unit 120 shown in FIG. 4 to the case of a general radix-2^n (n is an integer of 2 or more).
  • The backward processing unit 120n includes an n-time-point backward processing unit configured by connecting n one-time backward processors 121a, 121b, ... 121n, and a register 123n that temporarily stores β every n time points.
  • the path metric memory 160 has n path metric storage areas 161a, 161b,... 161n that hold path metrics generated by the respective one-time backward processors 121a to 121n.
  • When the register 123n stores β(nt), in the next step the processors generate β(nt+1), ..., β(nt+n-1), β(n(t+1)) and store them in the corresponding path metric storage areas 161a to 161n. The register 123n is then updated with β(n(t+1)).
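The operation of the radix-2^n backward unit described above, n chained one-time steps per clock with each intermediate result written to its own storage area, can be sketched as follows. This is a minimal sketch: the two-state trellis and all metric values are illustrative assumptions, not the four-state trellis of the embodiment.

```python
# Sketch of the radix-2^n backward unit: n one-time backward steps chained
# combinationally, each intermediate beta vector written to its own path
# metric storage area. The 2-state trellis and metrics are assumptions.
def one_step_backward(beta_next, gammas):
    """One one-time backward step: beta[s] = max over branches of
    (beta_next[next_state] + gamma)."""
    branches = {0: [(0, gammas[0]), (1, gammas[1])],   # (next_state, gamma)
                1: [(0, gammas[2]), (1, gammas[3])]}
    return [max(beta_next[ns] + g for ns, g in branches[s]) for s in (0, 1)]

def radix_2n_backward(beta_reg, gamma_per_step):
    """Process n time points per 'clock': returns the n intermediate beta
    vectors (one per storage area 161a..161n) and the new register value."""
    storage_areas = []
    beta = beta_reg
    for gammas in gamma_per_step:          # n chained one-time processors
        beta = one_step_backward(beta, gammas)
        storage_areas.append(beta)
    return storage_areas, beta             # register 123n updated last

areas, reg = radix_2n_backward([0.0, 0.0],
                               [(0.3, -0.3, 0.1, -0.1),
                                (-0.2, 0.4, 0.0, 0.5)])
print(areas, reg)
```

Each entry of `areas` plays the role of one path metric storage area, and the returned `reg` plays the role of the register 123n.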
  • FIG. 7 is an explanatory diagram showing the configurations of a forward processing unit 130n and a soft output generation processing unit 140n obtained by extending the forward processing unit 130 and the soft output generation processing unit 140 shown in FIG. 5 to the case of a general radix-2^n (n is an integer of 2 or more).
  • The forward processing unit 130n includes n one-time forward processors 131a, 131b, ... 131n and n-1 registers 134a, 134b, ... 134(n-1) corresponding to the one-time forward processors 131a to 131n.
  • the soft output generation processing unit 140n includes n one-time soft output generation processors 142a to 142n that execute the processing shown in Equation 11 above.
  • In the second embodiment of the present invention, a correction processing unit 400 is provided between the backward processor and the path metric memory, and the output data of the backward processing is corrected based on the Log-MAP algorithm described in Non-Patent Document 3 and then stored in the path metric memory. As a result, the calculation amount and the memory capacity can be reduced while improving the accuracy of the soft output generation processing.
  • FIG. 8 is an explanatory diagram showing the configuration of the turbo code decoder 210 according to the second embodiment of the present invention.
  • The configuration of the turbo code decoder 210 differs from that of the turbo code decoder 10 according to the first embodiment described in FIG. 1 only in the internal configuration of the soft output decoding means 110, which is here called soft output decoding means 310. All other components are referred to by the same names and reference numbers.
  • the turbo code decoder 210 performs a decoding process on the turbo code using the Max-Log-MAP algorithm described above.
  • FIG. 9 is an explanatory diagram showing a more detailed configuration of the soft output decoding means 310 shown in FIG.
  • the configuration of the soft output decoding unit 310 differs from the configuration of the soft output decoding unit 110 according to the first embodiment described in FIG. 3 only in the internal configurations of the backward processing unit 120 and the forward processing unit 130. These are referred to as a backward processing unit 320 and a forward processing unit 330. All other components are referred to by the same name and reference number.
  • FIG. 10 is an explanatory diagram showing a more detailed configuration of the backward processing unit 320 shown in FIG.
  • the configuration of the backward processing unit 320 is different from the configuration of the backward processing unit 120 according to the first embodiment described in FIG. 4 only in the first and second one-time backward processors 121 and 122. These are referred to as first and second one-time backward processors 321 and 322, respectively. All other components are referred to by the same name and reference number.
  • FIG. 11 is an explanatory diagram showing a more detailed configuration of the first and second one-time backward processors 321 and 322 shown in FIG. 10.
  • The register 123 holds the path metrics β(2t,00), β(2t,10), β(2t,01), and β(2t,11) for the respective states.
  • The first one-time backward processor 321 is composed of eight arithmetic addition circuits 321a to 321h that add the corresponding values of β and γ, and max circuits 321i to 321l that compare two numerical values and output the larger one.
  • The second one-time backward processor 322 is likewise composed of eight arithmetic addition circuits 322a to 322h that add the corresponding β and γ values, and max circuits 322i to 322l that compare two numerical values and output the larger one.
  • The first one-time backward processor 321 performs, following the aforementioned Max-Log-MAP algorithm, the processing shown in Equation 12 below for each of γ(2t-1,00), γ(2t-1,10), γ(2t-1,01), and γ(2t-1,11) held in the branch metric memory 150 and each numerical value held in the register 123.
  • γ(t,xy) represents the branch metric of the codeword bit string "xy" at time t; these are collectively referred to as γ(t).
  • β(t,xy) represents the path metric of the state "xy" at time t; these are collectively referred to as β(t).
  • The second one-time backward processor 322 likewise performs, following the aforementioned Max-Log-MAP algorithm, the processing shown in Equation 12 above for each of γ(2t-2,00), γ(2t-2,10), γ(2t-2,01), and γ(2t-2,11) held in the branch metric memory 150 and each numerical value output from the first one-time backward processor 321.
  • The path metric storage area 161 holds β(2t-1) generated by the first one-time backward processor 321, and the path metric storage area 162 holds β(2t-2) generated by the second one-time backward processor 322. The register 123 is also updated with β(2t-2).
  • FIG. 12 is an explanatory diagram showing in more detail the configuration for the state “00” of the first one-time backward processor 321 shown in FIG.
  • the first one-time backward processor 321 includes a simple backward processor 410 that performs the process shown in the above-described equation 12 and a correction processing unit 400 described later.
  • The simple backward processor 410 corresponds to the max circuit 321i and the adders 321a and 321b shown in FIG. 11. The same applies to the states other than "00".
  • The correction processing unit 400 includes a difference unit 401 that takes the difference value |x-y| of the two inputs x and y to the max circuit 321i, a table reference unit 402, and an adder 403. The Log-MAP algorithm realizes the second term on the right side of Equation 14 shown below by table reference.

    ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|x-y|))    (Equation 14)

  • The table reference unit 402 refers to the table 420 stored in advance based on |x-y|, and the adder 403 adds the correction term to max(x, y) selected by the max circuit 321i, yielding the corrected path metric β'(2t-1,00) = max(x, y) + LUT(|x-y|), where LUT() is a table reference function.
  • The table 420 may be stored in the first one-time backward processor 321 or in an external storage device. The output produced through the table reference unit 402 and the adder 403 is stored in the path metric memory 161.
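The correction operation described above (the Jacobian logarithm used by the Log-MAP algorithm, realized with a lookup table) can be sketched in software as follows. The table size and quantization step are illustrative assumptions, not values specified in this document.

```python
import math

# Hypothetical 8-entry correction table: LUT[i] ~ ln(1 + exp(-d)) sampled
# at d = i * STEP. Table size and step are assumptions for illustration.
STEP = 0.5
LUT = [math.log(1.0 + math.exp(-i * STEP)) for i in range(8)]

def lut_correction(d):
    """Table lookup approximating ln(1 + e^(-d)) for d = |x - y| >= 0."""
    idx = int(d / STEP)
    return LUT[idx] if idx < len(LUT) else 0.0  # correction ~ 0 for large d

def max_star(x, y):
    """Log-MAP 'max*' operator; the exact value is ln(e^x + e^y).
    Max-Log-MAP keeps only max(x, y); the table adds the correction term."""
    return max(x, y) + lut_correction(abs(x - y))

exact = math.log(math.exp(1.0) + math.exp(0.4))
print(round(max_star(1.0, 0.4), 3), round(exact, 3))
```

With this operator, the corrected path metric of the text is simply `max_star(x, y)` applied to the two adder outputs feeding each max circuit.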
  • FIG. 13 is an explanatory diagram showing a more detailed configuration of the forward processing unit 330 shown in FIG.
  • The configuration of the forward processing unit 330 differs from that of the forward processing unit 130 according to the first embodiment described in FIG. 4 only in the first and second one-time forward processors 131 and 132, which are here referred to as first and second one-time forward processors 331 and 332. All other components are referred to by the same names and reference numbers.
  • FIG. 14 is an explanatory diagram showing a more detailed configuration of the first and second one-time forward processors 331 and 332 shown in FIG. 13.
  • The register 133 holds the path metrics α(2t,00), α(2t,10), α(2t,01), and α(2t,11) for the respective states.
  • The first one-time forward processor 331 is composed of eight arithmetic addition circuits 331a to 331h that add the corresponding values of α and γ, and max circuits 331i to 331l that compare two numerical values and output the larger one.
  • The second one-time forward processor 332 is likewise composed of eight arithmetic addition circuits 332a to 332h that add the corresponding α and γ values, and max circuits 332i to 332l that compare two numerical values and output the larger one.
  • The first one-time forward processor 331 performs, following the aforementioned Max-Log-MAP algorithm, the corresponding processing for each of γ(2t,00), γ(2t,10), γ(2t,01), and γ(2t,11) held in the branch metric memory 150 and each numerical value held in the register 133.
  • The path metric memory 161 holds α(2t+1) generated by the one-time forward processor 331.
  • The register 133, which stores α at time 2t, is updated to α(2t+2) in the next cycle.
  • The register 141 stores γ(2t+1) used in this processing.
  • FIG. 15 is a table showing, for the forward processing means and the soft output generation means shown in FIG. 9, the information stored in each register, the data read from the memories, and the soft output generated at each time point.
  • The branch metric γ(1) at time 0 is used to generate α(1) and is held in the register 141.
  • L(1) is calculated using γ(1) of the register 141 and α(1) of the register 134 together with β(2).
  • Since the correction processing unit 400 described above can operate in parallel with the second one-time backward processor 322, it does not affect the overall processing time and no degradation in processing speed occurs. The same applies to the second one-time forward processor 332.
  • the accuracy of the path metric value can be further improved, and at the same time, the output characteristics can be improved.
  • The third embodiment of the present invention further adds a path memory 670 to the first one-time backward processor and stores in it an index of the state corresponding to the selected path metric.
  • the max circuit used in the second embodiment can be replaced with a selector, so that the circuit configuration can be further simplified.
  • FIG. 16 is an explanatory diagram showing a configuration of a turbo code decoder 510 according to the third embodiment of the present invention.
  • The configuration of the turbo code decoder 510 differs from that of the turbo code decoder 10 according to the first embodiment described in FIG. 1 only in the internal configuration of the soft output decoding means 110, which is here called soft output decoding means 610. All other components are referred to by the same names and reference numbers.
  • the turbo code decoder 510 performs a decoding process on the turbo code using the Max-Log-MAP algorithm described above.
  • FIG. 17 is an explanatory diagram showing a more detailed configuration of the soft output decoding means 610 shown in FIG.
  • the configuration of the soft output decoding unit 610 differs from the configuration of the soft output decoding unit 110 according to the first embodiment described in FIG. 3 in the internal configurations of the backward processing unit 120 and the soft output generation unit 140. These are referred to as a backward processing unit 620 and a soft output generation unit 640.
  • The forward processing unit 330 is the same as that of the second embodiment described with reference to FIG. 13. Furthermore, a path memory 670 described later is added. All other components are referred to by the same names and reference numbers.
  • FIG. 18 is an explanatory diagram showing a more detailed configuration of the backward processing unit 620 shown in FIG.
  • The configuration of the backward processing unit 620 differs from that of the backward processing unit 320 according to the second embodiment described above with reference to FIG. 10 only in the first one-time backward processor 321, which is here referred to as the first one-time backward processor 621. All other components are referred to by the same names and reference numbers.
  • The first one-time backward processor 621 uses the information stored in the path memory 670 to perform backward processing at time t from β(2(t+1)) of the path metric memory 161, generating an intermediate metric β(2t+1), and combines it with α(2t) and γ(2t) in the register 133 to generate L(2t).
  • The first one-time backward processor 621 stores in the path memory 670 the index of the state corresponding to the path metric selected in this generation process, together with the path metric of the previous time.
  • FIG. 19 is an explanatory diagram showing in more detail the configuration for the state "00" of the first one-time backward processor 621 shown in FIG. 18.
  • The first one-time backward processor 621 includes an arithmetic addition circuit 621a that arithmetically adds β(2t,00) and γ(2t,00), an arithmetic addition circuit 621b that arithmetically adds β(2t,10) and γ(2t,11), a comparator 624i that compares the outputs of these arithmetic addition circuits 621a and 621b, and a selector 621i that switches the output according to the output of the comparator 624i.
  • The first one-time backward processor 621 has the same configuration for the states other than "00".
  • The first one-time backward processor 621 stores the output of the comparator 624i in the path memory 670.
  • FIG. 20 is an explanatory diagram showing a more detailed configuration of the forward processing unit 330 and the soft output generation unit 640 shown in FIG. 17.
  • The forward processing unit 330 is the same as that of the second embodiment already described in FIG. 13.
  • The configuration of the soft output generation unit 640 differs from that of the soft output generation unit 340 according to the second embodiment only in the first soft output generation processor, which is here referred to as the first soft output generation processor 642. Further, a register 645 described later is included. All other components are referred to by the same names and reference numbers.
  • The path memory 670 requires a much smaller storage capacity than the path metric memory 161.
  • FIG. 21 is an explanatory diagram showing the configuration of the first soft output generation processor 642 shown in FIG. 20 in more detail.
  • The first soft output generation processor 642 is composed of eight arithmetic addition circuits 642a to 642h that add the corresponding β and γ values, and selectors 642i to 642l that switch, according to the stored contents of the path memory 670, among the path metrics obtained by the arithmetic addition of β(2t) and γ(2t-1). The combination of the path memory 670 and the selectors 642i to 642l substitutes for the max circuits 322i to 322l in the second embodiment described in FIG. 11.
  • The register 645 temporarily stores β(2(t+1)).
  • The first soft output generation processor 642 uses the information stored in the path memory 670 to perform backward processing at time t from β(2(t+1)) stored in the register 645, generating an intermediate metric β(2t+1), and generates L(2t) using α(2t) and γ(2t) stored in the register 123.
  • In the present embodiment, since the path memory 670 is added, the memory capacity needs to be slightly increased compared with the first and second embodiments described above, but in exchange the soft output generation circuit can be configured more simply.
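The idea of the third embodiment, recording the winning index during backward processing and driving a selector with it instead of repeating the comparison in the soft output stage, can be sketched as follows; the candidate metric values are illustrative assumptions.

```python
# Sketch: record which candidate won each max during backward processing
# (the "path memory"), then reuse the stored index as a selector instead of
# re-comparing in the soft output stage. Metric values are assumptions.
def backward_step_with_index(candidates):
    """Return (selected metric, index of the winner) for one state."""
    idx = max(range(len(candidates)), key=lambda i: candidates[i])
    return candidates[idx], idx

# Backward processing: compute the max once and remember the selection.
candidates = [2.5, 3.1]                 # beta + gamma for the two branches
beta, sel = backward_step_with_index(candidates)
path_memory = [sel]                     # one index bit instead of a metric

# Soft output stage: a selector driven by the path memory replaces the max.
recomputed_candidates = [2.5, 3.1]      # same additions as before
selected = recomputed_candidates[path_memory[0]]
assert selected == beta
print(selected, path_memory)
```

The stored index is one bit per state per time point, which is why the path memory is far smaller than a path metric memory holding full metric values.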
  • a soft output decoder that receives a coded and transmitted bitstream, decodes it, and outputs it to the outside.
  • a reception value memory for storing the received bitstream; branch metric generation means for generating a branch metric for each time point in the trellis diagram based on the contents stored in the reception value memory; and decoding the branch metric Soft output decoding means for outputting a soft output code including a posterior probability ratio, and hard output determining means for converting the soft output code into a hard output code not including a posterior probability ratio and outputting the same to the outside,
  • a branch metric memory that temporarily stores the branch metrics, and a backward processing unit that estimates the path metric reaching each node from the end of the code trellis of the branch metrics stored in the branch metric memory,
  • a forward processing unit that estimates a path metric that reaches each node from the top of the branch metric code trellis stored in the branch metric memory, and a path metric memory that stores a result of estimation processing by the backward processing unit
  • and a soft output generation processing unit that calculates the a posteriori probability ratio of the information symbols at each time point from the estimation processing results of the backward processing unit and the forward processing unit and outputs it as the soft output code.
  • the backward processing unit includes a plurality of backward processors connected to each other and corresponding to the estimation processing for each time point of the code trellis,
  • the soft output decoder wherein the path metric memory comprises a plurality of storage areas corresponding to each of the backward processors.
  • the forward processing unit includes a plurality of forward processors connected to each other and corresponding to the estimation processing for each time point of the code trellis,
  • the path metric memory comprises a plurality of storage areas corresponding to each of the forward processors;
  • the soft output generation processing unit calculates an a posteriori probability ratio of information symbols at each time point from a result of estimation processing by the backward processing unit and the forward processing unit stored in the path metric memory.
  • the soft output decoder according to appendix 1.
  • the backward processing unit performs correction processing based on the Log-MAP algorithm for the path metric stored in the path metric memory by the backward processor that is not the last stage among the plurality of backward processors.
  • the soft output decoding unit includes a path memory in which the backward processor that is not the last stage among the plurality of backward processors stores an index indicating selection information of the path metric,
  • the soft output decoder according to appendix 4, wherein the soft output generation processing unit calculates an a posteriori probability ratio of the information symbol based on the path metric and the index.
  • the present invention can be widely applied to all uses for decoding a bit stream, such as wireless and wired communication, or an optical disc.


Abstract

Provided is a soft output decoder by which a decoding process can be performed with a simple circuit structure at high speed. A soft output decoding means (110) of a soft output decoder (10) comprises a branch metric memory (150) which temporarily stores a branch metric, a backward processing unit (120) which performs an estimation process of a path metric from the end of the code trellis of the branch metric, a forward processing unit (130) which performs an estimation process of a path metric from the head, a path metric memory (160) which stores a result of the estimation process, and a soft output generation processing unit (130) which calculates a posterior probability ratio of an information symbol at each time point from the result of the estimation process and outputs the calculated result as the soft output code, wherein the backward processing unit is provided with a plurality of backward processors (121 and 122) corresponding to each time point of the estimation process, and the path metric memory is provided with a plurality of storage regions (161 and 162) which respectively correspond to the backward processors.

Description

Soft output decoder
The present invention relates to soft output decoding, and more particularly, to reduction of the load imposed by error correction processing at the time of decoding.
The error correction coding technique protects data from errors such as bit inversion occurring on a communication channel during data transmission, through the operations of encoding and decoding. Encoding is the operation of converting information to be transmitted into a codeword with redundancy added. Decoding is the operation of estimating the original codeword (information) from a codeword (received word) into which errors have been mixed, using that redundancy. This technology is already widely used in communications and storage media, and is indispensable particularly for increasing the speed of wireless communication or of wired communication such as ADSL (Asymmetric Digital Subscriber Line), and for increasing the capacity of storage media such as optical discs.
As described above, decoding is a process of estimating the original codeword, and a plurality of methods are conceivable even for the same code. Usually, the decoding result is given as a codeword or as the information bit string that generates it, but a decoding method that estimates each information bit with a weight with respect to the received sequence is also known. This is called soft output decoding. Optimal soft output decoding outputs, under the constraint that each candidate is a bit string constituting a codeword, the conditional probability of each information bit or symbol based on the received sequence for the codeword. This is called a posteriori probability decoding.
In the binary case, it is sufficient for a posteriori probability decoding to generate the log likelihood ratio L(t) shown in Equation 1 below. Here, u(t) is the information bit at time t, Y is the sequence of received values for a codeword, and P(u(t)=b|Y) (b=0,1) is the conditional probability that u(t)=b given the received sequence Y.

    L(t) = ln[ P(u(t)=1|Y) / P(u(t)=0|Y) ]    (Equation 1)
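As an illustration of the log likelihood ratio of Equation 1, it can be computed directly from the two conditional probabilities; the probability values below are assumed for the example.

```python
import math

def log_likelihood_ratio(p1, p0):
    """Equation 1: L(t) = ln( P(u(t)=1|Y) / P(u(t)=0|Y) )."""
    return math.log(p1 / p0)

# Illustrative posterior probabilities: u(t) = 1 with probability 0.9.
L = log_likelihood_ratio(0.9, 0.1)
print(L)  # positive sign -> decide u(t) = 1; the magnitude is the reliability
```

A hard decision keeps only the sign of L(t); a soft output decoder keeps the whole value.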
In decoding, particularly for codes that can be represented by a code trellis with a small number of states, an a posteriori probability decoding method with a relatively small amount of computation is known. Like Viterbi decoding, known as a maximum likelihood decoding method for convolutional codes, this method can logically be realized as processing on a code trellis. It is called the BCJR (Bahl, Cocke, Jelinek, Raviv) algorithm, or the MAP (Maximum A Posteriori probability) algorithm, described in Non-Patent Document 1.
In the MAP algorithm, the received word Y used as input need not be a hard decision result in which each bit is determined to be 0 or 1, but can be a value in which a reliability is attached to each bit decision, that is, a soft decision result (soft input value). For this reason, the method is also called soft input soft output decoding. It is an important element in the decoding of the turbo codes described in Non-Patent Document 2. In the following, for ease of explanation, the case where the symbols are binary is described, but the method can be extended to codes composed of multi-level symbols.
FIG. 22 is an explanatory diagram showing the structure of a general turbo encoder 900 and decoder 1000. The turbo encoder 900 shown in FIG. 22(a) is configured by concatenating in parallel two systematic convolutional encoders 901 and 902 with feedback, via an interleaver 903. These convolutional codes are called the element codes of the turbo code; codes with four or fewer memories are usually used, and FIG. 22 shows an example with two memories.
The convolutional encoder 901 is composed of two memories 901a and 901b and three adders 901c to 901e. The convolutional encoder 902 is composed of two memories 902a and 902b and three adders 902c to 902e. The convolutional encoder 901 is referred to as element code 1 and 902 as element code 2, and the parity sequences they generate are referred to as parity 1 and parity 2, respectively. The interleaver 903 rearranges the output bits from the convolutional encoders 901 and 902. The size and design of the interleaver 903 greatly affect the coding performance.
On the other hand, the turbo decoder 1000 shown in FIG. 22(b) includes soft input soft output decoding means 1001 corresponding to the element codes output from the turbo encoder 900, and memories 1012, 1013, and 1014 that hold the received values corresponding to the information sequence, parity 1, and parity 2, respectively. The turbo code decoding method uses the soft output value obtained by soft output decoding of one element code as the soft input value (a priori information) of the other element code, and repeats this exchange. The soft output value exchanged in this iterative process is not the value L(t) of Equation 1 itself, but a value Le(t), called extrinsic information, calculated from L(t) as shown in Equation 2.

    Le(t) = L(t) - C*y(t) - La(t)    (Equation 2)
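The extrinsic-information exchange of Equation 2 can be illustrated with a short sketch; the numeric values and the helper name `extrinsic` are assumptions for illustration.

```python
def extrinsic(L, y, La, C):
    """Equation 2: Le(t) = L(t) - C*y(t) - La(t).
    L  : soft output of the current element decoder
    y  : received value for the information bit u(t)
    La : a priori value obtained from the other element decoder
    C  : constant determined by the SN ratio of the channel
    """
    return L - C * y - La

# One half-iteration of the exchange (all numeric values are assumed):
Le1 = extrinsic(L=1.8, y=0.5, La=0.0, C=2.0)
print(Le1)  # passed (via the interleaver) as La to the other element decoder
```

Subtracting the channel term C*y(t) and the a priori term La(t) ensures that only information newly produced by this decoder is fed to the other one.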
Here, y(t) is the received value for the information bit u(t), La(t) is the soft output value obtained by soft output decoding of the other element code and used as the a priori information for u(t), and C is a constant determined according to the SN ratio of the communication channel. The processing of the interleaver 1003 and the deinterleaver 1016 is realized as memory address generation processing.
The aforementioned MAP algorithm (BCJR algorithm) is now described in detail. Like the Viterbi algorithm, well known as a maximum likelihood decoding method, the MAP algorithm uses the code trellis. In a convolutional code, the codeword for the input information changes according to the values of the memories in the encoder. These memory values inside the encoder are called the "state" of the encoder. Encoding with a convolutional code produces output while changing the state according to the information sequence.
The code trellis is a graph representation of the combinations of these state transitions; it is constructed by taking each state of the encoder at each time point as a node and assigning an edge to each pair of states between which a transition exists. Each edge is determined by the input information in the respective state, and the codeword output at that time is assigned to it as a label. A concatenation of edges is called a path, and a codeword sequence corresponds to each path.
That is, the state of all transmitted data is related to the states before and after it. Therefore, using a code trellis has the property that errors occurring on the transmission path can be corrected with higher accuracy. In particular, a time-invariant convolutional code has a trellis structure that does not change with time, so if the number of states is small, it is easy to execute an algorithm using the code trellis.
FIG. 23 is an explanatory diagram showing a more detailed configuration of the turbo encoder 900 shown in FIG. 22. FIG. 23(a) shows the convolutional element code of FIG. 22 (two memory elements), and FIG. 23(b) shows the corresponding code trellis. Assume that in the initial state all memory values are 0, and that the encoder state is expressed as the sequence of memory bits. The convolutional encoder 901 shown in FIG. 23(a) consists of the two memories 901a-b and the three adders 901c-e, as described above.
In the convolutional code shown in FIG. 23(b), if the first information bit is 0, the codeword "00" is output and the state at time 1 is "00". If the information bit is 1, the codeword "11" is output and the state at time 1 is "10". Then, from each of the time-1 states "00" and "10", a codeword corresponding to information bit 0 or 1 is output and a transition to a state at time 2 occurs.
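As an illustrative sketch, the trellis table of a two-memory recursive systematic code can be built from its state-update rule. The generator choice below (feedback 1+D+D^2, parity tap 1+D^2) is an assumption; it is one common memory-2 code whose transitions match those just described (state "00": bit 0 outputs "00" and stays, bit 1 outputs "11" and moves to "10").

```python
def rsc_step(state, u):
    """One trellis transition of a memory-2 recursive systematic code
    (assumed generators: feedback 1+D+D^2, parity 1+D^2)."""
    s1, s2 = state
    a = u ^ s1 ^ s2              # feedback bit entering the shift register
    parity = a ^ s2              # parity output
    return (a, s1), (u, parity)  # next state, codeword label (systematic, parity)

def build_trellis():
    """Enumerate tau(s, b) and the codeword label for every (state, bit)."""
    trellis = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for u in (0, 1):
                nxt, cw = rsc_step((s1, s2), u)
                trellis[((s1, s2), u)] = (nxt, cw)
    return trellis
```

With 2 memory bits there are 4 states and 8 edges, i.e. exactly the trellis section drawn in FIG. 23(b).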
The MAP algorithm consists of three kinds of processing: forward processing, which computes the path metric for reaching each node from the beginning of the code trellis; backward processing, which computes the path metric for reaching each node from the end of the code trellis; and soft output generation processing, which uses both results to compute the soft output (a posteriori probability ratio) of the information symbol at each time point.
The path metric in forward processing represents, up to a common offset, the logarithm of the probability of reaching each node from the beginning of the trellis given the received sequence and the a priori information. The path metric in backward processing likewise represents the logarithm of the probability of reaching each node from the end of the trellis.
In the following, let S = {0, 1, …, |S|−1} be the set of states of the convolutional code, and let α(t,s) and β(t,s) denote the path metrics computed by forward and backward processing, respectively, at the node for time t and state s. Further, let γ(t,s,s′) denote the branch metric corresponding to the probability of the transition from state s to state s′ at time t, computed from the received value and the a priori information (for a turbo code, the soft output of the other element code).
This can be computed as the likelihood between the received value and the codeword corresponding to the transition from s to s′. On a white Gaussian channel, γ(t,s,s′) is easily computed from the Euclidean distance between the received value and the modulated value of the codeword output on the transition from s to s′, together with the a priori information on the information bit.
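A minimal sketch of this branch metric computation, assuming BPSK modulation (bit 0 → +1, bit 1 → −1) on an AWGN channel; the correlation form used here is equivalent, up to additive constants common to all branches, to the negative scaled Euclidean distance, and the function signature is an assumption for illustration.

```python
def branch_metric(y, codeword_bits, la, C):
    """Log-domain branch metric gamma for one trellis edge.
    y: received samples; codeword_bits: (systematic, parity) label of the edge;
    la: a priori value from the other element code; C: channel constant."""
    x = [1.0 - 2.0 * b for b in codeword_bits]   # BPSK modulation of the label
    corr = sum(yi * xi for yi, xi in zip(y, x))  # correlation with the received values
    u = codeword_bits[0]                         # systematic (information) bit
    prior = 0.5 * la * (1.0 - 2.0 * u)           # a priori contribution
    return 0.5 * C * corr + prior
```

On a noiseless channel the metric of the transmitted edge exceeds that of any other edge, which is the property the recursions below exploit.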
At this point, the forward processing, backward processing, and soft output generation can each be expressed in terms of the values one time point earlier or later, as shown in Equations 3 to 5 below. Equation 3 gives the forward processing, Equation 4 the backward processing, and Equation 5 the soft output generation.

[Equation 3]
\[
\alpha(t,s) = \log \sum_{s':\,\tau(s',b)=s,\ b=0,1} \exp\bigl(\alpha(t-1,s') + \gamma(t-1,s',s)\bigr)
\]

[Equation 4]
\[
\beta(t,s) = \log \sum_{s':\,\tau(s,b)=s',\ b=0,1} \exp\bigl(\beta(t+1,s') + \gamma(t,s,s')\bigr)
\]

[Equation 5]
\[
L(t) = \log \sum_{s,s':\,\tau(s,1)=s'} \exp\bigl(\alpha(t,s)+\gamma(t,s,s')+\beta(t+1,s')\bigr) - \log \sum_{s,s':\,\tau(s,0)=s'} \exp\bigl(\alpha(t,s)+\gamma(t,s,s')+\beta(t+1,s')\bigr)
\]
Hereafter, on lines other than the displayed equations, a summation whose subscript is, for example, "s′: τ(s′,b)=s, b=0,1" is written in the form Σ_{s′: τ(s′,b)=s, b=0,1}. Here τ(s′,b)=s means that state s′ transitions to state s on information bit b, and Σ_{s′: τ(s′,b)=s, b=0,1} denotes a sum over all states s′ that transition into s at the next time point. Similarly, Σ_{s,s′: τ(s,b)=s′} denotes a sum over all state pairs {s,s′} for which the information bit on the transition from s to s′ is b.
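For illustration, the three recursions of Equations 3 to 5 can be sketched directly in the log domain. The indexing of the trellis and branch metrics by (state, bit), the all-zero starting state, and the unterminated trellis are assumptions made for this sketch.

```python
from math import exp, log

def logsumexp(vals):
    """Numerically stable log(sum(exp(v))) used by the Log-MAP recursions."""
    m = max(vals)
    return m + log(sum(exp(v - m) for v in vals))

def log_map(trellis, states, gammas, T):
    """Sketch of Equations 3-5. trellis[(s, b)] gives the next state tau(s, b);
    gammas[t][(s, b)] is the branch metric of that transition.
    Returns the soft outputs L(t) for t = 0 .. T-1."""
    NEG = -1e30
    alpha = [{s: NEG for s in states} for _ in range(T + 1)]
    beta = [{s: NEG for s in states} for _ in range(T + 1)]
    alpha[0][states[0]] = 0.0          # assumed: encoder starts in the first state
    for s in states:                   # assumed: no trellis termination
        beta[T][s] = 0.0
    for t in range(T):                 # Equation 3: forward recursion
        for s in states:
            for b in (0, 1):
                s2 = trellis[(s, b)]
                m = alpha[t][s] + gammas[t][(s, b)]
                alpha[t + 1][s2] = logsumexp([alpha[t + 1][s2], m])
    for t in range(T - 1, -1, -1):     # Equation 4: backward recursion
        for s in states:
            vals = [beta[t + 1][trellis[(s, b)]] + gammas[t][(s, b)]
                    for b in (0, 1)]
            beta[t][s] = logsumexp(vals)
    L = []
    for t in range(T):                 # Equation 5: soft output
        num, den = [], []
        for s in states:
            for b in (0, 1):
                s2 = trellis[(s, b)]
                m = alpha[t][s] + gammas[t][(s, b)] + beta[t + 1][s2]
                (num if b == 1 else den).append(m)
        L.append(logsumexp(num) - logsumexp(den))
    return L
```

The sign of L(t) is the hard decision for u(t), and its magnitude is the reliability passed between element decoders.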
The Max-Log-MAP algorithm is a modification of the MAP algorithm in which the summations in Equations 3 to 5 are replaced by maximum-value operations. This eliminates the conversions to exp and log, allows the computation to proceed with the same Add-Compare-Select (ACS) processing used in the Viterbi algorithm, and greatly simplifies the calculation. For example, Equation 3 above simplifies in the Max-Log-MAP algorithm to Equation 6.

[Equation 6]
\[
\alpha(t,s) = \max_{s':\,\tau(s',b)=s,\ b=0,1} \bigl(\alpha(t-1,s') + \gamma(t-1,s',s)\bigr)
\]
As an algorithm that raises the accuracy of the Max-Log-MAP approximation to the MAP algorithm and thereby improves decoding performance efficiently, Non-Patent Document 3 describes computing, by table lookup on the difference between the path metrics compared in Equation 6, a correction value to be added to the result of Equation 6; this is called the Log-MAP algorithm.
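A minimal sketch of this table-lookup correction: the exact Jacobian identity log(e^a + e^b) = max(a,b) + log(1 + e^{-|a-b|}) is approximated by quantizing the correction term. The table size (8 entries) and step (0.5) below are illustrative assumptions, not values taken from this document.

```python
import math

STEP = 0.5
TABLE = [math.log1p(math.exp(-i * STEP)) for i in range(8)]  # quantized log(1+e^-d)

def max_star(a, b):
    """Log-MAP ACS: max plus a correction term looked up from the
    path-metric difference |a - b|."""
    d = abs(a - b)
    if d >= len(TABLE) * STEP:
        return max(a, b)              # correction is negligible for large differences
    return max(a, b) + TABLE[int(d / STEP)]
```

Dropping the correction term entirely recovers the Max-Log-MAP ACS of Equation 6.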
FIG. 24 is an explanatory diagram showing the order of the forward, backward, and soft output generation processing performed in the MAP algorithm (or Max-Log-MAP algorithm). Soft output generation at a given time point is impossible unless both the forward value α and the backward value β are available there. FIG. 24(a) shows an example of a simple schedule: backward processing is first performed from the end of the code trellis to generate β for every time point (period T1), and then forward processing and soft output generation are performed from the beginning of the trellis (period T2).
However, the schedule of FIG. 24(a) incurs a delay equal to the trellis length, i.e., the information length, and is therefore unsuited to high-speed operation. Moreover, β must be held for the entire information length, requiring a correspondingly large memory.
FIG. 24(b) shows an example of a schedule that remedies this drawback: the trellis is divided into blocks of W time points, and backward processing followed by forward processing and soft output generation is performed block by block. In period T1, backward processing is performed on block W1. In period T2, forward processing and soft output generation are performed on block W1 while backward processing is performed on block W2. In period T3, forward processing and soft output generation are performed on block W2 while backward processing is performed on block W3, and so on.
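The block pipeline just described can be sketched as a schedule generator: at each stage, backward processing runs on one block while forward processing and soft output generation run on the previous block. This sketch only enumerates the work order; it assumes nothing about the warm-up handling at block boundaries.

```python
def schedule(K, W):
    """Per-stage work list for the FIG. 24(b)-style pipeline over an
    information length K split into blocks of W time points."""
    n_blocks = (K + W - 1) // W
    stages = []
    for stage in range(n_blocks + 1):
        work = []
        if stage < n_blocks:
            work.append(("backward", stage))        # backward on block `stage`
        if stage >= 1:
            work.append(("forward+soft", stage - 1))  # forward/soft on previous block
        stages.append(work)
    return stages
```

The pipeline finishes one stage after the last backward block, so the total latency is roughly one block beyond the information length rather than twice it.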
The schedule illustrated in FIG. 24(b) reduces the delay and the memory capacity to some extent. However, a number of processing cycles at least equal to the information length is still required. Non-Patent Document 3 describes a technique called radix-2^n that improves on this: n time points' worth of forward, backward, and soft output generation processing are performed in one cycle, reducing the cycle count and increasing speed. Here n is an integer of 2 or more.
Radix-2^n not only increases speed: for the same throughput, the reduced cycle count allows a lower clock frequency, which also reduces the power consumption of the decoder. Non-Patent Document 4 describes an example with radix-4, i.e., processing n = 2 time points in one cycle. Equation 7 gives the forward processing, Equation 8 the backward processing, and Equation 9 the soft output generation.

[Equation 7]
\[
\alpha(2(t+1),s) = \max_{s',s'':\,\tau(s'',b_0)=s',\ \tau(s',b_1)=s} \bigl(\alpha(2t,s'') + \gamma(2t,s'',s') + \gamma(2t+1,s',s)\bigr)
\]

[Equation 8]
\[
\beta(2t,s) = \max_{s',s'':\,\tau(s,b_0)=s',\ \tau(s',b_1)=s''} \bigl(\beta(2(t+1),s'') + \gamma(2t,s,s') + \gamma(2t+1,s',s'')\bigr)
\]

[Equation 9]
\[
L(2t+i) = \max_{\substack{s,s',s'':\,\tau(s,b_0)=s',\ \tau(s',b_1)=s''\\ b_i=1}} M(s,s',s'') \;-\; \max_{\substack{s,s',s'':\,\tau(s,b_0)=s',\ \tau(s',b_1)=s''\\ b_i=0}} M(s,s',s''), \quad i=0,1,
\]
where \(M(s,s',s'') = \alpha(2t,s) + \gamma(2t,s,s') + \gamma(2t+1,s',s'') + \beta(2(t+1),s'')\).
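As an illustrative sketch of the radix-4 idea, one forward update covering two trellis steps can be written by maximizing out the intermediate odd-time state, in Max-Log form. The (state, bit) indexing of the trellis and branch metrics is an assumption carried over from the earlier sketches.

```python
def radix4_forward_step(alpha2t, trellis, gamma2t, gamma2t1, states):
    """One radix-4 forward update (cf. Equation 7): two trellis steps per
    cycle, with the state at time 2t+1 maximized out."""
    NEG = -1e30
    nxt = {s: NEG for s in states}
    for s in states:
        for b0 in (0, 1):
            mid = trellis[(s, b0)]           # state at time 2t+1
            for b1 in (0, 1):
                s2 = trellis[(mid, b1)]      # state at time 2(t+1)
                m = alpha2t[s] + gamma2t[(s, b0)] + gamma2t1[(mid, b1)]
                if m > nxt[s2]:
                    nxt[s2] = m              # ACS over the 4 two-step paths
    return nxt
```

Each target state now compares 2^2 = 4 incoming two-step paths per source state instead of 2 one-step paths, which is the source of the circuit complexity discussed next.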
Non-Patent Document 3 describes simplifying Equations 7 and 8 by approximation based on the Log-MAP algorithm with table lookup. FIG. 25 is an explanatory diagram showing the trellis representing the transitions over the two time points 2t to 2(t+1) for the two-memory convolutional encoder of FIG. 23(a). Time points 2t and 2(t+1) are marked ●, and the intermediate time point 2t+1 is marked ○.
FIG. 26 is an explanatory diagram showing the backward processing unit 1100 used in radix-4 soft output decoding according to Equations 7 to 9. The backward processing unit 1100 carries out, in radix-4 form, the schedule of FIG. 24(a) in which the path metrics of backward processing are stored in memory. A path metric is generated for each state at every even time point.
The backward processing unit 1100 consists of a two-time-point backward processor 1102, which executes the processing of Equation 8, and a register 1103, which temporarily stores β at every second time point. Alongside these are a branch metric memory 1101, which stores the branch metrics γ, and a path metric memory 1104.
The branch metric memory 1101 supplies two time points' worth of branch metrics γ for each cycle of processing. The values of β at every second time point held in the register 1103 are also retained in the path metric memory 1104 until soft output generation is complete. The register 1103 likewise holds a number of path metrics corresponding to the number of states.
FIG. 27 is an explanatory diagram showing the forward processing / soft output generation unit 1200 used in radix-4 soft output decoding according to Equations 7 to 9. The forward processing / soft output generation unit 1200 carries out the forward processing and soft output generation of FIG. 24(a) in radix-4 form, executing forward processing and soft output generation simultaneously.
The forward processing / soft output generation unit 1200 consists of a two-time-point forward processor 1202, which executes the processing of Equation 7, a register 1203, which temporarily stores α at every second time point, and a two-time-point soft output generation processor 1204, which executes the processing of Equation 9. The branch metric memory 1101 storing the branch metrics γ and the path metric memory 1104 are the same as those shown in FIG. 26.
The branch metric memory 1101 supplies two time points' worth of branch metrics γ per processing cycle. The two-time-point soft output generation processor 1204 generates the soft outputs from α and β at time t and γ at times t and t+1.
Besides the aforementioned Non-Patent Documents 1 to 4, the following patent documents are related. Patent Document 1 describes a soft output decoder that reduces the amount of computation by performing, in parallel, processing within the truncation length and processing that traces back beyond the truncation length. Patent Document 2 describes a Viterbi decoder that performs traceback by reading survivor path information only from the bank corresponding to the state with the maximum-likelihood metric, without reading other, unneeded survivor path information.
Patent Document 3 describes a soft output decoder that reduces the number of decoding iterations by providing local soft-input/soft-output decoding means capable of local input and output of the element codes. Patent Document 4 describes a soft output decoder whose circuit scale is reduced by providing a memory that can be written and read simultaneously.
Patent Document 1: WO 99/62183
Patent Document 2: JP 2001-186025 A
Patent Document 3: JP 2007-006541 A
Patent Document 4: JP 2008-182442 A
However, performing the radix-4 soft output decoding of Equations 7 to 9 on the code trellis involves processing two time points per cycle, so the circuit configuration, particularly for the soft output generation of Equation 9, becomes complex, making it difficult to speed up the decoding. Seeking further speedup with radix-2^n for n of 3 or more is more difficult still.
The techniques described in Non-Patent Documents 1 to 4 and Patent Documents 1 to 4 are not sufficient in this respect and cannot solve the above problem.
An object of the present invention is to provide a soft output decoder capable of performing the soft output generation in radix-2^n soft output decoding at high speed with a simple circuit configuration.
To achieve the above object, a soft output decoder according to the present invention receives an encoded, transmitted bit stream, decodes it, and outputs the result. It comprises: a received value memory that stores the received bit stream; branch metric generation means that generates a branch metric for each time point of the trellis diagram from the contents stored in the received value memory; soft output decoding means that decodes the branch metrics and outputs a soft output code containing a posteriori probability ratios; and hard output determination means that converts the soft output code into a hard output code without a posteriori probability ratios and outputs it to an upper layer. The soft output decoding means comprises: a branch metric memory that temporarily stores the branch metrics; a backward processing unit that estimates the path metric for reaching each node from the end of the code trellis of the branch metrics stored in the branch metric memory; a forward processing unit that estimates the path metric for reaching each node from the beginning of that code trellis; a path metric memory that stores the results of the estimation by the backward processing unit; and a soft output generation processing unit that computes, from the estimation results of the backward and forward processing units, the a posteriori probability ratio of the information symbol at each time point and outputs it as the soft output code. The backward processing unit comprises a plurality of mutually connected backward processors, each corresponding to the estimation processing of one time point of the code trellis, and the path metric memory comprises a plurality of storage areas, one corresponding to each backward processor.
Because the present invention stores the path metric for every single time point in the path metric memory and performs the decoding on that basis, as described above, it requires no processing that estimates states two or more steps at once; the decoder therefore needs no complex circuit configuration, and speedup is easy. This makes it possible to provide a soft output decoder with the excellent property that decoding based on the code trellis can be performed at high speed with a simple circuit configuration.
FIG. 1 is an explanatory diagram showing the configuration of a soft output decoder (turbo code decoder) according to the first embodiment of the present invention.
FIG. 2 is an explanatory diagram showing the configuration of a transmission system including the turbo code decoder according to the first embodiment; FIG. 2(a) shows a transmission system using wireless communication, and FIG. 2(b) a transmission system using a storage medium such as an optical disk.
FIG. 3 is an explanatory diagram showing the soft output decoding means of FIG. 1 in more detail.
FIG. 4 is an explanatory diagram showing the backward processing unit of FIG. 3 in more detail.
FIG. 5 is an explanatory diagram showing the forward processing unit and soft output generation processing unit of FIG. 3 in more detail.
FIG. 6 is an explanatory diagram showing the backward processing unit of FIG. 4 extended to the general radix-2^n case (n an integer of 2 or more).
FIG. 7 is an explanatory diagram showing the forward processing unit and soft output generation processing unit of FIG. 5 extended to the general radix-2^n case (n an integer of 2 or more).
FIG. 8 is an explanatory diagram showing the configuration of a turbo code decoder according to the second embodiment of the present invention.
FIG. 9 is an explanatory diagram showing the soft output decoding means of FIG. 8 in more detail.
FIG. 10 is an explanatory diagram showing the backward processing unit of FIG. 9 in more detail.
FIG. 11 is an explanatory diagram showing the first and second one-time-point backward processors of FIG. 10 in more detail.
FIG. 12 is an explanatory diagram showing in more detail the configuration of the first one-time-point backward processor of FIG. 11 for state "00".
FIG. 13 is an explanatory diagram showing the forward processing unit of FIG. 9 in more detail.
FIG. 14 is an explanatory diagram showing the first and second one-time-point forward processors of FIG. 13 in more detail.
FIG. 15 is a table showing, for the forward processing means and soft output generation means of the embodiment of FIG. 9, the information stored in the registers, the data read from memory, and the soft outputs generated at each time point.
FIG. 16 is an explanatory diagram showing the configuration of a turbo code decoder according to the third embodiment of the present invention.
FIG. 17 is an explanatory diagram showing the soft output decoding means of FIG. 16 in more detail.
FIG. 18 is an explanatory diagram showing the backward processing unit of FIG. 17 in more detail.
FIG. 19 is an explanatory diagram showing in more detail the configuration of the first one-time-point backward processor of FIG. 18 for state "00".
FIG. 20 is an explanatory diagram showing the forward processing unit and soft output generation unit of FIG. 17 in more detail.
FIG. 21 is an explanatory diagram showing the configuration of the first soft output generation processor of FIG. 20 in more detail.
FIG. 22 is an explanatory diagram showing the structure of a general turbo encoder and decoder; FIG. 22(a) shows the turbo encoder and FIG. 22(b) the turbo decoder.
FIG. 23 is an explanatory diagram showing the turbo encoder of FIG. 22 in more detail; FIG. 23(a) shows the convolutional element encoder (two memories) and FIG. 23(b) the corresponding code trellis.
FIG. 24 is an explanatory diagram showing the order of forward, backward, and soft output generation processing in the MAP algorithm (or Max-Log-MAP algorithm); FIG. 24(a) shows an example of a simple schedule and FIG. 24(b) an example of a schedule improving on its drawbacks.
FIG. 25 is an explanatory diagram showing the trellis representing the transitions over the two time points 2t and 2(t+1) for the two-memory convolutional encoder of FIG. 23(a).
FIG. 26 is an explanatory diagram showing the backward processing unit in radix-4 soft output decoding according to Equations 7 to 9.
FIG. 27 is a block diagram of the forward processing / soft output generation unit in radix-4 soft output decoding according to Equations 7 to 9.
(First Embodiment)
The configuration of the first embodiment of the present invention is described below with reference to FIGS. 1 and 2.
First, the basic content of the embodiment is described, followed by more specific details.
The soft output decoder 10 according to this embodiment is a soft output decoder that decodes a bit stream and outputs the result. The soft output decoder 10 comprises received value memories 101 to 103 that store the bit stream; branch metric generation means 105 that generates a branch metric for each time point of the trellis diagram from the contents stored in the received value memories; soft output decoding means 110 that decodes the branch metrics and outputs a soft output code containing a posteriori probability ratios; and hard output determination means 106 that converts the soft output code into a hard output code without a posteriori probability ratios and outputs it externally. The soft output decoding means comprises a branch metric memory 150 that temporarily stores the branch metrics; a backward processing unit 120 that estimates the path metric for reaching each node from the end of the code trellis of the stored branch metrics; a forward processing unit 130 that estimates the path metric for reaching each node from the beginning of that code trellis; a path metric memory 160 that stores the results of the estimation by the backward processing unit; and a soft output generation processing unit 140 that computes, from the estimation results of the backward and forward processing units, the a posteriori probability ratio of the information symbol at each time point and outputs it as the soft output code. The backward processing unit comprises a plurality of mutually connected backward processors 121 to 122, each corresponding to the estimation processing of one time point of the code trellis, and the path metric memory 160 comprises a plurality of storage areas 161 to 162, one corresponding to each backward processor.
The backward processing unit and the forward processing unit each estimate path metrics for a plurality of time points per cycle. The forward processing unit comprises a plurality of mutually connected forward processors 131 to 132, each corresponding to the estimation processing of one time point of the code trellis. Further, the path metric memory 160 comprises a plurality of storage areas 161 to 162 corresponding to the forward processors, and the soft output generation processing unit computes the a posteriori probability ratio of the information symbol at each time point from the estimation results of the backward and forward processing units stored in the path metric memory.
With the above configuration, the soft output decoder 10 can perform decoding based on the code trellis at high speed with a simple circuit configuration.
This is described in more detail below.
FIG. 2 is an explanatory diagram showing the configuration of a transmission system 1 including the turbo code decoder 10, a soft output decoder according to the first embodiment of the present invention. The transmission system 1 shown in FIG. 2(a) consists of an antenna 20 that receives radio signals, a demodulator 30 that extracts a bit stream from the radio signals received by the antenna 20, and the turbo code decoder 10 that performs error correction and decoding on the extracted bit stream.
The antenna 20 of FIG. 2(a) may be replaced by a wired transmission system such as ADSL. Further, as shown in FIG. 2(b), the antenna 20 may be replaced by an optical pickup 20a that reads recorded data from a storage medium such as an optical disk. That is, the turbo code decoder 10 is broadly applicable to any use that decodes a bit stream. The bit stream output from the demodulator 30 to the turbo code decoder 10 contains an information received value and first and second parity received values.
FIG. 1 is an explanatory diagram showing the configuration of the turbo code decoder 10 according to this embodiment, shown in FIG. 2. The turbo code decoder 10 comprises an information received value memory 101 that stores received values from the demodulator 30, first and second parity received value memories 102 and 103, an interleaver 104 that generates an interleaved output from the information received values, branch metric generation means 105 that generates branch metrics from the information received values and their interleaved output together with the first and second parity received values, and soft output decoding means 110 that decodes the branch metrics and outputs a soft output code.
 The turbo code decoder 10 also comprises a hard output decision means 106 that converts the soft output code output from the soft output decoding means 110 into a hard output code and outputs it to an upper layer, a deinterleaver 107 that generates a deinterleaved output from the soft output code, an extrinsic information memory 108 that stores the soft output code and its deinterleaved output as extrinsic information, and an interleaver 109 that further generates an interleaved output from the extrinsic information. The branch metric generation means 105 also uses this extrinsic information and its interleaved output when generating the branch metrics.
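The data flow among the interleavers, the extrinsic information memory, and the soft output decoding means can be sketched in software as follows. This is an illustrative model only, not part of the disclosed circuit: `siso_decode` merely stands in for the soft output decoding means 110 (its body here is a toy placeholder, not the real trellis processing), and a fixed permutation stands in for the interleavers 104/109 and the deinterleaver 107.

```python
import random

def make_interleaver(n, seed=0):
    """A fixed pseudo-random permutation standing in for interleavers 104/109."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def siso_decode(info_llr, parity_llr, extrinsic_in):
    """Placeholder for the soft output decoding means 110.  A real SISO stage
    would run backward/forward processing over the code trellis; here only the
    interface is modeled (a priori + channel values in, extrinsic values out)."""
    return [0.5 * p for p in parity_llr]            # toy extrinsic output

def turbo_iteration(info_llr, parity1, parity2, extrinsic, perm):
    # First half-iteration: first constituent code, natural order.
    ext1 = siso_decode(info_llr, parity1, extrinsic)
    # Second half-iteration: second constituent code sees interleaved data.
    info_i = [info_llr[j] for j in perm]
    ext1_i = [ext1[j] for j in perm]
    ext2_i = siso_decode(info_i, parity2, ext1_i)
    ext2 = [0.0] * len(ext2_i)
    for k, j in enumerate(perm):                    # deinterleaver 107
        ext2[j] = ext2_i[k]
    return ext2

llr = [1.0, -0.8, 0.3, -1.2]                        # information reception values
perm = make_interleaver(len(llr))
ext = [0.0] * len(llr)                              # extrinsic information memory 108
for _ in range(2):
    ext = turbo_iteration(llr, [0.5 * x for x in llr], [0.5 * x for x in llr],
                          ext, perm)
# Hard output decision means 106: sign of (channel value + extrinsic value).
hard = [1 if (a + e) < 0 else 0 for a, e in zip(llr, ext)]
```

The point of the sketch is the wiring, not the metric arithmetic: extrinsic information produced by one half-iteration is permuted, consumed by the other constituent decoder, and de-permuted back before the next pass.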
 FIG. 3 is an explanatory diagram showing the soft output decoding means 110 of FIG. 1 in more detail. The soft output decoding means 110 includes a backward processing unit 120, a forward processing unit 130, a soft output generation processing unit 140, a branch metric memory 150, and a path metric memory 160.
 The branch metrics output from the branch metric generation means 105 are stored in the branch metric memory 150 and then input to each of the backward processing unit 120, the forward processing unit 130, and the soft output generation processing unit 140. The processing performed by each of these units is described later. The results produced by the backward processing unit 120 are stored in the path metric memory 160 and then input to the soft output generation processing unit 140. The soft output generation processing unit 140 generates the content finally output by the soft output decoding means 110.
 The soft output decoding means 110 shown here performs processing in the order of backward processing, forward processing, and soft output generation processing, as described with reference to FIG. 24. Alternatively, the roles of the backward processing unit 120 and the forward processing unit 130 may be exchanged: the forward processing is performed first to generate the path metrics α, which are held in the path metric memory 160, and the backward processing and soft output generation processing are performed afterwards.
 As described later, the path metric memory 160 has a plurality of storage areas corresponding to the processing stages of the backward processing.
 In the following, α(t) denotes the set of α(t, s) over all states s at time point t, and β(t) is defined likewise. γ(t) denotes the set of all branch metrics at time point t.
 FIG. 4 is an explanatory diagram showing the backward processing unit 120 of FIG. 3 in more detail. The backward processing unit 120 performs two-time-point backward processing for the case where the present embodiment is applied to radix-4, and has first and second one-time-point backward processors 121 and 122 and a register 123 that temporarily stores β at every second time point. The path metric memory 160 has first and second path metric storage areas 161 and 162 that hold the path metrics generated by the one-time-point backward processors 121 and 122, respectively.
 The one-time-point backward processors 121 and 122 may perform the same processing or different processing. The branch metric memory 150 supplies branch metrics γ for two time points in each processing cycle. When the register 123 holds β(2(t+1)), the backward processing unit 120 completes the following processing in one cycle.
 The first one-time-point backward processor 121 generates the intermediate path metric β(2t+1) from β(2(t+1)) stored in the register 123 and γ(2t+1) stored in the branch metric memory 150, according to the formula shown in Equation 4 above, and stores it in the first path metric storage area 161.
 The second one-time-point backward processor 122 generates β(2t) from the intermediate path metric β(2t+1) generated by the first one-time-point backward processor 121 and from γ(2t) stored in the branch metric memory 150, by the processing shown in Equation 10 below; it updates the register 123 and stores the result in the second path metric storage area 162.
[Equation 10]
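The formula images of Equations 4 and 10 are not reproduced in this text, but the cascade they describe can be sketched with the standard Max-Log-MAP backward recursion, β(t, s) = max over successors s' of [β(t+1, s') + γ(t, s, s')], applied twice per cycle as processors 121 and 122 do. The 4-state connectivity `NEXT` below is an assumed example, not the trellis of the patent's figures.

```python
# Hypothetical 4-state trellis: NEXT[s] lists the successor states of s.
NEXT = {0: (0, 2), 1: (0, 2), 2: (1, 3), 3: (1, 3)}

def backward_step(beta_next, gamma_t):
    """One time point of backward processing (assumed Max-Log-MAP recursion):
    beta(t, s) = max over successors s' of beta(t+1, s') + gamma(t, s, s')."""
    return {s: max(beta_next[sp] + gamma_t[(s, sp)] for sp in NEXT[s])
            for s in NEXT}

def radix4_backward_cycle(beta_2t2, gamma_2t1, gamma_2t):
    """One radix-4 cycle: the first stage derives the intermediate metric
    beta(2t+1) (written to storage area 161), the second derives beta(2t)
    (written to storage area 162; register 123 is updated with it)."""
    beta_2t1 = backward_step(beta_2t2, gamma_2t1)
    beta_2t = backward_step(beta_2t1, gamma_2t)
    return beta_2t1, beta_2t

beta_reg = {0: 0.0, 1: -1.0, 2: 0.5, 3: -0.5}       # register 123: beta(2(t+1))
gamma = {(0, 0): 1.0, (0, 2): 0.0, (1, 0): 0.5, (1, 2): 2.0,
         (2, 1): 0.0, (2, 3): 1.0, (3, 1): -1.0, (3, 3): 0.5}
b_mid, b_even = radix4_backward_cycle(beta_reg, gamma, gamma)
```

Both stage outputs are kept, which is the point of the two storage areas: the soft output generation later needs β at every single time point, not just every second one.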
 FIG. 5 is an explanatory diagram showing the forward processing unit 130 and the soft output generation processing unit 140 of FIG. 3 in more detail. The forward processing unit 130 performs two-time-point forward processing for the case where the present embodiment is applied to radix-4, and has first and second one-time-point forward processors 131 and 132 and first and second registers 133 and 134 that temporarily store the intermediate path metrics generated by these one-time-point forward processors.
 The soft output generation processing unit 140 performs the soft output generation processing for the case where the present embodiment is applied to radix-4, and has a register 141 that temporarily stores the branch metrics output from the branch metric memory 150, and first and second soft output generation processors 142 and 143 that perform the soft output generation processing from the contents stored in the first and second path metric storage areas 161 and 162 of the path metric memory 160 and the contents stored in the registers 133, 134, and 141.
 The operation when the register 133 holds α(2t) is as follows. First, the first one-time-point forward processor 131 generates the intermediate path metric α(2t+1) from α(2t) held in the register 133 and γ(2t) input from the branch metric memory 150, by the processing shown in Equation 10 above, and stores it in the register 134.
 The second one-time-point forward processor 132 generates α(2(t+1)) from the intermediate path metric α(2t+1) stored in the first path metric storage area 161 of FIG. 4 and γ(2t+1) input from the branch metric memory 150, by the processing shown in Equation 10 above, and updates the register 133.
 The first and second soft output generation processors 142 and 143 generate the soft outputs for the individual time points 2t and 2t+1, respectively. The first soft output generation processor 142 generates the soft output for time point 2t from α(2t) in the register 133, β(2t+1) in the path metric storage area 161, and γ(2t) input from the branch metric memory 150, by the processing shown in Equation 11 below.
[Equation 11]
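The formula image of Equation 11 is not reproduced, but in the Max-Log-MAP formulation the soft output at a time point is the difference between the best information-bit-1 branch sum and the best information-bit-0 branch sum of α(2t, s) + γ(2t, s, s') + β(2t+1, s'). The sketch below assumes a hypothetical 4-state trellis and bit labeling; it is illustrative only, not the patent's figures.

```python
# Hypothetical 4-state trellis and info-bit labels (illustrative only).
NEXT = {0: (0, 2), 1: (0, 2), 2: (1, 3), 3: (1, 3)}
BIT = {(s, sp): 0 if sp == NEXT[s][0] else 1 for s in NEXT for sp in NEXT[s]}

def soft_output(alpha_t, gamma_t, beta_t1):
    """Assumed form of Equation 11:
    L(t) = max over bit-1 branches of (alpha + gamma + beta)
         - max over bit-0 branches of (alpha + gamma + beta)."""
    best = {0: float("-inf"), 1: float("-inf")}
    for (s, sp), g in gamma_t.items():
        best[BIT[(s, sp)]] = max(best[BIT[(s, sp)]],
                                 alpha_t[s] + g + beta_t1[sp])
    return best[1] - best[0]

alpha = {0: 0.0, 1: 1.0, 2: 0.5, 3: -1.0}   # register 133: alpha(2t)
beta = {0: 1.0, 1: 0.0, 2: 0.5, 3: -0.5}    # storage area 161: beta(2t+1)
gamma = {(0, 0): 1.0, (0, 2): 0.0, (1, 0): 0.5, (1, 2): 2.0,
         (2, 1): 0.0, (2, 3): 1.0, (3, 1): -1.0, (3, 3): 0.5}
L_2t = soft_output(alpha, gamma, beta)       # soft output for time point 2t
```

A positive result favors information bit 1, a negative result bit 0, and the magnitude is the reliability used as extrinsic information in the next half-iteration.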
 The second soft output generation processor 143 generates the soft output for time point 2t-1 from α(2t-1) in the register 134, β(2t) in the path metric storage area 161, and γ(2t-1) stored in the register 141, by the processing shown in Equation 11 above. The register 141 is then updated to γ(2t+1).
(Overall operation of the first embodiment)
 Next, the overall operation of the above embodiment is described. The decoding method according to the present embodiment is a turbo code decoding method that decodes a bit stream encoded with a turbo code and outputs the result. First, the bit stream is stored in reception value memories provided in advance, and branch metrics for each time point of the trellis diagram are generated from the stored contents. Starting from the end of the code trellis, the path metric reaching each node is estimated one time point at a time, and the result of this estimation from the end of the trellis is stored, time point by time point, in a path metric memory provided in advance. The path metric reaching each node is then estimated one time point at a time from the beginning of the code trellis, and from the results of the estimation from the end and from the beginning of the trellis, the a posteriori probability ratio of the information symbol at each time point is computed to form a soft output code containing the a posteriori probability ratios. The soft output code is converted into a hard output code that does not contain the a posteriori probability ratios and is output to the outside.
 With this configuration and operation, the present embodiment provides the following effects.
 The present embodiment can realize a soft output decoding algorithm on a code trellis based on radix-2^n without increasing the complexity of the soft output generation processing. Accordingly, the operating speed of the decoder can be increased and the circuit scale can be improved.
 For example, for the convolutional code of FIG. 2, in ordinary soft output decoding based on the one-time-point Max-Log-MAP algorithm, the logical depth of the soft output generation processing shown in Equation 5 is two stages of arithmetic addition and two stages of max circuits (selecting one out of the four metrics for each of information bits 0 and 1), four stages in total, whereas the radix-4 soft output generation processing shown in Equation 9 requires three stages of arithmetic addition and three stages of max circuits.
 In contrast, with the present embodiment, the radix-4 soft output generation processing remains at two stages of arithmetic addition and two stages of max circuits, and the forward processing and backward processing likewise require two stages of arithmetic addition and two stages of max circuits. In other words, the present embodiment suppresses the maximum latency to about 2/3 of that of ordinary radix-4 processing, so the maximum clock frequency can be raised correspondingly and the operation can be sped up.
(Extension of the first embodiment)
 The first embodiment described above covers only the radix-4 case, but it can easily be extended to the radix-2^n case (where n is an integer of 2 or more).
 FIG. 6 is an explanatory diagram showing the configuration of a backward processing unit 120n obtained by extending the backward processing unit 120 of FIG. 4 to the general radix-2^n case (n being an integer of 2 or more). The backward processing unit 120n has an n-time-point backward processing means composed of n concatenated one-time-point backward processors 121a, 121b, ..., 121n, and a register 123n that temporarily stores β at every n-th time point. The path metric memory 160 has n path metric storage areas 161a, 161b, ..., 161n that hold the path metrics generated by the respective one-time-point backward processors 121a to 121n.
 When the register 123n stores β(nt), in the next step the processors generate β(nt+1), ..., β(nt+n-1), β(n(t+1)), which are held in the corresponding path metric storage areas 161a to 161n. The register 123n is then updated with β(n(t+1)).
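A software sketch of this n-stage chain, using the same assumed Max-Log-MAP backward recursion and hypothetical 4-state trellis as before (the actual circuits and trellis are given by the patent's figures, not here):

```python
NEXT = {0: (0, 2), 1: (0, 2), 2: (1, 3), 3: (1, 3)}  # hypothetical trellis

def backward_step(beta_next, gamma_t):
    """One one-time-point backward processor (assumed Max-Log-MAP form)."""
    return {s: max(beta_next[sp] + gamma_t[(s, sp)] for sp in NEXT[s])
            for s in NEXT}

def radix2n_backward_cycle(beta_reg, gammas):
    """n concatenated processors 121a..121n: starting from the contents of
    register 123n, each stage peels off one time point and writes its result
    to its own path metric storage area 161a..161n; the last result also
    updates register 123n for the next cycle."""
    outputs, beta = [], beta_reg
    for gamma_t in gammas:               # one iteration per backward processor
        beta = backward_step(beta, gamma_t)
        outputs.append(beta)             # -> storage areas 161a..161n
    return outputs

beta_reg = {0: 0.0, 1: -1.0, 2: 0.5, 3: -0.5}
gamma = {(0, 0): 1.0, (0, 2): 0.0, (1, 0): 0.5, (1, 2): 2.0,
         (2, 1): 0.0, (2, 3): 1.0, (3, 1): -1.0, (3, 3): 0.5}
outs = radix2n_backward_cycle(beta_reg, [gamma, gamma])   # n = 2 example
```

Setting n = 2 reduces this to the radix-4 cycle of FIG. 4; larger n simply lengthens the chain and adds one storage area per stage.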
 FIG. 7 is an explanatory diagram showing the configuration of a forward processing unit 130n and a soft output generation processing unit 140n obtained by extending the forward processing unit 130 and the soft output generation processing unit 140 of FIG. 5 to the general radix-2^n case (n being an integer of 2 or more). The forward processing unit 130n has n concatenated one-time-point forward processors 131a, 131b, ..., 131n and n-1 registers 134a, 134b, ..., 134(n-1) corresponding to the one-time-point forward processors 131a to 131n.
 When α(nt) is stored in the register 133, the registers 134a to 134(n-1) hold α(nt-1), α(nt-2), ..., α(nt-n+1). The soft output generation processing unit 140n includes n one-time-point soft output generation processors 142a to 142n that execute the processing shown in Equation 11 above.
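Correspondingly, one radix-2^n cycle on the forward and soft output side can be sketched as below: n forward updates advance α by n time points while n soft output processors consume the buffered α values together with the stored backward metrics. The trellis, bit labels, and Max-Log-MAP recursions are the same assumed examples as above, not the patent's figures.

```python
# Hypothetical 4-state trellis and info-bit labels (illustrative only).
NEXT = {0: (0, 2), 1: (0, 2), 2: (1, 3), 3: (1, 3)}
BIT = {(s, sp): 0 if sp == NEXT[s][0] else 1 for s in NEXT for sp in NEXT[s]}

def forward_step(alpha_t, gamma_t):
    """One one-time-point forward processor (assumed Max-Log-MAP form):
    alpha(t+1, s') = max over predecessors s of alpha(t, s) + gamma(t, s, s')."""
    nxt = {sp: float("-inf") for sp in NEXT}
    for s, succs in NEXT.items():
        for sp in succs:
            nxt[sp] = max(nxt[sp], alpha_t[s] + gamma_t[(s, sp)])
    return nxt

def soft_output(alpha_t, gamma_t, beta_t1):
    """Assumed form of Equation 11 (best bit-1 branch minus best bit-0 branch)."""
    best = {0: float("-inf"), 1: float("-inf")}
    for (s, sp), g in gamma_t.items():
        best[BIT[(s, sp)]] = max(best[BIT[(s, sp)]],
                                 alpha_t[s] + g + beta_t1[sp])
    return best[1] - best[0]

def radix2n_forward_cycle(alpha_reg, gammas, betas):
    """n forward processors 131a..131n plus n soft output processors 142a..142n:
    one cycle emits n soft outputs and advances register 133 by n time points.
    `betas` holds the backward metrics read from the path metric storage areas."""
    alphas, L = [alpha_reg], []
    for g in gammas[:-1]:
        alphas.append(forward_step(alphas[-1], g))
    for a, g, b in zip(alphas, gammas, betas):
        L.append(soft_output(a, g, b))
    new_alpha = forward_step(alphas[-1], gammas[-1])   # -> register 133
    return new_alpha, L

gamma = {(0, 0): 1.0, (0, 2): 0.0, (1, 0): 0.5, (1, 2): 2.0,
         (2, 1): 0.0, (2, 3): 1.0, (3, 1): -1.0, (3, 3): 0.5}
alpha0 = {0: 0.0, 1: 1.0, 2: 0.5, 3: -1.0}
betas = [{0: 1.0, 1: 0.0, 2: 0.5, 3: -0.5},
         {0: 0.5, 1: 0.5, 2: 0.0, 3: 0.0}]
new_alpha, L = radix2n_forward_cycle(alpha0, [gamma, gamma], betas)
```

Each cycle therefore produces n soft outputs in parallel, which is how the radix-2^n organization trades hardware for throughput.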
 With the above configuration, a turbo code decoder having the same effects as the first embodiment can be obtained for the general radix-2^n case (n being an integer of 2 or more) as well.
(Second embodiment)
 In the second embodiment of the present invention, compared with the first embodiment described above, a correction processing unit 400 is provided between the backward processor and the path metric memory, and the output data of the backward processing is corrected based on the Log-MAP algorithm described in Non-Patent Document 3 before being stored in the path metric memory. This improves the accuracy of the soft output generation processing while reducing the amount of computation and the memory capacity.
 FIG. 8 is an explanatory diagram showing the configuration of a turbo code decoder 210 according to the second embodiment of the present invention. The configuration of the turbo code decoder 210 differs from that of the turbo code decoder 10 according to the first embodiment described with FIG. 1 only in the internal configuration of the soft output decoding means 110, which is therefore referred to as soft output decoding means 310. All other components are referred to by the same names and reference numbers. The turbo code decoder 210 performs decoding processing on turbo codes using the Max-Log-MAP algorithm described above.
 FIG. 9 is an explanatory diagram showing the soft output decoding means 310 of FIG. 8 in more detail. The configuration of the soft output decoding means 310 differs from that of the soft output decoding means 110 according to the first embodiment described with FIG. 3 only in the internal configurations of the backward processing unit 120 and the forward processing unit 130, which are therefore referred to as a backward processing unit 320 and a forward processing unit 330. All other components are referred to by the same names and reference numbers.
 FIG. 10 is an explanatory diagram showing the backward processing unit 320 of FIG. 9 in more detail. The configuration of the backward processing unit 320 differs from that of the backward processing unit 120 according to the first embodiment described with FIG. 4 only in the first and second one-time-point backward processors 121 and 122, which are therefore referred to as first and second one-time-point backward processors 321 and 322. All other components are referred to by the same names and reference numbers.
 FIG. 11 is an explanatory diagram showing the first and second one-time-point backward processors 321 and 322 of FIG. 10 in more detail. The register 123 holds a path metric for each of the states: β(2t,00), β(2t,10), β(2t,01), and β(2t,11).
 The first one-time-point backward processor 321 consists of eight arithmetic adder circuits 321a to 321h that add the corresponding β and γ values, and max circuits 321i to 321l that compare two values and output the larger one. The second one-time-point backward processor 322 likewise consists of eight arithmetic adder circuits 322a to 322h that add the corresponding β and γ values, and max circuits 322i to 322l that compare two values and output the larger one.
 Using the Max-Log-MAP algorithm described above, the first one-time-point backward processor 321 performs the processing shown in Equation 12 below on each of γ(2t-1,00), γ(2t-1,10), γ(2t-1,01), and γ(2t-1,11) held in the branch metric memory 150 and on each of the values held in the register 123.
 Here, γ(t,xy) denotes the branch metric for the codeword bit string "xy" at time point t, written collectively as γ(t). Likewise, β(t,xy) denotes the path metric for "xy" at time point t, written collectively as β(t).
[Equation 12]
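The formula image of Equation 12 is not reproduced, but per the description above and FIG. 19 it is an eight-adder, four-max update in which each state's new metric is the larger of two sums β(2t, s') + γ(2t-1, xy). The sketch below uses that structure; only the "00" row of the transition table follows the description of FIG. 19, and the other rows are an assumed, illustrative labeling.

```python
# (successor state, codeword bits "xy") per state.  The "00" row follows the
# description of FIG. 19; the remaining rows are hypothetical.
SUCC = {
    "00": (("00", "00"), ("10", "11")),
    "10": (("01", "10"), ("11", "01")),
    "01": (("00", "11"), ("10", "00")),
    "11": (("01", "01"), ("11", "10")),
}

def backward_update(beta_2t, gamma_2t1):
    """Assumed form of Equation 12: eight arithmetic adders 321a-h feeding four
    max circuits 321i-l, one per state:
    beta(2t-1, s) = max over the two branches of beta(2t, s') + gamma(2t-1, xy)."""
    return {s: max(beta_2t[sp] + gamma_2t1[xy] for sp, xy in SUCC[s])
            for s in SUCC}

beta = {"00": 0.0, "10": 1.0, "01": -0.5, "11": 0.5}      # register 123
gamma = {"00": 0.25, "01": -0.25, "10": 0.5, "11": 0.75}  # gamma(2t-1, xy)
beta_prev = backward_update(beta, gamma)
```

Each of the four dictionary entries corresponds to one add-compare-select butterfly: two adders plus one max circuit per state, eight adders and four max circuits in total, matching the circuit count given above.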
 Using the Max-Log-MAP algorithm described above, the second one-time-point backward processor 322 likewise performs the processing shown in Equation 12 above on each of γ(2t-2,00), γ(2t-2,10), γ(2t-2,01), and γ(2t-2,11) held in the branch metric memory 150 and on each of the values output from the first one-time-point backward processor 321.
 The path metric storage area 161 holds β(2t-1) generated by the first one-time-point backward processor 321, and the path metric storage area 162 holds β(2t-2) generated by the second one-time-point backward processor 322. The register 123 is also updated with β(2t-2).
 FIG. 12 is an explanatory diagram showing in more detail the portion of the first one-time-point backward processor 321 of FIG. 11 for the state "00". The first one-time-point backward processor 321 includes a simple backward processor 410 that performs the processing shown in Equation 12 above, and a correction processing unit 400 described below.
 The simple backward processor 410 includes the max circuit 322i and the adders 322a and 322b shown in FIG. 11. The same applies to the states other than "00". The correction processing unit 400 consists of a difference means 401 that takes the absolute difference |x-y| of the input values x and y to the max circuit 321i shown in Equation 13, a table reference means 402 that corrects this difference value by reference to a table provided in advance, and an adder 403 that adds the output of the max circuit 321i and the output of the table reference means 402.
[Equation 13]
 The Log-MAP algorithm is based on Equation 14 below, and realizes the second term on the right-hand side of Equation 14 by table lookup.
[Equation 14]  ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|x-y|))
 The table reference means 402 refers to the table 420 stored in advance and, for the path metric β'(2t-1,00) := max(x, y) selected by the max circuit 321i, outputs the correction value shown in Equation 15 based on |x-y|, where LUT() denotes the table lookup function. The table 420 may be stored inside the first one-time-point backward processor 321 or in an external storage device. The output obtained through the table reference means 402 and the adder 403 is stored in the path metric storage area 161.
[Equation 15]  β(2t-1,00) = β'(2t-1,00) + LUT(|x-y|)
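The correction path can be sketched as follows. The Jacobian identity of Equation 14 and the lookup of Equation 15 are as described above; the table quantization (step size and number of entries) is an assumption chosen for illustration, not a value taken from the patent.

```python
import math

STEP, ENTRIES = 0.25, 32   # hypothetical quantization of |x - y| for table 420
LUT = [math.log(1.0 + math.exp(-i * STEP)) for i in range(ENTRIES)]

def lut_correction(d):
    """Table reference means 402: correction term ln(1 + e^-d) served by
    table 420.  Beyond the table range the term is negligible, treated as 0."""
    i = int(d / STEP)
    return LUT[i] if i < ENTRIES else 0.0

def max_star(x, y):
    """Simple backward processor 410 (adders + max circuit) plus correction
    processing unit 400: max*(x, y) = max(x, y) + LUT(|x - y|)."""
    return max(x, y) + lut_correction(abs(x - y))
```

For equal inputs the correction is ln 2, its largest value, and for widely separated inputs max* degenerates to the plain max of the Max-Log-MAP algorithm, which is why the lookup table can stay small.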
 FIG. 13 is an explanatory diagram showing the forward processing unit 330 of FIG. 9 in more detail. The configuration of the forward processing unit 330 differs from that of the forward processing unit 130 according to the first embodiment described with FIG. 5 only in the first and second one-time-point forward processors 131 and 132, which are therefore referred to as first and second one-time-point forward processors 331 and 332. All other components are referred to by the same names and reference numbers.
 FIG. 14 is an explanatory diagram showing the first and second one-time-point forward processors 331 and 332 of FIG. 13 in more detail. The register 133 holds a path metric for each of the states: α(2t,00), α(2t,10), α(2t,01), and α(2t,11).
 The first one-time-point forward processor 331 consists of eight arithmetic adder circuits 331a to 331h that add the corresponding α and γ values, and max circuits 331i to 331l that compare two values and output the larger one. The second one-time-point forward processor 332 likewise consists of eight arithmetic adder circuits 332a to 332h that add the corresponding α and γ values, and max circuits 332i to 332l that compare two values and output the larger one.
 Using the Max-Log-MAP algorithm described above, the first one-time-point forward processor 331 performs its processing on each of γ(2t,00), γ(2t,10), γ(2t,01), and γ(2t,11) held in the branch metric memory 150 and on each of the values held in the register 133.
 The path metric storage area 161 holds α(2t+1) generated by the one-time-point forward processor 331. The register 133, which stores α for time point 2t, is updated to α(2t+2) in the next cycle. The register 141 stores the α(2t+1) generated in the course of this processing.
 FIG. 15 is a table showing, for the forward processing means and the soft output generation means of FIG. 9 in the present embodiment, the information stored in each register, the data read from the memories, and the soft outputs generated at each time point. The branch metric γ(1) at time point 0 is used to generate α(1) and is held in the register 141. At time point 1, L(1) is computed using γ(1) in the register 141, α(1) in the register 134, and β(2).
 The operation of the correction processing unit 400 described above can be performed in parallel with the second one-time-point backward processor 322, so it does not affect the overall processing time and causes no degradation of the processing speed. The same applies to the second one-time-point forward processor 332.
 As a result, the present embodiment can further improve the accuracy of the path metric values and, at the same time, improve the output characteristics.
(Third embodiment)
 In the third embodiment of the present invention, compared with the second embodiment described above, a path memory 670 is further added to the first one-time-point forward processor, and the index of the state corresponding to the selected path metric is stored in it. This makes it possible to replace the max circuits used in the second embodiment with selectors, so the circuit configuration can be further simplified.
 FIG. 16 is an explanatory diagram showing the configuration of a turbo code decoder 510 according to the third embodiment of the present invention. The configuration of the turbo code decoder 510 differs from that of the turbo code decoder 10 according to the first embodiment described with FIG. 1 only in the internal configuration of the soft output decoding means 110, which is therefore referred to as soft output decoding means 610. All other components are referred to by the same names and reference numbers. The turbo code decoder 510 performs decoding processing on turbo codes using the Max-Log-MAP algorithm described above.
 FIG. 17 is an explanatory diagram showing the soft output decoding means 610 of FIG. 16 in more detail. The configuration of the soft output decoding means 610 differs from that of the soft output decoding means 110 according to the first embodiment described with FIG. 3 in the internal configurations of the backward processing unit 120 and the soft output generation unit 140, which are therefore referred to as a backward processing unit 620 and a soft output generation unit 640. The forward processing unit 330 is the same as in the second embodiment described with FIG. 9. In addition, a path memory 670, described later, is added. All other components are referred to by the same names and reference numbers.
 FIG. 18 is an explanatory diagram showing the backward processing unit 620 of FIG. 17 in more detail. The configuration of the backward processing unit 620 differs from that of the backward processing unit 320 according to the second embodiment described with FIG. 10 only in the first one-time-point backward processor 321, which is therefore referred to as a first one-time-point backward processor 621. All other components are referred to by the same names and reference numbers.
 Using the information stored in the path memory 670, the first one-time-point backward processor 621 generates, at time point t, the intermediate metric β(2t+1) of the backward processing from β(2(t+1)) in the path metric storage area 161 and γ(2t+1), and generates L(2t) using α(2t) in the register 133 together with γ(2t). The first one-time-point backward processor 621 stores in the path memory 670, together with the path metric of the preceding time point, the index of the state corresponding to the selected path metric that arises in this generation process.
 FIG. 19 is an explanatory diagram showing in more detail the configuration of the first one-time-point backward processor 621 shown in FIG. 18 for the state "00". For the state "00", the first one-time-point backward processor 621 comprises an arithmetic addition circuit 621a that arithmetically adds β(2t,00) and γ(2t,00); an arithmetic addition circuit 621b that arithmetically adds β(2t,10) and γ(2t,11); a comparator 624i that compares the outputs of these arithmetic addition circuits 621a and 621b; and a selector 621i that switches its output according to the output of the comparator 624i. The configuration of the first one-time-point backward processor 621 for the states other than "00" is identical.
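The add-compare-select operation performed for state "00" by the circuits 621a, 621b, 624i, and 621i can be sketched as follows (a minimal illustration only; the dictionary keys, the tie-breaking rule, and the branch wiring into state "00" are assumptions based on the description above, not the patent's exact circuit):

```python
def acs_state_00(beta, gamma):
    """Add-compare-select for state "00" of the backward recursion.

    beta and gamma are dicts keyed by (time-point, state) labels; the two
    candidate branches into state "00" are assumed to come from states
    "00" and "10", as in the description of FIG. 19.
    """
    cand0 = beta[("2t", "00")] + gamma[("2t", "00")]   # adder 621a
    cand1 = beta[("2t", "10")] + gamma[("2t", "11")]   # adder 621b
    index = 0 if cand0 >= cand1 else 1                 # comparator 624i
    metric = cand0 if index == 0 else cand1            # selector 621i
    return metric, index  # index is what would go into path memory 670
```

The returned one-bit index records which candidate won, which is exactly the selection information the path memory holds.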
 In the forward processing of the Max-Log-MAP algorithm, if information is available indicating from which state's path metric at time point (t+1) the path metric of state s at time point t is determined, the max circuits shown in FIG. 10 can be replaced with the selectors 621o to 621r. For this reason, the first one-time-point backward processor 621 stores this information in the path memory 670.
 FIG. 20 is an explanatory diagram showing the configurations of the forward processing unit 330 and the soft output generation unit 640 shown in FIG. 17 in more detail. The forward processing unit 330 is the same as in the second embodiment described with reference to FIG. 11. The configuration of the soft output generation unit 640 differs from that of the soft output generation unit 340 according to the second embodiment described with reference to FIG. 13 in the first soft output generation processor 142, which is therefore referred to as the first soft output generation processor 642. It further includes a register 645, described later. All other components are referred to by the same names and reference numerals.
 For a rate-1/n convolutional code, one bit per state is sufficient for this index. Since a path metric memory typically requires 5 to 8 bits to store one state, the path memory 670 requires far less storage capacity than the path metric memory 161.
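As a rough check of this storage comparison (the state count, trellis length, and 8-bit metric width below are illustrative assumptions taken from the 5-to-8-bit range cited above, not values from the patent):

```python
# Illustrative size comparison: path metric memory vs. path memory,
# for a 4-state rate-1/n code over T stored trellis time points.
num_states = 4
T = 1024                 # number of stored time points (assumed)
metric_bits = 8          # per-state path metric width (5-8 bits typical)

path_metric_memory_bits = num_states * metric_bits * T  # stores metrics
path_memory_bits = num_states * 1 * T                   # 1 index bit/state

ratio = path_metric_memory_bits / path_memory_bits
print(ratio)  # 8.0 -> the index memory is 8x smaller in this setting
```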
 FIG. 21 is an explanatory diagram showing the configuration of the first soft output generation processor 642 shown in FIG. 20 in more detail. The first soft output generation processor 642 comprises eight arithmetic addition circuits 642a to 642h that add the corresponding β and γ values, and selectors 642i to 642l that switch, according to the contents stored in the path memory 670, among the path metrics obtained by arithmetically adding β(2t) and γ(2t−1). Combining the path memory 670 with the selectors 642i to 642l thus substitutes for the max circuits 322i to 322l of the second embodiment shown in FIG. 11. The register 645 temporarily stores β(2(t+1)).
 Using the information stored in the path memory 670, the first soft output generation processor 642 generates, at time point t, the intermediate metric β(2t+1) of the backward processing from β(2(t+1)) stored in the register 645, using γ(2t+1), and further generates L(2t) using α(2t) and γ(2t) stored in the register 123.
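The two operations described above can be sketched in Python: a selector-driven backward step that replaces the max circuit with an index lookup, and a Max-Log-MAP soft-output (LLR) combination of α, γ, and β. This is a simplified sketch under assumed conventions (flat per-state lists, a two-branch trellis, and per-hypothesis γ lists), not the patent's exact datapath:

```python
def backward_step_with_index(beta_next, gamma, index):
    """Recompute one backward metric using a stored selection index.

    Instead of re-running the max over candidate path metrics (the max
    circuits of FIG. 11), the selector picks the candidate recorded in
    the path memory during the first backward pass.
    """
    candidates = [b + g for b, g in zip(beta_next, gamma)]
    return candidates[index]  # selector driven by path memory 670

def soft_output(alpha, beta, gamma_info1, gamma_info0):
    """Max-Log-MAP a posteriori LLR:
    L = max over info-bit-1 branches of (alpha + gamma + beta)
      - max over info-bit-0 branches of (alpha + gamma + beta).
    """
    m1 = max(a + g + b for a, g, b in zip(alpha, gamma_info1, beta))
    m0 = max(a + g + b for a, g, b in zip(alpha, gamma_info0, beta))
    return m1 - m0
```

Because the index was stored during the backward pass, the recomputation needs only an addition and a multiplexer per state rather than a compare-and-select.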
 In the present embodiment described above, the addition of the path memory 670 requires a slight increase in memory capacity compared with the first and second embodiments described above; in exchange, however, the soft output generation circuit can be configured more simply.
 Although the present invention has been described with reference to the specific embodiments shown in the drawings, the present invention is not limited to those embodiments; any hitherto known configuration may be adopted as long as the effects of the present invention are achieved.
 The main points of the novel technical content of each of the embodiments described above can be summarized as follows. Note that although part or all of the above embodiments can be summarized as the following novel techniques, the present invention is not necessarily limited thereto.
(Supplementary Note 1) A soft output decoder that receives an encoded and transmitted bitstream, decodes it, and outputs the result, comprising:
 a received value memory that stores the received bitstream; branch metric generation means that generates a branch metric for each time point in a trellis diagram based on the contents stored in the received value memory; soft output decoding means that decodes the branch metrics and outputs a soft output code including a posteriori probability ratios; and hard output determination means that converts the soft output code into a hard output code not including a posteriori probability ratios and outputs it,
 wherein the soft output decoding means comprises: a branch metric memory that temporarily stores the branch metrics; a backward processing unit that estimates the path metrics reaching each node from the end of the code trellis of the branch metrics stored in the branch metric memory; a forward processing unit that estimates the path metrics reaching each node from the beginning of the code trellis of the branch metrics stored in the branch metric memory; a path metric memory that stores the results of the estimation processing by the backward processing unit; and a soft output generation processing unit that calculates the a posteriori probability ratio of the information symbol at each time point from the results of the estimation processing by the backward processing unit and the forward processing unit and outputs it as the soft output code,
 the backward processing unit comprises a plurality of backward processors connected to one another, each corresponding to the estimation processing for one time point of the code trellis, and
 the path metric memory comprises a plurality of storage areas corresponding to the respective backward processors.
(Supplementary Note 2) The soft output decoder according to Supplementary Note 1, wherein the backward processing unit and the forward processing unit estimate the path metrics for a plurality of time points per cycle.
(Supplementary Note 3) The soft output decoder according to Supplementary Note 1, wherein the forward processing unit comprises a plurality of forward processors connected to one another, each corresponding to the estimation processing for one time point of the code trellis,
 the path metric memory comprises a plurality of storage areas corresponding to the respective forward processors, and
 the soft output generation processing unit calculates the a posteriori probability ratio of the information symbol at each time point from the results of the estimation processing by the backward processing unit and the forward processing unit stored in the path metric memory.
(Supplementary Note 4) The soft output decoder according to Supplementary Note 1, wherein the backward processing unit comprises a correction processing unit that performs correction processing, based on the Log-MAP algorithm, on the path metrics that a backward processor other than the last-stage one among the plurality of backward processors stores in the path metric memory.
(Supplementary Note 5) The soft output decoder according to Supplementary Note 4, wherein the soft output decoding means comprises a path memory in which a backward processor other than the last-stage one among the plurality of backward processors stores an index indicating selection information of the path metric, and
 the soft output generation processing unit calculates the a posteriori probability ratio of the information symbol based on the path metrics and the index.
 Although the present invention has been described above with reference to the embodiments (and examples), the present invention is not limited to the above embodiments (and examples). Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 This application claims priority based on Japanese Patent Application No. 2009-242911 filed on October 22, 2009, the entire disclosure of which is incorporated herein.
 The present invention can be widely applied to applications involving the decoding of bitstreams in general, such as wireless and wired communication, or optical discs.
  1 Transmission system
  10, 210, 510 Turbo code decoder
  20 Antenna
  20a Optical pickup
  30 Demodulator
  101 Information received value memory
  102, 103 Parity received value memory
  104, 109 Interleaver
  105 Branch metric generation means
  106 Hard output determination means
  107 Deinterleaver
  108 Extrinsic information memory
  110, 310, 610 Soft output decoding means
  120, 120n, 320, 620 Backward processing unit
  121, 122, 121a, 121b, 121n, 321, 322, 621 Backward processor
  123, 133, 134, 141, 123n, 134a, 134b, 134(n-1), 645 Register
  130, 130n, 330 Forward processing unit
  131, 132, 131a, 131b, 131n, 331, 332 Forward processor
  140, 640 Soft output generation processing unit
  142, 143, 142a, 142b, 142n, 642 Soft output generation processor
  150 Branch metric memory
  160 Path metric memory
  161, 162, 161a, 161b, 161n Path metric storage area
  321a, 321h, 322a, 322h, 331a, 331h, 332a, 332h, 621a, 621b Arithmetic addition circuit
  321i, 321l, 322i, 322l, 331i, 331l, 332i, 332l max circuit
  400 Correction processing unit
  401 Difference means
  402 Table reference means
  403 Adder
  410 Simplified backward processor
  420 Table
  621i, 621o, 621r Selector
  624i Comparator
  670 Path memory

Claims (5)

  1.  A soft output decoder that receives an encoded and transmitted bitstream, decodes it, and outputs the result, comprising:
     a received value memory that stores the received bitstream; branch metric generation means that generates a branch metric for each time point in a trellis diagram based on the contents stored in the received value memory; soft output decoding means that decodes the branch metrics and outputs a soft output code including a posteriori probability ratios; and hard output determination means that converts the soft output code into a hard output code not including a posteriori probability ratios and outputs it,
     wherein the soft output decoding means comprises: a branch metric memory that temporarily stores the branch metrics; a backward processing unit that estimates the path metrics reaching each node from the end of the code trellis of the branch metrics stored in the branch metric memory; a forward processing unit that estimates the path metrics reaching each node from the beginning of the code trellis of the branch metrics stored in the branch metric memory; a path metric memory that stores the results of the estimation processing by the backward processing unit; and a soft output generation processing unit that calculates the a posteriori probability ratio of the information symbol at each time point from the results of the estimation processing by the backward processing unit and the forward processing unit and outputs it as the soft output code,
     the backward processing unit comprises a plurality of backward processors connected to one another, each corresponding to the estimation processing for one time point of the code trellis, and
     the path metric memory comprises a plurality of storage areas corresponding to the respective backward processors.
  2.  The soft output decoder according to claim 1, wherein the backward processing unit and the forward processing unit estimate the path metrics for a plurality of time points per cycle.
  3.  The soft output decoder according to claim 1, wherein the forward processing unit comprises a plurality of forward processors connected to one another, each corresponding to the estimation processing for one time point of the code trellis,
     the path metric memory comprises a plurality of storage areas corresponding to the respective forward processors, and
     the soft output generation processing unit calculates the a posteriori probability ratio of the information symbol at each time point from the results of the estimation processing by the backward processing unit and the forward processing unit stored in the path metric memory.
  4.  The soft output decoder according to claim 1, wherein the backward processing unit comprises a correction processing unit that performs correction processing, based on the Log-MAP algorithm, on the path metrics that a backward processor other than the last-stage one among the plurality of backward processors stores in the path metric memory.
  5.  The soft output decoder according to claim 4, wherein the soft output decoding means comprises a path memory in which a backward processor other than the last-stage one among the plurality of backward processors stores an index indicating selection information of the path metric, and
     the soft output generation processing unit calculates the a posteriori probability ratio of the information symbol based on the path metrics and the index.
PCT/JP2010/068021 2009-10-22 2010-10-14 Soft output decoder WO2011048997A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009242911 2009-10-22
JP2009-242911 2009-10-22

Publications (1)

Publication Number Publication Date
WO2011048997A1 true WO2011048997A1 (en) 2011-04-28

Family

ID=43900224

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/068021 WO2011048997A1 (en) 2009-10-22 2010-10-14 Soft output decoder

Country Status (1)

Country Link
WO (1) WO2011048997A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111211792A (en) * 2018-11-22 2020-05-29 北京松果电子有限公司 Turbo decoding method, device and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002208219A (en) * 2000-11-01 2002-07-26 Samsung Electronics Co Ltd High double speed viterbi detector for optical disk
JP2003289253A (en) * 2002-03-28 2003-10-10 Mitsubishi Electric Corp Error correcting decoder
JP2004096747A (en) * 2002-08-30 2004-03-25 Lucent Technol Inc Logmap processor with higher order radix

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HSIANG-TSUNG CHUANG ET AL.: "A High-Throughput Radix-4 log-MAP Decoder With Low Complexity LLR Architecture", PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON VLSI DESIGN, AUTOMATION AND TEST, 2009. VLSI-DAT '09, 28 April 2009 (2009-04-28), pages 231 - 234, XP031485271 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111211792A (en) * 2018-11-22 2020-05-29 北京松果电子有限公司 Turbo decoding method, device and system
CN111211792B (en) * 2018-11-22 2023-05-30 北京小米松果电子有限公司 Turbo decoding method, device and system

Similar Documents

Publication Publication Date Title
JP3861084B2 (en) Hybrid turbo / convolutional code decoder, especially for mobile radio systems
US7549113B2 (en) Turbo decoder, turbo decoding method, and operating program of same
JP4432781B2 (en) Error correction decoder
US7246298B2 (en) Unified viterbi/turbo decoder for mobile communication systems
JP4907802B2 (en) Butterfly processor device used for communication decoding
JP5700035B2 (en) Error correction code decoding apparatus, error correction code decoding method, and error correction code decoding program
US8196006B2 (en) Modified branch metric calculator to reduce interleaver memory and improve performance in a fixed-point turbo decoder
US6487694B1 (en) Method and apparatus for turbo-code decoding a convolution encoded data frame using symbol-by-symbol traceback and HR-SOVA
KR101051933B1 (en) Metric Computation for Map Decoding Using Trellis' Butterfly Structure
KR20110037953A (en) Methods and apparatus for programmable decoding of a plurality of code types
EP2339757A1 (en) Power-reduced preliminary decoded bits in viterbi decoder
JP3540224B2 (en) Turbo decoder, turbo decoding method, and storage medium storing the method
KR100628201B1 (en) Method for Turbo Decoding
US7725798B2 (en) Method for recovering information from channel-coded data streams
Lin et al. Low power soft output Viterbi decoder scheme for turbo code decoding
WO2011048997A1 (en) Soft output decoder
Cholan Design and implementation of low power high speed Viterbi decoder
JP2002076921A (en) Method and apparatus for error correction code decoding
US20050172203A1 (en) Decoder and method for performing decoding operation using map algorithm in mobile communication system
Weithoffer et al. Bit-level pipelining for highly parallel turbo-code decoders: A critical assessment
US7120851B2 (en) Recursive decoder for switching between normalized and non-normalized probability estimates
JP7144621B2 (en) Communication system and communication method
US8914716B2 (en) Resource sharing in decoder architectures
JP2001285079A (en) Device for encoding/decoding error correction for communication
JP4525658B2 (en) Error correction code decoding apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10824846

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 10824846

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP