KR20140128605A - Method of parallel calculation for turbo decoder and apparatus for performing the same - Google Patents

Method of parallel calculation for turbo decoder and apparatus for performing the same

Info

Publication number
KR20140128605A
KR20140128605A
Authority
KR
South Korea
Prior art keywords
state metric
bit
sliding window
map
log likelihood
Prior art date
Application number
KR1020130047218A
Other languages
Korean (ko)
Inventor
채수창
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to KR1020130047218A priority Critical patent/KR20140128605A/en
Publication of KR20140128605A publication Critical patent/KR20140128605A/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2948Iterative decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6502Reduction of hardware complexity or efficient processing
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6561Parallelized implementations

Abstract

A parallel computation turbo code decoding method and an apparatus for performing it are disclosed. The method divides a code block into a plurality of sliding windows, divides each sliding window into bit intervals of a predetermined size, and performs a soft decision by processing the sliding window in units of those bit intervals. The state metric values produced in one soft decision step are set as the initial values of the state metric operators for the next step, and the sliding windows are decoded in parallel by repeating this process a predetermined number of times. As a result, the hardware is used efficiently, no training (warm-up) period is required, and the processing delay is reduced.

Description

TECHNICAL FIELD [0001] The present invention relates to a parallel computation turbo code decoding method and an apparatus for performing the same.

The present invention relates to a turbo decoding method and apparatus, and more particularly, to a parallel computation turbo code decoding method and apparatus for high speed computation.

Because of the characteristics of the wireless channel, transmitted data has a high probability of error and information loss from causes such as fading, interference, and noise on the channel. Channel coding techniques used in a wireless channel environment therefore require high performance in order to protect the transmitted information from signal distortion.

A turbo code consists of two relatively simple constituent convolutional encoders arranged in parallel concatenation with an interleaver of large frame size, giving it excellent error correction capability close to the Shannon limit. Despite this excellent performance, however, turbo codes are difficult to process in real time because of their large computational complexity and very long decoding delay.

Mobile communication systems that use turbo codes as their channel code are gradually evolving toward providing high-speed data communication over a wide frequency band. High-speed data services of this kind are being defined through standardization work in bodies such as the IEEE, 3GPP, and 3GPP2; in 3GPP, the LTE (Long Term Evolution) standard is at the enactment stage. A receiver therefore needs a channel decoder that operates at high speed to realize high-speed data services, but it is difficult in practice to implement high-speed decoding of the LTE turbo code with a single decoder.

It is an object of the present invention to overcome the above-mentioned disadvantages by providing a parallel computation turbo code decoding method capable of rapidly performing turbo code decoding in parallel.

Another object of the present invention is to provide a parallel operation turbo code decoding apparatus which performs the above method.

According to an aspect of the present invention, there is provided a parallel computation turbo code decoding method comprising: (a) dividing a code block into a plurality of sliding windows; (b) dividing each sliding window into a plurality of bit intervals; and (c) performing a forward state metric operation, a backward state metric operation, and a log likelihood ratio operation on the sliding window in units of the divided bit intervals, wherein the state metric results determined in step (c) are set as the initial values of the state metric operations of the next step, and step (c) is repeated a predetermined number of times.

The turbo code decoding method may be characterized in that the plurality of sliding windows are decoded in parallel.

In step (c), during a first time period a forward state metric operation is performed in the forward direction on a first bit interval of the divided bit intervals and a backward state metric operation is performed on a second bit interval different from the first bit interval; during a second time period, a log likelihood ratio operation is performed on the bit interval whose backward state metric was computed during the first time period.

In step (c), the first bit interval and the second bit interval may be determined so as to avoid memory access collisions when a QPP (quadratic permutation polynomial) interleaver is used.

In step (c), the forward state metric results, the backward state metric results, and the log likelihood ratio results are stored in units of the divided bit intervals, and the state metric results of a bit interval whose log likelihood ratio has been computed are discarded.

In the turbo code decoding method, the forward state metric value computed in a first sliding window for the bit interval adjacent to a second sliding window is used as the initial value of the forward state metric operation of the second sliding window, and the backward state metric value computed for the bit interval adjacent to a third sliding window is used as the initial value of the backward state metric operation of the third sliding window.

Here, in the turbo code decoding method, the forward state metric result of the last-computed bit interval in the sliding window may be set as the initial value of the forward state metric operation of the next step, and the backward state metric result of the last-computed bit interval as the initial value of the backward state metric operation of the next step.

Here, in the turbo code decoding method, the computed log likelihood ratio values of the sliding window may first be stored in interleaved order and then stored in deinterleaved order and input to the decoding process of the next step.

According to another aspect of the present invention, there is provided a parallel computation turbo code decoding apparatus including: a code block divider that divides a code block according to the number of MAP operators; at least two MAP operators that receive the divided code blocks, perform a soft decision, set the soft decision results as initial values, and repeat the process a predetermined number of times; a QPP (quadratic permutation polynomial) interleaver/deinterleaver generator that interleaves and deinterleaves the data input to the MAP operators; and a hard decision unit that makes hard decisions on the MAP operator results to produce the decoded output.

The MAP calculator may include a forward state metric calculator, a reverse state metric calculator, a branch metric calculator, a log likelihood ratio calculator, and a computation result storage memory.

The MAP operator divides the divided code block into a plurality of bit intervals, performs the forward state metric operation, the backward state metric operation, and the log likelihood ratio operation, and stores the results in the operation result storage memory.

Here, during a first time period the MAP operator performs a forward state metric operation in the forward direction on a first bit interval of the divided bit intervals and a backward state metric operation on a second bit interval different from the first bit interval, and during a second time period performs a log likelihood ratio operation on the bit interval whose backward state metric was computed during the first time period.

Here, the MAP operator determines the first bit interval and the second bit interval so as to avoid memory access collisions when the QPP interleaver is used.

In the turbo code decoding apparatus, the forward state metric value computed in a first sliding window (a divided code block) for the bit interval adjacent to a second sliding window (another divided code block) is used as the initial value of the forward state metric operation of the second sliding window, and the backward state metric value computed in the first sliding window for the bit interval adjacent to a third sliding window (another divided code block) is used as the initial value of the backward state metric operation of the third sliding window.

Here, the operation result storage memory may store the forward state metric results, the backward state metric results, and the log likelihood ratio results in units of the divided bit intervals.

When the log likelihood ratio operation of a bit interval has been performed, the operation result storage memory discards the forward state metric result and the backward state metric result of that bit interval.

Here, the QPP interleaver/deinterleaver generator is connected to the parallel computation units and performs interleaving or deinterleaving in a time-division manner.

According to the parallel computation turbo code decoding method and apparatus described above, hardware is used efficiently and no separate training period is required for turbo code decoding, so the processing delay is reduced.

1 is a block diagram for explaining a turbo encoder.
2 is a block diagram for explaining a turbo decoder.
3 is a conceptual diagram for explaining a basic MAP calculation method.
4 is a conceptual diagram for explaining a dual flow MAP computation method.
5 is a conceptual diagram for explaining an improved dual flow MAP computation method.
6 is a conceptual diagram for explaining a warm-up parallel window MAP computation method.
FIG. 7 is a conceptual diagram illustrating a warm-up-free parallel window MAP computation method.
8 is a conceptual diagram illustrating a parallel computation turbo code decoding method according to an embodiment of the present invention.
9 is an exemplary diagram for explaining a parallel computation turbo code decoding method according to an embodiment of the present invention.
10 is a flowchart of a parallel computation turbo code decoding method according to an embodiment of the present invention.
11 is a flowchart of a soft decision method of each MAP operator in the parallel computation turbo code decoding method according to an embodiment of the present invention.
12 is a block diagram of a parallel computation turbo code decoding apparatus according to another embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail.

It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprises" or "having" are intended to specify the presence of the features, numbers, steps, operations, elements, components, or combinations thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.

1 is a block diagram for explaining a turbo encoder.

The turbo encoder uses two convolutional encoders. The first encoder 10 encodes the input bit sequence and outputs the corresponding coded (parity) bits, while the second encoder 20 encodes the input bit sequence after it has passed through the interleaver 30 and outputs its own coded bits.

Therefore, for each input bit, the turbo encoder outputs the systematic bit together with the two coded bits produced by the two constituent encoders.

Meanwhile, the interleaver used in FIG. 1 is a quadratic permutation polynomial (QPP) interleaver, so no memory contention occurs, which is why LTE turbo codes can be decoded in parallel.
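As an illustration of the QPP interleaver just mentioned, the following sketch evaluates the polynomial π(i) = (f1·i + f2·i²) mod K and checks two of its properties. The parameters K = 40, f1 = 3, f2 = 10 are taken from the LTE turbo code interleaver table (3GPP TS 36.212); the window split into K÷2 is a hypothetical choice for illustration.

```python
def qpp_permutation(K, f1, f2):
    # pi(i) = (f1*i + f2*i**2) mod K -- the quadratic permutation polynomial.
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

# Smallest LTE block size; f1 = 3, f2 = 10 per 3GPP TS 36.212 Table 5.1.3-3.
K, f1, f2 = 40, 3, 10
pi = qpp_permutation(K, f1, f2)
assert sorted(pi) == list(range(K))  # a valid permutation of 0..K-1

# Contention-free behaviour for two parallel windows of W = K // 2 bits:
# at every local index i the two windows address different memory halves.
W = K // 2
assert all(pi[i] // W != pi[i + W] // W for i in range(W))
```

The second assertion is what makes parallel decoding possible: two windows reading interleaved addresses at the same local offset never touch the same memory bank.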

2 is a block diagram for explaining a turbo decoder.

Referring to FIG. 2, the turbo decoder includes two MAP decoders 40 and 50, a QPP interleaver 60, a QPP deinterleaver 70, and a hard decision unit 80. The systematic and first parity bit strings output from the demodulator are input to the first MAP decoder 40, while the interleaved systematic bits and the second parity bits are input to the second MAP decoder 50 together with the interleaved output of the first MAP decoder 40. The output of the second MAP decoder 50 is deinterleaved by the deinterleaver 70 and fed back to the first MAP decoder 40.

The soft decision result of this repeated decoding is then converted to the decoded output by the hard decision unit 80.

Here, the bit strings output from the demodulator are the received systematic bits and the two received code (parity) bit streams; the remaining signals are the output bits obtained by interleaving or deinterleaving the soft decision results of the MAP decoders.
Therefore, the turbo decoder repeats the decoding process, adding each MAP decoder's output to the next decoder's input, and the error rate of the decoded bits gradually improves; this resembles the operating principle of a turbo engine. In general, with the first and second decoding processes counted as one iteration, sufficient error correction performance is obtained after a total of about eight or more iterations.

The MAP decoder of the turbo decoder computes the forward state metric (FSM), the backward state metric (BSM), and the branch metric, calculates the log likelihood ratio (LLR) of each bit from them, and obtains the decoded bit as a hard decision on the sign of the log likelihood ratio value.
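The FSM/BSM/LLR computation above can be sketched concretely. The following is a hedged toy example, not the LTE constituent code: a max-log-MAP decoder for a 2-state accumulator code (coded bit y_k = u_k XOR y_{k-1}, state = last coded bit), which is enough to show the forward recursion, backward recursion, and sign-based hard decision.

```python
def max_log_map(llr_ch):
    """Max-log-MAP on a toy 2-state accumulator code (y_k = u_k XOR y_{k-1})."""
    K = len(llr_ch)
    NEG = float("-inf")

    def gamma(k, y):
        # Branch metric; channel LLR convention: llr_ch[k] > 0 favours y = 0.
        return 0.5 * llr_ch[k] * (1 - 2 * y)

    # Forward state metrics (FSM); the encoder starts in state 0.
    alpha = [[NEG, NEG] for _ in range(K + 1)]
    alpha[0][0] = 0.0
    for k in range(K):
        for s in range(2):
            for u in range(2):
                ns = s ^ u  # next state equals the emitted coded bit
                alpha[k + 1][ns] = max(alpha[k + 1][ns],
                                       alpha[k][s] + gamma(k, ns))

    # Backward state metrics (BSM); unterminated trellis, so beta[K] = 0.
    beta = [[0.0, 0.0] for _ in range(K + 1)]
    for k in range(K - 1, -1, -1):
        for s in range(2):
            beta[k][s] = max(gamma(k, s ^ u) + beta[k + 1][s ^ u]
                             for u in range(2))

    # Per-bit decision: best path metric with u = 1 vs u = 0; the sign of
    # their difference (the LLR) gives the hard decision.
    hard = []
    for k in range(K):
        best = [NEG, NEG]
        for s in range(2):
            for u in range(2):
                ns = s ^ u
                best[u] = max(best[u],
                              alpha[k][s] + gamma(k, ns) + beta[k + 1][ns])
        hard.append(1 if best[1] > best[0] else 0)
    return hard
```

For example, input bits [1, 0, 1, 1] encode to y = [1, 1, 0, 1]; feeding the matching channel LLRs [-4, -4, 4, -4] recovers the input bits.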

3 is a conceptual diagram for explaining a basic MAP calculation method.

Referring to FIG. 3, the basic MAP calculation method first computes the forward state metric (S10), stores the values in memory, and then computes the backward state metric (S11).

After the backward state metric operation, its result and the forward state metric results stored in memory are read to compute the log likelihood ratio (S12).

In the basic MAP calculation method of FIG. 3, the forward and backward state metric operations can share an operator, so the complexity is one state metric calculator (SMC) and one log likelihood ratio operator, but the processing time is twice the bit length L.

4 is a conceptual diagram for explaining a dual flow MAP computation method.

In FIG. 4, the dual flow MAP computation method computes the forward state metric and the backward state metric simultaneously (S20, S21).

Therefore, the log likelihood ratio can be computed from the point where half the bit length has been processed, with two bits computed simultaneously (S22).

Therefore, an implementation of the dual flow MAP method uses a forward state metric operator, a backward state metric operator, and two log likelihood ratio operators, and the processing time is the bit length L.

Here, for the QPP interleaver to be contention-free, the log likelihood ratio outputs of the parallel decoding modules must have the same offset value. Since the two log likelihood ratio outputs of the dual flow method have different offsets, the dual flow MAP method causes memory contention even when a QPP interleaver is used.

5 is a conceptual diagram for explaining an improved dual flow MAP computation method.

FIG. 5 shows a dual flow MAP computation method modified to avoid memory contention; the backward state metric is computed first (S30).

The forward state metric operation starts from bit index L÷2 at time index L÷2 (S31), and the log likelihood ratio operation is performed simultaneously from bit index L÷2 (S32).

The forward state metric operation is then performed from bit index 0 starting at time index L (S33), and the log likelihood ratio operation is performed simultaneously from bit index 0 (S34).

In the improved dual flow MAP method, the decoding time increases by half the bit length, but only one log likelihood ratio operator is needed and the memory for storing state metric results is reduced to L÷2.

6 is a conceptual diagram for explaining a warm-up parallel window MAP computation method.

The warm-up parallel window MAP computation method divides the decoding into sliding window (SW) sections of W bits instead of decoding over the entire bit length, thereby reducing the memory required for the computation.

Referring to FIG. 6, a warm-up parallel window method with two sliding windows and a training (warm-up) section of length L is shown.

First, a forward state metric operation is performed up to time index L (S40), and the forward state metric operation continues after time index L (S41).

A backward state metric operation is performed in units of W÷4 bits from time index L + W÷4 (S42), and the log likelihood ratio operation of each bit interval whose backward state metric has been computed is performed simultaneously (S43).

In the warm-up parallel window method of FIG. 6, the memory used for the state metric and log likelihood ratio operations is reduced, and the processing time is reduced to about the total sliding window length W; including the training period, however, the required time is W + 2×L. In sliding window methods with such a warm-up section, the state metric operations of each repeated warm-up section are initialized with zeros, so a sufficiently long warm-up period is needed before the state metric values become reliable enough for correct error correction. In general, the longer the warm-up length, the greater the reliability of the state metric values, so the warm-up length is directly related to decoding performance.

FIG. 7 is a conceptual diagram illustrating a warm-up-free parallel window MAP computation method.

A warm-up-free parallel window MAP decoder performs a MAP decoding process for the first encoder and a MAP decoding process for the second encoder, and repeats this pair of processes to decode the turbo code. Counting the first MAP decoding as 0.5 iteration and the second as 1 iteration, eight full iterations comprise sixteen MAP decoding processes; in all of them, the state metric values of the previous MAP decoding are used as initial values instead of a training section, which keeps the loss in decoding performance small.

Referring to FIG. 7, the warm-up period is omitted from the sliding window sections, and this feature is used to configure the sliding windows as parallel windows.

The forward state metric calculator is initialized with the previous iteration result of the sliding window to calculate the forward state metric (S50).

Here, on the first execution, when no previous iteration result exists, the initial value is set to 0.

The backward state metric calculator is initialized with the previous iteration result of the sliding window to calculate the backward state metric (S51).

Likewise, on the first execution, when no previous iteration result exists, the initial value is set to 0.

The forward state metric is computed using the forward state metric computed value of the adjacent sliding window (S52).

The backward state metric is computed using the backward state metric computed value of the adjacent sliding window (S53).

When the forward state metric operation of the last bit interval is completed in the sliding window, the forward state metric operation result is stored and used as an initial value in the next iteration decoding (S54).

When the reverse state metric operation of the last bit interval is completed in the sliding window, the reverse state metric operation result is stored and used as an initial value in the next iteration decoding (S55).

The warm-up-free parallel window MAP method of FIG. 7 shortens the processing time because no training section is needed, but memory contention occurs in the log likelihood ratio output, as in the dual flow method.

8 is a conceptual diagram illustrating a parallel computation turbo code decoding method according to an embodiment of the present invention.

Referring to FIG. 8, the parallel computation turbo code decoding method according to an embodiment of the present invention decodes the data in two sliding windows of W bits each. Each sliding window uses one forward state metric operator, one backward state metric operator, and one log likelihood ratio operator, and the forward state metric, backward state metric, and log likelihood ratio operations are performed in units of W÷4 bits. The interval from time W to 5×W÷4 overlaps the 0 to W÷4 interval of the next iterative decoding process.

Therefore, the processing time required for eight decoding iterations is 16×W + W÷4.
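The processing times stated for the various schemes can be compared with a short calculation. The numbers below (window length W, training length L_t, decoded bit length N) are hypothetical values chosen only for illustration; the formulas are the ones given in the text.

```python
# Hedged recap of the processing times stated in the text, with illustrative
# values for the sliding-window length W, the training length L_t, and the
# decoded bit length N (all in bits).
W, L_t, N = 1024, 32, 2048

basic_map      = 2 * N            # forward pass, then backward pass (FIG. 3)
dual_flow      = N                # FSM and BSM computed simultaneously (FIG. 4)
improved_dual  = N + N // 2       # extra half length, one LLR operator (FIG. 5)
warm_up_window = W + 2 * L_t      # per MAP pass, training included (FIG. 6)
proposed_8it   = 16 * W + W // 4  # 8 iterations = 16 MAP passes, overlapped

# With these numbers the proposed schedule beats 16 warm-up-window passes.
assert proposed_8it < 16 * warm_up_window
```

The comparison holds whenever the training length exceeds W÷128, since 16×(W + 2×L_t) − (16×W + W÷4) = 32×L_t − W÷4.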

9 is an exemplary diagram for explaining a parallel computation turbo code decoding method according to an embodiment of the present invention.

In the embodiment of the present invention shown in FIG. 9, the parallel computation turbo code decoding method is an improved warm-up-free parallel window MAP method based on a QPP interleaver. The present invention is not limited to a particular number of windows; the N-th iterative decoding process is described as an example.

For example, in the LTE specification, up to 64 parallel windows can be connected through the QPP interleaver.

Here, the improved parallel window MAP method based on the QPP interleaver exploits a property of the QPP interleaver: the memory access addresses of the log likelihood ratio operation are arranged to occur in the forward direction so that no memory contention occurs, and only one log likelihood ratio operator is used. The sliding window, a window of W bits, is divided into W÷4-bit intervals.

In addition, only one forward state metric calculator and one reverse state metric calculator can be used.

Therefore, in the parallel computation turbo code decoding method, one decoding process can be divided into five W ÷ 4 time intervals.

During the first W÷4 time interval (0 to W÷4), the forward state metric operation is initialized with the forward state metric value stored in the (N−1)-th iterative decoding process (A1). The bit interval on which this forward state metric operation is performed is 3×W÷4 to W, and the result is stored in the forward state metric calculation result memory.

Also, the backward state metric operation is initialized with the backward state metric value stored in the (N−1)-th iterative decoding process (B4). The interval on which this backward state metric operation is performed is 0 to W÷4 bits, and the result is stored in the backward state metric calculation result memory.

During the second W÷4 time interval (W÷4 to W÷2), the forward state metric value computed in the adjacent sliding window is used as the initial value of the forward state metric operation, and the forward state metric of bit interval 0 to W÷4 is computed (A4).

The results A4 and B4 stored in the state metric calculation result memories are read to calculate the log likelihood ratio of bit interval 0 to W÷4 (L4).

The calculated result L4 is stored in the fourth log likelihood ratio memory.

In addition, the backward state metric of bit interval 3×W÷4 to W is computed (B1), using the backward state metric value computed in the adjacent sliding window as the initial value.

The result of B1 is stored in the backward state metric calculation result memory.

Here, the memories storing the forward and backward state metric results are the same memories used during the previous W÷4 time interval; reading and writing are performed simultaneously using dual-port memory.

During the third W÷4 time interval (W÷2 to 3×W÷4), the state metric memories storing the forward state metric result A1 and the backward state metric result B1 are read, and the log likelihood ratio of bit interval 3×W÷4 to W is calculated (L1).

Here, the log likelihood ratio L1, which is the calculation result, is stored in the first log likelihood ratio memory.

Further, the forward state metric of bit interval W÷4 to W÷2 is computed (A3), and the result A3 is stored in the forward state metric calculation result memory.

Further, the backward state metric of the bit interval 3 × W ÷ 4 to W ÷ 2 is calculated (B2), and the result B2 is stored in the backward state metric calculation result memory.

During the fourth W÷4 time interval (3×W÷4 to W), the forward state metric of bit interval W÷2 to 3×W÷4 is computed (A2); the result A2 and the backward state metric result B2 of interval W÷2 to 3×W÷4, read from the backward state metric calculation result memory, are used to calculate the log likelihood ratio (L2).

Here, the log likelihood ratio calculation result L2 is stored in the second log-likelihood ratio memory.

Further, the backward state metric of bit interval W÷2 down to W÷4 is computed (B3), and the result B3 is stored in the backward state metric calculation result memory.

During the fifth W÷4 time interval (W to 5×W÷4), the forward state metric result A3 and the backward state metric result B3 are read from memory and the log likelihood ratio of bit interval W÷4 to W÷2 is calculated (L3).

Here, the log likelihood ratio calculation result is stored in the third log likelihood ratio memory.

This interval is also where the (N+1)-th iterative decoding begins: the forward state metric operation of bit interval 3×W÷4 to W for the first W÷4 time of the (N+1)-th process is performed during this same interval.
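The five-slot schedule walked through above can be written down and checked mechanically. The following is a hedged encoding of the schedule as described (slot numbers are the five W÷4 time intervals; Ai/Bi/Li are the forward state metric, backward state metric, and log likelihood ratio passes over bit interval i, with interval 1 = 3×W÷4 to W and interval 4 = 0 to W÷4); the dictionary layout itself is an illustrative choice, not part of the patent.

```python
import collections

# Slot assignment of each pass, transcribed from the FIG. 9 walkthrough.
slot = {
    "A1": 1, "B4": 1,
    "A4": 2, "L4": 2, "B1": 2,
    "L1": 3, "A3": 3, "B2": 3,
    "A2": 4, "L2": 4, "B3": 4,
    "L3": 5,
}

# Each LLR pass starts no earlier than the two state metric passes it reads.
for i in "1234":
    assert slot["L" + i] >= slot["A" + i] and slot["L" + i] >= slot["B" + i]

# No slot runs two passes of the same kind, so one FSM operator, one BSM
# operator, and one LLR operator per sliding window suffice.
per_slot = collections.Counter((s, op[0]) for op, s in slot.items())
assert max(per_slot.values()) == 1
```

The second check makes the hardware-efficiency claim concrete: the schedule never needs more than one operator of each kind at a time.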

The process above constitutes 0.5 decoding iteration; its results are stored in the first, second, third, and fourth log likelihood ratio memories in interleaved order, and the stored values actually contain only the surplus (extrinsic) information.

Here, the surplus information means the log likelihood ratio result minus the MAP decoder input value, i.e., the extrinsic information.

In the next decoding step, this surplus information is added to the decoder input: the bit string values input in interleaved order are added to the stored surplus information before being fed to the decoder. The results of that decoding process are stored in the first through fourth log likelihood ratio memories in deinterleaved order, and again only the surplus information is stored, as in the previous process. Repeating this process five or more times shows the same error correction performance as a decoding method with a training section.
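The surplus-information bookkeeping described above amounts to two subtractions and additions per bit. The sketch below uses hypothetical LLR values purely for illustration; the sign convention (output minus input) follows the usual extrinsic-information definition, which the translated text appears to intend.

```python
# Hedged numeric sketch of the surplus (extrinsic) information bookkeeping:
# the log likelihood ratio memories keep decoder output minus decoder input,
# and the next half-iteration adds the stored value back to its own input.
dec_in    = [0.8, -1.1, 0.3, -0.4]   # this half-iteration's decoder input
llr_out   = [2.0, -2.5, 1.1, -0.2]   # its log likelihood ratio results
extrinsic = [o - i for o, i in zip(llr_out, dec_in)]  # stored in LLR memories

channel   = [0.5, -0.9, 0.2, -0.3]   # next half-iteration's raw input bits
next_in   = [c + e for c, e in zip(channel, extrinsic)]
```

Storing only the difference keeps each decoder from re-amplifying its own prior decisions, which is what lets the iteration converge.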

10 is a flowchart of a parallel computation turbo code decoding method according to an embodiment of the present invention.

When the parallel computation turbo code decoding apparatus according to an embodiment of the present invention receives a code block requiring turbo code decoding, it divides the code block into sliding windows corresponding to the number of MAP operators (S110).

The divided sliding window is input to each MAP operator for decoding, and is divided into a plurality of bit sections in order to reduce the complexity of the hardware resources of the parallel operation turbo code decoder (S120).
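As a hypothetical illustration of steps S110 and S120 (not part of the claimed apparatus), the two divisions can be sketched as follows, assuming the code block length divides evenly by the number of MAP operators and each window is split into four bit intervals as in the text:

```python
def partition_code_block(code_block, n_map):
    """Split a code block across n_map MAP operators (S110), then split
    each W-bit sliding window into four W/4-bit intervals (S120)."""
    W = len(code_block) // n_map                   # sliding window size per MAP operator
    windows = [code_block[i * W:(i + 1) * W] for i in range(n_map)]
    q = W // 4                                     # bit-interval length W ÷ 4
    intervals = [[w[j * q:(j + 1) * q] for j in range(4)] for w in windows]
    return windows, intervals
```

For the LTE maximum of 6144 bits and 64 MAP operators this yields 64 windows of 96 bits, each in four 24-bit intervals.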

Each MAP operator performs a soft decision on a divided bit interval basis (S130).

The bit intervals in which the forward state metric operation, the reverse state metric operation, and the log likelihood ratio operation are performed can be determined so that the forward state metric calculator, the reverse state metric calculator, the branch metric calculator, the log likelihood ratio calculator, and the QPP interleaver are used without memory contention.

For example, the log likelihood ratio calculation can be performed in the order of bit intervals shown in FIG. 9.
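The exact interval order of FIG. 9 is not reproduced here; the sketch below (illustrative only, with names of our choosing) captures just the bookkeeping rule implied by the text and claims: the log likelihood ratio of interval k is computed once both its forward result Ak and reverse result Bk are in memory, and both are discarded afterwards, which is why the state metric memory scales with W ÷ 4 rather than W.

```python
def llr_ready(computed_forward, computed_reverse, k):
    """Interval k's LLR can be scheduled only when both Ak and Bk exist."""
    return k in computed_forward and k in computed_reverse

class MetricStore:
    """Per-interval state metric memory; entries are freed as soon as the
    LLR computation consumes them (cf. claims 5 and 17)."""
    def __init__(self):
        self.fwd, self.rev = {}, {}
    def put_forward(self, k, a):
        self.fwd[k] = a
    def put_reverse(self, k, b):
        self.rev[k] = b
    def compute_llr(self, k, llr_fn):
        a, b = self.fwd.pop(k), self.rev.pop(k)  # discard after use
        return llr_fn(a, b)
```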

After performing the soft decision of a sliding window, each MAP operator sets the forward state metric results of the soft decision step as the initial values of the forward state metric calculator for the next iterative decoding, and likewise sets the reverse state metric results as the initial values of the reverse state metric calculator (S140).

Each MAP operator sets the state metric results of one sliding-window soft decision pass as the initial values for the next iteration and repeats the soft decision a preset number of times, chosen so that the decoding error rate shows no performance degradation; the repeated soft decision result is then resolved by hard decision (S150).
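The iterate-then-hard-decide loop of S140 to S150 can be sketched as below (illustrative only; `map_soft_decision` is a placeholder for one full soft decision pass that also returns the state metric initial values carried into the next pass, and the sign-based hard decision is an assumed convention):

```python
import numpy as np

def decode(map_soft_decision, llr_init, n_iter=8):
    """Repeat the soft decision a preset number of times, carrying the
    state-metric results forward as initial values, then hard-decide."""
    fwd_init = rev_init = None            # no training interval: start unknown
    llr = llr_init
    for _ in range(n_iter):               # preset number of iterations (S140)
        llr, fwd_init, rev_init = map_soft_decision(llr, fwd_init, rev_init)
    return (llr > 0).astype(int)          # hard decision by sign (S150)
```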

FIG. 11 is a flowchart of the soft decision method of each MAP operator in the parallel computation turbo code decoding method according to an embodiment of the present invention.

FIG. 11 illustrates the soft decision step of FIG. 10 performed over the divided bit intervals.

In the sliding window to be decoded by each MAP operator, a soft decision is performed for each of the plurality of bit intervals. During the first time period, a forward state metric operation is performed on a first bit interval of the divided bit intervals (S131).

For example, in FIG. 9, a forward state metric operation corresponding to A1 is performed.

In addition, during the first time period, a reverse state metric operation is performed on a second bit interval, which is different from the first bit interval (S132).

Here, the first bit interval and the second bit interval may be determined so that the soft decision of the sliding window avoids memory access collisions when the QPP interleaver is used for the next iterative soft decision step.

In addition, the determination of the first bit interval and the second bit interval may be made to lower the hardware complexity of the forward state metric calculator, the reverse state metric calculator, the log likelihood ratio calculator, and the computation result storage memory used in each MAP computing unit.

For example, a reverse state metric operation corresponding to B4 in FIG. 9 is performed.

During the second time period, each MAP operator performs a log likelihood ratio operation on the bit interval in which the reverse state metric operation was performed during the first time period (S135).

For example, a log likelihood ratio operation corresponding to L4 in FIG. 9 is performed.
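Steps S131 to S135 for one pair of intervals can be sketched as follows (illustrative only; the operator arguments are placeholders): the forward metric of one interval and the reverse metric of another are computed in the first period, and the LLR of the reverse-processed interval follows in the second period.

```python
def soft_decision_periods(intervals, fwd_op, rev_op, llr_op):
    """One interval pair: A of the first interval and B of the second are
    computed in period one (S131/S132); the LLR of the second interval
    is computed in period two from its stored metrics (S135)."""
    first, second = intervals        # e.g. interval 1 and interval 4 of FIG. 9
    a = fwd_op(first)                # S131: forward state metric (A1)
    b = rev_op(second)               # S132: reverse state metric (B4)
    llr = llr_op(second, b)          # S135: LLR of the reverse interval (L4)
    return a, b, llr
```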

FIG. 12 is a block diagram of a parallel computation turbo code decoding apparatus according to another embodiment of the present invention.

Referring to FIG. 12, the parallel computation turbo code decoding apparatus may include N MAP operators 110, a QPP interleaver / deinterleaver 120, and a hard decision unit 130 for the parallel windows.

The MAP operator 110 according to an embodiment of the present invention performs forward state metric calculation, backward state metric calculation, branch metric calculation, and log likelihood ratio calculation. The sliding window is divided into a plurality of bit intervals, and the size of the memory used is inversely proportional to the number of bit intervals.

For example, when the sliding window is composed of W bits, the bit interval may be set to W ÷ 4, and the calculation result storage memory may be configured corresponding to W ÷ 4.

Although the QPP interleaver / deinterleaver 120 is shown separately in FIG. 12 for the sake of understanding of the parallel operation turbo code decoding apparatus, the QPP interleaver / deinterleaver 120 may be configured to operate as an interleaver and a deinterleaver in a time division manner.

In the LTE specification, which uses a turbo decoder, user data is organized in transport block (TB) units. A transport block can be divided into several code blocks (CB), and a code block has a maximum size of 6144 bits. The turbo decoder must therefore minimize the time taken to process a transport block containing one or more code blocks of up to 6144 bits, and to do so it is essential to minimize the time taken to process one code block. For this reason, the LTE standard requires a QPP interleaver so that up to 64 MAP decoders can operate in parallel.
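The QPP interleaver referenced here is the LTE quadratic permutation polynomial interleaver, Π(i) = (f1·i + f2·i²) mod K, whose contention-free property is what permits parallel window access without memory collisions. The sketch below is illustrative only; the K = 40 coefficient pair (f1 = 3, f2 = 10) is recalled from the 3GPP TS 36.212 coefficient table and should be verified against the specification.

```python
def qpp_interleave(i, K, f1, f2):
    """LTE QPP interleaver address: pi(i) = (f1*i + f2*i*i) mod K."""
    return (f1 * i + f2 * i * i) % K

# Assumed coefficients for K = 40: f1 = 3, f2 = 10.
K, f1, f2 = 40, 3, 10
perm = [qpp_interleave(i, K, f1, f2) for i in range(K)]
assert sorted(perm) == list(range(K))  # valid QPP coefficients yield a permutation
```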

Therefore, if 64 windows are formed using the parallel computation turbo code decoding method and apparatus according to an embodiment of the present invention, a 6144-bit code block can be divided into up to 64 windows.

The MAP decoder of the turbo decoder is divided into 64 parallel window sections to form window MAPs, and each window MAP can perform its decoding operation in parallel. Therefore, the turbo code decoding processing time can be reduced in proportion to the number of window MAPs. The total processing time is {(W × number of decoding passes + W ÷ 4) clock cycles × clock period}, where the W-bit sliding window is decoded repeatedly until the required decoding error rate is satisfied.

Assuming 8 decoding iterations in an LTE system using the QPP interleaver, the total processing time required for MAP decoding is W × 16 + W ÷ 4 clock cycles, because each iteration runs the MAP decoder twice. Therefore, when the code block is divided into 64 windows so that W = 96, 96 × 16 + 96 ÷ 4 = 1560 clock cycles are required. Thus, with a 100 MHz operating clock frequency, the processing time is 1560 × 10 nsec = 15.6 μsec.
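The cycle count above can be checked with a short illustrative calculation (the function name `map_cycles` is ours, not the document's):

```python
def map_cycles(W, n_iter):
    """Total MAP clock cycles per the text: two MAP passes per iteration
    over a W-bit window, plus a W/4 tail."""
    return W * 2 * n_iter + W // 4

cycles = map_cycles(96, 8)        # W = 6144 / 64 = 96 bits, 8 iterations
time_usec = cycles * 10e-9 * 1e6  # 100 MHz clock -> 10 ns per cycle
```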

For example, the hardware resources required for the 64-parallel window MAP computation are as follows. The memory for storing the state metric (SMC) values is 1 byte × 96 (window bit size) ÷ 4 (number of divided bit intervals) × 64 (number of windows) = 1536 bytes, and the LLR memory is 1 byte × 96 (window bit size) × 64 (number of windows) = 6144 bytes. In addition, 64 each of the branch metric calculators, forward state metric calculators, reverse state metric calculators, and log likelihood ratio calculators are used, together with a QPP interleaver and deinterleaver address generator.
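The memory totals can likewise be verified (the function name and default parameters are illustrative, taken from the worked example above):

```python
def memory_bytes(W=96, n_intervals=4, n_windows=64):
    """Memory totals from the text: state-metric memory shrinks by the
    number of bit intervals, LLR memory covers the whole window."""
    sm = 1 * (W // n_intervals) * n_windows  # 1 byte x 24 x 64 = 1536 bytes
    llr = 1 * W * n_windows                  # 1 byte x 96 x 64 = 6144 bytes
    return sm, llr
```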

It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

10, 20: Convolutional encoder
30: QPP interleaver
40, 50: MAP decoder
60: QPP interleaver
70: QPP deinterleaver
80: Hard decision unit
110: MAP decoder
120: QPP interleaver / deinterleaver
130: Hard decision unit

Claims (17)

(A) dividing a code block into a plurality of sliding windows;
(B) dividing the sliding window into a plurality of bit intervals; And
(C) performing a forward state metric operation, a reverse state metric operation, and a log likelihood ratio operation on the sliding window in units of the plurality of bit intervals to determine the decoded bits in units of the bit intervals,
Wherein the state metric calculation result determined in the step (c) is set as an initial value of the state metric calculation in the next step, and the step (c) is repeated a predetermined number of times.
In claim 1,
In the turbo code decoding method,
Wherein said plurality of sliding windows are decoded in parallel.
In claim 1,
The step (c)
Performing, during a first time period, a forward state metric operation in the forward direction on a first bit interval of the divided bit intervals and a reverse state metric operation in the reverse direction on a second bit interval; and
performing, during a second time period, a log likelihood ratio operation on the bit interval on which the reverse state metric operation was performed during the first time period.
The method of claim 3,
The step (c)
Wherein the first bit interval and the second bit interval are determined so as to avoid a memory access collision when using a quadratic permutation polynomial (QPP) interleaver.
In claim 3,
The step (c)
Wherein the forward state metric calculation result, the reverse state metric calculation result, and the log likelihood ratio calculation result are stored in units of the plurality of bit intervals, and the state metric calculation results of the bit interval for which the log likelihood ratio has been calculated are discarded.
In claim 1,
In the turbo code decoding method,
Wherein, in a first sliding window, the forward state metric operation value of the bit interval adjacent to a second sliding window is used as the initial value of the forward state metric operation of the second sliding window, and the reverse state metric operation value of the bit interval adjacent to a third sliding window is used as the initial value of the reverse state metric operation of the third sliding window.
In claim 1,
In the turbo code decoding method,
The forward state metric calculation result of the last calculated bit interval in the sliding window is set as the initial value of the forward state metric calculation of the next step, and the reverse state metric calculation result is set as the initial value of the reverse state metric calculation of the next step.
In claim 1,
In the turbo code decoding method,
Wherein the calculated log likelihood ratio values of the sliding window are stored once in interleaving order and once in deinterleaving order, and are input to the decoding process of the next step.
A code block divider for dividing a code block into a number of MAP operators;
At least two MAP operators for receiving a divided code block from the code block divider, performing a soft decision, setting the soft decision result as an initial value, and repeating the soft decision a predetermined number of times;
A quadratic permutation polynomial (QPP) interleaver / deinterleaver generator connected to the MAP operator to perform interleaving / deinterleaving of data input to the MAP operator; And
And a hard decision unit for determining the decoded bits by hard decision on the results of the MAP operators.
In claim 9,
The MAP calculator includes:
A forward state metric calculator, a backward state metric calculator, a branch metric calculator, a log likelihood ratio calculator, and an operation result storage memory.
In claim 10,
The MAP calculator includes:
Wherein the MAP operator divides the divided code block into a plurality of bit intervals, performs the forward state metric calculation, the reverse state metric calculation, and the log likelihood ratio calculation, and stores the calculation results in the calculation result storage memory.
In claim 11,
The MAP calculator includes:
Performs, during a first time period, a forward state metric operation in the forward direction on a first bit interval of the divided bit intervals and a reverse state metric operation in the reverse direction on a second bit interval, and performs, during a second time period, a log likelihood ratio operation on the bit interval on which the reverse state metric operation was performed during the first time period.
In claim 12,
The MAP calculator includes:
Wherein the first bit interval and the second bit interval are determined to avoid a memory access collision when using the QPP interleaver.
In claim 13,
The turbo code decoding apparatus comprises:
Wherein the forward state metric operation value of the bit interval of the first sliding window (a divided code block) adjacent to a second sliding window (another divided code block) is used as the initial value of the forward state metric operation of the second sliding window, and the reverse state metric operation value of the bit interval of the first sliding window adjacent to a third sliding window (another divided code block) is used as the initial value of the reverse state metric operation of the third sliding window.
In claim 10,
Wherein the operation result storage memory stores the result of the forward state metric operation, the result of the backward state metric operation, and the result of the log likelihood ratio operation in units of a plurality of bit segments.
In claim 15,
The calculation result storage memory stores,
Wherein, when the log likelihood ratio operation of a bit interval is performed, the forward state metric operation result and the reverse state metric operation result of that bit interval are discarded.
In claim 9,
Wherein the QPP interleaver / deinterleaver generator is connected to the MAP operators and performs interleaving or deinterleaving in a time division manner.
KR1020130047218A 2013-04-29 2013-04-29 Method of parallel calculation for turbo decoder and apparatus for performing the same KR20140128605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130047218A KR20140128605A (en) 2013-04-29 2013-04-29 Method of parallel calculation for turbo decoder and apparatus for performing the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130047218A KR20140128605A (en) 2013-04-29 2013-04-29 Method of parallel calculation for turbo decoder and apparatus for performing the same

Publications (1)

Publication Number Publication Date
KR20140128605A true KR20140128605A (en) 2014-11-06

Family

ID=52454445

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130047218A KR20140128605A (en) 2013-04-29 2013-04-29 Method of parallel calculation for turbo decoder and apparatus for performing the same

Country Status (1)

Country Link
KR (1) KR20140128605A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210128217A (en) 2020-04-16 2021-10-26 한국전력공사 TURBO DECODING APPARATUS and TURBO CODE COMMUNICATION METHOD COSIDERING QUANTIZED CHANNEL


Similar Documents

Publication Publication Date Title
KR101323444B1 (en) Iterative decoder
US7549113B2 (en) Turbo decoder, turbo decoding method, and operating program of same
JP2006115145A (en) Decoding device and decoding method
US7530011B2 (en) Turbo decoding method and turbo decoding apparatus
US9048877B2 (en) Turbo code parallel interleaver and parallel interleaving method thereof
JP2004531116A (en) Interleaver for turbo decoder
JP4227481B2 (en) Decoding device and decoding method
US8035537B2 (en) Methods and apparatus for programmable decoding of a plurality of code types
US20090172495A1 (en) Methods and Apparatuses for Parallel Decoding and Data Processing of Turbo Codes
RU2571597C2 (en) Turbocode decoding method and device
EP1471677A1 (en) Method of blindly detecting a transport format of an incident convolutional encoded signal, and corresponding convolutional code decoder
KR100628201B1 (en) Method for Turbo Decoding
GB2403106A (en) a turbo type decoder which performs decoding iterations on sub-blocks to improve convergence
KR19990081470A (en) Method of terminating iterative decoding of turbo decoder and its decoder
US10084486B1 (en) High speed turbo decoder
JP5169771B2 (en) Decoder and decoding method
KR20140128605A (en) Method of parallel calculation for turbo decoder and apparatus for performing the same
US9130728B2 (en) Reduced contention storage for channel coding
KR100297739B1 (en) Turbo codes with multiple tails and their encoding / decoding methods and encoders / decoders using them
EP1587218B1 (en) Data receiving method and apparatus
US20180123616A1 (en) Decoding method for convolutional code decoding device in communication system and associated determination module
KR100355452B1 (en) Turbo decoder using MAP algorithm
Raymond et al. Design and VLSI implementation of a high throughput turbo decoder
US20160204803A1 (en) Decoding method for convolutionally coded signal
KR100627723B1 (en) Parallel decoding method for turbo decoding and turbo decoder using the same

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination