CN113992213A - Double-path parallel decoding storage equipment and method - Google Patents

Double-path parallel decoding storage equipment and method

Info

Publication number
CN113992213A
CN113992213A
Authority
CN
China
Prior art keywords
decoding
block
storage
information
way
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111261506.4A
Other languages
Chinese (zh)
Inventor
杨汝燕
陈守金
邹刚
黄海莲
刘波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Xinglian Xintong Technology Co ltd
Original Assignee
Chengdu Xinglian Xintong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Xinglian Xintong Technology Co ltd filed Critical Chengdu Xinglian Xintong Technology Co ltd
Priority to CN202111261506.4A
Publication of CN113992213A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding

Abstract

The embodiments of the present application provide a two-way parallel decoding storage device and method, relating to the field of communication technology.

Description

Double-path parallel decoding storage equipment and method
Technical Field
The present application relates to the field of communications technologies, and in particular, to a two-way parallel decoding storage apparatus and method.
Background
In the second-generation Digital Video Broadcasting Return Channel via Satellite protocol (DVB-RCS2), the turbo code is the selected channel code and exhibits superior performance on short codes. With the rapid development of modern communication technologies, channel bandwidths keep increasing, so efficient decoding algorithms and implementation architectures have become research hotspots in the communication field. Turbo coding and decoding combine feedback and iteration; during decoding, the intermediate prior probability information must be stored, the intermediate results of parallel decoding must be interleaved and stored for access, the related information of segmented parallel decoding must be updated, and so on, all of which involve a large amount of storage access.
At present, in order to increase the throughput of turbo decoding, decoding structures based on parallel algorithms have been proposed successively; however, implementing such structures with a fully parallel algorithm requires a large amount of storage resources and logic resources.
Disclosure of Invention
The present application aims to provide a two-way parallel decoding storage device and method that can, for example, alleviate the problem that existing storage schemes for parallel decoding of turbo codes require a large amount of storage resources and logic resources, resulting in high resource consumption.
The embodiment of the application can be realized as follows:
in a first aspect, the present application provides a two-way parallel decoding storage device, which adopts the following technical solution.
A double-path parallel decoding storage device comprises an external input storage block, a core decoding module, a first prior storage block, a second prior storage block and a decoding information storage block;
the external input storage block comprises a first storage block and a second storage block and is used for dividing the received prior information to obtain a one-way decoding block and a two-way decoding block, storing the one-way decoding block in the first storage block in a preset storage mode and storing the two-way decoding block in the second storage block in the preset storage mode; the preset storage mode comprises sequential storage and storage after interleaving;
the core decoding module is configured to, when performing iterative decoding on the one-way decoding block and the two-way decoding block, use a sliding-window method during each decoding and calculate, by a decoding algorithm, the respective decoding results of the one-way decoding block and the two-way decoding block according to the one-way decoding block and the two-way decoding block read in parallel from the first storage block and the second storage block, and the one-way prior probability information and the two-way prior probability information of the last decoding read from the first prior storage block and the second prior storage block, where the decoding results include the posterior probability information of each decoding block, and the iterative decoding includes multiple decodings;
the first prior storage block is used for reading the posterior probability information of the one-way decoding block from the core decoding module, and processing the one-way decoding block and its posterior probability information according to an interleaving/de-interleaving algorithm to obtain and store the one-way prior probability information of the current decoding;
the second prior storage block is used for reading the posterior probability information and the prior information of the two-way decoding block from the core decoding module, and processing the two-way decoding block and its posterior probability information according to an interleaving/de-interleaving algorithm to obtain and store the two-way prior probability information of the current decoding;
the decoding information storage block is used for reading the posterior probability information from the core decoding module, making a decision on the posterior probability information to obtain information bits, and storing the information bits according to a de-interleaving address.
In one possible implementation, the decoding algorithm includes a max-log-map algorithm, the core decoding module includes a first decoding unit and a second decoding unit, and the one-way decoding block and the two-way decoding block each include a plurality of decoding segments;
the first decoding unit is configured to, during each decoding, sequentially read each decoding segment in the first storage block and one path of prior probability information of a last decoding stored in the first prior storage block, so as to obtain posterior probability information of each decoding segment by using a decoding algorithm;
the second decoding unit is configured to, during each decoding, sequentially read each decoding segment in the second storage block and two paths of prior probability information of the last decoding stored in the second prior storage block, so as to obtain the posterior probability information of each decoding segment by using a decoding algorithm.
In one possible embodiment, the two-way parallel decoding storage device further comprises a first cache block and a second cache block;
the first decoding unit is used for obtaining posterior probability information of each decoding segment through the following steps:
reading the one-way prior probability information of the last decoding stored in the first prior storage block and the decoding segments stored in the first storage block, and obtaining the branch metric of each decoding segment in the one-way decoding block by using a branch metric calculation formula;
obtaining the backward branch metric of each decoding segment ahead of time within a sliding window by using a backward branch metric calculation formula based on the branch metrics, and writing the backward branch metrics into the first cache block;
obtaining the forward branch metric of each decoding segment by using a forward branch metric calculation formula based on the branch metric of each decoding segment;
sequentially obtaining the one-way posterior probability information of each decoding segment in the one-way decoding group from the forward branch metric and the backward branch metric of each decoding segment;
the second decoding unit is used for obtaining posterior probability information of each decoding segment through the following steps:
reading the two-way prior probability information of the last decoding stored in the second prior storage block and the decoding segments stored in the second storage block, and obtaining the branch metric of each decoding segment in the two-way decoding block by using a branch metric calculation formula;
obtaining the backward branch metric of each decoding segment ahead of time within a sliding window by using a backward branch metric calculation formula based on the branch metrics, and writing the backward branch metrics into the second cache block;
obtaining the forward branch metric of each decoding segment by using a forward branch metric calculation formula based on the branch metric of each decoding segment;
and sequentially obtaining the two-way posterior probability information of each decoding segment in the two-way decoding group from the forward branch metric and the backward branch metric of each decoding segment.
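The four steps above can be sketched in software. The toy Python model below uses a hypothetical 2-state trellis (not the 16-state DVB-RCS2 trellis) and illustrative names: it computes all backward branch metrics first, as the cache blocks require, then runs the forward recursion and combines both into posterior information per trellis step.

```python
def max_log_map_segment(gamma, num_states, edges):
    """Toy MAX-log-MAP pass over one decoding segment.

    gamma[k][s_prev][s] : branch metric of edge s_prev -> s at step k
    edges               : list of (s_prev, s, bit) trellis transitions
    Returns one posterior log-likelihood ratio (LLR) per step.
    """
    K = len(gamma)
    NEG = float("-inf")

    # Step 2: backward branch metrics, computed ahead within the window
    # (beta plays the role of the data written to the cache block).
    beta = [[NEG] * num_states for _ in range(K + 1)]
    beta[K] = [0.0] * num_states              # neutral init at the window edge
    for k in range(K - 1, -1, -1):
        for s_prev, s, _ in edges:
            cand = gamma[k][s_prev][s] + beta[k + 1][s]
            if cand > beta[k][s_prev]:
                beta[k][s_prev] = cand

    # Steps 3-4: forward branch metrics, combined on the fly with the
    # cached betas to yield the posterior probability information.
    alpha = [0.0] * num_states
    llrs = []
    for k in range(K):
        best = {0: NEG, 1: NEG}
        for s_prev, s, bit in edges:
            metric = alpha[s_prev] + gamma[k][s_prev][s] + beta[k + 1][s]
            if metric > best[bit]:
                best[bit] = metric
        llrs.append(best[1] - best[0])
        new_alpha = [NEG] * num_states
        for s_prev, s, _ in edges:
            cand = alpha[s_prev] + gamma[k][s_prev][s]
            if cand > new_alpha[s]:
                new_alpha[s] = cand
        alpha = new_alpha
    return llrs
```

In the device, the backward pass is written into the first or second cache block before the forward pass consumes it, which is exactly the ordering this sketch follows.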
In a possible implementation manner, the storage space of each of the first cache block and the second cache block is the sum of the maximum size of the backward branch metrics and a preset guard interval, and both blocks are read and written in a ring (circular) manner.
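The ring read-write behaviour can be illustrated with a minimal sketch; the class name, default guard size, and list-based storage below are illustrative assumptions, not the patent's exact layout.

```python
class RingCache:
    """Minimal ring read/write buffer for the backward-metric cache.

    Capacity mirrors the description: one window's worth of backward
    metrics plus a small guard interval so the writer does not overrun
    the reader while the forward recursion is still consuming entries.
    """
    def __init__(self, window_len, guard=4):
        self.buf = [None] * (window_len + guard)
        self.wr = 0   # write pointer
        self.rd = 0   # read pointer

    def write(self, value):
        self.buf[self.wr] = value
        self.wr = (self.wr + 1) % len(self.buf)   # wrap around the ring

    def read(self):
        value = self.buf[self.rd]
        self.rd = (self.rd + 1) % len(self.buf)
        return value
```

Because both pointers wrap modulo the capacity, the same small memory is reused for every sliding window instead of sizing the cache for a whole decoding block.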
In a possible implementation manner, the two-way parallel decoding storage device further comprises a tail-biting information storage block, and the tail-biting information storage block stores data with four-fold serialization;
the first decoding unit and the second decoding unit are also used for writing the forward branch metric and the backward branch metric of a decoding segment into the tail-biting information storage block as tail-biting information every time the decoding of the decoding segment is completed;
the first decoding unit and the second decoding unit are further configured to read tail-biting information stored in the tail-biting information storage block as a decoding initial value before decoding of each decoding segment, so as to calculate posterior probability information.
In a possible implementation, the tail-biting information includes first tail-biting information and second tail-biting information, where the first tail-biting information is written by the first decoding unit and the second tail-biting information is written by the second decoding unit;
and when the first tail-biting information and the second tail-biting information are written upon decoding an end decoding segment, the first decoding unit reads the second tail-biting information as the decoding initial value when calculating the posterior probability information of the next decoding segment, and the second decoding unit reads the first tail-biting information as the decoding initial value when calculating the posterior probability information of the next decoding segment.
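The cross-exchange at the end segment can be sketched as follows; the dictionary-based store and the function names are hypothetical illustrations of the described behaviour, not the device's memory map.

```python
# Shared tail-biting information store, keyed by (unit, segment).
tail_store = {}

def write_tail(unit, segment, alpha_tail, beta_tail):
    """Each decoding unit writes its (forward, backward) boundary metrics
    as tail-biting information after finishing a decoding segment."""
    tail_store[(unit, segment)] = (alpha_tail, beta_tail)

def read_tail_for_next(unit, end_segment):
    """At the end decoding segment the units cross-read: unit 1 initialises
    the next segment from unit 2's tail-biting information and vice versa,
    reflecting the circular (tail-biting) structure of the code."""
    other = 2 if unit == 1 else 1
    return tail_store[(other, end_segment)]
```

The swap makes sense because the two halves of a tail-biting code block are cyclically adjacent: the state metrics at the end of one unit's half are the natural initial values for the other unit's half.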
In a possible implementation manner, the prior probability information includes an interleaving address and a sequence number, and both the first prior storage block and the second prior storage block are configured to store the data in an inverted manner: the sequence number serves as the memory address and the interleaving address is stored as the memory content.
In a feasible implementation manner, during the first N decodings of each iterative decoding, the decoding information storage block only reads and decides the posterior probability information to obtain information bits and stores them; after the first N decodings of each iterative decoding are completed, it outputs the information bits of the previous decoding while reading and deciding the posterior probability information of the current decoding to obtain and store the information bits;
wherein N is a preset natural number.
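A minimal sketch of this store-then-pipeline schedule (function and action names are illustrative assumptions):

```python
def schedule_events(total_decodings, n):
    """Return the actions of the decoding information block per decoding
    pass: during the first n passes it only stores the decided bits; from
    pass n+1 on it also outputs the previous pass's bits while storing the
    current ones, overlapping output with decoding."""
    events = []
    for d in range(1, total_decodings + 1):
        actions = ["store bits of decoding %d" % d]
        if d > n:
            actions.insert(0, "output bits of decoding %d" % (d - 1))
        events.append(actions)
    return events
```

The overlap is the point of the scheme: once the pipeline is primed, output of one decoding hides behind the computation of the next.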
In one possible implementation, the first and second memory blocks each include an interleaving block and a sequential block;
the sequence block is used for sequentially storing the one-way decoding group or the two-way decoding group;
and the interleaving block is used for storing the one-way decoding group or the two-way decoding group after interleaving.
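The double storage described above can be sketched as follows; the permutation passed in is illustrative, not the actual DVB-RCS2 interleaver.

```python
def store_decoding_group(group, interleaver):
    """Fill one storage block the way the external input block does: the
    same decoding group is kept twice, once in natural order (the
    sequential block) and once permuted (the interleaving block).

    interleaver : any permutation of range(len(group)); the real
                  DVB-RCS2 permutation is assumed elsewhere.
    """
    sequential_block = list(group)
    interleaving_block = [group[interleaver[i]] for i in range(len(group))]
    return sequential_block, interleaving_block
```

With both copies in place, switching between the two halves of a decoding is just switching which block is read, with no on-the-fly permutation.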
In a second aspect, the present application provides a two-way parallel decoding storage method, which adopts the following technical solution.
A two-way parallel decoding storage method is realized based on the two-way parallel decoding storage device of the first aspect, and the method comprises the following steps:
receiving prior information to be decoded, dividing the prior information to obtain a one-way decoding block and a two-way decoding block, storing the one-way decoding block in the first storage block in a preset storage mode, and storing the two-way decoding block in the second storage block in the preset storage mode; the preset storage mode comprises sequential storage and storage after interleaving;
when performing iterative decoding on the one-way decoding block and the two-way decoding block, using a sliding-window method during each decoding and calculating, by a decoding algorithm, the respective decoding results of the one-way decoding block and the two-way decoding block according to the one-way decoding block and the two-way decoding block read in parallel from the first storage block and the second storage block, and the one-way prior probability information and the two-way prior probability information of the last decoding read from the first prior storage block and the second prior storage block, where the decoding results include the posterior probability information of each decoding block;
obtaining the one-way prior probability information of the one-way decoding block by adopting an interleaving/de-interleaving algorithm according to the one-way decoding block and its posterior probability information, and storing the one-way prior probability information in the first prior storage block;
obtaining the two-way prior probability information of the two-way decoding block by adopting an interleaving/de-interleaving algorithm according to the two-way decoding block and its posterior probability information, and storing the two-way prior probability information in the second prior storage block;
and making a decision on the posterior probability information to obtain information bits, and storing the information bits in the decoding information storage block according to a de-interleaving address.
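The final decision-and-store step can be sketched as follows; treating the posterior information as a signed LLR and the decision as a sign check is an assumption consistent with MAX-log-MAP, and the names are illustrative.

```python
def store_information_bits(app_llrs, deinterleave_addrs):
    """Hard-decide each posterior LLR to an information bit and store it
    at its de-interleaved address, so the output buffer ends up in
    natural (pre-interleaving) order."""
    out = [0] * len(app_llrs)
    for k, llr in enumerate(app_llrs):
        bit = 1 if llr > 0 else 0          # sign decision on the posterior LLR
        out[deinterleave_addrs[k]] = bit
    return out
```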
The beneficial effects of the embodiment of the application include, for example:
A two-way parallel decoding storage device and method are provided. The externally input prior information is divided by the external input storage block into a one-way decoding block and a two-way decoding block, which are stored in the first storage block and the second storage block respectively. The core decoding module combines the one-way and two-way prior probabilities of the last decoding stored in the first and second prior storage blocks, decodes the one-way and two-way decoding blocks stored in the first and second storage blocks in parallel, and obtains their decoding results. The first and second prior storage blocks each process the posterior probability information in the decoding results to obtain and store the one-way and two-way prior probabilities of the current decoding, and the decoding information storage block decides the posterior probability information in the decoding results to obtain and store the information bits. Reasonable storage scheduling in two-way parallel decoding is thereby realized, efficient parallel decoding is guaranteed to a certain extent, and the problem of high resource consumption caused by the large amount of storage and logic resources required by existing storage schemes for parallel decoding of turbo codes can be alleviated.
Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a block diagram of a two-way parallel decoding storage device according to an embodiment.
FIG. 2 is a schematic block diagram of prior information partitioning.
FIG. 3 is a schematic diagram of the operation process of a single decoding according to an embodiment.
FIG. 4 is a work flow diagram of the first decoding unit in one embodiment.
FIG. 5 is a work flow diagram of the second decoding unit in one embodiment.
FIG. 6 is a diagram of the cache process of the first cache block and the second cache block.
Icon: 100-two-way parallel decoding storage device; 110-external input storage block; 120-a first storage block; 130-a second storage block; 140-core decoding module; 141-a first decoding unit; 142-a second decoding unit; 150-a first prior storage block; 160-a second prior storage block; 170-decoding information storage block; 180-a first cache block; 190-a second cache block; 200-tail-biting information storage block.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that if the terms "upper", "lower", "inner", "outer", etc. are used to indicate an orientation or positional relationship, it is based on the orientation shown in the drawings or in which the product of the application is usually placed in use; this is merely for convenience and simplicity of description and does not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore it should not be construed as limiting the present application.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features in the embodiments of the present application may be combined with each other without conflict.
RCS2 is a second-generation standard for satellite return links, and its decoding section employs a turbo code consisting of two dual-binary circular recursive systematic convolutional component encoders, an interleaver and a puncturer. The memory of the component code is 4, so there are 16 states.
Due to the development of the space-based internet and against the background of the Internet of Everything, ever wider channel bandwidth is required to meet the demand for broadband, high-capacity access. Research on wideband turbo decoding algorithms and related implementation structures is therefore a current hotspot. Because the decoding algorithm structure of the turbo code has inherent drawbacks for wideband implementation compared with LDPC codes, its decoding bandwidth is difficult to improve. To improve the decoding bandwidth of the turbo code, decoding structures based on parallel algorithms, such as a fully parallel decoding algorithm and its related implementations, have been proposed successively; it has been evaluated that implementing turbo decoding with a fully parallel algorithm consumes very large logic and storage resources.
Also, in the second-generation Digital Video Broadcasting Return Channel via Satellite protocol (DVB-RCS2), the turbo code is the selected channel code and exhibits superior performance on short codes. Turbo coding and decoding combine feedback and iteration; the decoding process needs to store intermediate prior probability information, perform interleaved storage access on the intermediate results of parallel decoding, update the related information of segmented parallel decoding, and so on, all of which involve a large amount of storage access. At present, the existing storage schemes provided for parallel decoding of turbo codes all need a large amount of storage resources and logic resources, causing high resource consumption.
Based on the above considerations, the present application provides a two-way parallel decoding storage apparatus and method.
In one embodiment, referring to fig. 1 and 2, a two-way parallel decoding storage device 100 is provided. The two-way parallel decoding storage device 100 comprises an external input storage block 110, a core decoding module 140, a first prior storage block 150, a second prior storage block 160 and a decoding information storage block 170.
The external input storage block 110 includes a first storage block 120 and a second storage block 130, and is configured to divide the received prior information to obtain a one-way decoding group and a two-way decoding group, store the one-way decoding group in the first storage block 120 in a preset storage manner, and store the two-way decoding group in the second storage block 130 in the preset storage manner.
The preset storage mode comprises sequential storage and storage after interleaving. The received a priori information is the code to be decoded.
Specifically, one-way decoding groups are stored in the first storage block 120 in a predetermined manner, and two-way decoding groups are stored in the second storage block 130 in a predetermined manner.
The core decoding module 140 is configured to, when performing iterative decoding on the one-way decoding block and the two-way decoding block, use a sliding-window method during each decoding and calculate, by a decoding algorithm, the respective decoding results of the one-way decoding block and the two-way decoding block according to the one-way decoding block and the two-way decoding block read in parallel from the first storage block 120 and the second storage block 130, and the one-way prior probability information and the two-way prior probability information of the last decoding read from the first prior storage block 150 and the second prior storage block 160.
The decoding result comprises the posterior probability information of each decoding group, and the iterative decoding comprises a plurality of times of decoding.
The first prior storage block 150 is configured to read the posterior probability information of the one-way decoding block from the core decoding module 140, and to process the one-way decoding block and its posterior probability information according to an interleaving/de-interleaving algorithm to obtain and store the one-way prior probability information of the current decoding.
The second prior storage block 160 is configured to read the posterior probability information and the prior information of the two-way decoding block from the core decoding module 140, and to process the two-way decoding block and its posterior probability information according to an interleaving/de-interleaving algorithm to obtain and store the two-way prior probability information of the current decoding.
The decoding information storage block 170 is configured to read the posterior probability information in the core decoding module 140, determine the posterior probability information to obtain information bits, and store the information bits according to the deinterleaving address.
In the above two-way parallel decoding storage apparatus 100, the externally input prior information is divided by the external input storage block 110 into a one-way decoding block and a two-way decoding block, which are stored in the first storage block 120 and the second storage block 130. The core decoding module 140 combines the one-way and two-way prior probabilities of the last decoding stored in the first prior storage block 150 and the second prior storage block 160, decodes the one-way and two-way decoding blocks stored in the first storage block 120 and the second storage block 130 in parallel, and obtains their decoding results. The first prior storage block 150 and the second prior storage block 160 each process the posterior probability information in the decoding results to obtain and store the one-way and two-way prior probabilities of the current decoding, and the decoding information storage block 170 decides the posterior probability information in the decoding results to obtain and store the information bits. Reasonable storage scheduling in two-way parallel decoding is thereby realized, efficient parallel decoding is guaranteed to a certain extent, and the problem of high resource consumption caused by the large amount of storage and logic resources required by existing storage schemes for parallel decoding of turbo codes can be alleviated.
It should be noted that, with reference to fig. 2, the one-way decoding group and the two-way decoding group may be obtained by dividing the code to be decoded into two parts of equal length, that is, the one-way decoding group and the two-way decoding group have the same length. For example, if the code length is N, the one-way decoding group covers positions 0 to N/2-1 and the two-way decoding group covers positions N/2 to N-1.
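The equal-length split described above can be expressed as a short sketch (assuming, as in the example, an even code length N):

```python
def split_into_groups(soft_values):
    """Equal-length split of a length-N block of soft values: the one-way
    decoding group covers positions 0..N/2-1 and the two-way decoding
    group covers positions N/2..N-1."""
    n = len(soft_values)
    assert n % 2 == 0, "sketch assumes an even code length"
    return soft_values[: n // 2], soft_values[n // 2:]
```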
Each decoding of the iteration is divided into a first half of interleaved decoding and a second half of de-interleaved decoding. The external input information (the code to be decoded) used in the first-half decoding, i.e., Ar, Br, Wr, Yr, does not need to be interleaved; the external input information (the code to be decoded) used in the second-half decoding, i.e., Ar, Br, Wr, Yr, is transmitted to the core decoding module 140 after being interleaved. Thus, with continued reference to fig. 1, the first memory block 120 and the second memory block 130 each include an interleaving block and a sequential block.
The sequential blocks in the first storage block 120 are used for sequentially storing a decoding group.
The interleaving block in the first storage block 120 is used for interleaving one path of decoding group and then storing the interleaved decoding group.
And a sequential block in the second storage block 130 for sequentially storing the two-way decoding groups.
And the interleaving block in the second storage block 130 is used for interleaving the two-way decoding groups and then storing the two-way decoding groups.
The sequential block and the interleaving block are arranged so that the information stored in the sequential block is used for the first-half decoding and the information stored in the interleaving block is used for the second-half decoding, and both are stored in order; therefore, when switching between the first-half and second-half decoding, only a switch between the sequential block and the interleaving block is needed.
Moreover, since the code to be decoded is interleaved before being stored in the interleaving block when it is received by the external input memory block 110, when the core decoding module 140 reads the memory contents of the first memory block 120 and the second memory block 130, the information at the corresponding positions is read sequentially according to the position requirements of the sliding window after decoding blocking, which ensures to a certain extent that no address collision occurs in the first memory block 120 and the second memory block 130 during parallel reading.
In practical use, to support decoding of all frame sizes of the DVB-RCS2 standard, the storage resource of the external input storage block 110 is set to 12 block RAMs (block memory cells of 36 Kbits each).
In one embodiment, when decoding turbo codes, the decoding algorithm may be implemented using the MAX-log-MAP algorithm. The MAX-log-MAP algorithm simplifies the log-MAP algorithm by converting its logarithm operations into maximum-value operations, and the resulting performance loss for dual-binary decoding is only 0.1-0.3 dB, which is within an acceptable range. The simplified MAX-log-MAP algorithm also facilitates FPGA implementation while retaining the decoding performance of turbo codes on short codes. Based on this, the two-way parallel decoding storage device 100 provided by the application decodes turbo codes using the MAX-log-MAP algorithm.
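The simplification can be made concrete: log-MAP uses the exact Jacobian logarithm, while MAX-log-MAP keeps only the max() and drops the correction term. A small sketch:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by log-MAP: log(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """MAX-log-MAP keeps only the max() and drops the correction term."""
    return max(a, b)
```

The dropped correction term is bounded by ln 2 (about 0.69) and shrinks quickly as the two operands separate, which is why the decoding loss stays within a few tenths of a dB.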
The MAX-log-MAP algorithm employed in the turbo code decoding of DVB-RCS2 includes the following formulas:

α_k(s) = max_{s'} ( α_{k-1}(s') + γ_k(s', s) )

β_{k-1}(s') = max_{s} ( β_k(s) + γ_k(s', s) )

L_k(z) = max_{(s',s): z} ( α_{k-1}(s') + γ_k(s', s) + β_k(s) )

γ_k(s', s) = a_k·L(a_k) + b_k·L(b_k) + y_k·L(y_k) + w_k·L(w_k) + L_a(z_k)

wherein α_k(s) represents the forward branch metric, β_k(s) represents the backward branch metric, L_k(z) represents the posterior probability information, γ_k(s', s) represents the branch metric, and s represents the state of an edge in the code trellis diagram; L(a_k) represents the first information bit soft information, L(b_k) represents the second information bit soft information, L(y_k) represents the first parity bit soft information, L(w_k) represents the second parity bit soft information, and L_a(z_k) represents the prior probability information; a_k represents the first information bit, b_k represents the second information bit, y_k represents the first check bit, and w_k represents the second check bit; z denotes the duo-binary information pair (a_k, b_k) whose posterior probability information L_k(z) is computed.
The first information bit soft information, the second information bit soft information, the first check bit soft information and the second check bit soft information are the prior information input from the outside. After BPSK modulation, the sequence of the code word generated by coding comprises the first information bit, the second information bit, the first check bit and the second check bit; the states of the coder at time k-1 and time k are S_{k-1} = s' and S_k = s, respectively.
Turbo codes are generally decoded iteratively: a group of externally input prior information is decoded multiple times, each decoding pass using the prior probability information produced by the previous pass. Under the condition of decoding convergence, the decoding result depends on the iteration of the prior information and finally converges to the decoding probability.
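The iterative flow can be sketched as follows (Python; `decode_once` is a hypothetical stand-in for one full decoding pass of the device, not an interface defined by the patent): each pass consumes the prior information produced by the previous pass.

```python
def iterative_decode(channel_llrs, decode_once, num_iterations=6):
    """Each decoding pass reuses the prior information produced by the
    previous pass; the result converges over the iterations."""
    prior = [0.0] * len(channel_llrs)  # no prior knowledge before pass 1
    posterior = None
    for _ in range(num_iterations):
        # decode_once maps (channel LLRs, previous prior) to
        # (posterior information, new prior information)
        posterior, prior = decode_once(channel_llrs, prior)
    return posterior
```

The device described here implements exactly this loop in hardware, with the first and second prior storage blocks holding `prior` between passes.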
On the basis, referring to fig. 1 and fig. 2, the core decoding module 140 includes a first decoding unit 141 and a second decoding unit 142, and each of the one-way decoding block and the two-way decoding block includes a plurality of decoding segments.
For example, the one-way decoding block comprises decoding segments 0 through (N/2-1), and the two-way decoding block comprises decoding segments N/2 through N-1.
The first decoding unit 141 is configured to, during each decoding, sequentially read each decoding segment in the first storage block 120 and one path of prior probability information of the last decoding stored in the first prior storage block 150, so as to obtain the posterior probability information of each decoding segment by using a decoding algorithm.
The second decoding unit 142 is configured to, during each decoding, sequentially read each decoding segment in the second storage block 130 and two paths of prior probability information of the last decoding stored in the second prior storage block 160, so as to obtain the posterior probability information of each decoding segment by using a decoding algorithm.
With this arrangement, each decoding segment serves as a minimum decoding window for sliding-window decoding. Because the one-way and two-way prior probability information of the last decoding is known, the posterior probability information of each decoding segment can be rapidly calculated from the posterior probability information formula each time a window is opened, yielding the decoding result of each decoding block.
According to the MAX-log-MAP algorithm, the backward branch metric of each decoding segment is calculated from back to front, i.e., in reverse order, whereas the posterior probability information is actually calculated in sequential order. Therefore, with continued reference to fig. 1, the two-way parallel decoding storage device 100 further comprises a first cache block 180 and a second cache block 190. On this basis, referring to fig. 3 and 4, the first decoding unit 141 is configured to obtain the posterior probability information of each decoding segment through the following steps. In fig. 3, the labeled quantity represents the posterior probability information.
S101, reading one path of prior probability information of the last decoding stored in a first prior storage block and a decoding segment stored in the first storage block, and obtaining branch measurement of each decoding segment in one path of decoding block by adopting a branch measurement calculation formula.
S102, obtaining the backward branch metric of each decoding segment from the branch metrics by the backward branch metric calculation formula, with the sliding window opened one segment in advance, and writing the backward branch metrics into the first cache block.
Since the metrics of decoding segment 2 are needed when calculating the posterior probability information of decoding segment 1, windowing in advance means that when the posterior probability information of decoding segment k is to be calculated, the window of decoding segment k+1 is opened in advance so that the backward branch metric of decoding segment k can be calculated. The decoding segment k is any one of decoding segments 0 to (N/2-1).
S103, according to the branch measurement of each decoding segment, a forward branch measurement calculation formula is adopted to obtain the forward branch measurement of each decoding segment.
S104, according to the forward branch measurement and the backward branch measurement of each decoding segment, one path of posterior probability information of each decoding segment in one path of decoding group is obtained in sequence.
The branch metric, the backward branch metric, the forward branch metric and the posterior probability information are respectively obtained by adopting a branch metric calculation formula, a backward branch metric calculation formula, a forward branch metric calculation formula and a posterior probability information calculation formula in the MAX-log-MAP algorithm.
Similarly, referring to fig. 3 and 5, the second decoding unit 142 is configured to obtain a posteriori probability information of each decoded segment through the following steps.
S201, reading two-path prior probability information of the last decoding stored in the second prior storage block and the decoding segments stored in the second storage block, and obtaining branch measurement of each decoding segment in the two-path decoding block by adopting a branch measurement calculation formula.
S202, obtaining the backward branch metric of each decoding segment from the branch metrics by the backward branch metric calculation formula, with the sliding window opened one segment in advance, and writing the backward branch metrics into the second cache block.
Since the metrics of decoding segment 2 are needed when calculating the posterior probability information of decoding segment 1, windowing in advance means that when the posterior probability information of decoding segment k is to be calculated, the window of decoding segment k+1 is opened in advance so that the backward branch metric of decoding segment k can be calculated. The decoding segment k is any one of decoding segments N/2 to N-1.
S203, according to the branch measurement of each decoding segment, a forward branch measurement calculation formula is adopted to obtain the forward branch measurement of each decoding segment.
S204, according to the forward branch measurement and the backward branch measurement of each decoding section, two-path posterior probability information of each decoding section in the two-path decoding group is obtained in sequence.
Through the above steps and settings, the backward branch metric is calculated with a sliding window opened one segment in advance, and the resulting backward branch metrics are cached in the first cache block 180 or the second cache block 190 for the subsequent calculation of posterior probability information.
Further, the storage depth of each of the first cache block 180 and the second cache block 190 is the sum of the maximum backward branch metric depth and a preset guard interval, and both perform read and write operations in a ring read-write mode.
Specifically, the maximum space required by the backward branch metrics is typically 80 entries, and simultaneous reading and writing introduces a delay of 8 clocks, so the guard interval can be set to 16. The backward branch metrics are stored as 16 data in parallel with a data bit width of 9, i.e., a word width of 16 × 9 = 144. The first cache block 180 and the second cache block 190 can therefore both be sized 96 × 144.
Setting a guard interval ensures, to a certain extent, that when the (P+1)-th data comes to occupy address 0, the data previously at that address has already been read.
For example, referring to FIG. 6, the first cache block 180 or the second cache block 190 has a storage depth of 96, comprising addresses 0-79 and a 16-address guard region. The backward branch metrics of the previous decoding segment are stored at addresses 0-79, and the backward branch metrics of the next decoding segment begin to be written into the guard region; by the time the next decoding segment wraps around and begins writing from address 0 of addresses 0-79, the backward branch metrics of the previous decoding segment stored there have already been read. In this process, since the first cache block 180 and the second cache block 190 use distributed storage, the calculation consumes 432 LUTs (look-up tables).
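The ring read-write scheme can be sketched behaviorally as follows (Python; a model of the access pattern, not the RTL): depth 96 = 80 (maximum backward-metric span) + 16 (guard interval), with independent read and write pointers that wrap modulo the depth.

```python
class RingCache:
    DEPTH = 96  # 80 metric entries + 16 guard entries

    def __init__(self):
        self.mem = [None] * self.DEPTH
        self.wr = 0  # write pointer (producer: backward recursion)
        self.rd = 0  # read pointer (consumer: posterior calculation)

    def write(self, word):
        self.mem[self.wr] = word
        self.wr = (self.wr + 1) % self.DEPTH

    def read(self):
        word = self.mem[self.rd]
        self.rd = (self.rd + 1) % self.DEPTH
        return word
```

Because the writer is at most one guard interval ahead when it wraps, a location is only overwritten after the reader has consumed it, so two segments' metrics share one sub-block's worth of storage.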
The MAX-log-MAP algorithm is a decoding algorithm based on sliding windows and the division into decoding segments, so during decoding an initial value for the computation of each decoding segment is provided by tail-biting information. On this basis, the two-way parallel decoding storage device 100 further comprises a tail-biting information storage block 200, which uses 4-fold serialization for data storage.
The first decoding unit 141 and the second decoding unit 142 are further configured to, each time decoding of one decoded segment is completed, write the forward branch metric and the backward branch metric of the decoded segment as tail-biting information into the tail-biting information storage block 200.
The first decoding unit 141 and the second decoding unit 142 are further configured to read tail-biting information stored in the tail-biting information storage block 200 as a decoding initial value before performing decoding of each decoded segment to calculate posterior probability information.
For example, with continued reference to fig. 2, when decoding segment 1 is decoded, the backward branch metric and the forward branch metric of decoding segment 0 are needed as stored values for calculating the forward and backward branch metrics of decoding segment 1. Therefore, when decoding segment 0 is decoded, its forward and backward branch metrics are written into the tail-biting information storage block 200 as tail-biting information, so that when decoding segment 1 is decoded, those metrics can be read directly from the tail-biting information storage block 200.
The tail-biting information storage block 200 adopts serialization, thereby reducing the consumption of block storage units and improving the resource utilization rate.
Further, the tail-biting information includes a first tail-biting and a second tail-biting, the first tail-biting is the tail-biting information written by the first decoding unit 141, and the second tail-biting is the tail-biting information written by the second decoding unit 142.
Referring to fig. 2, when the first tail-biting and the second tail-biting are written in for decoding the last decoding segment, the first decoding unit 141 reads the second tail-biting as the decoding initial value when performing the posterior probability information calculation of the next decoding segment, and the second decoding unit 142 reads the first tail-biting as the decoding initial value when performing the posterior probability information calculation of the next decoding segment.
Specifically, suppose the first tail-biting is the tail-biting information written by the first decoding unit 141 when decoding segment (N/2-1), i.e., the forward and backward branch metrics of decoding segment (N/2-1), and the second tail-biting is the tail-biting information written by the second decoding unit 142 when decoding segment N-1, i.e., the forward and backward branch metrics of decoding segment N-1. The first decoding unit 141 will next decode decoding segment 0, and the second decoding unit will next decode decoding segment N/2. At this point, in the next decoding, the first decoding unit 141 reads the second tail-biting in the tail-biting information storage block 200 as its decoding initial value, and the second decoding unit 142 reads the first tail-biting as its decoding initial value. That is, the forward and backward branch metrics of the last decoding segment (N/2-1) are the decoding initial values of the next decoding segment N/2, and the forward and backward branch metrics of the last decoding segment N-1 are the decoding initial values of the next decoding segment 0.
In other positions, the first tail-biting is the decoding initial value of the first decoding unit 141, and the second tail-biting is the decoding initial value of the second decoding unit 142.
Through the arrangement, the stability of parallel decoding can be ensured to a certain extent.
Since the bit width of the tail-biting information to be stored is 16 × 9 = 144, and such a wide storage would consume a large amount of the resources of the tail-biting information storage block 200, the tail-biting information storage block 200 is serialized 4-fold, converting the 144-bit width into 36 bits and thereby reducing storage resource consumption.
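The 4-fold serialization can be sketched as follows (Python; the bit layout, most-significant 36-bit slice first, is an assumption for illustration): one 144-bit tail-biting word is transferred as four 36-bit words.

```python
WIDTH = 144
SLICE = 36  # 144 / 4: one tail-biting word is stored over 4 cycles

def serialize(word_144):
    """Split one 144-bit tail-biting word into four 36-bit words,
    most-significant slice first (assumed ordering)."""
    mask = (1 << SLICE) - 1
    return [(word_144 >> (SLICE * i)) & mask for i in range(3, -1, -1)]

def deserialize(slices):
    """Reassemble the four 36-bit words into the original 144-bit word."""
    word = 0
    for s in slices:
        word = (word << SLICE) | s
    return word
```

The round trip costs four clock cycles per word but lets the storage block be only 36 bits wide, which is what reduces the block-memory consumption.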
In general, to support the decoding of all frames of the DVB-RCS2 standard, the storage resource required by the tail-biting information storage BLOCK 200 is 1 BLOCK RAM, i.e., one 36 Kbit storage block.
Further, the prior probability information includes interleaving addresses and sequence numbers. The interleaving address may be obtained by using a preset interleaving algorithm, or may be obtained by using a conventional interleaving algorithm.
The preset interleaving algorithm may be an algorithm obtained by improving a conventional interleaving algorithm, for example, the algorithm may be an algorithm that obtains an interleaving address by performing simple preset addition based on an initial parameter, where the initial parameter is a value obtained according to the conventional interleaving algorithm.
The first prior storage block 150 and the second prior storage block 160 both store the sequence number as the memory address and the interleave address as the memory content, i.e., in an inverted manner.
Specifically, the first prior storage block 150 stores the sequence number of the one-way prior probability information as the storage address and the interleave address of the one-way prior probability information as the storage content, in inverted form. Similarly, the second prior storage block 160 stores the sequence number of the two-way prior probability information as the storage address and the interleave address of the two-way prior probability information as the storage content, in inverted form.
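One plausible reading of this "inverted" storage (an assumption; the patent does not spell out the mapping) is that the table holds the inverse of the interleaver permutation, so a lookup by sequence position directly yields the de-interleave target:

```python
def build_inverted_table(pi):
    """pi[i] is the interleaved position of symbol i (hypothetical layout).
    The stored table uses the position as the address and the original
    index as the content, so reading address pi[i] returns i."""
    table = [0] * len(pi)
    for i, p in enumerate(pi):
        table[p] = i
    return table
```

With such a table, de-interleaving during the prior-probability update is a single table read per symbol rather than a search.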
On the basis, when the first prior storage BLOCK 150 and the second prior storage BLOCK 160 are used for prior probability storage, the required storage resources are 6 BLOCK RAMs (36Kbits BLOCK storage units).
In iterative decoding, the decoding results output during the first two passes are generally of low accuracy; accuracy of the decoding result is ensured only once the number of decoding passes reaches 4 or more. Based on this, the decoding information storage block 170 is configured to, during the first N decoding passes of each iterative decoding, only read and judge the posterior probability information to obtain and store the information bits, and is further configured to, after the first N decoding passes of each iterative decoding are completed, output the information bits of the previous decoding while reading and judging the posterior probability information of the current decoding to obtain and store the information bits;
wherein N is a preset natural number.
Specifically, N may be 2, that is, no information bits are output during the first two decoding passes.
When the data frame of the previous prior-information decoding is long and that of the next is short, the decoding result of the next frame may begin to be written into the decoding information storage block 170 before the previous frame has been completely read out, which may corrupt the data of the previous decoding result. Therefore, the maximum and minimum frames can be found through simulation and handled specially, for example by increasing the iteration count of the current frame or increasing the waiting time.
Generally, to support the decoding of all frame lengths of DVB-RCS2, the storage resource required by the decoding information storage BLOCK 170 is 0.5 BLOCK RAM, i.e., half of a 36 Kbit storage block.
The decoding information storage block 170 adopts time-division multiplexing: the output of the previous decoding and the iteration result of the current decoding are written into the same storage unit space without affecting parallel decoding performance, thereby reducing block storage consumption.
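The time-division multiplexing can be sketched behaviorally (Python; names are illustrative, not from the patent): in one time slot the previous decoding's bit is read out of a cell just before the current iteration's bit overwrites it, so one memory serves both roles.

```python
class TDMStore:
    """Behavioral sketch of the decoding information storage block."""

    def __init__(self, size):
        self.mem = [0] * size

    def read_then_write(self, addr, new_bit):
        # one time slot: output the previous decoding's bit, then
        # overwrite the same cell with the current iteration's bit
        old_bit = self.mem[addr]
        self.mem[addr] = new_bit
        return old_bit
```

Because the read always precedes the write within a slot, no second memory is needed to hold the outgoing decoding result.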
The two-way parallel decoding storage device 100 described above provides a storage structure for a parallel decoding algorithm based on windowing and the division into decoding segments, and can keep the pipeline maximally busy during parallel operation. When the backward branch metrics are cached, a circular storage structure (i.e., ring read-write) controls the time difference between reads and writes, so that two sub-blocks of backward branch metrics can be cached using only the resources of a single storage sub-block. Furthermore, by time-division multiplexing, in which the previous decoding result is read out while the result of the current decoding iteration overwrites the same locations, the device stores the decoding output and the internal iterative decoding in the same block storage space without increasing storage overhead.
With the above arrangement, the two-way parallel decoding storage device supports the decoding of all frame lengths of DVB-RCS2; the storage of the de-interleaved information requires 1.5 BLOCK RAMs (36 Kbit block memory units), and the total required memory resource is 21 BLOCK RAMs (36 Kbit block memory units).
During verification, the working clock of the two-way parallel decoding storage device was designed to be 300 MHz with a maximum prior-information code length of 4792. With all RCS2 code rates supported, 6 iterations, sliding-window lengths of 56 and 78, and 21 parallel segments, the data length used to initialize each backward metric is 56. Each window switch consumes 1 extra clock, and two sliding windows operate in parallel. Ignoring the sliding-window switching and preprocessing time, the highest channel rate is calculated as:
(4792 / (2396 × 2 × 6)) × 300 MHz = 50 Mbit/s.
By this calculation an information bandwidth of at most 50 Mbit/s can be achieved. Of course, once time such as the switching of tail-biting information during decoding is added, the final effective decoding information bandwidth will be slightly lower than the calculated value.
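The rate calculation can be checked numerically (Python; the grouping of the factors follows the patent's expression, and the interpretation of the factor 2 as two trellis sweeps per iteration is an assumption):

```python
clock_hz = 300e6      # working clock, 300 MHz
frame_bits = 4792     # maximum prior-information code length
half_len = 2396       # each of the two parallel halves (4792 / 2)
iterations = 6

# clocks per frame, grouping the factors as the patent's expression does:
# 2396 * 2 * 6
clocks_per_frame = half_len * 2 * iterations
throughput_bps = frame_bits / clocks_per_frame * clock_hz
# throughput_bps equals 50e6, the 50 Mbit/s quoted in the text
```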
It should be noted that the two-way parallel decoding storage device 100 provided in the present application can be used as a functional module of a decoder.
In addition, the application also provides an electronic device comprising a decoder and the above two-way parallel decoding storage device.
The present embodiment also provides a two-way parallel decoding storage method, which is implemented based on the two-way parallel decoding storage apparatus 100 provided above. The method comprises the following steps:
s301, receiving the prior information to be decoded, dividing the prior information to obtain a first decoding block and a second decoding block, storing the first decoding block in the first storage block 120 according to a preset storage manner, and storing the second decoding block in the second storage block 130 according to the preset storage manner.
The preset storage mode comprises sequential storage and storage after interleaving.
S302, when performing iterative decoding on the one-way decoding block and the two-way decoding block, each decoding using the sliding-window method, calculating the respective decoding results of the one-way and two-way decoding blocks with the decoding algorithm, according to the one-way and two-way decoding blocks read in parallel from the first storage block 120 and the second storage block 130, and the one-way and two-way prior probability information of the last decoding read from the first prior storage block 150 and the second prior storage block 160.
Wherein, the decoding result comprises the posterior probability information of each decoding group.
S303, obtaining the one-way prior probability information of the one-way decoding block by the interleaving/de-interleaving algorithm according to the one-way decoding block and its posterior probability information, and storing the one-way prior probability information in the first prior storage block 150.
S304, obtaining the two-way prior probability information of the two-way decoding block by the interleaving/de-interleaving algorithm according to the two-way decoding block and its posterior probability information, and storing the two-way prior probability information in the second prior storage block 160.
S305, the posterior probability information is determined to obtain information bits, and the information bits are stored in the decoding information storage block 170 according to the deinterleaving address.
The two-way parallel decoding storage method described above is a partial usage method/execution process of the two-way parallel decoding storage device 100 and is not the only one; that is, the actual step flow of the two-way parallel decoding storage method is realized based on the functions of the two-way parallel decoding storage device 100, and the actual implementation shall prevail.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A double-path parallel decoding storage device is characterized by comprising an external input storage block, a core decoding module, a first prior storage block, a second prior storage block and a decoding information storage block;
the external input storage block comprises a first storage block and a second storage block and is used for dividing received prior information to obtain a path decoding block and a two-path decoding block, storing the path decoding block in the first storage block in a preset storage mode and storing the two-path decoding block in the second storage block in the preset storage mode; the preset storage mode comprises sequential storage and storage after interleaving;
the core decoding module is configured to calculate, by using a decoding algorithm, respective decoding results of the one-way decoding block and the two-way decoding block according to the one-way decoding block and the two-way decoding block read in parallel from the first storage block and the second storage block and the one-way prior probability information and the two-way prior probability information of the last decoding read from the first prior storage block and the second prior storage block by using a sliding window method when performing iterative decoding on the one-way decoding block and the two-way decoding block, where the decoding results include a posterior probability information of each decoding block, and the iterative decoding includes multiple decoding;
the first prior storage block is used for reading the posterior probability information of the one path of decoding block in the core decoding module, and calculating the one path of decoding block and the posterior probability information of the one path of decoding block according to an interleaving/de-interleaving algorithm to obtain and store the one path of prior probability information of the current decoding;
the second prior storage block is used for reading the posterior probability information and the prior information of the two-way decoding block in the core decoding module, and calculating the posterior probability information of the two-way decoding block and the two-way decoding block according to an interleaving/de-interleaving algorithm to obtain and store the two-way prior probability information of the current decoding;
the decoding information storage block is used for reading the posterior probability information in the core decoding module, judging the posterior probability information to obtain information bits, and storing the information bits according to a de-interleaving address.
2. The dual-path parallel decoding storage device according to claim 1, wherein the decoding algorithm comprises a max-log-map algorithm, the core decoding module comprises a first decoding unit and a second decoding unit, and the one-path decoding block and the two-path decoding block each comprise a plurality of decoding segments;
the first decoding unit is configured to, during each decoding, sequentially read each decoding segment in the first storage block and one path of prior probability information of a last decoding stored in the first prior storage block, so as to obtain posterior probability information of each decoding segment by using a decoding algorithm;
the second decoding unit is configured to, during each decoding, sequentially read each decoding segment in the second storage block and two paths of prior probability information of the last decoding stored in the second prior storage block, so as to obtain the posterior probability information of each decoding segment by using a decoding algorithm.
3. The two-way parallel decoding storage device of claim 2, further comprising a first cache block and a second cache block;
the first decoding unit is used for obtaining posterior probability information of each decoding segment through the following steps:
reading one path of prior probability information of the last decoding stored in the first prior storage block and the decoding segment stored in the first storage block, and obtaining the branch metric of each decoding segment in the one path of decoding block by adopting a branch metric calculation formula;
obtaining backward branch metrics of each of the decoding segments by using a backward branch metric calculation formula according to the branch metrics in advance by a sliding window, and writing the backward branch metrics into the first cache block;
according to the branch measurement of each decoding section, adopting a forward branch measurement calculation formula to obtain the forward branch measurement of each decoding section;
sequentially obtaining a path of posterior probability information of each decoding section in the path of decoding group according to the forward branch metric and the backward branch metric of each decoding section;
the second decoding unit is used for obtaining posterior probability information of each decoding segment through the following steps:
reading two-way prior probability information of the last decoding stored in the second prior storage block and the decoding segments stored in the second storage block, and obtaining the branch metric of each decoding segment in the two-way decoding group by adopting a branch metric calculation formula;
obtaining backward branch metrics of each of the decoding segments by using a backward branch metric calculation formula according to the branch metrics in advance by a sliding window, and writing the backward branch metrics into the second cache block;
according to the branch measurement of each decoding section, adopting a forward branch measurement calculation formula to obtain the forward branch measurement of each decoding section;
and sequentially obtaining two-path posterior probability information of each decoding section in the two-path decoding group according to the forward branch metric and the backward branch metric of each decoding section.
4. The apparatus according to claim 3, wherein the sizes of the storage spaces of the first and second buffer blocks are the sum of the maximum value of the backward branch metric and a preset guard interval, and both of them perform read and write operations in a circular read and write manner.
5. The dual-path parallel decoding storage device according to claim 3, further comprising a tail-biting information storage block, wherein the tail-biting information storage block adopts 4-times serialization for data storage;
the first decoding unit and the second decoding unit are also used for writing the forward branch metric and the backward branch metric of a decoding segment into the tail-biting information storage block as tail-biting information every time the decoding of the decoding segment is completed;
the first decoding unit and the second decoding unit are further configured to read tail-biting information stored in the tail-biting information storage block as a decoding initial value before decoding of each decoding segment, so as to calculate posterior probability information.
6. The apparatus according to claim 5, wherein the tail-biting information comprises first tail-biting information and second tail-biting information, the first tail-biting information being written by the first decoding unit and the second tail-biting information being written by the second decoding unit;
and after the first tail-biting information and the second tail-biting information have been written upon decoding an end decoding segment, the first decoding unit reads the second tail-biting information as its decoding initial value when calculating the posterior probability information of the next decoding segment, and the second decoding unit reads the first tail-biting information as its decoding initial value when calculating the posterior probability information of the next decoding segment.
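The cross-read of tail-biting information in claim 6 can be sketched as a table lookup: each unit's initial value for the next segment is the tail written by the *other* unit. The unit names and the (forward, backward) tuple layout are assumptions, not taken from the patent:

```python
def next_initial_values(tail_block):
    """Given the tail-biting storage block contents after an end
    decoding segment — a mapping of unit name to its written
    (forward_metric, backward_metric) pair — return each unit's
    decoding initial value for the next decoding segment."""
    return {
        "unit1": tail_block["unit2"],   # first unit reads the second tail
        "unit2": tail_block["unit1"],   # second unit reads the first tail
    }
```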
7. The apparatus according to claim 1, wherein the prior probability information comprises interleaving addresses and serial numbers, and the first prior storage block and the second prior storage block are both configured to store them in an inverted manner, with the serial numbers serving as memory addresses and the interleaving addresses serving as memory contents.
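The inverted storage of claim 7 can be sketched as follows, assuming the stored table is later read back as an inverse mapping for de-interleaving; the function names and the example permutation are hypothetical:

```python
def store_inverted(interleave_addrs):
    """Claim 7 sketch: use the serial number as the memory address and
    the interleaving address as the stored content."""
    mem = [0] * len(interleave_addrs)
    for serial_no, addr in enumerate(interleave_addrs):
        mem[serial_no] = addr   # address = serial number, content = interleave address
    return mem

def deinterleave(mem, data):
    """Read the stored table as an inverse mapping: the content at
    position i tells where data[i] belongs in natural order."""
    out = [None] * len(data)
    for i, addr in enumerate(mem):
        out[addr] = data[i]
    return out
```

Storing the table this way lets one memory serve both directions: reading contents in address order interleaves, while scattering by content (as above) de-interleaves.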
8. The apparatus according to claim 1, wherein, during the first N decodings of each iterative decoding, the decoding information storage block is configured only to read the posterior probability information, decide it to obtain information bits, and store the information bits; after the first N decodings of each iterative decoding are completed, it is further configured to output the information bits of the previous decoding while reading and deciding the posterior probability information of the current decoding to obtain and store the corresponding information bits;
wherein N is a preset natural number.
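The overlapped decide/output behavior of claim 8 can be sketched as a two-stage pipeline. The hard-decision rule (sign of the posterior value) is an assumption, and this sketch omits the flush of the final decoding's bits that a real device would perform at the end of an iteration:

```python
def run_decision_pipeline(decodings, n):
    """decodings: posterior probability values of successive decodings.
    For the first n decodings, only decide and store the information
    bits; from decoding n+1 on, output the previous decoding's stored
    bits while deciding and storing the current decoding's bits."""
    stored = None
    outputs = []
    for idx, posterior in enumerate(decodings):
        bits = [1 if llr > 0 else 0 for llr in posterior]  # hard decision
        if idx >= n and stored is not None:
            outputs.append(stored)   # output the previous decoding's bits
        stored = bits                # store the current decoding's bits
    return outputs
```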
9. The two-way parallel decoding storage device according to claim 1, wherein the first storage block and the second storage block each comprise an interleaving block and a sequential block;
the sequential block is used for storing the one-way decoding group or the two-way decoding group in sequential order;
and the interleaving block is used for storing the one-way decoding group or the two-way decoding group after interleaving it.
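The sequential block and interleaving block of claim 9 can be sketched as two copies of the same decoding group; the index-permutation interleaver is a hypothetical stand-in for the claimed interleaving:

```python
def store_group(group, interleaver):
    """Fill one storage block's two sub-blocks from a decoding group:
    the sequential block in natural order, the interleaving block in
    the order given by a permutation of indices."""
    sequential_block = list(group)                    # natural order
    interleaving_block = [group[i] for i in interleaver]  # interleaved order
    return sequential_block, interleaving_block
```

Holding both orders lets the constituent decoders read in natural and interleaved order in the same cycle, which is what makes the two-path parallel read in claim 10 possible.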
10. A two-way parallel decoding storage method implemented with the two-way parallel decoding storage device of any one of claims 1 to 9, the method comprising:
receiving prior information to be decoded, dividing it into a first decoding block and a second decoding block, storing the first decoding block in the first storage block according to a preset storage mode, and storing the second decoding block in the second storage block according to the preset storage mode, the preset storage mode comprising sequential storage and storage after interleaving;
when iteratively decoding the one-way decoding block and the two-way decoding block, using a sliding window method for each decoding: from the one-way decoding block and the two-way decoding block read in parallel from the first storage block and the second storage block, and from the one-way prior probability information and the two-way prior probability information of the last decoding read from the first prior storage block and the second prior storage block, calculating the decoding results of the one-way decoding block and the two-way decoding block by a decoding algorithm, the decoding results comprising the posterior probability information of each decoding block;
obtaining the one-way prior probability information of the one-way decoding block by an interleaving/de-interleaving algorithm from the posterior probability information of the one-way decoding block and the one-way decoding block itself, and storing it in the first prior storage block;
obtaining the two-way prior probability information of the two-way decoding block by the interleaving/de-interleaving algorithm from the posterior probability information of the two-way decoding block and the two-way decoding block itself, and storing it in the second prior storage block;
and deciding the posterior probability information to obtain information bits, and storing the information bits in the decoding information storage block according to de-interleaving addresses.
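The method steps of claim 10 can be sketched end to end. Here `decode_fn` and `interleave_fn` are hypothetical stand-ins for the claimed decoding algorithm and interleaving/de-interleaving algorithm, and the two paths are decoded one after the other for clarity rather than in parallel hardware:

```python
def iterative_decode(prior_info, split, n_iters, decode_fn, interleave_fn):
    """Claim 10 flow, sketched:
    1) divide the received prior information into two decoding blocks;
    2) each iteration, decode both blocks from their stored data and
       the previous decoding's prior probability information;
    3) pass each posterior through the interleave/de-interleave step
       to form the next prior and store it in the prior blocks;
    4) finally decide the posterior information into bits."""
    assert n_iters >= 1
    block1, block2 = prior_info[:split], prior_info[split:]
    prior1 = [0.0] * len(block1)   # first prior storage block, initially empty
    prior2 = [0.0] * len(block2)   # second prior storage block
    for _ in range(n_iters):
        post1 = decode_fn(block1, prior1)   # one-way decoding
        post2 = decode_fn(block2, prior2)   # two-way decoding
        prior1 = interleave_fn(post1)       # update first prior block
        prior2 = interleave_fn(post2)       # update second prior block
    # Hard decision on the last posterior information.
    return [1 if p > 0 else 0 for p in post1 + post2]
```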
CN202111261506.4A 2021-10-28 2021-10-28 Double-path parallel decoding storage equipment and method Pending CN113992213A (en)

Publications (1)

Publication Number Publication Date
CN113992213A true CN113992213A (en) 2022-01-28



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination