CN113258940A - turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium - Google Patents
- Publication number: CN113258940A (application CN202110658186.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- decoding
- target
- decoder
- target component
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2957—Turbo codes and decoding
Abstract
The invention relates to the technical field of communication, and provides a turbo decoding method, apparatus, decoding device, and storage medium, applied to a decoding device that comprises a sub-decoder and a plurality of component decoders. The method comprises the following steps: acquiring data to be decoded and the number of the plurality of component decoders; determining, according to the number, the length of the data to be decoded, and a preset interleaving rule, the target number of target component decoders allocated to the sub-decoder; and inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is met, to obtain the decoded data. The invention can accommodate the different code patterns of various protocols, automatically adapt to the requirements of each code pattern, and enhance the compatibility and flexibility of turbo decoding.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a turbo decoding method, apparatus, decoding device, and storage medium.
Background
Turbo codes were developed from convolutional codes together with the idea of iteration; unlike conventional turbo codes, dual binary convolutional turbo codes encode two information bits per clock cycle. At the receiving end, iterative soft-input soft-output (SISO) decoding is adopted, which exploits the advantages of concatenated codes, approximately realizes random coding, and effectively turns short codes into long codes. Turbo codes achieve performance very close to the channel capacity limit by fully exploiting randomness in the channel coding scheme. In practical applications an interleaver is also introduced to further improve turbo code performance.
Different protocols have different code patterns, and different code patterns have different interleaving rules during decoding, so that the decoding parallelism is different.
Disclosure of Invention
The invention aims to provide a turbo decoding method, a turbo decoding device, turbo decoding equipment and a storage medium, which can give consideration to different code patterns of different protocols, realize the requirement of automatically adapting to the code patterns and enhance the compatibility and flexibility of a decoding scheme.
To achieve the above object, the invention adopts the following technical solution:
in a first aspect, the present invention provides a turbo decoding method applied to a decoding apparatus including a sub-decoder and a plurality of component decoders, the method including: acquiring data to be decoded and the number of the component decoders; determining the target number of target component decoders distributed for the sub-decoders according to the number, the length of the data to be decoded and a preset interleaving rule; and inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is met, and obtaining the decoded data.
In a second aspect, the present invention provides a turbo decoding apparatus applied to a decoding device including a sub-decoder and a plurality of component decoders, the apparatus comprising: the acquisition module is used for acquiring data to be decoded and the number of the component decoders; a determining module, configured to determine, according to the number, the length of the data to be decoded, and a preset interleaving rule, a target number of target component decoders allocated to the sub-decoder; and the decoding module is used for inputting the data to be decoded into the sub-decoder to carry out iterative decoding until a preset condition is met, so as to obtain decoded data.
In a third aspect, the present invention provides a decoding device which, when executing a computer program, implements the turbo decoding method described above.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a decoding device, implements a turbo decoding method as described above.
Compared with the prior art, the method and the device determine, according to the number of component decoders, the length of the data to be decoded, and the preset interleaving rule, the target number of target component decoders that can be used in iterative decoding, and then input the data to be decoded into the sub-decoder for iterative decoding until the preset condition is met, obtaining the decoded data. The required target component decoders are thus determined according to the different interleaving rules adopted by different code patterns during decoding, so that the different code patterns of various protocols are accommodated, the requirements of each code pattern are automatically adapted to, and the compatibility and flexibility of turbo decoding are enhanced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart illustrating a turbo decoding method according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of determining the target number of the target component decoder according to an embodiment of the present invention.
Fig. 3 is a diagram illustrating an example of an address access conflict occurring when writing data into the memory C according to an embodiment of the present invention.
Fig. 4 is a diagram illustrating an example of an address access conflict occurring when the memory C is read according to an embodiment of the present invention.
Fig. 5 is a flowchart illustrating the sub-step of step S120 in fig. 1 according to an embodiment of the present invention.
Fig. 6 is a schematic flowchart of sub-step S1201 in fig. 5 according to an embodiment of the present invention.
Fig. 7 is a diagram illustrating an example of one-time iterative decoding according to an embodiment of the present invention.
Fig. 8 is a schematic flowchart of a sub-step of step S12010 in fig. 6 according to an embodiment of the present invention.
Fig. 9 is a diagram illustrating a first decoding in a single iterative decoding according to an embodiment of the present invention.
Fig. 10 is a flowchart illustrating an iterative decoding method applied to each target component decoder according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of state transition of tail-biting convolution according to an embodiment of the present invention.
Fig. 12 is a schematic process diagram of a first recursion provided in the embodiment of the present invention.
Fig. 13 is a schematic process diagram of a second recursion provided in the embodiment of the present invention.
Fig. 14 is a flowchart illustrating a hard decision process according to an embodiment of the present invention.
Fig. 15 is a flowchart illustrating a method for implementing early-end iterative decoding according to an embodiment of the present invention.
Fig. 16 is a flowchart illustrating a method for determining that a preset condition is satisfied according to an embodiment of the present invention.
Fig. 17 is a block diagram of a turbo decoding apparatus according to an embodiment of the present invention.
Fig. 18 is a block diagram illustrating a decoding apparatus according to an embodiment of the present invention.
Reference numerals: 10-decoding device; 11-decoding controller; 12-memory; 13-bus; 14-communication interface; 15-sub-decoder; 100-turbo decoding apparatus; 110-obtaining module; 120-determining module; 130-decoding module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that if the terms "upper", "lower", "inside", "outside", etc. indicate an orientation or a positional relationship based on that shown in the drawings or that the product of the present invention is used as it is, this is only for convenience of description and simplification of the description, and it does not indicate or imply that the device or the element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Since the 1960s, mankind has put hundreds of communication and broadcast satellites into high orbit, and these satellites have been the mainstay of international telecommunication and television transmission.
The characteristics of high orbit satellite mobile communication services derive from the conditions of carrying out communication services using a geostationary satellite 35,800 km above the equator. At this altitude, a single satellite can cover nearly half of the earth, forming a regional satellite communication system that can serve any point within its coverage. However, high orbit satellites must be separated from one another by a certain distance, so their number is small.
The disadvantages of high orbit are also fairly obvious: the long free-space transmission distance introduces a large time delay, orbit resources are scarce, and each high orbit satellite must serve a huge number of users.
Given the above disadvantages of high orbit, it is necessary to improve the data throughput of high orbit satellites and increase the number of users served; to improve signal processing performance and avoid repeated signal transmission, thereby relieving the burden on the channel; and to reduce the time delay caused by data processing and improve working efficiency. Coding and decoding are among the most important technologies in high orbit satellite communication and are also a bottleneck for improving the throughput and performance of high orbit satellites; improving coding and decoding throughput and performance therefore greatly improves the throughput and performance of the high orbit satellite itself.
In DVB-RCS2, the current technical protocol for high orbit, the turbo code is the main coding and decoding algorithm and improves the system's self-error-correcting capability; the turbo code used is a dual binary, 16-state turbo code that employs the tail-biting convolution property.
However, as the performance of turbo codes has improved, their structure has become more complicated, which brings problems. Because the interleaving pattern of the turbo code restricts data access during decoding, the parallelism of iterative decoding is low, the decoding delay is large, and the throughput is limited; moreover, the backward recursion in the decoding process consumes a large amount of memory, so hardware resource consumption is high and decoding is more difficult. At present, the sliding window decoding algorithm and the fully parallel decoding algorithm are the most widely used turbo decoding schemes. The sliding window algorithm is a serial decoding algorithm with large delay and limited throughput. The fully parallel algorithm is effective in improving decoding parallelism, but it is limited by the interleaver: most code patterns specified by a protocol cannot use it, and the parallelism (i.e., the number of component decoders running in parallel) differs from one code pattern to another because each code pattern has a different interleaving rule.
In view of this, embodiments of the present invention provide a turbo decoding method, apparatus, decoding device, and storage medium, which can automatically determine the number of component decoders that can run in parallel according to an interleaving rule, thereby taking into account different code patterns of various different protocols, implementing requirements for automatically adapting to the code patterns, enhancing compatibility and flexibility of a decoding scheme, and simultaneously, on the premise of ensuring decoding performance, improving parallelism, improving throughput, reducing delay, and reducing data storage capacity, which will be described in detail below.
Referring to fig. 1, fig. 1 is a schematic flow chart of a turbo decoding method according to an embodiment of the present invention, the method including the following steps:
step S100, obtaining data to be decoded and the number of the component decoders.
In this embodiment, as a specific implementation manner, the data to be decoded may be data after processing the data received by the decoding device, where the processing includes, but is not limited to, normalization processing, rate de-matching, or demodulation processing.
In this embodiment, the decoding apparatus includes a plurality of component decoders, and the number of component decoders may be the maximum number of component decoders that the decoding apparatus can support.
It should be noted that, as a specific implementation, this number may be set in the decoding device in advance, or may be determined by rate matching according to the code rate.
Step S110, determining the target number of the target component decoders allocated to the sub-decoders according to the number, the length of the data to be decoded, and a preset interleaving rule.
In this embodiment, the decoding device further includes a sub-decoder, where the sub-decoder may include a plurality of component decoders that can operate in parallel, the data to be decoded is a turbo code, each code pattern of the turbo code corresponds to an interleaving sequence after being interleaved by using a preset interleaving rule, and the interleaving sequence determines the parallelism (i.e., the number of component decoders in the sub-decoder).
In this embodiment, the code patterns of the data to be decoded are different, and the preset interleaving rules adopted by the data to be decoded are also different, so that the target numbers of the target component decoders allocated to the sub-decoders are also different. The embodiment of the invention can determine the target number of the target component decoders of the sub-decoders according to the preset interleaving rule, thereby being compatible with different code patterns of data to be decoded.
Step S120, inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is met, and obtaining the decoded data.
In this embodiment, after the data to be decoded is input to the sub-decoder, the target component decoder in the sub-decoder performs iterative decoding in parallel, thereby improving the efficiency of iterative decoding of the data to be decoded.
The method provided by the embodiment of the invention determines the target number of the target component decoder used by iterative decoding for different code types of different protocols adopting different interleaving rules, can automatically calculate the parallelism of the iterative decoding (namely the target number of the target component decoder) aiming at different types of interleavers adopting different interleaving rules and different code types, does not need to artificially analyze the type and the structure of the interleavers, thereby taking different code types of different protocols into consideration, realizing the requirement of automatically adapting to the code types, and enhancing the compatibility and the flexibility of a decoding scheme.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating the process of determining the target number of the target component decoder according to the embodiment of the present invention, and step S110 includes the following sub-steps:
and a substep S1101 of determining the number as a candidate number.
And a substep S1102, dividing the data to be decoded into a plurality of data segments according to the number to be selected and the length of the data to be decoded.
And a substep S1103 of interleaving each of the plurality of data segments according to a preset interleaving rule to obtain a memory address sequence after interleaving each of the data segments.
In this embodiment, as a specific implementation manner, if decoding is performed serially, the data and the interleaving sequence reside in two different memories, and one datum is read (or written) at a time, so only one read (or write) operation is performed on the memory at any moment, and the problems of address contention and access conflict do not occur.
In this embodiment, as another specific implementation manner, if decoding is parallel, for example when M component decoders decode in parallel, M data (also referred to as extrinsic information) are simultaneously written into or read from the interleaved memory at a given moment. Assume M = 4 and that the data of the 4 component decoders a to d need to be written into memories A to D respectively, where j denotes the data offset within each of the 4 component decoders. Because the data of component decoder a and component decoder d must be written into memory C at the same time, address contention occurs and a memory access conflict results. Referring to fig. 3, fig. 3 is a diagram illustrating an example of an address access conflict occurring when writing data into memory C according to an embodiment of the present invention. Similarly, for 4 parallel component decoders, at a certain moment it is necessary to read the j-th data of segments a and d from memory C at the same time, which is not allowed in a practical hardware implementation; address contention therefore occurs and causes a memory access conflict. Referring to fig. 4, fig. 4 is a diagram illustrating an example of an address access conflict occurring when reading memory C according to an embodiment of the present invention.
In order to avoid address access conflict, taking M component decoders as an example, address conflict is not generated when the M component decoders access data in the memory through a preset interleaving rule at any time in the decoding process.
The condition under which M component decoders can operate in parallel is: when the j-th data of the u-th and the v-th component decoders are accessed simultaneously, their memory access addresses do not conflict. Here u and v denote any two of the M component decoders, 0 ≤ u < v ≤ M−1, 0 ≤ j ≤ L−1, and L is the length of each data segment. Writing π(·) for the address mapping of the preset interleaving rule:
- j + uL: the index of the j-th data in the u-th component decoder; π(j + uL) denotes the storage address of this data after interleaving.
- j + vL: the index of the j-th data in the v-th component decoder; π(j + vL) denotes the storage address of this data after interleaving.
The two interleaved addresses must fall into different memories, i.e. ⌊π(j + uL)/L⌋ ≠ ⌊π(j + vL)/L⌋.
And a substep S1104 of taking the number to be selected as the target number if no access conflict exists between every two of the plurality of memory address sequences.
In this embodiment, that no access conflict exists between every two of the plurality of memory address sequences means that no access conflict exists when the memory addresses of the data with the same position offset of the plurality of data segments are accessed by any two component decoders.
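As a sketch, the pairwise no-conflict check described above can be written as follows. The helper name, the bank mapping addr // L (one memory per length-L segment), and the two example interleavers are illustrative assumptions, not taken from the patent:

```python
def is_contention_free(pi, M, L):
    """Return True if M component decoders can access the interleaved
    extrinsic-information memory without conflicts, assuming the address
    space is split into M banks of L entries (bank index = address // L).

    pi : list of length M*L mapping index k to its interleaved address
    """
    for j in range(L):
        # bank each of the M decoders touches for its j-th datum
        banks = [pi[j + u * L] // L for u in range(M)]
        if len(set(banks)) != M:          # two decoders hit one bank
            return False
    return True

M, L = 4, 6
# a cyclic shift keeps the M simultaneous accesses in distinct banks
pi_good = [(k + 5) % (M * L) for k in range(M * L)]
print(is_contention_free(pi_good, M, L))  # → True
# a row/column block interleaver collides under this bank mapping
pi_bad = [(k % L) * M + k // L for k in range(M * L)]
print(is_contention_free(pi_bad, M, L))   # → False
```

The check visits every offset j once, mirroring the requirement that no conflict arise at any time during decoding.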
And a substep S1105, if there is an access conflict between any two storage addresses in the plurality of storage address sequences, decreasing the number to be selected, replacing the number to be selected with the decreased number to be selected, repeatedly performing segmentation on the data to be decoded, and interleaving each data segment according to a preset interleaving rule until a target number is determined.
In this embodiment, the number to be selected is decreased progressively, the decreased number to be selected replaces the number to be selected, and substeps S1102-S1105 are repeatedly executed until the target number is determined.
The method provided by the embodiment of the invention starts from the number of the component decoders, and finally obtains the target number of the target component decoders meeting the preset interleaving rule by adopting a mode of gradually decreasing the number of the component decoders, thereby realizing the rapid determination of the most appropriate target number and improving the efficiency of iterative decoding.
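The decrement-until-conflict-free search of sub-steps S1102 to S1105 can be sketched as below. The function name and the simplification that a candidate count must divide the block length evenly are assumptions of this sketch, not requirements stated by the patent:

```python
def find_target_parallelism(interleave, N, max_decoders):
    """Search downward from the maximum supported number of component
    decoders for the largest count whose segmentation of an N-symbol
    block is free of interleaved-memory access conflicts.

    interleave(N) -> permutation (list of length N) for this code pattern
    """
    pi = interleave(N)
    for m in range(max_decoders, 0, -1):
        if N % m:
            continue                      # equal-length segments assumed here
        L = N // m
        if all(len({pi[j + u * L] // L for u in range(m)}) == m
               for j in range(L)):
            return m                      # first (largest) conflict-free count
    return 1                              # serial decoding always works

# illustrative interleaver: a cyclic shift of the address space
shift_pi = lambda n: [(k + 5) % n for k in range(n)]
print(find_target_parallelism(shift_pi, 24, 8))  # → 8
```

Because the loop starts at the hardware maximum and stops at the first conflict-free count, it returns the most appropriate target number directly.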
Referring to fig. 5, fig. 5 is a schematic flowchart of a sub-step of step S120 in fig. 1, where the step S120 includes the following sub-steps:
and a substep S1201, inputting the data to be decoded and the preset prior probability data into the sub-decoder for iterative decoding to obtain the extrinsic information data and the posterior probability data.
In this embodiment, iterative decoding may be performed multiple times until a preset condition is satisfied, when iterative decoding is performed for the first time, the preset prior probability data is a preset initial value, for example, the initial value is all 0, and when iterative decoding is performed for other than the first time, the preset prior probability data of the iterative decoding is obtained by performing deinterleaving according to the extrinsic information data output by the previous iterative decoding.
And a substep S1202 of performing deinterleaving on the extrinsic information data, replacing the preset prior probability data with the deinterleaved result, and then performing next iterative decoding until a preset condition is met.
In this embodiment, the external information data is deinterleaved, the preset prior probability data is replaced with the deinterleaved result, and then the substeps S1201 to S1202 are repeated, and when the preset condition is satisfied, the iterative decoding is ended.
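A rough model of this outer iteration loop (sub-steps S1201 to S1202) is sketched below; siso_decode and converged are hypothetical stand-ins for one constituent SISO pass and the preset stopping condition, and pi is the interleaver permutation as a NumPy index array:

```python
import numpy as np

def turbo_iterate(siso_decode, y_sys, y_p1, y_p2, pi,
                  max_iters=8, converged=lambda post: False):
    """Outer schedule: first decoding on natural-order data, second
    decoding on interleaved data, with extrinsic information exchanged
    through interleaving/deinterleaving between the two."""
    inv = np.argsort(pi)                  # deinterleaver of permutation pi
    prior = np.zeros_like(y_sys)          # first pass uses an all-zero prior
    post2 = prior
    for _ in range(max_iters):
        # first decoding: natural order, deinterleaved prior
        ext1, post1 = siso_decode(y_sys, y_p1, prior)
        # second decoding: interleaved data, interleaved extrinsic as prior
        ext2, post2 = siso_decode(y_sys[pi], y_p2, ext1[pi])
        # deinterleave the second decoding's extrinsic -> next prior
        prior = ext2[inv]
        if converged(post2):              # stand-in for the preset condition
            break
    return post2[inv]                     # posterior back in natural order
```

With a toy siso_decode such as `lambda s, p, a: (a + s, a + s)` the loop simply accumulates, which makes the extrinsic-exchange plumbing easy to verify.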
In this embodiment, each iterative decoding includes a first decoding and a second decoding, and on the basis of fig. 5, an embodiment of the present invention further provides a specific implementation manner for obtaining extrinsic information data and a posterior probability data through the first decoding and the second decoding, please refer to fig. 6, where fig. 6 is a schematic flow diagram of a substep S1201 in fig. 5 according to an embodiment of the present invention, and the substep S1201 includes the following substeps:
and a substep S12010 of inputting the original data, the first check data and the preset prior probability data to the sub-decoder for first decoding to obtain intermediate external information data and intermediate posterior probability data.
In this embodiment, the data to be decoded includes original data together with first check data and second check data corresponding to the original data. Since the turbo code is a dual binary turbo code, each symbol in the original data is represented by two bits, and each symbol corresponds to two check data items: the first check data and the second check data. Each of them comprises two bits: the first bit of the first check data and the first bit of the second check data are both first check bits, and the second bit of the first check data and the second bit of the second check data are both second check bits.
And a substep S12011, performing deinterleaving on the intermediate external information data to obtain intermediate prior probability data.
And a substep S12012 of interleaving the original data according to a preset interleaving rule, and inputting the interleaved original data, the second check data and the intermediate prior probability data to the sub-decoder for second decoding to obtain the extrinsic information data and the posterior probability data.
In this embodiment, after the extrinsic information data output by the second decoding of each iterative decoding is deinterleaved, the deinterleaved result is used as the prior probability data of the next iterative decoding. Referring to fig. 7, fig. 7 is a diagram illustrating an example of one iterative decoding according to an embodiment of the present invention. In fig. 7, SISO1 and SISO2 respectively denote the first decoding and the second decoding of one iterative decoding; π denotes an interleaving operation performed according to the preset interleaving rule, and π⁻¹ denotes the corresponding deinterleaving operation; y^s denotes the original data and π(y^s) the data resulting from interleaving the original data; y^p1 denotes the first check data and y^p2 the second check data; i denotes the i-th iterative decoding; Λ1^i and Λ2^i respectively denote the posterior probability data output by the first and the second decoding of the i-th iterative decoding; La1^(i+1) denotes the prior probability data of the first decoding of the (i+1)-th iterative decoding; La2^i denotes the prior probability data of the second decoding of the i-th iterative decoding; and z1^i and z2^i respectively denote the extrinsic information data of the first and the second decoding of the i-th iterative decoding.
For convenience of description, the embodiment of the present invention writes the symbol sequence to be encoded as u = (u_0, u_1, ..., u_{N-1}), where N denotes the number of code symbols (i.e., the sequence length). After BPSK modulation, the coded symbol sequence is expressed as x = (x^s, x^p1, x^p2), where x^s_k denotes a symbol of the data to be transmitted and x^p1_k, x^p2_k respectively denote its first and second check bits. At the receiving end, the receive sequence corresponding to the transmit sequence x is y = (y^s, y^p1, y^p2), where y^s_k denotes one symbol of the original data; one symbol includes 2 bits: y^s_k = (y^{s,A}_k, y^{s,B}_k); π(y^s) is obtained by interleaving y^s. In y^p1_k = (y^{p1,A}_k, y^{p1,B}_k), y^{p1,A}_k is the first check bit and y^{p1,B}_k is the second check bit; in y^p2_k = (y^{p2,A}_k, y^{p2,B}_k), y^{p2,A}_k is the first check bit and y^{p2,B}_k is the second check bit.
On the logical level, the extrinsic information data output by SISO1 of the i-th iterative decoding is z1^i and its posterior probability data is Λ1^i; the extrinsic information data output by SISO2 of the i-th iterative decoding is z2^i and its posterior probability data is Λ2^i. Logically, the output z1^i of SISO1 of the i-th iterative decoding is interleaved to obtain La2^i, which is input to SISO2 for use within the i-th iterative decoding. Similarly, the output z2^i of SISO2 of the i-th iterative decoding is deinterleaved to obtain La1^(i+1), which is input to SISO1 of the (i+1)-th iterative decoding.
On the physical level, SISO1 and SISO2 are completed by the same hardware structure, when the ith iteration decoding is performed, SISO1 and SISO2 are performed in series, SISO1 is performed first, SISO2 is performed later, then the (i + 1) th iteration decoding is performed, similarly, SISO1 and SISO2 are performed in series, SISO1 is performed first, and SISO2 is performed later.
In this embodiment, when the sub-decoder includes a plurality of target component decoders, in order to realize their concurrent processing, an embodiment of the present invention further provides a specific implementation of the concurrent processing. Referring to fig. 8, fig. 8 is a schematic flow diagram of sub-step S12010 in fig. 6 according to the embodiment of the present invention, and sub-step S12010 includes the following sub-steps:
substeps S12010-10, dividing the raw data, the first check data and the preset prior probability data into a plurality of raw data segments, a plurality of first check data segments and a plurality of prior probability data segments respectively according to the target number.
And substeps 12010-11, inputting each original data segment and each corresponding first check data segment and each prior probability data segment into each target component decoder for first decoding to obtain a middle extrinsic information data segment and a middle posterior probability data segment output by each target component decoder.
And a substep S12010-12 of combining the plurality of intermediate extrinsic information data segments output from all the target component decoders to obtain intermediate extrinsic information data.
And a substep S12010-13 of combining the plurality of intermediate posterior probability data segments output by all the target component decoders to obtain intermediate posterior probability data.
According to the method provided by this embodiment, the original data, the first check data, and the preset prior probability data are divided into a plurality of original data segments, first check data segments, and prior probability data segments, which reduces the storage capacity required for iterative decoding and effectively saves hardware storage. At the same time, the multiple target component decoders process the original data segments and their corresponding first check data segments and prior probability data segments in parallel, which increases the parallelism and throughput of the sub-decoder and reduces the iterative decoding delay.
Referring to fig. 9, fig. 9 is an exemplary diagram of the first decoding in one iterative decoding provided by the embodiment of the present invention. The target number is M, that is, the sub-decoder includes M target component decoders, and the original data, the first check data and the preset prior probability data are divided into M original data segments, M first check data segments and M prior probability data segments. Each target component decoder is responsible for decoding according to one original data segment and the corresponding first check data segment and prior probability data segment. Finally, the intermediate extrinsic information data segments output by all the target component decoders are combined into the intermediate extrinsic information data output by the first decoding, and the intermediate posterior probability data segments output by all the target component decoders are combined into the intermediate posterior probability data output by the first decoding.
In this embodiment, in order to reduce the amplitude of the data and the amount of data to be stored, and to reduce memory consumption and computational complexity, an embodiment of the present invention further provides another iterative decoding method applied to each target component decoder. Please refer to fig. 10, where fig. 10 is a flowchart illustrating the iterative decoding method applied to each target component decoder provided in the embodiment of the present invention, and sub-steps S12010-11 include the following sub-steps:
In sub-steps S12010-110, the intermediate extrinsic information data unit and the intermediate posterior probability data unit corresponding to each processing window of each target component decoder are calculated according to the original data unit, the first check data unit and the prior probability data unit corresponding to each processing window of each target component decoder.
In this embodiment, each target component decoder is divided into a preset number of processing windows, and each original data segment, first check data segment and prior probability data segment respectively includes a preset number of original data units, a preset number of first check data units and a preset number of prior probability data units; a processing window is used to process an original data unit and the corresponding first check data unit and prior probability data unit.
In this embodiment, for each target component decoder, a preset number of processing windows are divided, each processing window processes one original data unit of the original data segment and the corresponding first check data unit and prior probability data unit, and serial decoding is performed among the preset number of processing windows, so that the decoding length is effectively reduced, the storage amount of data in the decoding process is reduced again, and the calculation amount is also reduced.
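The sizes resulting from this two-level division (segments across component decoders, windows within a segment) can be illustrated with a small sketch; the block length, decoder count and window count below are hypothetical example values, and the sketch assumes the block length divides evenly.

```python
def window_layout(block_len, m, q):
    """Return (segment_length, window_length) for a block of block_len
    symbols split over m target component decoders with q processing
    windows each; assumes block_len is divisible by m * q."""
    assert block_len % (m * q) == 0
    seg_len = block_len // m        # symbols per component decoder
    win_len = seg_len // q          # symbols per processing window
    return seg_len, win_len

# e.g. 960 symbols, 4 component decoders, 6 windows per decoder
seg_len, win_len = window_layout(960, 4, 6)
```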
In the present embodiment, the iterative decoding is performed a plurality of times. For convenience of explanation, the description is made with an example in which the target component decoders are the 1st to M-th target component decoders and each target component decoder includes the 1st to Q-th processing windows, where M and Q are positive integers.
For the Y-th processing window of the X-th target component decoder of the L-th iterative decoding, where L is any iterative decoding, the method for calculating the intermediate extrinsic information data unit and the intermediate posterior probability data unit corresponding to the processing window may be:
Firstly, the original data unit, the first check data unit and the prior probability data unit corresponding to the Y-th processing window of the X-th target component decoder of the L-th iterative decoding are respectively used as the target original data unit, the target first check data unit and the target prior probability data unit, where L is a positive integer, 1 ≤ X ≤ M, and 1 ≤ Y ≤ Q.
And secondly, obtaining a state transition metric value according to the target original data unit, the target first check data unit and the target prior probability data unit.
In this embodiment, the state transition metric value may be calculated by using the following formula:

gamma_k^j(S_{k-1}, S_k) = x_k^(1)·u_k^(1) + x_k^(2)·u_k^(2) + y_k^(j,1)·p_k^(1) + y_k^(j,2)·p_k^(2) + La_z(k)

where gamma_k^j(S_{k-1}, S_k) represents the state transition metric value; S_{k-1} and S_k respectively represent the states of the target component decoder at time k-1 and time k (since the component decoder performs sequential decoding according to the symbol sequence and processes one symbol at a time, i.e., recurs the state of one symbol at a time, k is also the k-th symbol in the symbol sequence, that is, the (k-1)-th symbol is processed at time k-1 and the k-th symbol is processed at time k); j = 1 represents the first decoding of the L-th iterative decoding and j = 2 represents the second decoding of the L-th iterative decoding; x_k^(1) and x_k^(2) are the 2 bits of one symbol in the target original data unit; u_k^(1) and u_k^(2) are the 2 bits of the corresponding symbol in the transmission data unit; p_k^(1) and p_k^(2) are the first and second parity bits of the transmission data corresponding to the target original data unit; y_k^(j,1) and y_k^(j,2) represent the first check data when j = 1 and the second check data when j = 2; and La_z(k) represents the target prior probability, where z = 1, 2, 3.
Since the turbo code is a binary input tail-biting convolutional code, and there are 16 states in the convolutional encoding process, an embodiment of the present invention further provides a schematic diagram of state transition of tail-biting convolution, please refer to fig. 11, fig. 11 is a schematic diagram of state transition of tail-biting convolution provided in an embodiment of the present invention, as can be seen from fig. 11, the state transition of tail-biting convolution is an end-to-end ring shape, and can be always in a closed-loop transition, that is, the state of the turbo code is a closed-loop cyclic state transition process, and according to this characteristic, infinite continuous transfer of states can be completed. According to the characteristic of the tail-biting convolution of the turbo code, the information in the subsequent recursion can be initialized by fully utilizing the previous information in the recursion process. In this embodiment, the recursion includes a first recursion and a second recursion, the first recursion is also called alpha forward recursion, and when the alpha forward recursion is performed, the initialization is performed in a loop state feedback manner, so that even if the initial state is unknown, a good decoding performance can be obtained. The second recursion is also called beta reverse recursion, when the beta reverse recursion is carried out, a mode of sliding window reverse recursion initial value is not adopted, a mode of saving cycle boundary state is adopted, the boundary state is initialized by the saved state in the next iteration, and the time for calculating the initial value is saved by the beta reverse recursion. The first recursion and the second recursion are described in detail below.
Thirdly, performing first recursion according to the first initial value and the state transition metric value to obtain a first metric value, wherein when L =1 and Y =1, the first initial value is a first preset value; when L ≠ 1, and Y =1, and X =1, then the first initial value is a first metric value of a Q-th processing window of the mth target component decoder at the time of the L-1-th iterative decoding; when L ≠ 1, Y =1, and X ≠ 1, the first initial value is a first metric value of a Q-th processing window of the X-1-th target component decoder during the L-1-th iterative decoding; when Y ≠ 1, the first initial value is the first metric value of the Y-1 processing window of the Xth target component decoder during the Lth iterative decoding.
In this embodiment, the first preset value may be obtained as follows:

alpha_0(S) = log(1/16), S = 0, 1, …, 15

that is, the 16 states are initialized with equal probability, where alpha_0(S) is the first preset value, M is the target number of the target component decoders, and N is the number of code elements of the target original data unit.
The first metric value may be obtained by:

alpha_k(S_k) = max over S_{k-1} of [ alpha_{k-1}(S_{k-1}) + gamma_k^j(S_{k-1}, S_k) ]

where alpha_k(S_k) is the first metric value, alpha_{k-1}(S_{k-1}) is the first initial value (for the first symbol of a processing window) or the first metric value of the previous symbol, gamma_k^j(S_{k-1}, S_k) is the state transition metric value, and the maximization is taken over all states S_{k-1} having a transition to S_k, with S_{k-1}, S_k = 0, 1, …, 15.
The first recursion starts with the first processing window. The first processing window calculates up to its last set of state metric values (i.e., the first metric values of the first processing window; one set is 16 state metric values), and this last set of state metric values is passed to the second processing window as the initial values of the second processing window. The first metric values of the second processing window are then calculated until its last set of state metric values is obtained and passed to the third processing window, and so on, until the last processing window; the last set of state metric values of the last processing window is passed to the second iteration as the initial values of the first recursion of the second iteration.
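The window-chained forward recursion described above can be sketched as follows. The 2-state toy trellis and branch-metric values are hypothetical stand-ins for the 16-state trellis of the turbo code, and the max operation reflects the max-log-map approximation used in this embodiment; the branch metrics are assumed to be precomputed.

```python
def alpha_window(alpha_init, gammas, transitions):
    """Max-log-MAP forward recursion over one processing window.
    alpha_init: state-metric list at the window's first symbol;
    gammas[k][(s_prev, s)]: branch metric of transition s_prev -> s at step k;
    transitions: list of (s_prev, s) pairs of the trellis.
    Returns the window's last set of state metrics, which is handed to the
    next window (or, for the last window, to the next iteration)."""
    n_states = len(alpha_init)
    alpha = list(alpha_init)
    for gamma_k in gammas:
        nxt = [float("-inf")] * n_states
        for (sp, s) in transitions:
            cand = alpha[sp] + gamma_k[(sp, s)]
            if cand > nxt[s]:          # max-log approximation: max, not log-sum
                nxt[s] = cand
        alpha = nxt
    return alpha

# Toy 2-state trellis, one window of two symbols.
trans = [(0, 0), (0, 1), (1, 0), (1, 1)]
g = [{(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0},
     {(0, 0): 0.5, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0}]
final = alpha_window([0.0, -1.0], g, trans)
```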
To more clearly express the process of the first recursion, please refer to fig. 12, where fig. 12 is a schematic diagram of the process of the first recursion according to the embodiment of the present invention. In fig. 12 there are M target component decoders in total, and the first recursion includes the following cases: Case 1: the first processing window of the 1st component decoder of the first iteration is an example of L = 1 and Y = 1, where the first initial value is the first preset value. Case 2: the first processing window of the 1st component decoder of the second iteration is an example of L ≠ 1, Y = 1 and X = 1, where the first initial value is the first metric value of the Q-th processing window of the M-th component decoder at the time of the first iterative decoding. Case 3: the first processing window of the 2nd component decoder of the second iteration is an example of L ≠ 1, Y = 1 and X ≠ 1, where the first initial value is the first metric value of the Q-th processing window of the 1st component decoder at the time of the 1st iterative decoding. Case 4: the 2nd processing window of the 1st component decoder is an example of Y ≠ 1, where the first initial value is the first metric value of the 1st processing window of the 1st target component decoder at the time of the first iterative decoding.
In this embodiment, in the first recursion, continuous recursion inside the target component decoder is adopted, and the recursive value is subjected to cyclic boundary processing, so that compared with a full-parallel decoding algorithm, the decoding convergence speed is increased, and the error code performance of decoding is ensured.
It should be noted that, to prevent overflow of alpha when its value becomes too large, a dynamic normalization operation is performed on alpha during the first recursion. The principle of normalization is that each time the alpha values for one code element in a target original data unit are calculated, alpha is normalized by the formula

alpha_k(S) ← alpha_k(S) − max over S' of alpha_k(S'), where S, S' = 0, 1, …, 15.

After normalization, the amplitude of alpha is obviously reduced, the calculation amount is reduced, the storage amount can also be reduced, and the calculation efficiency is effectively improved.
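The dynamic normalization can be sketched in a few lines; subtracting the per-symbol maximum keeps the metrics near zero while leaving all metric differences, which are what the max-log-map decision uses, unchanged. The example values are hypothetical.

```python
def normalize(metrics):
    """Dynamic normalization: subtract the largest state metric so the
    values stay near zero and cannot overflow a fixed-point range.
    Metric differences are preserved exactly."""
    m = max(metrics)
    return [v - m for v in metrics]

norm = normalize([101.5, 103.0, 99.0])
```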
Fourthly, second recursion is carried out according to a second initial value and the state transition metric value to obtain a second metric value, wherein when L =1 and Y = Q, the second initial value is a second preset value; when L ≠ 1, and Y = Q, and X = M, then the second initial value is the second metric value of the 1 st processing window of the 1 st target component decoder at the time of the L-1 st iterative decoding; when L ≠ 1, and Y = Q, and X ≠ M, then the second initial value is the second metric value of the 1 st processing window of the X +1 st target component decoder at the time of the L-1 st iterative decoding; when L =1 and Y ≠ Q, the second initial value is a second preset value; when L ≠ 1, and Y ≠ Q, the second initial value is the second metric value of the Y + 1-th processing window of the Xth target component decoder during the L-1-th iterative decoding.
In this embodiment, the second preset value may be obtained as follows:

beta_g(S) = log(1/16), S = 0, 1, …, 15

that is, the 16 states at the initialization position are given equal probability, where beta_g(S) is the second preset value, M is the target number of the target component decoders, N is the number of code elements of the target original data unit, and g is the position of the second preset value in the whole reverse recursion.
The second metric value may be calculated using the following formula:

beta_k(S_k) = max over S_{k+1} of [ beta_{k+1}(S_{k+1}) + gamma_{k+1}^j(S_k, S_{k+1}) ]

where beta_k(S_k) is the second metric value, beta_{k+1}(S_{k+1}) is the second initial value (for the last symbol of a processing window) or the second metric value of the following symbol, gamma_{k+1}^j(S_k, S_{k+1}) is the state transition metric value, and S_k, S_{k+1} = 0, 1, …, 15.
To more clearly express the process of the second recursion, please refer to fig. 13, where fig. 13 is a schematic diagram of the process of the second recursion according to the embodiment of the present invention. In fig. 13, each processing window in the target component decoders of the first iteration is initialized with the equal-probability value log(1/16) as the initial values of its 16 states, where beta_g is the first value of the reverse recursion of each processing window, g is the position of that value in the whole reverse recursion, and k is the ordinal number of the currently calculated second metric value. Each processing window then recursively calculates down to a final set of state metric values (i.e., its second metric values) and stores them; the 16 states of these stored second metric values are transferred to the second iteration and used to initialize the second initial values of the second iteration, the values stored in the second iteration are transferred to the third iteration, and so on, in the transfer manner shown in fig. 13. In fig. 13 there are M target component decoders, the first iteration is the iterative decoding when L = 1 and the second iteration is the iterative decoding when L = 2, and the second recursion includes the following cases: Case 1: the Q-th processing window of the 1st component decoder of the first iteration is an example of L = 1 and Y = Q, so the second initial value is the second preset value. Case 2: the Q-th processing window of the M-th component decoder of the second iteration is an example of L ≠ 1, Y = Q and X = M, so the second initial value is the second metric value of the 1st processing window of the 1st target component decoder at the time of the first iterative decoding. Case 3: the Q-th processing window of the 1st component decoder of the second iteration is an example of L ≠ 1, Y = Q and X ≠ M, so the second initial value is the second metric value of the 1st processing window of the 2nd component decoder at the time of the first iterative decoding. Case 4: the 2nd processing window in the first iterative decoding is an example of L = 1 and Y ≠ Q, so the second initial value is the second preset value. Case 5: the 2nd processing window of the 1st component decoder at the time of the second iterative decoding is an example of L ≠ 1 and Y ≠ Q, so the second initial value is the second metric value of the 3rd processing window of the 1st component decoder at the time of the first iterative decoding.
In this embodiment, during the second recursion, compared with the sliding window algorithm, the process of calculating the initial value by reverse estimation of the sliding window is removed and replaced by the stored value of the last iteration, and this process ensures the convergence speed of decoding, ensures the performance of decoding, and simultaneously reduces the decoding calculation amount, improves the decoding throughput, and reduces the time delay.
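The backward recursion with a stored boundary state can be sketched similarly, again on a hypothetical 2-state toy trellis with made-up branch metrics; the initial metrics come from the boundary state saved in the previous iteration rather than from a sliding-window warm-up.

```python
def beta_window(beta_init, gammas, transitions):
    """Max-log-MAP backward recursion over one processing window.
    beta_init: state metrics at the window's last symbol, taken from the
    boundary state saved in the previous iteration (no sliding-window
    re-estimation of the initial value); gammas/transitions as in the
    forward pass. Returns the metrics at the window's first symbol,
    which are stored as the boundary state for the next iteration."""
    n_states = len(beta_init)
    beta = list(beta_init)
    for gamma_k in reversed(gammas):
        prev = [float("-inf")] * n_states
        for (sp, s) in transitions:
            cand = beta[s] + gamma_k[(sp, s)]
            if cand > prev[sp]:
                prev[sp] = cand
        beta = prev
    return beta

trans = [(0, 0), (0, 1), (1, 0), (1, 1)]
g = [{(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0},
     {(0, 0): 0.5, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0}]
boundary = beta_window([0.0, 0.0], g, trans)
```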
It should be noted that, to prevent overflow of beta when its value becomes too large, a dynamic normalization operation is performed on beta during the second recursion. The principle of normalization is that each time the beta values for one code element in a target original data unit are calculated, beta is normalized by the formula

beta_k(S) ← beta_k(S) − max over S' of beta_k(S'), where S, S' = 0, 1, …, 15.

After normalization, the amplitude of beta is obviously reduced, the calculation amount is reduced, the storage space can also be reduced, and the calculation efficiency is effectively improved.
In this embodiment, the first recursion and the second recursion both use boundary state metric feedback, and the metric values obtained by the recursion of the previous iteration can be well used in the recursion of the next iteration, so that the problem of the unknown loop state is solved and the infinite continuous transfer of the state transition metric values is completed.
And fifthly, obtaining an intermediate posterior probability data unit corresponding to the target processing window according to the first metric value, the second metric value and the state transition metric value.
In this embodiment, the intermediate posterior probability data unit may be calculated by the following formula:

Lambda_z(k) = max over transitions (S_{k-1}, S_k) with input z of [ alpha_{k-1}(S_{k-1}) + gamma_k(S_{k-1}, S_k) + beta_k(S_k) ] − max over transitions (S_{k-1}, S_k) with input 00 of [ alpha_{k-1}(S_{k-1}) + gamma_k(S_{k-1}, S_k) + beta_k(S_k) ]

where Lambda_z(k) is the intermediate posterior probability data unit at the k-th time instant; alpha_{k-1}(S_{k-1}) is the first metric value at the (k-1)-th time instant; gamma_k(S_{k-1}, S_k) is the state transition metric value at the k-th time instant; beta_k(S_k) is the second metric value, obtained in the second recursion from the value at the (k+1)-th time instant; and the second maximum is taken over the transitions for which the input z is 00.
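The combination of forward metric, branch metric and backward metric in the fifth step can be sketched as follows; this is an illustrative Python sketch on a hypothetical 2-state toy trellis with made-up input labels, whereas the real component decoders use the 16-state trellis described above.

```python
def posterior_metric(alpha_prev, gamma_k, beta_k, transitions_by_z):
    """Max-log-MAP a-posteriori metric for each symbol hypothesis z,
    expressed relative to the reference hypothesis z = '00':
    Lambda_z = max over transitions with input z of (alpha + gamma + beta)
             - the same maximum over transitions with input '00'."""
    def best(trans):
        return max(alpha_prev[sp] + gamma_k[(sp, s)] + beta_k[s]
                   for (sp, s) in trans)
    ref = best(transitions_by_z["00"])
    return {z: best(t) - ref for z, t in transitions_by_z.items() if z != "00"}

# Toy 2-state trellis: input '00' drives the (0,0)/(1,1) transitions,
# input '01' the (0,1)/(1,0) transitions (labels are illustrative).
tz = {"00": [(0, 0), (1, 1)], "01": [(0, 1), (1, 0)]}
g = {(0, 0): 0.5, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0}
lam = posterior_metric([1.0, 0.0], g, [0.0, 1.0], tz)
```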
And sixthly, obtaining the intermediate extrinsic information data unit corresponding to the target processing window according to the channel reliability, the target original data unit, the intermediate posterior probability data unit and the target prior probability data unit.
In this embodiment, the intermediate extrinsic information data unit may be calculated by the following formula:

Le_z(k) = alp · [ Lambda_z(k) − La_z(k) − Lc · x_z(k) ]

where Le_z(k) is the intermediate extrinsic information data unit; Lambda_z(k), z = 1, 2, 3, is the intermediate posterior probability data unit at the k-th time instant when the input is z, which is a relative value, namely the intermediate posterior probability when z = 1, 2, 3 minus the probability value that the input data is 0; La_z(k) is the target prior probability data unit; Lc is the channel reliability, Lc = 2/sigma^2, where sigma^2 is the noise variance of the Gaussian channel; x_z(k) is the contribution of the target original data unit for input z; and alp is a normalization parameter that can be set to 0.7 or 0.75.
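The extrinsic computation can be sketched as follows. The exact arrangement of terms is an assumption reconstructed from the description (posterior metric minus prior metric minus the channel contribution of the original data, scaled by alp), and the numeric inputs are hypothetical.

```python
def extrinsic(post, prior, lc, sys_metric, alp=0.75):
    """Intermediate extrinsic information for one symbol hypothesis z:
    the a-posteriori metric minus the a-priori metric and the systematic
    channel contribution (channel reliability Lc times the systematic
    metric for z), scaled by the normalization parameter alp
    (typically 0.7 or 0.75 in max-log-MAP decoders)."""
    return alp * (post - prior - lc * sys_metric)

# Gaussian channel: Lc = 2 / sigma^2
sigma2 = 1.0
lc = 2.0 / sigma2
le = extrinsic(post=2.5, prior=0.4, lc=lc, sys_metric=0.3)
```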
In this embodiment, the processing methods of the first to sixth steps are also referred to as max-log-map decoding algorithm.
It should be noted that the target original data unit may further include a plurality of symbols, each symbol including two bits; thus, each symbol is also referred to as a bit pair. When the target original data unit includes a plurality of code elements, the first to sixth steps need to be performed for each code element to obtain the intermediate posterior probability data code element corresponding to that code element, and finally all the code elements of the target original data unit are combined to obtain the intermediate posterior probability data unit.
It should be further noted that, when the target original data unit includes a plurality of code elements, the first metric value of the target processing window is a first metric value obtained by performing a first recursion on a last code element in the target original data unit corresponding to the target processing window, and the second metric value of the target processing window is a second metric value obtained by performing a second recursion on a first code element in the target original data unit corresponding to the target processing window.
It should be further noted that, when the target original data unit includes a plurality of symbols, for example, 1 to T symbols, for the first recursion, when L =1, Y =1, and Z =1 (i.e. currently is the 1 st symbol of the 1 st processing window of the first iterative decoding), the first initial value is the first preset value; when L =1, Y =1 and Z ≠ 1 (1 < Z ≦ T), the first initial value is a first metric value obtained after processing the Z-1 th symbol of the Y-th processing window of the X-th target component decoder during the L-th iterative decoding; for a second recursion, when L =1, and Y = Q, and Z = T, then the second initial value is a second preset value; when L =1, Y = Q, and Z ≠ T (1 ≦ Z < T), the second initial value is the second metric value obtained after processing the Z +1 th symbol of the Y-th processing window of the X-th target component decoder during the L-th iterative decoding.
Sub-steps S12010-111, combining the plurality of intermediate extrinsic information data units corresponding to all processing windows of each target component decoder to obtain the intermediate extrinsic information data segment corresponding to each target component decoder.
Sub-steps S12010-112, combining the plurality of intermediate posterior probability data units corresponding to all processing windows of each target component decoder to obtain the intermediate posterior probability data segment corresponding to each target component decoder.
In this embodiment, in order to fully utilize the characteristic of binary input of turbo codes and reduce the situation of hard decision misjudgment, an embodiment of the present invention further provides a specific implementation manner of hard decision, please refer to fig. 14, where fig. 14 is a schematic flow chart of hard decision provided by the embodiment of the present invention, and the method includes the following steps:
step S200, the posterior probability data is de-interleaved to obtain a plurality of decision values, wherein each decision value corresponds to a decoding result.
In this embodiment, the posterior probability data after deinterleaving is expressed as Lambda_z(k), where for each value of k there are three values, one for each z, i.e., z = 1, 2, 3.
Step S210, if the plurality of decision values are all less than or equal to 0, taking the preset decoding result as a target decoding result of the iterative decoding.
In the present embodiment, the hard decision is performed as follows: for each k, let G = [Lambda_1(k), Lambda_2(k), Lambda_3(k), 0], find the maximum value of G and obtain the sequence number index of the maximum value. If index = 1, the decoding result is [0, 0]; if index = 2, the decoding result is [0, 1]; if index = 3, the decoding result is [1, 0]; otherwise, the decoding result is [1, 1].
In this embodiment, if Lambda_1(k), Lambda_2(k) and Lambda_3(k) are all less than 0, the maximum value of G is 0, and the decoding result corresponding to 0 is the preset decoding result, where the preset decoding result is [1, 1].
Step S220, if at least one of the plurality of decision values is greater than 0, the obtained decoding result corresponding to the largest decision value is used as a target decoding result of the iterative decoding.
In this embodiment, if at least one of the decision values Lambda_z(k) is greater than 0, the decoding result corresponding to the decision value with the largest value is the target decoding result.
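The hard-decision rule of steps S210 and S220 can be sketched as follows; the index-to-bit-pair mapping follows the text above, and the example decision values are hypothetical.

```python
def hard_decision(lambda1, lambda2, lambda3):
    """Hard decision for one symbol from the three deinterleaved
    a-posteriori decision values (all relative to the all-zero input).
    If every decision value is <= 0 the preset result [1, 1] is used;
    otherwise the bit pair of the largest decision value wins."""
    values = [lambda1, lambda2, lambda3]
    if max(values) <= 0:
        return [1, 1]                      # preset decoding result
    index = values.index(max(values)) + 1  # 1-based, as in the text
    return {1: [0, 0], 2: [0, 1], 3: [1, 0]}.get(index, [1, 1])

r1 = hard_decision(-0.2, -1.0, -3.5)   # all non-positive -> preset result
r2 = hard_decision(0.3, 2.1, -0.5)     # largest is lambda2 -> index 2
```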
The method provided by the embodiment of the invention utilizes the relationship between a plurality of judgment values and zero to fully verify, adopts a multi-party correction mode, reduces the possibility of misjudgment during hard judgment, and reduces the situation of misjudgment of hard judgment.
In this embodiment, iterative decoding generally sets a maximum iteration number, but depending on channel quality, some data converges to the correct symbols before the maximum iteration number is reached. In order to effectively reduce the number of iterations, the iteration is exited in advance when the data has already converged. An embodiment of the present invention therefore further provides a specific implementation manner of ending the iterative decoding in advance; please refer to fig. 15, where fig. 15 is a flowchart of an implementation method of ending the iterative decoding in advance, and the method includes the following steps:
and step S300, acquiring target decoding results of two adjacent iterative decoding.
In step S310, if the check values of the target decoding results of two adjacent iterative decoding are the same, it is determined that a preset condition is satisfied.
In this embodiment, as a specific checking method, CRC32 checking may be adopted.
For example, the target decoding result of the i-th iterative decoding is a sequence of N decoded bit pairs, where N is the number of data pairs before encoding, i.e., the number of symbols, and calculating CRC32 over this sequence yields CRCi. The target decoding result of the (i+1)-th iterative decoding is used in the same way to calculate CRCi+1, and if CRCi is equal to CRCi+1, the decoding is exited in advance. To reduce the error rate of the CRC32 calculation, the calculation polynomial of CRC32 may be taken as the standard 32-bit polynomial x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1, where x is the position of the data. For example, the data sequence of the target decoding result may be 111111111010110100001100001101011, and the CRC32 of this data sequence is calculated with the above polynomial.
according to the method provided by the embodiment of the invention, the iterative decoding is quitted in advance under the condition that the check values of the target decoding results of the two adjacent iterative decoding are consistent, so that the iteration times of the iterative decoding are greatly reduced, the iterative decoding time is saved, the throughput of the iterative decoding is increased, and the time delay of the iterative decoding is effectively reduced.
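The early-termination check can be sketched with Python's built-in zlib.crc32 as a stand-in for the CRC32 calculation; the bit-packing helper and the example sequences are illustrative.

```python
import zlib

def bits_to_bytes(bits):
    """Pack a bit list (MSB first) into bytes, zero-padding the tail."""
    padded = bits + [0] * (-len(bits) % 8)
    return bytes(
        sum(b << (7 - i) for i, b in enumerate(padded[j:j + 8]))
        for j in range(0, len(padded), 8)
    )

def converged(result_i, result_i1):
    """Early-exit test of step S310: stop iterating when the CRC32 check
    values of two adjacent iterations' target decoding results match."""
    return zlib.crc32(bits_to_bytes(result_i)) == zlib.crc32(bits_to_bytes(result_i1))

same = [1, 0, 1, 1, 0, 0, 1, 0]
stop = converged(same, list(same))   # identical results -> exit early
keep = converged(same, [0] * 8)      # differing results -> keep iterating
```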
In this embodiment, according to actual needs, in order to simplify the processing, another way of determining that the preset condition is satisfied is provided in the embodiment of the present invention, please refer to fig. 16, where fig. 16 is a schematic flow chart of a method for determining that the preset condition is satisfied provided in the embodiment of the present invention, the method includes the following steps:
in step S400, if the iteration count of the iterative decoding reaches a preset count, it is determined that a preset condition is satisfied.
In this embodiment, the preset number of times may be set according to a specific application scenario.
In order to perform the corresponding steps of the turbo decoding method in the above embodiment and its various possible implementations, an implementation of the turbo decoding apparatus 100 is given below. Referring to fig. 17, fig. 17 is a block diagram illustrating a turbo decoding apparatus 100 according to an embodiment of the invention. It should be noted that the basic principle and the technical effects of the turbo decoding apparatus 100 provided in this embodiment are the same as those of the above embodiments; for the sake of brevity, reference may be made to the corresponding contents of the above embodiments for what is not mentioned here.
the turbo decoding apparatus 100 includes an obtaining module 110, a determining module 120, and a decoding module 130.
The obtaining module 110 is configured to obtain data to be decoded and the number of the component decoders.
The determining module 120 is configured to determine the target number of the target component decoders allocated to the sub-decoders according to the number, the length of the data to be decoded, and a preset interleaving rule.
As a specific embodiment, the determining module 120 is specifically configured to: determining the number as a number to be selected; dividing the data to be decoded into a plurality of data segments according to the number to be selected and the length of the data to be decoded; interweaving each data segment in the multiple data segments according to a preset interweaving rule to obtain a memory address sequence after each data segment is interwoven; if no access conflict exists between every two of the plurality of storage address sequences, taking the number to be selected as the target number; if access conflict exists between any two storage addresses in the plurality of storage address sequences, the number to be selected is decreased progressively, the decreased number to be selected is used for replacing the number to be selected, the data to be decoded is repeatedly segmented, and each data segment is interleaved according to a preset interleaving rule until the target number is determined.
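The target-number search of the determining module can be sketched as follows. The interleaving rule here, a quadratic permutation polynomial with hypothetical coefficients, and the bank-conflict model (two segments addressing the same memory bank in the same cycle) are illustrative assumptions, not the patent's preset rule.

```python
def qpp(i, k, f1=3, f2=10):
    """Hypothetical preset interleaving rule: quadratic permutation
    polynomial pi(i) = (f1*i + f2*i^2) mod k."""
    return (f1 * i + f2 * i * i) % k

def has_conflict(k, m):
    """True if, in some cycle, two of the m segments' interleaved
    addresses fall into the same memory bank (bank = address // seg_len)."""
    seg_len = k // m
    for step in range(seg_len):
        banks = [qpp(seg * seg_len + step, k) // seg_len for seg in range(m)]
        if len(set(banks)) < m:
            return True
    return False

def target_number(k, m):
    """Decrease the candidate number until a conflict-free split is found,
    mirroring the determining module's step flow; m is the initial
    component decoder count."""
    while m > 1 and (k % m != 0 or has_conflict(k, m)):
        m -= 1
    return m

n = target_number(40, 8)
```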
And the decoding module 130 is configured to input the data to be decoded into the sub-decoder to perform iterative decoding until a preset condition is met, so as to obtain decoded data.
As an embodiment, the decoding module 130 is specifically configured to: inputting data to be decoded and preset prior probability data into a sub-decoder for iterative decoding to obtain external information data and posterior probability data; and de-interleaving the external information data, replacing the preset prior probability data with the de-interleaved result, and then performing next iterative decoding until a preset condition is met.
As a specific implementation manner, each iterative decoding includes a first decoding and a second decoding, the data to be decoded includes original data and first check data and second check data corresponding to the original data, the decoding module 130 is configured to input the data to be decoded and preset prior probability data to the sub-decoder for iterative decoding, and when obtaining extrinsic information data and posterior probability data, the decoding module is specifically configured to: inputting the original data, the first check data and the preset prior probability data into a sub-decoder for first decoding to obtain intermediate external information data and intermediate posterior probability data; de-interleaving the intermediate external information data to obtain intermediate prior probability data; and interleaving the original data according to a preset interleaving rule, and inputting the interleaved original data, the second check data and the intermediate prior probability data into the sub-decoder for second decoding to obtain the extrinsic information data and the posterior probability data.
As a specific implementation manner, the decoding module 130 is specifically configured to, when the original data, the first check data, and the preset prior probability data are input to the sub-decoder for the first decoding to obtain the intermediate extrinsic information data and the intermediate posterior probability data: dividing the original data, the first check data and the preset prior probability data into a plurality of original data segments, a plurality of first check data segments and a plurality of prior probability data segments according to the target number; inputting each original data segment and each corresponding first check data segment and each prior probability data segment into each target component decoder for first decoding to obtain a middle extrinsic information data segment and a middle posterior probability data segment output by each target component decoder; combining a plurality of intermediate extrinsic information data segments output by all target component decoders to obtain intermediate extrinsic information data; and combining a plurality of intermediate posterior probability data segments output by all the target component decoders to obtain intermediate posterior probability data.
As a specific implementation manner, each target component decoder includes a preset number of processing windows, and each original data segment, first check data segment, and prior probability data segment respectively includes a preset number of original data units, a preset number of first check data units, and a preset number of prior probability data units; a processing window is used for processing an original data unit and corresponding first check data unit and prior probability data unit, and the decoding module 130 is specifically configured to, when being used for inputting each original data segment and corresponding first check data segment and each prior probability data segment into each target component decoder for first decoding to obtain a middle extrinsic information data segment and a middle posterior probability data segment output by each target component decoder: calculating a middle external information data unit and a middle posterior probability data unit corresponding to each processing window of each target component decoder according to an original data unit, a first check data unit and a prior probability data unit corresponding to each processing window of each target component decoder; combining a plurality of intermediate extrinsic information data units corresponding to all processing windows of each target component decoder to obtain intermediate extrinsic information data segments corresponding to each target component decoder; and combining the plurality of intermediate posterior probability data units corresponding to all the processing windows of each target component decoder to obtain the intermediate posterior probability data segment corresponding to each target component decoder.
As a specific embodiment, the iterative decoding includes a plurality of iterations, the target component decoders include the 1st to M-th target component decoders, and each target component decoder includes the 1st to Q-th processing windows, where M and Q are positive integers. When calculating the intermediate extrinsic information data unit and intermediate posterior probability data unit corresponding to each processing window of each target component decoder according to the corresponding original data unit, first check data unit, and prior probability data unit, and in particular when calculating those corresponding to the Y-th processing window of the X-th target component decoder during the L-th iterative decoding, the decoding module 130 is specifically configured to: perform a first recursion according to a first initial value and the state transition metric value to obtain a first metric value, wherein when L = 1 and Y = 1, the first initial value is a first preset value; when L ≠ 1, Y = 1, and X = 1, the first initial value is the first metric value of the Q-th processing window of the M-th target component decoder during the (L-1)-th iterative decoding; when L ≠ 1, Y = 1, and X ≠ 1, the first initial value is the first metric value of the Q-th processing window of the (X-1)-th target component decoder during the (L-1)-th iterative decoding; and when Y ≠ 1, the first initial value is the first metric value of the (Y-1)-th processing window of the X-th target component decoder during the L-th iterative decoding; perform a second recursion according to a second initial value and the state transition metric value to obtain a second metric value, wherein when L = 1 and Y = Q, the second initial value is a second preset value; when L ≠ 1, Y = Q, and X = M, the second initial value is the second metric value of the 1st processing window of the 1st target component decoder during the (L-1)-th iterative decoding; when L ≠ 1, Y = Q, and X ≠ M, the second initial value is the second metric value of the 1st processing window of the (X+1)-th target component decoder during the (L-1)-th iterative decoding; when L = 1 and Y ≠ Q, the second initial value is the second preset value; and when L ≠ 1 and Y ≠ Q, the second initial value is the second metric value of the (Y+1)-th processing window of the X-th target component decoder during the (L-1)-th iterative decoding; obtain the intermediate extrinsic information data unit corresponding to the target processing window according to the first metric value, the second metric value, and the state transition metric value; and obtain the intermediate posterior probability data unit corresponding to the target processing window according to the channel reliability, the target original data unit, the intermediate extrinsic information data unit, and the target prior probability data unit.
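The boundary-value passing rule above (each window seeds its forward and backward recursions from a neighboring window, wrapping across component decoders between iterations) can be sketched as follows. All names are illustrative, and metric values from earlier steps are assumed to be kept in a dictionary keyed by `(iteration, decoder, window)`, with all indices 1-based as in the text:

```python
def first_initial(L, X, Y, M, Q, prev_first, first_preset):
    """Select the forward-recursion (first) initial value for window Y of
    component decoder X at iteration L, following the case analysis above."""
    if Y == 1:
        if L == 1:
            return first_preset                  # very first iteration: preset value
        if X == 1:
            return prev_first[(L - 1, M, Q)]     # wrap: last decoder, last window
        return prev_first[(L - 1, X - 1, Q)]     # last window of previous decoder
    return prev_first[(L, X, Y - 1)]             # previous window, same iteration


def second_initial(L, X, Y, M, Q, prev_second, second_preset):
    """Select the backward-recursion (second) initial value; mirrors
    first_initial but wraps in the opposite direction."""
    if L == 1:
        return second_preset                     # first iteration: preset everywhere
    if Y == Q:
        if X == M:
            return prev_second[(L - 1, 1, 1)]    # wrap: first decoder, first window
        return prev_second[(L - 1, X + 1, 1)]    # first window of next decoder
    return prev_second[(L - 1, X, Y + 1)]        # next window, previous iteration
```

Each branch corresponds to one case in the text; running the helpers over all `(L, X, Y)` combinations reproduces the full initialization schedule.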
As an embodiment, the decoding module 130 is further configured to: de-interleave the posterior probability data to obtain a plurality of decision values, each decision value corresponding to a decoding result; if the plurality of decision values are all less than or equal to 0, take a preset decoding result as the target decoding result of the iterative decoding; and if at least one of the decision values is greater than 0, take the decoding result corresponding to the largest decision value as the target decoding result of the iterative decoding.
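The decision rule above reduces to a small helper; the function name and the list-based representation of decision values and candidate results are assumptions for illustration:

```python
def hard_decision(decision_values, results, preset_result):
    """Pick the target decoding result from de-interleaved decision values.
    decision_values[i] is the decision value for candidate results[i]."""
    if all(v <= 0 for v in decision_values):
        return preset_result                 # no value exceeds zero: use the preset result
    # Otherwise, take the result with the largest decision value
    best = max(range(len(decision_values)), key=lambda i: decision_values[i])
    return results[best]
```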
As an embodiment, the decoding module 130 is further configured to: obtain the target decoding results of two adjacent iterative decodings; and if the check values of the target decoding results of the two adjacent iterative decodings are the same, determine that the preset condition is met.
As an embodiment, the decoding module 130 is further configured to: determine that the preset condition is met if the number of iterations of the iterative decoding reaches a preset number.
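Taken together with the check-value criterion of the preceding paragraph, the stopping test can be sketched as follows; the function name and the list of per-iteration check values are illustrative assumptions:

```python
def should_stop(check_values, iteration, max_iterations):
    """Return True when the preset condition is met: either the check values
    of two adjacent iterative decodings agree, or the iteration count has
    reached the preset maximum."""
    if len(check_values) >= 2 and check_values[-1] == check_values[-2]:
        return True  # adjacent iterations agree: decoding has converged
    return iteration >= max_iterations  # fallback: preset iteration limit
```

The early-exit branch is what gives iterative turbo decoding its variable latency: well-conditioned blocks converge in a few iterations, while the iteration cap bounds the worst case.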
Referring to Fig. 18, Fig. 18 is a block diagram of a decoding device 10 according to an embodiment of the present invention. The decoding device 10 includes a decoding controller 11, a memory 12, a bus 13, a communication interface 14, and a sub-decoder 15, and the sub-decoder 15 includes a plurality of component decoders. The decoding controller 11 and the memory 12 are connected via the bus 13, the decoding controller 11 communicates with external devices via the communication interface 14, the decoding controller 11 is connected with the sub-decoder 15 via the bus 13, and the decoding controller 11 is used to control the component decoders in the sub-decoder 15 to work in coordination.
The decoding controller 11 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be implemented by hardware integrated logic circuits in the decoding controller 11 or by instructions in the form of software. The decoding controller 11 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The memory 12 is used to store a program, for example, the turbo decoding apparatus 100 in the embodiment of the present invention. The turbo decoding apparatus 100 includes at least one software functional module that can be stored in the memory 12 in the form of software or firmware. After receiving an execution instruction, the decoding controller 11 executes the program to implement steps S100 and S110 and sub-steps S1101 to S1105 in Figs. 1 and 2 of the turbo decoding method in the embodiment of the present invention, and the component decoders in the sub-decoder execute the program to implement step S120 and sub-steps S1201 to S1202 in Figs. 1 and 2, as well as the steps and sub-steps in Figs. 6, 8, 10, 14, 15, and 16, of the turbo decoding method in the embodiment of the present invention.
The memory 12 may include high-speed Random Access Memory (RAM) and may also include non-volatile memory. Alternatively, the memory 12 may be a storage device built into the decoding controller 11, or may be a storage device independent of the decoding controller 11.
The bus 13 may be an ISA bus, a PCI bus, an EISA bus, or the like. For ease of illustration, only one double-headed arrow is shown in Fig. 18, but this does not mean that there is only one bus or only one type of bus.
An embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a decoding device, implements a turbo decoding method as described above.
In summary, embodiments of the present invention provide a turbo decoding method, apparatus, decoding device, and storage medium, applied to a decoding device that includes a sub-decoder and a plurality of component decoders. The method includes: acquiring data to be decoded and the number of the plurality of component decoders; determining the target number of target component decoders allocated to the sub-decoder according to the number, the length of the data to be decoded, and a preset interleaving rule; and inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is met, to obtain the decoded data. Compared with the prior art, the embodiment of the present invention determines the target number of target component decoders that can be used in iterative decoding according to the number of component decoders, the length of the data to be decoded, and the preset interleaving rule, and then inputs the data to be decoded into the sub-decoder for iterative decoding until the preset condition is met to obtain the decoded data. The required target component decoders can thus be determined for the different interleaving rules adopted when decoding different code types, accommodating the different code types of various protocols, automatically adapting to code-type requirements, and enhancing the compatibility and flexibility of turbo decoding.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (13)
1. A turbo decoding method applied to a decoding device, the decoding device comprising a sub-decoder and a plurality of component decoders, the method comprising:
acquiring data to be decoded and the number of the component decoders;
determining the target number of target component decoders allocated to the sub-decoder according to the number, the length of the data to be decoded, and a preset interleaving rule;
and inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is met, and obtaining the decoded data.
2. The turbo decoding method of claim 1, wherein the determining the target number of target component decoders allocated to the sub-decoder according to the number, the length of the data to be decoded, and a preset interleaving rule comprises:
determining the number as a number to be selected;
dividing the data to be decoded into a plurality of data segments according to the number to be selected and the length of the data to be decoded;
interweaving each data segment in the plurality of data segments according to the preset interweaving rule to obtain a memory address sequence after each data segment is interwoven;
if no access conflict exists between any two of the storage address sequences, taking the number to be selected as the target number;
if an access conflict exists between any two storage address sequences among the plurality of storage address sequences, decrementing the number to be selected, replacing the number to be selected with the decremented number, and repeating the dividing of the data to be decoded and the interleaving of each data segment according to the preset interleaving rule until the target number is determined.
3. The turbo decoding method of claim 1, wherein the step of inputting the data to be decoded into the sub-decoder for iterative decoding until a preset condition is satisfied to obtain decoded data comprises:
inputting the data to be decoded and preset prior probability data into the sub-decoder for iterative decoding to obtain external information data and posterior probability data;
and de-interleaving the external information data, replacing the preset prior probability data with a de-interleaved result, and then performing next iterative decoding until the preset condition is met.
4. The turbo decoding method of claim 3, wherein each iterative decoding includes a first decoding and a second decoding, the data to be decoded includes original data and first check data and second check data corresponding to the original data, and the step of inputting the data to be decoded and the preset prior probability data into the sub-decoder for iterative decoding to obtain extrinsic information data and posterior probability data comprises:
inputting the original data, the first check data and the preset prior probability data into the sub-decoder for first decoding to obtain intermediate external information data and intermediate posterior probability data;
de-interleaving the intermediate external information data to obtain intermediate prior probability data;
and interleaving the original data according to the preset interleaving rule, and inputting the interleaved original data, the second check data and the intermediate prior probability data into the sub-decoder for second decoding to obtain the extrinsic information data and the posterior probability data.
5. The turbo decoding method of claim 4, wherein the step of inputting the original data, the first check data, and the preset prior probability data into the sub-decoder for the first decoding to obtain intermediate extrinsic information data and intermediate posterior probability data comprises:
dividing the original data, the first check data and the preset prior probability data into a plurality of original data segments, a plurality of first check data segments and a plurality of prior probability data segments according to the target number;
inputting each original data segment and the corresponding first check data segment and prior probability data segment into each target component decoder for the first decoding to obtain an intermediate extrinsic information data segment and an intermediate posterior probability data segment output by each target component decoder;
combining a plurality of intermediate extrinsic information data segments output by all the target component decoders to obtain intermediate extrinsic information data;
and combining a plurality of intermediate posterior probability data segments output by all the target component decoders to obtain the intermediate posterior probability data.
6. The turbo decoding method of claim 5, wherein each of the target component decoders includes a preset number of processing windows, and each original data segment, first check data segment, and prior probability data segment respectively includes the preset number of original data units, the preset number of first check data units, and the preset number of prior probability data units; one of the processing windows is used to process one original data unit and the corresponding first check data unit and prior probability data unit;
the step of inputting each original data segment and the corresponding first check data segment and prior probability data segment into each target component decoder for the first decoding to obtain an intermediate extrinsic information data segment and an intermediate posterior probability data segment output by each target component decoder comprises:
calculating an intermediate extrinsic information data unit and an intermediate posterior probability data unit corresponding to each processing window of each target component decoder according to the original data unit, the first check data unit, and the prior probability data unit corresponding to each processing window of each target component decoder;
combining a plurality of intermediate extrinsic information data units corresponding to all processing windows of each target component decoder to obtain intermediate extrinsic information data segments corresponding to each target component decoder;
and combining a plurality of intermediate posterior probability data units corresponding to all processing windows of each target component decoder to obtain the intermediate posterior probability data segment corresponding to each target component decoder.
7. The turbo decoding method of claim 6, wherein the iterative decoding includes a plurality of times, the target component decoders include 1 st to mth target component decoders, each of the target component decoders includes 1 st to qth processing windows, wherein M and Q are positive integers;
the step of calculating the intermediate extrinsic information data unit and the intermediate posterior probability data unit corresponding to each processing window of each target component decoder according to the original data unit, the first check data unit, and the prior probability data unit corresponding to each processing window of each target component decoder comprises:
the step of calculating the intermediate extrinsic information data unit and the intermediate posterior probability data unit corresponding to the Y-th processing window of the X-th target component decoder during the L-th iterative decoding, comprising:
respectively taking the original data unit, the first check data unit, and the prior probability data unit corresponding to the Y-th processing window of the X-th target component decoder during the L-th iterative decoding as a target original data unit, a target first check data unit, and a target prior probability data unit, wherein L is a positive integer, 1 ≤ X ≤ M, and 1 ≤ Y ≤ Q;
obtaining a state transition metric value according to the target original data unit, the target first check data unit, and the target prior probability data unit;
performing a first recursion according to a first initial value and the state transition metric value to obtain a first metric value, wherein when L = 1 and Y = 1, the first initial value is a first preset value; when L ≠ 1, Y = 1, and X = 1, the first initial value is the first metric value of the Q-th processing window of the M-th target component decoder during the (L-1)-th iterative decoding; when L ≠ 1, Y = 1, and X ≠ 1, the first initial value is the first metric value of the Q-th processing window of the (X-1)-th target component decoder during the (L-1)-th iterative decoding; and when Y ≠ 1, the first initial value is the first metric value of the (Y-1)-th processing window of the X-th target component decoder during the L-th iterative decoding;
performing a second recursion according to a second initial value and the state transition metric value to obtain a second metric value, wherein when L = 1 and Y = Q, the second initial value is a second preset value; when L ≠ 1, Y = Q, and X = M, the second initial value is the second metric value of the 1st processing window of the 1st target component decoder during the (L-1)-th iterative decoding; when L ≠ 1, Y = Q, and X ≠ M, the second initial value is the second metric value of the 1st processing window of the (X+1)-th target component decoder during the (L-1)-th iterative decoding; when L = 1 and Y ≠ Q, the second initial value is the second preset value; and when L ≠ 1 and Y ≠ Q, the second initial value is the second metric value of the (Y+1)-th processing window of the X-th target component decoder during the (L-1)-th iterative decoding;
obtaining the intermediate extrinsic information data unit corresponding to the target processing window according to the first metric value, the second metric value, and the state transition metric value;
and obtaining the intermediate posterior probability data unit corresponding to the target processing window according to the channel reliability, the target original data unit, the intermediate external information data unit and the target prior probability data unit.
8. The turbo decoding method of claim 3, wherein the step of inputting the data to be decoded and the preset prior probability data into the sub-decoder for iterative decoding to obtain extrinsic information data and posterior probability data further comprises:
de-interleaving the posterior probability data to obtain a plurality of decision values, wherein each decision value corresponds to a decoding result;
if the plurality of decision values are all less than or equal to 0, taking a preset decoding result as the target decoding result of the iterative decoding;
and if at least one of the plurality of decision values is greater than 0, taking the decoding result corresponding to the largest decision value as the target decoding result of the iterative decoding.
9. The turbo decoding method of claim 8, further comprising:
obtaining the target decoding results of two adjacent iterative decodings;
and if the check values of the target decoding results of the two adjacent iterative decodings are the same, determining that the preset condition is met.
10. The turbo decoding method of claim 3, further comprising:
and if the number of iterations of the iterative decoding reaches a preset number, determining that the preset condition is met.
11. A turbo decoding apparatus applied to a decoding device including a sub-decoder and a plurality of component decoders, the apparatus comprising:
the acquisition module is used for acquiring data to be decoded and the number of the component decoders;
a determining module, configured to determine, according to the number, the length of the data to be decoded, and a preset interleaving rule, a target number of target component decoders allocated to the sub-decoder;
and the decoding module is used for inputting the data to be decoded into the sub-decoder to carry out iterative decoding until a preset condition is met, so as to obtain decoded data.
12. A decoding device, comprising a memory storing a computer program, characterized in that the decoding device implements the turbo decoding method according to any one of claims 1-10 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a decoding device, carries out a turbo decoding method as set forth in any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110658186.XA CN113258940B (en) | 2021-06-15 | 2021-06-15 | turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110658186.XA CN113258940B (en) | 2021-06-15 | 2021-06-15 | turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113258940A true CN113258940A (en) | 2021-08-13 |
CN113258940B CN113258940B (en) | 2021-10-08 |
Family
ID=77188021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110658186.XA Active CN113258940B (en) | 2021-06-15 | 2021-06-15 | turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113258940B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113890546A (en) * | 2021-12-06 | 2022-01-04 | 成都星联芯通科技有限公司 | Interleaver configuration method, interleaver configuration device, electronic equipment and computer-readable storage medium |
CN113992213A (en) * | 2021-10-28 | 2022-01-28 | 成都星联芯通科技有限公司 | Double-path parallel decoding storage equipment and method |
CN113992212A (en) * | 2021-12-27 | 2022-01-28 | 成都星联芯通科技有限公司 | Data interleaving method and FPGA |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1564493A (en) * | 2004-03-18 | 2005-01-12 | 上海交通大学 | Method of orthogonal FDM for modulating sub-carrier separation and sub-band cross arrangement |
CN101777924A (en) * | 2010-01-11 | 2010-07-14 | 新邮通信设备有限公司 | Method and device for decoding Turbo codes |
CN102064838A (en) * | 2010-12-07 | 2011-05-18 | 西安电子科技大学 | Novel conflict-free interleaver-based low delay parallel Turbo decoding method |
CN102611464A (en) * | 2012-03-30 | 2012-07-25 | 电子科技大学 | Turbo decoder based on external information parallel update |
CN103595424A (en) * | 2012-08-15 | 2014-02-19 | 重庆重邮信科通信技术有限公司 | Component decoding method, decoder, Turbo decoding method and Turbo decoding device |
CN103916142A (en) * | 2013-01-04 | 2014-07-09 | 联想(北京)有限公司 | Channel decoder and decoding method |
US20150236723A1 (en) * | 2014-02-19 | 2015-08-20 | Eric Morgan Dowling | Parallel VLSI architectures for constrained turbo block convolutional decoding |
CN104092470A (en) * | 2014-07-25 | 2014-10-08 | 中国人民解放军国防科学技术大学 | Turbo code coding device and method |
US20180062789A1 (en) * | 2016-08-26 | 2018-03-01 | National Chiao Tung University | Method and device for de-puncturing turbo-coded digital data, and turbo decoder system |
CN110299921A (en) * | 2019-06-11 | 2019-10-01 | 东南大学 | A kind of Turbo code deep learning interpretation method of model-driven |
CN111130572A (en) * | 2020-01-06 | 2020-05-08 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Turbo code quick realizing method |
CN112436843A (en) * | 2020-11-27 | 2021-03-02 | 西安空间无线电技术研究所 | Design method of Turbo code channel outer interleaver |
Non-Patent Citations (4)
Title |
---|
"Improving the Structure of Multiple Dimension Turbo Codes Using Multiple Identical Component Encoders", Wuhan University Journal of Natural Sciences *
"Novel Decoding of Square QAM Modulated MIMO Systems Based on Turbo Multiuser Detection", Journal of Electronics (China) *
YANG YANG et al.: "AMP Dual-Turbo Iterative Detection and Decoding for LDPC Coded Multibeam MSC Uplink", China Communications *
LI et al.: "Performance of Turbo product coded modulation systems based on bit interleaving and modulation diversity", Journal of Shanghai Jiao Tong University *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113992213A (en) * | 2021-10-28 | 2022-01-28 | 成都星联芯通科技有限公司 | Double-path parallel decoding storage equipment and method |
CN113992213B (en) * | 2021-10-28 | 2024-06-04 | 成都星联芯通科技有限公司 | Dual-path parallel decoding storage device and method |
CN113890546A (en) * | 2021-12-06 | 2022-01-04 | 成都星联芯通科技有限公司 | Interleaver configuration method, interleaver configuration device, electronic equipment and computer-readable storage medium |
CN113890546B (en) * | 2021-12-06 | 2022-03-04 | 成都星联芯通科技有限公司 | Interleaver configuration method, interleaver configuration device, electronic equipment and computer-readable storage medium |
CN113992212A (en) * | 2021-12-27 | 2022-01-28 | 成都星联芯通科技有限公司 | Data interleaving method and FPGA |
CN113992212B (en) * | 2021-12-27 | 2022-03-22 | 成都星联芯通科技有限公司 | Data interleaving method and FPGA |
Also Published As
Publication number | Publication date |
---|---|
CN113258940B (en) | 2021-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113258940B (en) | turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium | |
JP3677257B2 (en) | Convolution decoding device | |
KR101323444B1 (en) | Iterative decoder | |
CN1168237C (en) | Component decoder and method thereof in mobile communication system | |
US6038696A (en) | Digital transmission system and method comprising a product code combined with a multidimensional modulation | |
CN104092470B (en) | A kind of Turbo code code translator and method | |
US20090172495A1 (en) | Methods and Apparatuses for Parallel Decoding and Data Processing of Turbo Codes | |
US20090067554A1 (en) | High throughput and low latency map decoder | |
US20130007568A1 (en) | Error correcting code decoding device, error correcting code decoding method and error correcting code decoding program | |
WO2007059489A2 (en) | Cascaded radix architecture for high-speed viterbi decoder | |
CN118041374A (en) | Turbo code decoding method and decoder based on improved sliding window | |
CN108134612A (en) | Correct the synchronous iterative decoding method with substituting the concatenated code of mistake | |
CN107565983B (en) | Turbo code decoding method and device | |
CN1129257C (en) | Maximum-likelihood decode method f serial backtracking and decoder using said method | |
JP2010130271A (en) | Decoder and decoding method | |
JP2003152556A (en) | Error-correcting and decoding device | |
CN113872615A (en) | Variable-length Turbo code decoder device | |
CN108880569B (en) | Rate compatible coding method based on feedback grouping Markov superposition coding | |
CN112332868A (en) | Turbo parallel decoding method based on DVB-RCS2 | |
US10116337B2 (en) | Decoding method for convolutionally coded signal | |
TWI650954B (en) | Decoding method for convolution code decoding device in communication system and related determining module | |
CN113765622B (en) | Branch metric initializing method, device, equipment and storage medium | |
Ahmed et al. | Viterbi algorithm performance analysis for different constraint length | |
CN113824452B (en) | Decoding method based on grid graph, component decoder and channel decoder | |
CN112953559B (en) | Polarization code decoding method based on frozen bit log-likelihood value correction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20211019 Address after: 801-406, Hongqiao Road, Binhu District, Wuxi City, Jiangsu Province, 214100 Patentee after: Wuxi Xinglian Xintong Technology Co.,Ltd. Address before: No.1, 7th floor, building 6, No.5, Xixin Avenue, high tech Zone, Chengdu, Sichuan 610000 Patentee before: Chengdu Xinglian Xintong Technology Co.,Ltd. Patentee before: Wuxi Xinglian Xintong Technology Co.,Ltd. |