CN110832783B - Multi-kernel polar code decoding - Google Patents


Info

Publication number
CN110832783B
CN110832783B
Authority
CN
China
Prior art keywords
statistics, decoding, partial sum, kernel, bit value
Prior art date
Legal status
Active
Application number
CN201780092919.XA
Other languages
Chinese (zh)
Other versions
CN110832783A
Inventor
Valerio Bioglio
Ingmar Land
Frédéric Gabry
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN110832783A
Application granted
Publication of CN110832783B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words, using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13: Linear codes

Abstract

The present invention provides a process for sequentially decoding polar codes. The process comprises the following steps: propagating statistics representing preliminary estimates of codeword bits received over a noisy channel through a plurality of decoding stages, the decoding stages including a plurality of kernel units representing polar code kernels of different sizes; determining a first decoded bit value based on output statistics of a kernel unit of a final decoding stage; and propagating the first decoded bit value through a subset of the plurality of decoding stages and storing a first partial sum determined from the propagated first decoded bit value in a first memory element of a memory. The process continues by determining a second decoded bit value based on the stored first partial sum and at least a portion of the propagated statistics, propagating the second decoded bit value through a subset of the plurality of decoding stages, and storing a second partial sum determined from the propagated second decoded bit value in the memory, wherein the stored second partial sum consumes memory space gained by releasing the first memory element.

Description

Multi-kernel polar code decoding
Technical Field
The present disclosure relates to multi-kernel polar codes. In particular, the present disclosure relates to a process for multi-kernel polar code decoding.
Background
Polar codes, introduced by Arıkan in "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels" (IEEE Transactions on Information Theory, July 2009), define a new class of forward error correction codes based on the Kronecker product of a matrix called the "kernel" of the polar code, T_2 = [[1, 0], [1, 1]], i.e., on transformation matrices of the form G_N = T_2^⊗n.
The polarization effect makes bits at certain positions of the coded vector more reliably decodable (after encoding and transmission) than bits at other positions.
This polarization effect is exploited by using the more reliable positions to carry information bits while "freezing" the bits in the less reliable positions, so that the decoder knows the values of the "frozen bits" in advance when decoding. From a given length-N coding vector u comprising K information bits and N - K "frozen bits", a length-N codeword x can be calculated as x = u·G_N, where G_N = T_2^⊗n is the transformation matrix of the polar code. Therefore, while the number K of information bits can be chosen freely, the possible code lengths are limited to N = 2^n.
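The construction just described can be sketched in a few lines of Python; the coding vector u below is a hypothetical example, not one taken from this document:

```python
T2 = [[1, 0], [1, 1]]  # the polar code kernel

def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in arow for b in brow]
            for arow in A for brow in B]

def kron_power(T, n):
    """n-fold Kronecker product T^(x)n."""
    G = [[1]]
    for _ in range(n):
        G = kron(G, T)
    return G

def encode(u, G):
    """Codeword x = u * G over GF(2)."""
    N = len(G)
    return [sum(u[i] * G[i][j] for i in range(N)) % 2 for j in range(N)]

G8 = kron_power(T2, 3)                    # transformation matrix for N = 2^3 = 8
x = encode([0, 0, 0, 1, 0, 1, 1, 0], G8)  # hypothetical coding vector u
```

The frozen-bit positions in u are illustrative only; choosing them is the code-construction problem, which is outside the scope of this sketch.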
To mitigate this code-length restriction, multi-kernel polar codes constructed from kernels of different sizes have been proposed (see Gabry et al., "Multi-Kernel Construction of Polar Codes", arXiv:1612.06099, December 2016). The polarization effect remains valid for transformation matrices of the form G_N = T_{n_1} ⊗ … ⊗ T_{n_s} (where s is the number of kernels), so that code lengths N = n_1·…·n_s become possible. However, decoding multi-kernel polar codes with the recursive successive cancellation (SC) algorithm, as described in Gabry et al., "Multi-Kernel Construction of Polar Codes" (arXiv:1612.06099, December 2016), may require a large amount of memory.
Disclosure of Invention
The following describes an improved process for the sequential decoding of multi-kernel polar codes that reduces the memory requirement from O(N log N) to O(N).
According to a first aspect of the invention, a decoder for decoding a multi-kernel polar code is provided. The decoder is configured to determine decoded bit values in order by propagating statistics representing preliminary estimates of codeword bits received over a noisy channel through a plurality of decoding stages, the decoding stages including a plurality of kernel units representing polar code kernels of different sizes. Each kernel unit of the plurality of kernel units is configured to determine an output statistic based on one or more input statistics, where the output statistics of a kernel unit of a previous decoding stage serve as input statistics of a kernel unit of a subsequent decoding stage.
The decoder is further configured to determine a first decoded bit value based on output statistics of a kernel unit of a final decoding stage, to propagate the first decoded bit value through a subset of the plurality of decoding stages, and to store a first partial sum determined from the propagated first decoded bit value in a first memory element of a memory. The decoder is further configured to determine a second decoded bit value based on the stored first partial sum and at least a portion of the propagated statistics, to propagate the second decoded bit value through a subset of the plurality of decoding stages, and to store a second partial sum determined from the propagated second decoded bit value in the memory, wherein the stored second partial sum consumes storage space gained by freeing the first memory element.
By using the storage space obtained by releasing the first memory element, the total amount of memory required for decoding may remain constant, below a predetermined value, after the first decoded bit value has been determined. The release may be performed once a statistic/partial sum is no longer needed for the remaining steps of the decoding process, and before a newly determined statistic/partial sum needs the memory space obtained by the release.
In other words, by selecting the next decoding step from a set of possible decoding steps, the decoding process can order the decoding steps so as to minimize the amount of memory required: statistics and partial sums needed to decode the next bit value are prioritized, while those not needed for the next bit value are deferred until after that bit value has been decoded. Thereby, a "decoding path" traversing the kernel units of the decoding stages may be generated and followed which ensures a "just-in-time" calculation of statistics and partial sums while the bit values are decoded one after another.
In this regard, it should be noted that the term "decoding stage" as used throughout the specification and claims refers particularly to hardware, software, or a combination of hardware and software implementing a plurality of kernel units of size p, where each kernel unit corresponds to a kernel T_p and updates statistics and/or partial sums. If the statistics are updated, the kernel unit may take p statistics and p partial sums as input and output p statistics. If the partial sums are updated, the kernel unit may take p partial sums as input and output p partial sums. Furthermore, the term "memory element" as used throughout the specification and claims refers particularly to registers and addresses of physical memory devices, such as random-access memory (RAM). In particular, the size of a memory element may correspond to the size of the format in which the statistics/partial sums are stored.
In a first possible implementation form of the decoder according to the first aspect, the decoder is configured to store, in memory elements of the memory, the output statistics of the kernel units involved in propagating the statistics through the plurality of decoding stages. Further, the decoder is configured to replace a first output statistic based on the one or more input statistics with a second output statistic based on the one or more input statistics.
Thus, statistics that are no longer needed in the decoding process can be overwritten, reducing the overall requirement for storage space compared to retaining/allocating other memory elements.
In a second possible implementation form of the decoder according to the first aspect, the number of kernel units involved in propagating the statistics to determine the first decoded bit value decreases gradually from each decoding stage to the next.
Thus, the core units may be operated continuously, thereby further reducing hardware requirements, since the computations of different core units may be performed continuously using hardware.
In a third possible implementation form of the decoder according to the first aspect, different decoding stages comprise different numbers of kernel units, wherein the kernel units of the different decoding stages differ in their number of input statistics.
For example, a first decoding stage may include e kernel units, where each kernel unit of the first decoding stage is configured to receive f input statistics, while a second decoding stage may include g kernel units, where each kernel unit of the second decoding stage is configured to receive h input statistics, with e·f = g·h. Each kernel of the transformation matrix may have a kernel unit assigned to it, where the number of input values of the kernel unit corresponds to the size of the kernel to which it is assigned.
This structure may help to apply/adapt the decoding process to different multi-kernel codes.
In a fourth possible implementation form of the decoder according to the first aspect, the number of input statistics of the kernel unit may be two, three, five, or more.
For example, each core unit of the first decoding stage may receive two input statistics, while each core unit of the second decoding stage may receive three input statistics, or vice versa.
In a fifth possible implementation form of the decoder according to the first aspect, the number of memory elements dedicated to storing the partial sum of the final decoding stage is smaller than the number of memory elements dedicated to storing the partial sum of the penultimate decoding stage, wherein the ratio of the numbers is equal to the number of input values of the kernel units of the penultimate decoding stage.
For example, if the penultimate decoding stage includes a kernel unit of size three and the memory includes two memory elements dedicated to storing the partial sum of the final decoding stage, there may be six memory elements dedicated to storing the partial sum of the penultimate decoding stage.
In a sixth possible implementation form of the decoder according to the first aspect, the decoder is configured to overwrite the first partial sum with the second partial sum in sequence.
Thus, partial sums that are no longer needed in the decoding process can be overwritten, reducing the overall requirement for storage space compared to reserving/allocating further memory elements. In particular, the memory may be adapted to provide storage space for only one of the first and second partial sums/statistics at a time.
In a seventh possible implementation form of the decoder according to the first aspect, the statistical value is one of a log-likelihood ratio, a likelihood ratio or a likelihood.
According to a second aspect of the invention, there is provided a method of sequentially decoding a polar code. The method includes propagating statistics representing preliminary estimates of codeword bits received over a noisy channel through a plurality of decoding stages, the decoding stages including a plurality of kernel units representing different sizes of polar code kernels. Each core unit is configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a core unit of a previous decoding stage serves as an input statistical value of a core unit of a next decoding stage.
The method further comprises the following steps: determining a first decoding bit value based on output statistics of a core unit of a final decoding stage; propagating the first decoded bit value through a subset of the plurality of decoding stages and storing a first partial sum determined from the propagated first decoded bit value in a first memory element of a memory of a decoder.
The method further comprises the following steps: determining a second decoded bit value based on the stored first partial sum and at least a portion of the propagated statistics; propagating the second decoded bit value through a subset of the plurality of decoding stages and storing a second partial sum determined from the propagated second decoded bit value in the memory, wherein the stored second partial sum consumes storage space obtained by freeing the first memory element.
The method may be performed by the decoder of the first aspect and achieves the same or similar advantages. Thus, unless otherwise specified or not applicable, any disclosure herein with respect to the decoder also relates to the method, and vice versa.
In a first possible implementation form of the method according to the second aspect, the method comprises storing, in memory elements of the memory, the output statistics of the kernel units involved in propagating the statistics through the plurality of decoding stages; and replacing a first output statistic based on the one or more input statistics with a second output statistic based on the one or more input statistics.
Thus, as indicated above, the statistics no longer needed by the decoding process can be overwritten, reducing the overall requirements on storage space compared to reserving/allocating other memory units.
In a second possible implementation form of the method according to the second aspect, the number of kernel units involved in propagating the statistics to determine the first decoded bit value decreases gradually from each decoding stage to the next.
Thus, as indicated above, the core units may be operated continuously, further reducing hardware requirements, since the computations of different core units may be performed continuously using hardware.
In a third possible implementation form of the method according to the second aspect, the different decoding stages comprise different numbers of core units, wherein the core units of the different decoding stages differ in the number of input statistics.
As noted above, this structure may help to apply/adapt the decoding process to different multi-kernel code.
In a fourth possible implementation form of the method according to the second aspect, the number of input statistics of the kernel unit may be two, three, five or more.
In a fifth possible implementation form of the method according to the second aspect, the number of memory elements dedicated to storing the partial sum of the final decoding stage is smaller than the number of memory elements dedicated to storing the partial sum of the penultimate decoding stage, wherein the ratio of the numbers is equal to the number of input values of the kernel units of the penultimate decoding stage.
In a sixth possible implementation form of the method according to the second aspect, the method comprises overwriting the first partial sum with the second partial sum in sequence.
Thus, as indicated above, the partial sums which are no longer needed in the decoding process can be overwritten, reducing the overall requirement for storage space compared to reserving/allocating other memory elements storing the second partial sums. In particular, the memory may be adapted to provide storage space for only any of the first and second partial sum/statistic values.
In a seventh possible implementation form of the method according to the second aspect, the statistical value is one of a log-likelihood ratio, a likelihood ratio or a likelihood.
Further, according to a third aspect of the disclosure, some or all of the steps of the method may be performed by a processor according to instructions persistently stored on a tangible machine-readable medium.
Drawings
FIG. 1 shows a block diagram depicting a general-purpose digital communication system in which elements of the present invention may be implemented;
FIG. 2 shows a flow chart of the steps of an in-order decoding process;
FIG. 3 shows a block diagram of a core unit 30;
FIG. 4 depicts the transformation matrix (a), the Tanner graph (b) and the memory structure (c) of a decoding process for a polar code governed by a first factorization of the transformation matrix G12;
FIG. 5 depicts the transformation matrix (a), the Tanner graph (b) and the memory structure (c) of a decoding process for a polar code governed by a second factorization of the transformation matrix G12;
FIG. 6 depicts the transformation matrix (a), the Tanner graph (b) and the memory structure (c) of a decoding process for a polar code governed by a third factorization of the transformation matrix G12.
Detailed Description
The following provides one non-limiting example of a sequential multi-kernel polar code decoding process. The example describes the sequential decoding of multi-kernel polar codes constructed from kernels of sizes 2 and 3, resulting in block lengths of the form N = 2^a·3^b. However, the sequential decoding process may also use kernels of other sizes. Since conventional polar codes (built from kernels of a single size only) are a subclass of multi-kernel polar codes, the proposed sequential decoding process can also be used to decode conventional polar codes.
Fig. 1 shows a block diagram depicting a general-purpose digital communication system 10 in which the proposed sequential decoding process can be implemented. The system 10 includes a transmitting side comprising an encoder 12 and a receiving side comprising a decoder 14. The input to the encoder 12 on the transmitting side may be a coding vector u of length N comprising K information bits and N - K "frozen bits", from which the encoder 12 may calculate the codeword x.
The codeword x may be forwarded to a modulator 16, and the modulator 16 may convert the codeword x into a modulated signal vector CH_IN. The modulated signal vector CH_IN may be transmitted to a demodulator 20 via a channel 18 (e.g., a wired or wireless channel). The channel output CH_OUT differs from the channel input CH_IN because the channel 18 is typically subject to noise.
On the receiving side, the demodulator 20 may process the channel output vector CH_OUT. The demodulator 20 may generate statistics, such as log-likelihood ratios (LLRs), likelihood ratios (LRs), or likelihoods (Ls), which indicate the probability that the channel output vector CH_OUT corresponds to a particular bit sequence. The decoder 14 may use the redundancy in the codeword x in the sequential decoding process to decode the K information bits.
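As an illustration of such statistics, a minimal sketch of LLR computation, assuming BPSK modulation over an AWGN channel (a concrete channel model not mandated by the text):

```python
def bpsk_llrs(y, noise_var):
    """Channel LLRs L_i = log p(y_i | x_i = 0) / p(y_i | x_i = 1).
    For BPSK mapping bit 0 -> +1, bit 1 -> -1 over AWGN, this is 2*y_i / sigma^2."""
    return [2.0 * v / noise_var for v in y]

llrs = bpsk_llrs([0.9, -1.1, 0.2], 0.5)  # three made-up received soft symbols
```

A positive LLR favors bit 0 under this sign convention; other conventions merely flip the sign.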
The multi-kernel polar code governs the encoding and decoding processes. Thus, the encoding process of the encoder 12 may be performed based on the transformation matrix G_N. The sequential decoding process performed in the decoder 14 may include the steps shown in fig. 2, which may be implemented by custom hardware (e.g., an FPGA) or a processor.
In step 20, the statistics may be propagated through a plurality of decoding stages 40 to 42 (see fig. 4b, where the statistics are LLRs). Each decoding stage may include one or more equally sized kernel units 30 (see fig. 3). Each kernel unit 30 may represent a kernel of the transformation matrix G_N.
As shown in fig. 3, the kernel unit 30 may be used to determine output statistics based on input statistics. For example, a kernel unit of size p may update LLRs and PS. If the LLRs are updated, the kernel unit 30 may take p LLRs and p PS as input and output p LLRs, which may be used to decode different information bits. If the PS are updated, the kernel unit may take p PS as input and output p PS.
In fig. 3, the inputs for updating the LLRs are received on the right side of the kernel unit 30, while the outputs are provided on the left side of the kernel unit 30. For updating the PS, the inputs are received on the left side of the kernel unit 30 and the outputs are provided on the right side of the kernel unit 30. The operations performed on the inputs depend on the transformation matrix T_p defining the kernel and on the position of the information bit to be decoded.
By way of example, the update operations performed by the two kernels of sizes 2 and 3 are described as follows:
When used in the decoding process, the process implemented by a kernel unit of the kernel T_2 = [[1, 0], [1, 1]] can be described as follows: the LLR update function of the kernel unit takes two LLRs (L_0 and L_1) and two PS (x_0 and x_1) as input and calculates two different LLRs as

λ_0 = L_0 ⊞ L_1 = log((1 + e^(L_0 + L_1)) / (e^(L_0) + e^(L_1)))

and

λ_1 = (1 - 2·x_0)·L_0 + L_1.

In a similar manner, the PS update function of the kernel unit takes two PS (u_0 and u_1) as input and calculates two different PS as

x_0 = u_0 ⊕ u_1 and x_1 = u_1.
When used in the decoding process, the process implemented by a kernel unit of the kernel T_3 = [[1, 1, 1], [1, 0, 1], [0, 1, 1]] can be described as follows: the LLR update function of the kernel unit takes three LLRs (L_0, L_1 and L_2) and three PS (x_0, x_1 and x_2) as input and calculates three different LLRs as

λ_0 = L_0 ⊞ L_1 ⊞ L_2,

λ_1 = (1 - 2·x_0)·L_0 + (L_1 ⊞ L_2)

and

λ_2 = (1 - 2·x_0)·L_1 + (1 - 2·(x_0 ⊕ x_1))·L_2.

In a similar manner, the PS update function of the kernel unit takes three PS (u_0, u_1 and u_2) as input and calculates three different PS as

x_0 = u_0 ⊕ u_1, x_1 = u_0 ⊕ u_2 and x_2 = u_0 ⊕ u_1 ⊕ u_2.
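The update functions above can be sketched as follows. The exact box-plus operator and the size-3 kernel T3 = [[1,1,1],[1,0,1],[0,1,1]] are assumptions taken from standard SC decoding practice and from the cited Gabry et al. construction, since the original equation images are not reproduced here:

```python
import math

def boxplus(a, b):
    """Exact box-plus: the LLR of the XOR of two bits whose LLRs are a and b."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

# Size-2 kernel T2 = [[1, 0], [1, 1]]
def t2_llr_update(l0, l1, x0=0, phase=0):
    """phase 0: lambda_0 = l0 [+] l1;  phase 1: lambda_1 = (1 - 2*x0)*l0 + l1."""
    return boxplus(l0, l1) if phase == 0 else (1 - 2 * x0) * l0 + l1

def t2_ps_update(u0, u1):
    """(x0, x1) = (u0 XOR u1, u1)."""
    return (u0 ^ u1, u1)

# Size-3 kernel T3 = [[1, 1, 1], [1, 0, 1], [0, 1, 1]]
def t3_llr_update(l0, l1, l2, x0=0, x1=0, phase=0):
    """LLR of the phase-th kernel input bit, given earlier input bits x0, x1."""
    if phase == 0:
        return boxplus(boxplus(l0, l1), l2)
    if phase == 1:
        return (1 - 2 * x0) * l0 + boxplus(l1, l2)
    return (1 - 2 * x0) * l1 + (1 - 2 * (x0 ^ x1)) * l2

def t3_ps_update(u0, u1, u2):
    """(x0, x1, x2) = (u0^u1, u0^u2, u0^u1^u2), i.e. (u0, u1, u2)*T3 over GF(2)."""
    return (u0 ^ u1, u0 ^ u2, u0 ^ u1 ^ u2)
```

In a hardware implementation the box-plus is frequently replaced by the min-sum approximation sgn(a)·sgn(b)·min(|a|, |b|); either variant fits the kernel-unit interface described above.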
as shown in fig. 4b, which depicts a signal represented by the form
Figure BDA00023557275400000512
The output statistics of the core unit 30 of the previous decoding stage 40 serve as the input statistics of the core unit 30 of the subsequent decoding stage 41.
In step 22 of fig. 2, the process continues by determining a first decoded bit value u_i (e.g., i ∈ {0, 1, 2}) based on the output statistics of the kernel unit 30 of the final decoding stage 42.
In step 24, the first decoded bit value u_i is propagated through a subset of the plurality of decoding stages (e.g., decoding stage 42, or decoding stages 42 and 41) and, as shown in fig. 4c, a first partial sum (PS) determined from the propagated first decoded bit value u_i is stored in the first memory element 43.
In step 26, the process includes determining a second decoded bit value u_i (e.g., i ∈ {3, 4, 5}) based on the stored first partial sum and at least a portion of the propagated statistics.
In step 28, the second decoded bit value u_i is propagated through a subset of the plurality of decoding stages, and a second partial sum determined from the propagated second decoded bit value u_i is stored in the memory, wherein the stored second partial sum consumes storage space obtained by freeing the first memory element.
Figs. 4-6 depict the transformation matrix, the Tanner graph and the memory structure, in which the LLRs and PS are stored, for different factorizations of the transformation matrix G12. Specifically, the LLRs may be stored as real numbers and the PS may be stored as bits. As shown in fig. 4(c), 5(c) and 6(c), the memory structure depends on the order of the kernels defining the transformation matrix G12 = T_{n_1} ⊗ T_{n_2} ⊗ T_{n_3}.
For example, the LLRs can be stored in s + 1 real vectors of different sizes (where s is the number of kernels of the transformation matrix G_N). The length of the first vector may always be 1 (as indicated by the leftmost square in the lower part of fig. 4(c), 5(c) and 6(c)). The length of the i-th vector may be given by the product of the sizes of the last i - 1 kernels, i.e., by n_{s-i+2}·…·n_s.
The PS can be stored in s binary matrices of different sizes (in the following, the width and height of a matrix refer to its number of columns and rows, respectively). The width of the i-th PS matrix may be given by the size of the (s-i+1)-th kernel, i.e., by n_{s-i+1}. The height of the i-th PS matrix may be given by the product of the sizes of the last i - 1 kernels, i.e., by n_{s-i+2}·…·n_s. The size of the last matrix may not follow this rule: its width may be reduced by one, i.e., the width of the last matrix may be n_1 - 1.
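Under the sizing rules just described, the memory layout for a given kernel order can be sketched as follows (a non-normative illustration; the example kernel order is made up):

```python
from math import prod

def memory_layout(kernel_sizes):
    """LLR vector lengths and PS matrix shapes for kernel order (n1, ..., ns)."""
    n = list(kernel_sizes)
    s = len(n)
    # The i-th LLR vector (1-based) has length n_{s-i+2} * ... * n_s (1 for i = 1).
    llr_lengths = [prod(n[s - i + 1:]) for i in range(1, s + 2)]
    # The i-th PS matrix has height n_{s-i+2} * ... * n_s and width n_{s-i+1};
    # the last matrix is one column narrower (width n1 - 1).
    ps_shapes = [(prod(n[s - i + 1:]), n[s - i] - (1 if i == s else 0))
                 for i in range(1, s + 1)]
    return llr_lengths, ps_shapes

llr_lengths, ps_shapes = memory_layout((2, 2, 3))  # example kernel order, N = 12
```

For the example order, the LLR vectors have lengths 1, 3, 6 and 12, and the PS matrices have shapes (1, 3), (3, 2) and (6, 1), matching the description above.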
The decoded bits can be stored in a binary vector U of length N, or output immediately once they are no longer needed in the remaining steps of the decoding process.
Exemplary update rules for the LLR vectors, the binary vector U and the PS matrices are provided below, in that order. Recall that the update rules are applied for each decoded bit, and the updates may be performed by the kernel units 30. First, the (s+1)-th LLR vector (rightmost in fig. 4(c), 5(c) and 6(c)) may be filled with the N LLRs of the received symbols, while all other entries of the LLR vectors, the PS matrices and the vector U may be set to 0.
LLR updating
To perform the LLR update stage for a bit u_i, the index i can be represented in a mixed-radix numeral system based on the kernel order. A mixed-radix numeral system is a non-standard positional numeral system in which the base depends on the digit position (as with time measured in hours, minutes and seconds).
The radices of the numeral system are the sizes of the kernels used to build the transformation matrix G_N: an index i is represented by digits b_1 b_2 … b_s with 0 ≤ b_j < n_j, such that i = b_1·(n_2·…·n_s) + b_2·(n_3·…·n_s) + … + b_{s-1}·n_s + b_s. For example, for the transformation matrix G12 shown in fig. 4(a), 5(a) and 6(a), each index i ∈ {0, …, 11} is represented by three digits whose radices are the three kernel sizes.
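The digit decomposition can be sketched as follows, with a hypothetical kernel order (n1, n2, n3) = (2, 2, 3):

```python
def mixed_radix(i, radices):
    """Digits (b1, ..., bs) of i in the mixed-radix system whose bases are the
    kernel sizes (n1, ..., ns), most significant digit first."""
    digits = []
    for n in reversed(radices):
        digits.append(i % n)
        i //= n
    return tuple(reversed(digits))

# For (n1, n2, n3) = (2, 2, 3): i = b1*6 + b2*3 + b3 with 0 <= b_j < n_j
rep = [mixed_radix(i, (2, 2, 3)) for i in range(12)]
```

For instance, i = 7 decomposes as 7 = 1·6 + 0·3 + 1, i.e., digits (1, 0, 1).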
the LLR updates in fig. 4(b) and (c), fig. 5(b) and (c) and fig. 6(b) and (c) are performed from right to left. The vectors may be updated with subsequent vectors starting from the s-th vector. In general, the (j +1) th vector sum kernel may be utilized
Figure BDA0002355727540000071
The jth vector is updated. The n from the (j +1) th LLR vector and from the jth PS matrix may be utilizeds-j+1And updating the jth vector one by each table entry. The LLR update rule may be selected according to the mixed-radix image of i.
If a bit u_i with mixed-radix representation i = b_1 b_2 … b_s is to be decoded, the LLR update rule of the kernel T_{n_{s-j+1}} corresponding to the digit b_{s-j+1} may be used to update the j-th vector. To update the k-th entry of the j-th vector, the n_{s-j+1} LLRs used by the update formula may be selected as entries (k-1)·n_{s-j+1}+1, …, k·n_{s-j+1} of the (j+1)-th vector.
Similarly, the n_{s-j+1} PS used by the update formula may be selected as the n_{s-j+1} entries of the k-th row of the j-th matrix. Since the s-th matrix has only n_1 - 1 columns, the missing bit can be considered to be 0.
With this LLR update algorithm, in principle s LLR vectors would have to be updated for each decoded bit. However, since the mixed-radix representations of two consecutive indices differ only from the position of the rightmost non-zero digit of the second index onwards, the vectors corresponding to positions left of that rightmost non-zero digit do not have to be updated. Of course, this does not apply to the case i = 0, for which all vectors must be updated.
U_i update
If the decoded bit U_i is not frozen, its value can be obtained by a hard decision on the single element of the first LLR vector. Otherwise, if U_i is frozen, it may be set to 0.
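This update rule amounts to a one-line decision function (sign convention assumed here: a non-negative LLR favors bit 0):

```python
def decode_bit(llr, frozen):
    """U_i update: a frozen bit is set to 0; otherwise a hard decision is taken
    on the single element of the first LLR vector."""
    return 0 if frozen else int(llr < 0)
```

The set of frozen positions is fixed by the code construction and known to both encoder and decoder.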
PS update
The PS matrices can be updated column by column, from left to right (see fig. 4(b) and (c), fig. 5(b) and (c), and fig. 6(b) and (c)). When the last column of a PS matrix is filled, the next PS matrix may be updated. The update may always start with the first PS matrix, which is a single row of width n_s. The decoded bit U_i can be copied into column b_s of this matrix.
When b_s = n_s - 1, the first matrix is filled and column b_{s-1} of the second matrix may be updated. More generally, when the j-th matrix is filled, i.e., if b_{s-j+1} = n_{s-j+1} - 1, column b_{s-j} of the (j+1)-th matrix may be updated; otherwise, the PS update terminates.
In this case, each row of the j-th matrix may be used to update a part of column b_{s-j} of the (j+1)-th matrix. In particular, the PS update rule of the kernel T_{n_{s-j+1}}, with the k-th row of the j-th matrix as input, may be used to update rows (k-1)·n_{s-j+1}+1, …, k·n_{s-j+1} of column b_{s-j} of the (j+1)-th matrix. As an exception, no PS update step has to be performed for the last bit, since no further LLR update step is needed.
The proposed sequential SC decoding process allows reducing the amount of memory necessary for decoding multi-kernel polar codes. While a recursive implementation of the SC decoder for a multi-kernel polar code with transformation matrix G_N = T_{n_1} ⊗ … ⊗ T_{n_s} and N = n_1·…·n_s requires the storage of N·(s+1) LLRs and N·s PS, the proposed sequential SC decoding process requires the storage of only

1 + Σ_{j=1..s} n_j·n_{j+1}·…·n_s LLRs

and

Σ_{j=1..s} n_j·n_{j+1}·…·n_s - n_2·…·n_s PS,

i.e., O(N) values in total.
the proposed sequential SC decoding process thus simplifies the decoding and significantly reduces the memory requirements compared to a recursive decoding process.

Claims (14)

1. A decoder for multi-kernel polar code decoding, the decoder configured to determine decoded bit values in order, the decoder further configured to:
propagate statistics representing a preliminary estimate of codeword bits received over a noisy channel through a plurality of decoding stages, the decoding stages including a plurality of kernel units representing polar code kernels of different sizes, each kernel unit being for determining an output statistic based on one or more input statistics, wherein the output statistics of a kernel unit of a previous decoding stage serve as input statistics of a kernel unit of a subsequent decoding stage, and each kernel unit corresponds to a kernel T_p and updates the statistics and/or the partial sums;
storing, by the plurality of decoding stages, output statistics of core cells involved in propagating the statistics in a memory element of a memory;
replacing a first output statistical value based on the one or more input statistical values with a second output statistical value based on the one or more input statistical values, wherein the first output statistical value and the second output statistical value are intermediate values of Log Likelihood Ratio (LLR) update of the kernel unit from right to left, and the second output statistical value is an updated value of the first output statistical value;
determining a first decoding bit value based on output statistics of a core unit of a final decoding stage;
propagating the first decoded bit value through a subset of a plurality of decoding stages and storing a first partial sum determined from the propagated first decoded bit value in a first memory element of a memory of the decoder;
determining a second decoded bit value based on the stored first partial sum and at least a portion of the propagated statistics;
propagating the second decoded bit value through a subset of the plurality of decoding stages and storing a second partial sum determined from the propagated second decoded bit value in the memory, wherein the stored second partial sum consumes storage space gained by freeing the first memory element.
2. The decoder of claim 1, wherein the number of kernel units involved in propagating the statistics to determine the first decoded bit value decreases progressively from the previous stage to the subsequent stage.
3. The decoder according to claim 1 or 2, characterized in that different decoding stages comprise different numbers of kernel units, wherein the kernel units of the different decoding stages differ in the number of input statistics.
4. The decoder according to claim 1 or 2, characterized in that the number of input statistics of a kernel unit may be two, three, five or more.
5. The decoder according to claim 1 or 2, characterized in that the number of memory elements dedicated to storing the partial sums of the final decoding stage is smaller than the number of memory elements dedicated to storing the partial sums of the penultimate decoding stage, wherein the ratio of these numbers is equal to the number of input values of the kernel units of the penultimate decoding stage.
6. The decoder according to claim 1 or 2, wherein the decoder is configured to overwrite the first partial sum with the second partial sum in order.
7. The decoder according to claim 1 or 2, characterized in that the statistical value is one of the following:
a log-likelihood ratio;
a likelihood ratio; or
a likelihood.
8. A method of sequential decoding of multi-kernel polar codes, the method comprising:
propagating statistics representing preliminary estimates of codeword bits received over a noisy channel through a plurality of decoding stages, the decoding stages comprising a plurality of kernel units representing polar code kernels of different sizes, the kernel units being for determining an output statistic based on one or more input statistics, wherein the output statistics of a kernel unit of a previous decoding stage serve as input statistics of a kernel unit of a subsequent decoding stage, each kernel unit corresponding to a kernel T_p and updating the statistics and/or partial sums;
storing, for the plurality of decoding stages, the output statistics of the kernel units involved in propagating the statistics in memory elements of a memory;
replacing a first output statistic based on the one or more input statistics with a second output statistic based on the one or more input statistics, wherein the first output statistic and the second output statistic are intermediate values of the right-to-left log-likelihood ratio (LLR) update of the kernel unit, and the second output statistic is an updated value of the first output statistic;
determining a first decoded bit value based on the output statistics of a kernel unit of a final decoding stage;
propagating the first decoded bit value through a subset of the plurality of decoding stages and storing a first partial sum determined from the propagated first decoded bit value in a first memory element of a memory of a decoder;
determining a second decoded bit value based on the stored first partial sum and at least a portion of the propagated statistics;
propagating the second decoded bit value through a subset of the plurality of decoding stages and storing a second partial sum determined from the propagated second decoded bit value in the memory, wherein the stored second partial sum occupies storage space gained by freeing the first memory element.
9. The method of claim 8, wherein the number of kernel units involved in propagating the statistics to determine the first decoded bit value decreases from the previous stage to the subsequent stage.
10. The method according to claim 8 or 9, characterized in that different decoding stages comprise different numbers of kernel units, wherein the kernel units of the different decoding stages differ in the number of input statistics.
11. The method according to claim 8 or 9, characterized in that the number of input statistics of a kernel unit may be two, three, five or more.
12. The method according to claim 8 or 9, characterized in that the number of memory elements dedicated to storing the partial sums of the final decoding stage is smaller than the number of memory elements dedicated to storing the partial sums of the penultimate decoding stage, wherein the ratio of these numbers is equal to the number of input values of the kernel units of the penultimate decoding stage.
13. The method according to claim 8 or 9, comprising:
overwriting the first partial sum with the second partial sum in order.
14. The method according to claim 8 or 9, characterized in that said statistical value is one of the following:
a log-likelihood ratio;
a likelihood ratio; or
a likelihood.
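The memory reuse recited in claims 1 and 8 can be illustrated with a deliberately simplified sketch. This toy decoder uses only the binary Arikan kernel with min-sum LLR updates and a dictionary as the partial-sum memory; none of this is the patented multi-kernel decoder, and all names are ours. The point of the sketch is that each decoding stage owns a single PS memory slot, and the partial sum written for a later branch overwrites the slot freed by the earlier one, so PS storage does not grow with the number of decoded bits.

```python
import math

def f(a, b):
    """Check-node (min-sum) LLR combination of a binary kernel unit."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    """Variable-node LLR combination using partial sum bit u."""
    return b + (1 - 2 * u) * a

def sc_decode(llr, ps_mem, depth=0):
    """Toy SC decoder for block length 2^n; returns the re-encoded
    (partial-sum) bits of the block.  ps_mem holds one entry per stage:
    the PS stored for the right branch overwrites whatever the left
    branch stored there, i.e. freed memory is reused in decoding order."""
    if len(llr) == 1:
        return [0 if llr[0] >= 0 else 1]  # hard decision on the bit
    half = len(llr) // 2
    left = sc_decode([f(llr[i], llr[i + half]) for i in range(half)],
                     ps_mem, depth + 1)
    ps_mem[depth] = left  # store PS, overwriting this stage's old content
    right = sc_decode([g(llr[i], llr[i + half], ps_mem[depth][i])
                       for i in range(half)], ps_mem, depth + 1)
    return [a ^ b for a, b in zip(left, right)] + right

ps = {}
print(sc_decode([1.2, 0.8, 2.0, 0.5], ps))  # all-positive LLRs -> [0, 0, 0, 0]
print(len(ps))                              # one PS slot per stage: 2
```

Note that after decoding a length-4 block the PS memory holds only two entries, one per non-leaf stage, even though three partial-sum propagations took place during decoding.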
CN201780092919.XA 2017-07-05 2017-07-05 Multi-kernel polar code decoding Active CN110832783B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2017/066747 WO2019007495A1 (en) 2017-07-05 2017-07-05 Decoding of multi-kernel polar codes

Publications (2)

Publication Number Publication Date
CN110832783A CN110832783A (en) 2020-02-21
CN110832783B true CN110832783B (en) 2022-07-22

Family

ID=59581831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780092919.XA Active CN110832783B (en) 2017-07-05 2017-07-05 Multi-kernel polar code decoding

Country Status (2)

Country Link
CN (1) CN110832783B (en)
WO (1) WO2019007495A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101953077A (en) * 2008-03-28 2011-01-19 Qualcomm Incorporated Deinterleaving mechanism involving multi-row LLR buffers
CN105227189A (en) * 2015-09-24 2016-01-06 University of Electronic Science and Technology of China Segmented-CRC-aided polar code encoding and decoding method
CN105634507A (en) * 2015-12-30 2016-06-01 Southeast University Pipeline architecture of a polar code belief propagation decoder

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9176927B2 (en) * 2011-11-08 2015-11-03 The Royal Institution For The Advancement Of Learning/Mcgill University Methods and systems for decoding polar codes
KR102128471B1 (en) * 2014-03-11 2020-06-30 삼성전자주식회사 List decoding method for polar codes and memory system adopting the same


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hardware Architectures for Successive Cancellation Decoding of Polar Codes; Camille Leroux et al; 2011 IEEE International Conference on Acoustics, Speech and Signal Processing; 2011-05-27; pp. 1-6 *
Multi-Kernel Construction of Polar Codes; Frederic Gabry et al; www.arxiv.org; 2016-11-19; pp. 1-3 *
Software Polar Decoder on an Embedded Processor; Bertrand Le Gal et al; 2014 IEEE Workshop on Signal Processing Systems; 2014-10-22; pp. 1665-1668 *

Also Published As

Publication number Publication date
CN110832783A (en) 2020-02-21
WO2019007495A1 (en) 2019-01-10

Similar Documents

Publication Publication Date Title
TWI419481B (en) Low density parity check codec and method of the same
KR100738864B1 (en) Decoder and decoding method for decoding low-density parity-check codes with parity check matrix
CN101243664B (en) In-place transformations with applications to encoding and decoding various classes of codes
KR100958234B1 (en) Node processors for use in parity check decoders
CA2575953C (en) Memory efficient ldpc decoding methods and apparatus
JP3923618B2 (en) Method for converting information bits having error correcting code and encoder and decoder for performing the method
US20150295593A1 (en) Apparatus and method for encoding and decoding data in twisted polar code
CN101248583A (en) Communication apparatus and decoding method
US7451376B2 (en) Decoder and decoding method for decoding irregular low-density parity-check codes
US20080134008A1 (en) Parallel LDPC Decoder
CN109983705B (en) Apparatus and method for generating polarization code
CN110545162B (en) Multivariate LDPC decoding method and device based on code element reliability dominance degree node subset partition criterion
CN110832783B (en) Multi-kernel polar code decoding
US20100185913A1 (en) Method for decoding ldpc code and the circuit thereof
CN110892644B (en) Construction of a polar code, in particular a multi-core polar code, based on a distance criterion and a reliability criterion
JP2006135813A (en) Low density parity check encoder/decoder and encoding/decoding method
US8072359B2 (en) Binary arithmetic coding device
Song et al. A novel iterative reliability-based majority-logic decoder for NB-LDPC codes
CN112583420A (en) Data processing method and decoder
Bioglio et al. Memory management in successive-cancellation based decoders for multi-kernel polar codes
EP3526899A1 (en) Decoding of low-density parity-check convolutional turbo codes
Spinner et al. Soft input decoding of generalized concatenated codes using a stack decoding algorithm
Hashemian Condensed Huffman coding, a new efficient decoding technique
TW201740687A (en) Decoding method and decoder for low density parity check code
JP5434454B2 (en) Decryption device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant