WO2019007495A1 - Decoding of multi-kernel polar codes - Google Patents

Decoding of multi-kernel polar codes

Info

Publication number
WO2019007495A1
Authority
WO
WIPO (PCT)
Prior art keywords
values
kernel
decoding
partial sums
decoded bit
Prior art date
Application number
PCT/EP2017/066747
Other languages
French (fr)
Inventor
Valerio BIOGLIO
Ingmar LAND
Frederic GABRY
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to PCT/EP2017/066747 priority Critical patent/WO2019007495A1/en
Priority to CN201780092919.XA priority patent/CN110832783B/en
Publication of WO2019007495A1 publication Critical patent/WO2019007495A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes

Abstract

Provided is a procedure for sequentially decoding a polar code. The procedure comprises propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory. The procedure is continued by determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, and propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.

Description

DECODING OF MULTI-KERNEL POLAR CODES
FIELD
The present disclosure relates to multi-kernel polar codes. In particular, the present disclosure relates to a procedure for decoding multi-kernel polar codes.
BACKGROUND
Polar codes, described by Arikan in "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE TRANSACTIONS ON INFORMATION THEORY, July 2009, define a new class of forward error correction codes which are based on the polarization effect of Kronecker products of the kernel matrix
T2 = | 1 0 |
     | 1 1 |,
which is called the "kernel" of the polar code. The polarization effect causes bits at some positions of an encoding vector to be more reliably decodable (after encoding and transmission) than others.
The polarization effect can be exploited by using the more reliable positions for carrying information bits, while "freezing" the bits at less reliable positions, such that the decoder is aware of the values of the "frozen bits" when decoding. Given an encoding vector u of length N comprising K information bits and N − K "frozen bits", a codeword x of length N may be calculated as x = u · GN, where GN = T2^⊗n (the n-fold Kronecker product of T2) is the transformation matrix of the polar code. Thus, while the number of information bits K can be freely chosen, possible code lengths are restricted to N = 2^n.
To mitigate the code length problem, multi-kernel polar codes have been proposed which are constructed based on kernels of different sizes (cf. Gabry et al. in "Multi-Kernel Construction of Polar Codes," arXiv:1612.06099, Dec. 2016), as the polarization effect remains valid for transformation matrices of the type GN = Tn1 ⊗ ... ⊗ Tns (where s is the number of kernels), which enables codes of length N = n1 · ... · ns. However, decoding multi-kernel polar codes using a recursive successive cancellation (SC) algorithm, as described by Gabry et al., may require large amounts of memory.
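For illustration only (not part of the original disclosure), the following Python sketch builds such a multi-kernel transformation matrix as a Kronecker product of binary kernels and encodes a vector u; the function names and the example kernel order (2, 2, 3) are assumptions made for the example:
```python
import numpy as np

# Binary kernels of sizes 2 and 3 (as used for multi-kernel polar codes).
T2 = np.array([[1, 0],
               [1, 1]], dtype=np.uint8)
T3 = np.array([[1, 1, 1],
               [1, 0, 1],
               [0, 1, 1]], dtype=np.uint8)

def transformation_matrix(kernels):
    """G_N = T_n1 (x) ... (x) T_ns, computed over GF(2)."""
    G = np.array([[1]], dtype=np.uint8)
    for T in kernels:
        G = np.kron(G, T) % 2
    return G

def encode(u, kernels):
    """Codeword x = u * G_N over GF(2); u carries information and frozen bits."""
    return (u @ transformation_matrix(kernels)) % 2

# Example: N = 2 * 2 * 3 = 12, the code length used in Fig. 4 of this disclosure.
G12 = transformation_matrix([T2, T2, T3])
u = np.zeros(12, dtype=np.uint8)   # all-frozen input, for illustration only
print(G12.shape, encode(u, [T2, T2, T3]))
```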
SUMMARY
In the following, an improved procedure for sequentially decoding multi-kernel polar codes is described which allows reducing the memory requirement from O(N log N) to O(N).
According to a first aspect of the present invention, there is provided a decoder for decoding a multi-kernel polar code. The decoder is configured to sequentially determine decoded bit values by propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes. Each of the multiple kernel units is configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage.
The decoder is further configured to determine first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagate the first decoded bit values through a subset of the multiple decoding stages and store first partial sums, determined from the propagated first decoded bit values, in first memory elements of a memory. The decoder is further configured to determine second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, and propagate the second decoded bit values through a subset of the multiple decoding stages and store second partial sums, determined from the propagated second decoded bit values, in the memory, wherein the stored second partial sums are to consume memory space gained by releasing the first memory elements.
By using memory space gained by releasing the first memory elements, an overall amount of memory required for decoding can be kept below a pre-determined value and constant after having determined the first decoded bit values. The releasing may be performed when a statistical value/partial sum is no longer needed for carrying out the remaining steps of the decoding procedure and before a new statistical value/partial sum has been determined that requires the memory space gained by the releasing.
In other words, the decoding procedure may perform decoding steps in an order which minimizes the amount of required memory by selecting a next decoding step from a set of possible decoding steps such that statistical values and partial sums required for decoding a next bit value are prioritized while statistical values and partial sums not required for decoding the next bit value are delayed until the next bit value has been decoded. By this, a "decoding path" winding past the kernel units of the decoding stages may be generated/followed which ensures a "just-in-time" calculation of statistical values and partial sums, while the bit values are successively decoded.
In this regard, it is noted that the term "decoding stage" as used throughout the description and claims in particular refers to hardware, software, or a combination of hardware and software which implements a plurality of kernel units of size p, wherein each kernel unit corresponds to a kernel Tp and performs an update on statistical values and/or partial sums. In case of updating statistical values, a kernel unit may take p statistical values and p partial sums as input and output p statistical values. In case of updating partial sums, a kernel unit may take p partial sums as input and output p partial sums.
Moreover, the term "memory element" as used throughout the description and claims in particular refers to a register or an address of a physical memory device such as, for example, a random-access memory (RAM). In particular, the size of a memory element may correspond to the size of the format in which statistical values/partial sums are stored.
In a first possible implementation form of the decoder according to the first aspect, the decoder is configured to store output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages, in memory elements of the memory. Furthermore, the decoder is configured to replace a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.
Hence, statistical values that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements.
In a second possible implementation form of the decoder according to the first aspect, a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding decoding stage to the succeeding decoding stage.
Thus, kernel units may be operated successively, thereby further reducing hardware requirements, as hardware may be used to successively perform calculations of different kernel units.
In a third possible implementation form of the decoder according to the first aspect, different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.
For instance, a first decoding stage may comprise e kernel units, wherein each kernel unit of the first decoding stage is to receive f input statistical values, while a second decoding stage may comprise g kernel units, wherein each kernel unit of the second decoding stage is to receive h input statistical values with e · f = g · h. Each kernel in the transformation matrix may have a kernel unit assigned thereto, wherein the number of input values of a kernel unit corresponds to the size of the kernel to which the kernel unit is assigned.
This structure may facilitate applying/adapting the decoding procedure to different multi-kernel codes.
In a fourth possible implementation form of the decoder according to the first aspect, the number of input statistical values of a kernel unit is two, three, five, or more.
For example, each kernel unit of the first decoding stage may be to receive two input statistical values while each kernel unit of the second decoding stage may be to receive three input statistical values, or vice versa.
In a fifth possible implementation form of the decoder according to the first aspect, a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage. For instance, if a penultimate decoding stage comprises kernel units of size three and the memory comprises two memory elements dedicated to storing the partial sums of the ultimate decoding stage, there may be six memory elements dedicated to storing the partial sums of the penultimate decoding stage.
In a sixth possible implementation form of the decoder according to the first aspect, the decoder is configured to sequentially overwrite the first partial sums with the second partial sums.
Hence, partial sums that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements for storing the second partial sums. In particular, the memory may be adapted to only provide memory space for either the first or second partial sums/statistical values.
In a seventh possible implementation form of the decoder according to the first aspect, the statistical values are one of log-likelihood ratios, likelihood ratios, or likelihoods.
According to a second aspect of the present invention, there is provided a method of sequentially decoding a polar code. The method comprises propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes. Each kernel unit is configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage.
The method further comprises determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of a decoder. The method further comprises determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.
The method may involve the decoder according to the first aspect and achieve the same/similar advantages. Hence, any disclosure herein relating to the decoder is intended to also relate to the method and vice-versa, unless specified otherwise, or inapplicable.
In a first possible implementation form of the method according to the second aspect, the method comprises storing output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory and replacing a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.
Hence, as indicated above, statistical values that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements.
In a second possible implementation form of the method according to the second aspect, a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.
Thus, as indicated above, kernel units may be operated successively, thereby further reducing hardware requirements, as hardware may be used to successively perform calculations of different kernel units.
In a third possible implementation form of the method according to the second aspect, different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values. As indicated above, this structure may facilitate applying/adapting the decoding procedure to different multi-kernel codes.
In a fourth possible implementation form of the method according to the second aspect, the number of input statistical values of a kernel unit is two, three, five, or more.
In a fifth possible implementation form of the method according to the second aspect, a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.
In a sixth possible implementation form of the method according to the second aspect, the method comprises sequentially overwriting the first partial sums with the second partial sums.
Hence, as indicated above, partial sums that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements for storing the second partial sums. In particular, the memory may be adapted to only provide memory space for either the first or second partial sums/statistical values.
In a seventh possible implementation form of the method according to the second aspect, the statistical values are one of log-likelihood ratios, likelihood ratios, or likelihoods.
Furthermore, some or all steps of the method may be performed by a processor in accordance with instructions persistently stored on a tangible machine-readable medium according to a third aspect of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows a block diagram illustrating a generic digital communication system in which elements of the present disclosure may be implemented.
Fig. 2 shows a flow-chart of steps of a sequential decoding procedure.
Fig. 3 shows a block diagram of a kernel unit 30.
Fig. 4 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G12 = T2 ⊗ T2 ⊗ T3.
Fig. 5 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G12 = T2 ⊗ T3 ⊗ T2.
Fig. 6 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G12 = T3 ⊗ T2 ⊗ T2.
DETAILED DESCRIPTION
The following provides a non-limiting example of a sequential decoding procedure for multi-kernel polar codes. The example illustrates the sequential decoding procedure for kernels of size 2 and 3, which allow constructing multi-kernel polar codes with a block length of N = 2^{n2} · 3^{n3}. However, the sequential decoding procedure may also involve kernels of other sizes. As classical polar codes (having only kernels of the same size) are a subclass of multi-kernel polar codes, the proposed sequential decoding procedure may also be used to decode classical polar codes.
Fig. 1 shows a block diagram illustrating a generic digital communication system 10 in which the proposed sequential decoding procedure may be implemented. The system 10 includes a transmitting side, comprising an encoder 12, and a receiving side, comprising a decoder 14. The input of the encoder 12 at the transmitting side may be an encoding vector u of length N comprising K information bits and N − K "frozen bits," from which the encoder 12 calculates a codeword x.
The codeword x may be forwarded to a modulator 16 which may transform the codeword x into a modulated signal vector CH_IN. The modulated signal vector CH_IN may in turn be transmitted through a channel 18 such as, for example, a wired or wireless channel to a demodulator 20. Since the channel 18 is usually subject to noisy disturbances, the channel output CH_OUT may differ from the channel input CH_IN. At the receiving side, the channel output vector CH_OUT may be processed by the demodulator 20. The demodulator 20 may produce statistical values such as, for example, log-likelihood ratios (LLRs), likelihood ratios (LRs), or likelihoods (Ls) which are to indicate probabilities with which the channel output vector CH_OUT corresponds to a particular bit sequence. The decoder 14 may use the redundancy in the codeword x in a sequential decoding procedure to decode the K information bits.
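As a purely illustrative example of such statistical values (standard textbook material, not specific to this disclosure), for BPSK transmission over an AWGN channel with noise variance sigma^2 the demodulator 20 could compute per-bit LLRs from the received samples y as 2·y/sigma^2; the sketch below assumes this channel model and the mapping bit 0 -> +1, bit 1 -> -1:
```python
import numpy as np

def channel_llrs(y, sigma2):
    """LLR = log(P(y | bit=0) / P(y | bit=1)) for BPSK over AWGN."""
    return 2.0 * np.asarray(y, dtype=float) / sigma2

# Noisy observations of the all-zero codeword (all +1 symbols).
rng = np.random.default_rng(0)
sigma2 = 0.5
y = 1.0 + rng.normal(scale=np.sqrt(sigma2), size=12)
print(channel_llrs(y, sigma2))
```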
The encoding and decoding procedures may be governed by a multi-kernel polar code. The encoding procedure in the encoder 12 may hence be based on rows of a transformation matrix GN. The proposed sequential decoding procedure in the decoder 14 may comprise the steps shown in Fig. 2, which may be implemented by customized hardware (e.g., an FPGA) or a processor.
At step 20, statistical values may be propagated through multiple decoding stages 40-42 (cf. Fig. 4b, in which the statistical values are LLRs). Each decoding stage may comprise one or more kernel units 30 (cf. Fig. 3) of the same size. Each kernel unit 30 may represent a kernel of the transformation matrix GN.
As shown in Fig. 3, a kernel unit 30 may be configured to determine output statistical values based on input statistical values. For example, a kernel unit of size p may perform an update of the LLRs and the PSs. In case of updating LLRs, the kernel unit 30 may take p LLRs and p PSs as input and output p LLRs, wherein the output LLRs may be used for decoding different information bits. In case of updating PSs, the kernel unit may take p PSs as input and output p PSs.
In Fig. 3, the input for updating the LLRs is received on the right side of the kernel unit 30 while the output is provided on the left side of the kernel unit 30. For updating PSs, the input is received on the left side of the kernel unit 30 and the output is provided on the right side of the kernel unit 30. The operations to be performed on the input depend on the transformation matrix Tp defining the kernel and the position of the information bit to be decoded.
As an example, the update operations performed by kernel units for the two kernels of sizes two and three are described in the following:
The process implemented by a kernel unit of a kernel
T2 = | 1 0 |
     | 1 1 |
when used in the decoding procedure may be described as follows: The LLR update function of the kernel unit takes two LLRs, L0 and L1, and two PSs, x0 and x1, as input and calculates two different LLRs as λ0 = L0 ⊞ L1 and λ1 = (−1)^x0 · L0 + L1, with a ⊞ b = 2 · tanh⁻¹(tanh(a/2) · tanh(b/2)). In a similar manner, the PS update function of the kernel unit takes two PSs, u0 and u1, as input and calculates two different PSs as x'0 = u0 ⊕ u1 and x'1 = u1.
The process implemented by a kernel unit of a kernel
T3 = | 1 1 1 |
     | 1 0 1 |
     | 0 1 1 |
when used in the decoding procedure may be described as follows: The LLR update function of the kernel unit takes three LLRs, L0, L1, and L2, and three PSs, x0, x1, and x2, as input and calculates three different LLRs as λ0 = L0 ⊞ L1 ⊞ L2, λ1 = (−1)^x0 · L0 + (L1 ⊞ L2) and λ2 = (−1)^x0 · L1 + (−1)^(x0 ⊕ x1) · L2. In a similar manner, the PS update function of the kernel unit takes three PSs, x0, x1, and x2, as input and calculates three different PSs as x'0 = x0 ⊕ x1, x'1 = x0 ⊕ x2 and x'2 = x0 ⊕ x1 ⊕ x2.
As shown in Fig. 4b, illustrating the Tanner graph of a polar code governed by a transformation matrix of the form G12 = T2 ⊗ T2 ⊗ T3, the output statistical values of the kernel units 30 of a preceding decoding stage 40 serve as input statistical values of the kernel units 30 of a succeeding decoding stage 41.
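A minimal Python sketch of these kernel update functions is given below. It follows the (reconstructed) formulas above, with the partial-sum updates implemented as the re-encoding x' = u · Tp over GF(2); the function names are illustrative assumptions, and the box-plus operation is clipped only to avoid numerical infinities:
```python
import numpy as np

def boxplus(a, b):
    """a [+] b = 2 * atanh(tanh(a/2) * tanh(b/2)), clipped for numerical stability."""
    t = np.clip(np.tanh(a / 2.0) * np.tanh(b / 2.0), -0.999999, 0.999999)
    return 2.0 * np.arctanh(t)

# Kernel T2 = |1 0; 1 1|
def llr_update_t2(L, x, j):
    """j-th output LLR of a T2 kernel unit for input LLRs L[0:2] and PSs x[0:2]."""
    if j == 0:
        return boxplus(L[0], L[1])
    return (-1.0) ** x[0] * L[0] + L[1]

def ps_update_t2(u):
    """Partial-sum (re-encoding) update of T2: (u0 ^ u1, u1)."""
    return [u[0] ^ u[1], u[1]]

# Kernel T3 = |1 1 1; 1 0 1; 0 1 1|
def llr_update_t3(L, x, j):
    """j-th output LLR of a T3 kernel unit for input LLRs L[0:3] and PSs x[0:3]."""
    if j == 0:
        return boxplus(boxplus(L[0], L[1]), L[2])
    if j == 1:
        return (-1.0) ** x[0] * L[0] + boxplus(L[1], L[2])
    return (-1.0) ** x[0] * L[1] + (-1.0) ** (x[0] ^ x[1]) * L[2]

def ps_update_t3(u):
    """Partial-sum (re-encoding) update of T3: (u0^u1, u0^u2, u0^u1^u2)."""
    return [u[0] ^ u[1], u[0] ^ u[2], u[0] ^ u[1] ^ u[2]]
```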
At step 22 of Fig. 2, the procedure is continued by determining first decoded bit values ui (e.g., i = {0, 1, 2}) based on output statistical values of a kernel unit 30 of an ultimate decoding stage 42.
At step 24, the first decoded bit values ui are propagated through a subset of the multiple decoding stages (e.g., decoding stage 42, or decoding stage 42 and decoding stage 41) and first partial sums (PSs), determined from the propagated first decoded bit values ui, are stored in first memory elements 43, as illustrated, for example, in Fig. 4c.
At step 26, the procedure involves determining second decoded bit values ui (e.g., i = {3, 4, 5}) based on the first stored partial sums and at least some of the propagated statistical values. At step 28, the second decoded bit values ui are propagated through a subset of the multiple decoding stages and second partial sums, determined from the propagated second decoded bit values ui, are stored in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.
As shown in Fig. 4 to Fig. 6, which illustrate transformation matrices, Tanner graphs, and memory structures of different transformation matrices G12, the LLRs and PSs may be stored in different memory structures. In particular, the LLRs may be stored as real numbers, while the PSs may be stored as bits. As shown in Fig. 4(c), Fig. 5(c), and Fig. 6(c), the memory structures depend on the order of the kernels defining the transformation matrix GN = Tn1 ⊗ ... ⊗ Tns.
For example, the LLRs may be stored in s + 1 real vectors of different sizes (where s is the number of kernels of the transformation matrix GN). The length of the first vector may always be 1 (as indicated by the leftmost square in the lower part of Fig. 4(c), Fig. 5(c), and Fig. 6(c)). The length of the i-th vector may be given by the product of the last i − 1 kernel sizes, i.e., the length of the i-th vector may be given by n_{s−i+2} · ... · ns.
The PSs may be stored in s binary matrices of different sizes (in the following, width and height of a matrix may be referred to by the number of columns and rows, respectively). The width of the i-th PS matrix may be given by the size of the (s − i + 1)-th kernel, i.e., the width of the i-th PS matrix may be given by n_{s−i+1}. The height of the i-th PS matrix may be given by the product of the last i − 1 kernel sizes, i.e., the height of the i-th PS matrix may be given by n_{s−i+2} · ... · ns. The size of the last matrix may not follow this rule, but its width may be diminished by one, i.e., the last matrix may have a width of n1 − 1.
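The sizes of these LLR vectors and PS matrices follow directly from the kernel order. The sketch below (illustrative only, not part of the original disclosure; variable names are assumptions of the example) prints the layout for the kernel order (2, 2, 3) of Fig. 4:
```python
import math

def memory_layout(kernel_sizes):
    """LLR-vector lengths and PS-matrix (width, height) pairs for
    G_N = T_n1 (x) ... (x) T_ns with kernel_sizes = (n1, ..., ns)."""
    n, s = list(kernel_sizes), len(kernel_sizes)
    # i-th LLR vector (i = 1..s+1): product of the last i-1 kernel sizes.
    llr_lengths = [math.prod(n[s - i + 1:]) for i in range(1, s + 2)]
    # i-th PS matrix (i = 1..s): width n_{s-i+1}, height = product of last i-1 sizes;
    # the last matrix has width n1 - 1.
    ps_dims = [(n[s - i], math.prod(n[s - i + 1:])) for i in range(1, s + 1)]
    ps_dims[-1] = (n[0] - 1, ps_dims[-1][1])
    return llr_lengths, ps_dims

llrs, pss = memory_layout((2, 2, 3))   # G12 = T2 (x) T2 (x) T3
print(llrs)  # [1, 3, 6, 12]
print(pss)   # [(3, 1), (2, 3), (1, 6)]
```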
The decoded bits may be stored in a binary vector U of length N or may be immediately output, once the decoded bits are no longer required in the remaining steps of the decoding procedure.
In the following, exemplary update rules for the LLR, PS and U vectors and matrices are provided. The update rules for the LLR vectors are described first, followed by the update rule for the binary vector U and, finally, the update rules for the PS matrices. It is recalled that the update rules are to be applied for every decoded bit. Moreover, the update may be performed by the kernel units 30. Initially, the (s + 1)-th LLR vector (the rightmost in Fig. 4(c), Fig. 5(c), and Fig. 6(c)) may be filled with the N LLRs of the received symbol(s), while all other entries of the LLR vectors, PS matrices and the U vector may be set to zero.
LLR UPDATE
In order to perform the LLR update phase for bit ui, the index i may be expressed through a mixed radix numeral system which is based on the kernel order. A mixed radix numeral system is a non-standard positional numeral system in which the numerical base depends on the digit position (as in the case of time, which is measured in hours, minutes, and seconds).
The radix of the numeral system is given by the sizes of the kernels used to build the transformation matrix GN. As an example, the conversions for the transformation matrices G12 shown in Fig. 4(a), Fig. 5(a), and Fig. 6(a) are given in the following tables:
For G12 = T2 ⊗ T2 ⊗ T3 (Fig. 4(a)), with digits b1 b2 b3 of bases 2, 2, 3:
i:        0    1    2    3    4    5    6    7    8    9    10   11
b1b2b3:  000  001  002  010  011  012  100  101  102  110  111  112
For G12 = T2 ⊗ T3 ⊗ T2 (Fig. 5(a)), with digits b1 b2 b3 of bases 2, 3, 2:
i:        0    1    2    3    4    5    6    7    8    9    10   11
b1b2b3:  000  001  010  011  020  021  100  101  110  111  120  121
For G12 = T3 ⊗ T2 ⊗ T2 (Fig. 6(a)), with digits b1 b2 b3 of bases 3, 2, 2:
i:        0    1    2    3    4    5    6    7    8    9    10   11
b1b2b3:  000  001  010  011  100  101  110  111  200  201  210  211
The LLR update in Fig. 4(b)(c), Fig. 5(b)(c), and Fig. 6(b)(c) proceeds from right to left. The vectors may be updated starting from the s-th vector using the subsequent vector. In general, the j-th vector may be updated using the (j + 1)-th vector and the kernel T_{n_{s−j+1}}. Entries of the j-th vector may be updated one by one using n_{s−j+1} entries from the (j + 1)-th LLR vector and from the j-th PS matrix. The LLR update rule may be selected in accordance with the mixed radix representation of i. If bit ui has to be decoded in accordance with the mixed radix representation i = b1 b2 ... bs, the LLR update rule b_{s−j+1} (i.e., the rule for λ_{b_{s−j+1}}) may be used to update the j-th vector. In order to update the k-th entry of the j-th vector, the n_{s−j+1} LLRs to be used by the update formula may be selected as the entries (k − 1) · n_{s−j+1} + 1, ..., k · n_{s−j+1} of the (j + 1)-th vector. Similarly, the n_{s−j+1} PSs to be used by the update formula may be selected as the n_{s−j+1} entries of the k-th row of the j-th PS matrix. Since the s-th matrix has only n1 − 1 columns, the missing bits may be considered as zeros.
Using said LLR update algorithm, s LLR vectors have to be updated for every decoded bit. However, as the mixed radix expansions of two consecutive numbers differ only on the right of the position of the rightmost nonzero element of the second number, it is not necessary to update the vectors represented by the positions at the left of the rightmost nonzero entry (i.e., the LLR vectors closer to the channel side). Of course, this does not apply for the case i = 0, in which all the vectors have to be updated.
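A possible implementation of this mixed radix conversion, together with the resulting rule for which LLR vectors must be updated, is sketched below. It assumes the reading of the (partly garbled) source adopted above, namely that digit bj runs over base nj with bs the fastest-changing digit and that the j-th LLR vector is associated with digit b_{s−j+1}; the function names are illustrative:
```python
def mixed_radix(i, kernel_sizes):
    """Digits (b1, ..., bs) of i in the mixed radix system given by
    kernel_sizes = (n1, ..., ns); bs is the least significant digit."""
    digits = []
    for base in reversed(kernel_sizes):   # bs first
        digits.append(i % base)
        i //= base
    return list(reversed(digits))         # returned as (b1, ..., bs)

def llr_vectors_to_update(i, kernel_sizes):
    """Indices j of the LLR vectors (1 = decision vector) to update for bit u_i:
    all s vectors for i = 0, otherwise vectors 1 .. s - r + 1, where r is the
    position of the rightmost non-zero digit of i."""
    s = len(kernel_sizes)
    if i == 0:
        return list(range(1, s + 1))
    b = mixed_radix(i, kernel_sizes)
    r = max(j for j in range(s) if b[j] != 0) + 1   # 1-indexed digit position
    return list(range(1, s - r + 2))

print(mixed_radix(5, (2, 2, 3)))             # [0, 1, 2]
print(llr_vectors_to_update(3, (2, 2, 3)))   # [1, 2]
print(llr_vectors_to_update(6, (2, 2, 3)))   # [1, 2, 3]
```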
Ui UPDATE
If the decoded bit ui is not frozen, its value may be obtained through a hard decision on the single element of the first LLR vector (which has length one). Otherwise, if ui is frozen, it may be set to zero.
PS UPDATE
PS matrices may be updated column by column, in increasing order from left to right (cf. Fig. 4(b)(c), Fig. 5(b)(c), and Fig. 6(b)(c)). When the last column of a PS matrix is filled, a column of the next PS matrix may be updated. Updates may always start from the first PS matrix, which is a single-row vector of width ns. The value of the decoded bit ui may be copied into the column bs.
When bs = ns − 1, the matrix is filled, and the column b_{s−1} of the second matrix may be updated. If bj = nj − 1, a matrix is filled and the next matrix may be updated; otherwise the process ends. When the j-th matrix is filled, i.e., if b_{s−j+1} = n_{s−j+1} − 1, the column b_{s−j} of the (j + 1)-th matrix may be updated. In this case, each row of the j-th matrix may be used to update a part of the column b_{s−j} of the (j + 1)-th matrix. In particular, the n_{s−j+1} PS update rules of the kernel T_{n_{s−j+1}} with the k-th row of the j-th matrix as input may be used to update the rows (k − 1) · n_{s−j+1} + 1, ..., k · n_{s−j+1} of the column b_{s−j} of the (j + 1)-th matrix. As an exception, for the last bit, the PS update step may not be executed, since no new LLR update step is required.
The proposed sequential SC decoding procedure reduces the amount of memory required for decoding multi-kernel polar codes. While a recursive implementation of the SC decoder for a multi-kernel polar code with transformation matrix GN = Tn1 ⊗ ... ⊗ Tns and N = n1 · ... · ns requires storing N · (s + 1) LLRs and N · s PSs, the proposed sequential SC decoding procedure only requires storing (...((n1 + 1) · n2 + 1) · ... ) · ns + 1 LLRs and (...((n1 · n2 + 1) · n3 + 1) · ... ) · ns PSs, i.e., the summed sizes of the LLR vectors and PS matrices described above.
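These expressions can be evaluated directly for the N = 12 examples of Figs. 4 to 6. The following sketch (illustrative only, with function names chosen for the example) compares the recursive and the sequential memory requirements by summing the sizes of the LLR vectors and PS matrices described above:
```python
import math

def recursive_memory(kernel_sizes):
    """Recursive SC implementation: N*(s+1) LLRs and N*s PSs."""
    N, s = math.prod(kernel_sizes), len(kernel_sizes)
    return N * (s + 1), N * s

def sequential_memory(kernel_sizes):
    """Sequential procedure: summed sizes of the LLR vectors and PS matrices."""
    n, s = list(kernel_sizes), len(kernel_sizes)
    llrs = sum(math.prod(n[s - i + 1:]) for i in range(1, s + 2))
    pss = sum(n[s - i] * math.prod(n[s - i + 1:]) for i in range(1, s)) \
          + (n[0] - 1) * math.prod(n[1:])
    return llrs, pss

for order in [(2, 2, 3), (2, 3, 2), (3, 2, 2)]:
    print(order, recursive_memory(order), sequential_memory(order))
# (2, 2, 3) (48, 36) (22, 15)
# (2, 3, 2) (48, 36) (21, 14)
# (3, 2, 2) (48, 36) (19, 14)
```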
Thus, the proposed sequential SC decoding procedure simplifies decoding and allows significantly reducing the memory requirement as compared to recursive decoding procedures.

Claims

1. A decoder for decoding a multi-kernel polar code, the decoder being configured to sequentially determine decoded bit values, wherein the decoder is further configured to: propagate statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, a kernel unit being configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage; determine first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage; propagate the first decoded bit values through a subset of the multiple decoding stages and store first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of the decoder; determine second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values; and propagate the second decoded bit values through a subset of the multiple decoding stages and store second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums are to consume memory space gained by releasing the first memory elements.
2. The decoder of claim 1, wherein the decoder is configured to: store output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory; and replace a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.
3. The decoder of claim 1 or 2, wherein a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.
4. The decoder of any one of claims 1 to 3, wherein different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.
5. The decoder of any one of claims 1 to 4, wherein the number of input statistical values of a kernel unit is two, three, five, or more.
6. The decoder of any one of claims 1 to 5, wherein a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.
7. The decoder of any one of claims 1 to 6, wherein the decoder is configured to sequentially overwrite the first partial sums with the second partial sums.
8. A method of sequentially decoding a polar code, the method comprising: propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, a kernel unit being configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage; determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage; propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of a decoder; determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values; and propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.
9. The method of claim 8, comprising: storing output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory; and replacing a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.
10. The method of claim 8 or 9, wherein a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.
11. The method of any one of claims 8 to 10, wherein different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.
12. The method of any one of claims 8 to 11, wherein the number of input statistical values of a kernel unit is two, three, five, or more.
13. The method of any one of claims 8 to 12, wherein a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.
14. The method of any one of claims 8 to 13, comprising: sequentially overwriting the first partial sums with the second partial sums.
15. The decoder/method of any one of the preceding claims wherein the statistical values are one of:
log-likelihood ratios;
likelihood ratios; or
likelihoods.
PCT/EP2017/066747 2017-07-05 2017-07-05 Decoding of multi-kernel polar codes WO2019007495A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/EP2017/066747 WO2019007495A1 (en) 2017-07-05 2017-07-05 Decoding of multi-kernel polar codes
CN201780092919.XA CN110832783B (en) 2017-07-05 2017-07-05 Multi-kernel polar code decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2017/066747 WO2019007495A1 (en) 2017-07-05 2017-07-05 Decoding of multi-kernel polar codes

Publications (1)

Publication Number Publication Date
WO2019007495A1 true WO2019007495A1 (en) 2019-01-10

Family

ID=59581831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/066747 WO2019007495A1 (en) 2017-07-05 2017-07-05 Decoding of multi-kernel polar codes

Country Status (2)

Country Link
CN (1) CN110832783B (en)
WO (1) WO2019007495A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8572332B2 (en) * 2008-03-28 2013-10-29 Qualcomm Incorporated De-interleaving mechanism involving a multi-banked LLR buffer
US9176927B2 (en) * 2011-11-08 2015-11-03 The Royal Institution For The Advancement Of Learning/Mcgill University Methods and systems for decoding polar codes
KR102128471B1 (en) * 2014-03-11 2020-06-30 삼성전자주식회사 List decoding method for polar codes and memory system adopting the same
CN105227189B (en) * 2015-09-24 2019-01-01 电子科技大学 It is segmented the polarization code coding/decoding method of CRC auxiliary
CN105634507B (en) * 2015-12-30 2019-05-17 东南大学 The pipeline system of polarization code belief propagation decoder

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARIKAN: "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,", IEEE TRANSACTIONS ON INFORMATION THEORY, July 2009 (2009-07-01)
CAMILLE LEROUX ET AL: "Hardware architectures for Successive Cancellation Decoding of Polar Codes", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 12 November 2010 (2010-11-12), XP080462225, DOI: 10.1109/ICASSP.2011.5946819 *
FREDERIC GABRY ET AL: "Multi-Kernel Construction of Polar Codes", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 December 2016 (2016-12-19), XP080745345 *
GABRY ET AL.: "Multi-Kernel Construction of Polar Codes", ARXIV: 1612.06099, December 2016 (2016-12-01)
GABRY ET AL.: "Multi-Kernel Construction of Polar Codes", ARXIV:1612.06099, December 2016 (2016-12-01)
LE GAL BERTRAND ET AL: "Software polar decoder on an embedded processor", 2014 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), IEEE, 20 October 2014 (2014-10-20), pages 1 - 6, XP032709393, DOI: 10.1109/SIPS.2014.6986083 *

Also Published As

Publication number Publication date
CN110832783B (en) 2022-07-22
CN110832783A (en) 2020-02-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17751247

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17751247

Country of ref document: EP

Kind code of ref document: A1