WO2021226665A1 "Channel encoding method and a channel encoder" Google Patents
 Publication number: WO2021226665A1 (application PCT/AU2021/050438)
 Authority: WIPO (PCT)
 Prior art keywords: code, codes, precoding matrix, kxn, polar
Classifications
 H—ELECTRICITY
 H03—ELECTRONIC CIRCUITRY
 H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
 H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
 H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
 H03M13/033—Theoretical methods to calculate these checking codes
 H03M13/05—... using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
 H03M13/13—Linear codes
 H03M13/15—Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
 H03M13/151—... using error location or error correction polynomials
 H03M13/152—Bose-Chaudhuri-Hocquenghem [BCH] codes
 H03M13/29—... combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
 H03M13/2903—Methods and arrangements specifically for encoding, e.g. parallel encoding of a plurality of constituent codes
 H03M13/2945—... using at least three error correction codes
 H03M13/63—Joint error correction and other techniques
 H03M13/6312—Error control coding in combination with data compression
Abstract
A channel encoding method and a channel encoder are disclosed. In an example, a method of data encoding includes dividing payload data to be transmitted into a set of information vectors and multiplying each of the information vectors by a kxn binary precoding matrix to obtain a corresponding input vector. The information vector includes k bits, and n is a power of 2. Additionally, n is greater than k, and the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on a minimum distance d. The method also includes multiplying each input vector, including n bits, by an nxn polarization transformation matrix to obtain a corresponding codeword that has a Hamming weight of 0 or at least d. The method minimizes both successive cancellation (SC) and maximum-likelihood (ML) decoding error probabilities.
Description
CHANNEL ENCODING METHOD AND A CHANNEL ENCODER
TECHNICAL FIELD
The present disclosure generally relates to wireless communication technologies, and in particular, to a channel encoding method and a channel encoder.
BACKGROUND
In a communication system, channel encoding is usually performed to improve data transmission reliability and ensure communication quality. Use cases for the next generation of cellular services are divided into three main categories: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communications (URLLC). eMBB and mMTC are designed for services with high bandwidth requirements and for communications between delay-tolerant low-throughput machine-type devices, respectively. URLLC focuses on services with strict delay constraints, such as industrial automation, self-driving cars, smart grids, telesurgery, the tactile Internet, and intelligent transportation systems. Short block length channel codes with high error-correction capabilities are needed to meet the stringent latency and reliability requirements of URLLC. However, most channel codes are designed to approach the channel capacity at long block lengths.
SUMMARY
According to one non-limiting aspect of the present disclosure, there is provided a channel encoding method. In one embodiment, the method includes: dividing payload data to be transmitted into information vectors; multiplying each information vector, which includes k bits, by a kxn binary precoding matrix to obtain a corresponding input vector, where n is a power of 2, n > k, and the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on the minimum distance d; and multiplying each input vector, which includes n bits, by an nxn polarization transformation matrix to obtain a corresponding codeword, where each codeword has a Hamming weight of 0 or at least d.
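The two matrix multiplications in this embodiment can be sketched as follows. This is a minimal illustration over GF(2): the precoding matrix Z below is a toy placeholder, not an optimized matrix from the disclosure, and the bit-reversal permutation is omitted since it does not change the code as a set of codewords.

```python
import numpy as np

def polarization_matrix(m: int) -> np.ndarray:
    """n x n polarization transform: m-fold Kronecker power of the
    2 x 2 kernel [[1, 0], [1, 1]] (bit-reversal permutation omitted)."""
    kernel = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    M = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        M = np.kron(M, kernel)
    return M

def encode(info_bits: np.ndarray, Z: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Claimed two-stage encoding: k-bit information vector -> n-bit input
    vector via the k x n precoding matrix Z, then input vector -> n-bit
    codeword via the n x n polarization transform M (arithmetic modulo 2)."""
    u = info_bits @ Z % 2
    return u @ M % 2

# Toy (n = 4, k = 2) example: this Z places the information bits on input
# positions 2 and 3, i.e. a pure polar code rather than an optimized one.
M = polarization_matrix(2)
Z = np.array([[0, 0, 1, 0], [0, 0, 0, 1]], dtype=np.uint8)
codeword = encode(np.array([1, 1], dtype=np.uint8), Z, M)
```

A production encoder would instead load an optimized precoding matrix from storage, as described for the code selector below.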
According to another non-limiting aspect of the present disclosure, there is provided an encoding device for a wireless communication system. In one embodiment, the encoding device includes a memory storage configured to store instructions, and one or more processors in communication with the memory storage. The one or more processors execute the instructions to: divide payload data to be transmitted into information vectors; multiply each information vector, which includes k bits, by a kxn binary precoding matrix to obtain a corresponding input vector, where n is a power of 2, n > k, and the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on the minimum distance d; and multiply each input vector, which includes n bits, by an nxn polarization transformation matrix to obtain a corresponding codeword, where each codeword has a Hamming weight of 0 or at least d.
According to the present disclosure, a preferred outcome is that the precoded polar codes can be decoded with much lower computational complexity, their construction is more flexible (the code dimension can take any value), and they have higher error-correction capability.
It should be understood that the outcomes described herein are not limited, and may be any of or different from the outcomes described in the present disclosure.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgement or admission or any form of suggestion that the prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavor to which this specification relates.
BRIEF DESCRIPTION OF THE FIGURES
Features of a channel encoding method and a channel encoder described herein may be better understood by reference to the accompanying drawings in which:
FIG. 1 is a flowchart illustrating a communication system, according to an embodiment of the present disclosure.
FIG. 2 is a schematic diagram illustrating an example of a wireless communication system, according to an embodiment of the present disclosure.
FIG. 3 is a flowchart illustrating a channel encoder, which is a component of a communication system, according to an embodiment of the present disclosure.
FIG. 4 is a flowchart illustrating an example of an encoding process for a (64, 25, 16) precoded polar code, according to an embodiment of the present disclosure.
FIG. 5 is a diagram illustrating performance of (64, 25) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 6 is a diagram illustrating performance of (64, 34) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 7 is a diagram illustrating decoding complexity of (64, 25) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 8 is a diagram illustrating performance of (64, 34) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 9 is a diagram illustrating decoding complexity of (64, 25) codes under sequential decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 10 is a diagram illustrating performance of (64, 34) codes under sequential decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIGS. 11 and 12 are diagrams illustrating FER performance of codes listed in Table III and 5G New Radio polar codes with CRC, according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
The present disclosure generally relates to wireless communication technologies, and in particular, to a channel encoding method and a channel encoder. Embodiments are described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements.
Various code surveys show that the original polar codes, turbo codes, and low-density parity-check (LDPC) codes perform far from the theoretical bounds in the short block length regime, while the best results under optimal decoding are demonstrated by Bose-Chaudhuri-Hocquenghem (BCH) codes, followed by high-memory-order tail-biting convolutional codes. However, convolutional codes are unable to compete with BCH codes in the high signal-to-noise ratio (SNR) regime.
Polar codes can achieve the Shannon capacity of a wide class of data communication channels for large block lengths. These codes have a simple recursive structure, which allows low-complexity decoding methods such as the successive cancellation (SC) algorithm, SC list (SCL) decoding, and their variations.
The original polar codes can be constructed by freezing the least reliable bit-subchannels. This design principle is appropriate for long codes and SC decoding. The original polar codes of short and moderate lengths demonstrate an inferior frame error rate (FER) compared to BCH, LDPC, and turbo codes. The reasons are the suboptimality of SC decoding and the poor minimum distance of polar codes. It is possible to fully use the error-correction capability of polar codes by using SCL decoding instead of SC decoding. Several modifications of the polar code construction can be introduced to improve the minimum distance of polar codes.
Polar codes with frozen bit-subchannels selected to maximize the minimum distance result in Reed-Muller codes. Reed-Muller codes have a smaller FER than the original polar codes under maximum-likelihood (ML) decoding, but a higher FER under SC decoding. Hybrid codes, combining the constructions of Reed-Muller codes and the original polar codes, result in a set of frozen bit-subchannels that "interpolates" between the sets of frozen bit-subchannels of a Reed-Muller code and of the original polar code. It should be understood that Reed-Muller codes are channel-independent, while polar codes are fine-tuned to the channel polarization phenomenon. Minimum distance properties may also be improved by concatenation of an inner polar code with an outer cyclic redundancy check (CRC).
It is possible to generalize the polar code construction by introducing dynamic frozen bits. As opposed to the original frozen bits, dynamic frozen bits are functions of information bits. These functions are called dynamic freezing constraints. An appropriate selection of dynamic freezing constraints leads to a further increase in the minimum distance. Code performance under SCL decoding depends on both the indices of frozen bits and the dynamic freezing constraints, while only the indices of frozen bits are important for SC decoding. Dynamic freezing constraints can be derived from high-rate extended narrow-sense primitive BCH (eBCH) codes; the resulting codes are called polar subcodes of eBCH codes. The trade-off between SC and ML decoding error probabilities depends on the selected rate of the eBCH code. BCH codes are known for their high minimum distance. Binary narrow-sense primitive BCH codes have length 2^m - 1 and form the most well-studied subclass of BCH codes. Randomized polar subcodes give a performance improvement under SCL decoding due to a reduced number of low-weight codewords.
A heuristic construction of parity-check-concatenated polar codes exists. It should be understood that parity checks may be viewed as freezing constraints. An algorithm to generate connections between information symbols and parity checks was developed, but no results on the achieved minimum distance were provided.
In the present disclosure, new short-length codes configured to minimize the SC decoding error probability of precoded polar codes with a specified minimum distance are provided. The short-length codes may be decoded through SCL and its variations. Numerical results show that the codes of lengths 32 and 64 provide a significant reduction of decoding error probability under stack SC and sequential decoding with moderate list sizes compared to polar subcodes of eBCH codes. The choice of polar subcodes of eBCH codes is explained by their high minimum distance. The codes have the same lengths and dimensions as known codes, hence the same code rates. The obtained weight enumerator polynomials and the tangential sphere bound indicate that the reduction of SC decoding error probability is achieved without sacrificing the error probability under ML decoding.
The present disclosure also provides a method to recursively construct a collection of precoded polar codes that are optimized for SCL-based decoding. The proposed code design minimizes the SC decoding error probability estimate for each precoded polar code under three constraints. The first constraint is a minimum Hamming distance requirement, which improves the ML performance of the resulting code. The other two constraints specify that the resulting code is contained in a preselected code, called a supercode, and contains another preselected code, called a subcode. The subcode ensures a reduction of the search-space size, where the search space is the space of all row candidates for the generator matrix.
In the case of precoded polar codes, the search for rows of a generator matrix is equivalent to the search for rows of a precoding matrix. The subcode is selected among codes of lower dimension constructed by the same disclosed method. The supercode is obtained as the Plotkin sum of codes of half the length constructed earlier by the same method, where the component codes are nested to simplify the computation of low-weight codewords. These low-weight codewords are used to ensure fast computation of the minimum Hamming distance. Note that not only is the exact number of minimum-weight codewords computed, but also the exact values of the first several terms of the code spectrum. The proposed choice of supercodes and subcodes leads to a recursive structure and many nested codes in the resulting code collection. The simulation results indicate that the proposed precoded polar codes of lengths 128 and 256 provide a much lower block error probability than the original polar codes, polar codes with CRC from 5G New Radio, and eBCH polar subcodes under stack SC decoding.
Fig. 1 is a flowchart illustrating a communication system, which can employ the new codes for channel encoding according to embodiments of the present disclosure. Data produced by an information source 104 is processed by a transmission system 101. The transmission system 101 comprises a source encoder 105 to compress data, a channel encoder 106 to ensure reliability by introducing redundancy into the data, a modulator 107 to convert symbols into signals, and a transmitter 108 to send communication signals into a transmission/storage medium 102, which may introduce noise into the signals. A reception system 103 comprises a receiver 109 to detect and receive the noisy signals, a demodulator 110 to produce soft or hard symbol estimates, a channel decoder 111 to reconstruct information from the soft/hard symbol estimates, and a source decoder 112 to decompress the information, which is further delivered to an information destination 113. It will readily be appreciated that the demodulator 110, the channel decoder 111, and the source decoder 112 perform the inverse of the operations performed by the modulator 107, the channel encoder 106, and the source encoder 105, respectively, subject to limitations imposed by noise effects and other non-idealities in the system. A properly designed and operated communication system should deliver data from the information source 104 to the information destination 113 in unaltered form with high probability.
It is to be understood that the present principles may be applied to various transmission systems and media. For example, the transmission medium can be represented by space or the atmosphere, in which case the communication system in Fig. 1 is a wireless communication
system. If the transmission medium is represented by a wire, e.g. a fiber-optic cable, then the communication system in Fig. 1 is a wired communication system. Hard disk drives, solid-state disks, optical disk drives, and magnetic tapes are examples of storage media employed in storage systems. Embodiments of the present disclosure can be implemented for both wireless and wired transmission systems, as well as for storage systems. The present disclosure focuses on wireless communication systems; in such a case the transmission system 101 and the reception system 103 include antenna(s) coupled respectively to the transmitter 108 and the receiver 109.
Fig. 2 shows an example of a wireless communication system to which embodiments of the present disclosure can be applied. A network device 201 communicates with one or more terminal devices 206. Examples of the terminal devices 206 are a mobile device 202, an automated car 203, a drone 204, and an industrial robot 205, which are related to URLLC application scenarios. The network device 201 may be a base station or another device with similar communication functions. The network device 201 and the terminal devices 206 in Fig. 2 can include both the transmission system 101 and the reception system 103 from Fig. 1. The devices 201-206 are transmission systems when sending signals and reception systems when receiving signals. As transmission systems, the devices 201-206 include channel encoders and perform encoding as described in the present disclosure.
Fig. 3 is a flowchart illustrating a channel encoder, which is a component of the communication system from Fig. 1. The channel encoder 307 includes components related to offline code design and an online encoding process. The online encoding process includes splitting input data into information vectors 304, encoding of the information vectors with the precoding matrix 305, and further encoding with the polarization transformation matrix 306 according to embodiments of the present disclosure. Encoding with the precoding matrix 305 is performed in accordance with system settings provided by a code selector 303. The code selector 303 is configured to choose code parameters and extract code specification from precoding matrix storage 302, which contains precoding matrices described in embodiments of the present disclosure. The precoding matrices are generated in advance offline by a precoding matrix constructor 301, operating according to the SCML optimization algorithm described in embodiments of the present disclosure.
Polar Codes
An (n = 2^m, k) pure polar code with a set of frozen bits T ⊂ U, |T| = n - k, is a binary linear block code generated by the k rows M_i with indices i ∈ U \ T, where M = B · A^⊗m is the n × n polarization transformation matrix, A = [[1, 0], [1, 1]] is the 2 × 2 kernel of polarization, ⊗m denotes the m-fold Kronecker product of a matrix with itself, B is a bit-reversal permutation matrix, and U = {0, ..., n - 1}.
Alternatively, an (n = 2^m, k) pure polar code can be specified by the set of information bits 𝒜 = U \ T, |𝒜| = k. Any codeword c = (c_0, ..., c_{n-1}) of a polar code can be represented as c = u · M, where u = (u_0, ..., u_{n-1}) is an input vector consisting of k information bits and n - k frozen bits fixed to predefined values known to the decoder, the positions of the frozen bits being specified by T. For simplicity, it is assumed that the frozen bits of a pure polar code are zero. The minimum distance of a pure polar code is given by the lowest row weight among the rows of the matrix M with indices from the information set 𝒜, since the minimum distance is lower bounded by the minimum distance of a Reed-Muller code, being a supercode for the polar code. More precisely, the minimum distance of a pure polar code is given by
d = min{ 2^wt(i) : i ∈ 𝒜 },
where wt(i) is the Hamming weight of the binary representation of the integer i. Thus, the minimum distance is equal to a power of two and is determined by the set T, i.e. by the indices of the frozen bits.
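The row-weight fact underlying this minimum-distance formula is easy to check numerically. The sketch below is an illustration, not code from the disclosure: it verifies that row i of the Kronecker power of the kernel has weight 2^wt(i), and evaluates the resulting minimum distance for a small information set.

```python
import numpy as np

def polar_min_distance(m: int, info_set) -> int:
    """Minimum distance of a pure polar code: the minimum of 2^wt(i) over
    the information set, wt(i) being the Hamming weight of index i."""
    return min(2 ** bin(i).count("1") for i in info_set)

# Cross-check for m = 3: the weight of row i of A^(x)3 equals 2^wt(i),
# which is exactly what the closed-form minimum-distance expression uses.
A = np.array([[1, 0], [1, 1]], dtype=np.uint8)
M = np.kron(np.kron(A, A), A)
row_weights = [int(M[i].sum()) for i in range(8)]
```

For example, the information set {3, 5, 6, 7} of length 8 (a first-order Reed-Muller code) gives minimum distance 4.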
An (n, k) pure polar code has a set of frozen bits T given by the indices of the n - k "worst" bit-subchannels. More precisely, they are chosen as the n - k bit-subchannels with the highest Bhattacharyya bound on the probability of a decision error at the i-th input bit. Alternatively, bit-subchannels are commonly ranked according to their probabilities of error, which is a more straightforward criterion. Polar codes constructed according to this rule are referred to as the original polar codes. The original polar codes are designed to minimize an upper bound on the SC decoding error probability.
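For the binary erasure channel, the Bhattacharyya parameters of the bit-subchannels admit a closed-form recursion, which gives a compact illustration of the "freeze the worst subchannels" rule. This is a sketch only; for AWGN channels one would use density evolution or its Gaussian approximation instead, as discussed below.

```python
def bec_bhattacharyya(m: int, eps: float) -> list:
    """Bhattacharyya parameters of the n = 2^m bit-subchannels of the
    polarization transform over a BEC with erasure probability eps.
    One polarization step maps Z to the pair (2Z - Z^2, Z^2)."""
    z = [eps]
    for _ in range(m):
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

def original_polar_frozen_set(m: int, k: int, eps: float) -> set:
    """Freeze the n - k subchannels with the largest Bhattacharyya bound."""
    z = bec_bhattacharyya(m, eps)
    worst_first = sorted(range(len(z)), key=lambda i: z[i], reverse=True)
    return set(worst_first[: len(z) - k])
```

For m = 2 and eps = 0.5 the parameters are (0.9375, 0.5625, 0.4375, 0.0625), so a (4, 2) original polar code freezes the first two input positions.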
In the general case of binary-input memoryless output-symmetric channels, estimating the quality of bit-subchannels requires employing density evolution. An efficient implementation of density evolution was developed in a known art. A low-complexity alternative based on a Gaussian approximation of density evolution was developed for the AWGN channel in a known art.
SC and SCL Decoding
Consider a binary-input memoryless output-symmetric channel with output alphabet Y and transition probabilities P(y|0) and P(y|1), y ∈ Y. The SC decoding of a pure polar code of length n = 2^m is performed in n phases. At the i-th phase of the SC algorithm, an estimate û_i of the input bit u_i ∈ {0, 1} is computed, where i = 0, ..., n - 1. Since the frozen bits u_i have predefined fixed values known to the decoder, their estimates û_i are set to these values for i ∈ T. The estimate û_i of an information bit u_i is defined by the log-likelihood ratio (LLR)
L_i = ln( P(u_i = 0 | y, û_0, ..., û_{i-1}) / P(u_i = 1 | y, û_0, ..., û_{i-1}) ),
where y = (y_0, ..., y_{n-1}) is the received noisy vector. If L_i > 0, then the estimate û_i = 0; otherwise û_i = 1, where i ∈ 𝒜.
The SC decoding error probability can be estimated as
P_e = 1 - ∏_{i ∈ 𝒜} P_i,
where P_i is the probability of correct decoding for the i-th bit-subchannel given that a genie has revealed the true values of the previous bits u_j, 0 ≤ j < i. Here correct decoding corresponds to the event û_i = u_i. The error-correction capability can be improved using SCL decoding, which constructs a list of vectors (u_0, ..., u_{n-1}) instead of a single one. Observe that SCL decoding with list size L = 1 reduces to SC decoding. The performance of SCL decoding with a sufficiently large list size approaches that of maximum-likelihood (ML) decoding. Similar performance, with much lower complexity compared to SCL, can be obtained by employing the sequential decoding algorithm.
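The genie-aided SC error estimate described above can be sketched directly. This is illustrative code; in practice the per-subchannel error probabilities would come from density evolution, its Gaussian approximation, or, for the BEC, the Bhattacharyya recursion.

```python
def sc_error_probability(p_err, info_set) -> float:
    """Genie-aided estimate of the SC decoding error probability: decoding
    succeeds only if every information bit is decided correctly given
    correct previous bits, so P_e = 1 - prod_{i in info_set} (1 - p_err[i]),
    where p_err[i] is the error probability of the i-th bit-subchannel."""
    p_all_correct = 1.0
    for i in info_set:
        p_all_correct *= 1.0 - p_err[i]
    return 1.0 - p_all_correct
```

Frozen positions do not contribute, since their values are known to the decoder.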
PolarLike Codes: Dynamic Frozen Bits
It is possible to generalize the polar code construction by setting frozen bits to some linear functions of the information bits instead of fixed values. This does not affect the behaviour of the SC decoder or the performance of the bit-subchannels of the polarization transformation.
Since the n × n matrix M is invertible, any (n = 2^m, k, d) binary linear code C with check matrix H can be obtained as an appropriate subspace of the row space of the matrix M. Constraints on the input vector u may be specified by the equation u · M · H^T = 0. By applying Gaussian elimination, one can construct a freezing constraint matrix V = Q · H · M^T such that the values p(i) = max{ t : V_{i,t} = 1 } are distinct for i = 0, ..., n - k - 1, where Q is an (n - k) × (n - k) invertible matrix. That is, the i-th row of the matrix V ends in the distinct column p(i). The set of frozen bits is given by T = { p(i) | 0 ≤ i < n - k }. SC/SCL decoding is enabled by computing the values of the frozen bits as
u_{p(i)} = Σ_{j < p(i)} V_{i,j} · u_j (mod 2).
A frozen bit u_{p(i)} equal to a linear combination of preceding input bits is referred to as a dynamic frozen bit.
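The Gaussian elimination step described above can be sketched minimally as follows. This is an illustration only (it assumes H has full row rank): the matrix is reduced so that each row has its last 1 in a distinct column, and those columns are the frozen positions p(i).

```python
import numpy as np

def freezing_constraint_matrix(H: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Reduce W = H * M^T over GF(2) so that every row has its last 1 in a
    distinct column; those columns are the frozen positions p(i)."""
    V = (H @ M.T % 2).astype(np.uint8)
    remaining = list(range(V.shape[0]))
    while remaining:
        # take the row whose last 1 is right-most among the remaining rows
        last = {r: int(np.flatnonzero(V[r])[-1]) for r in remaining}
        r = max(remaining, key=lambda x: last[x])
        for s in remaining:
            if s != r and V[s][last[r]]:
                V[s] ^= V[r]  # clear that column from the other rows
        remaining.remove(r)
    return V

# Toy check: the overall parity check H = [1 1 1 1] on a length-4 code
# (bit-reversal omitted) reduces to freezing u_0, i.e. a single row of V
# ending in column 0.
A = np.array([[1, 0], [1, 1]], dtype=np.uint8)
M = np.kron(A, A)
V = freezing_constraint_matrix(np.array([[1, 1, 1, 1]], dtype=np.uint8), M)
```

Each frozen value then follows from the row of V: u_{p(i)} is the GF(2) sum of V_{i,j} · u_j over j < p(i).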
A construction of polar codes with dynamic frozen bits based on narrow-sense primitive eBCH codes was introduced in a known art. According to it, a code of length n = 2^m and dimension k is constructed as follows. First, an eBCH code with parameters (n, K, d) is selected such that the minimum distance d is sufficiently high and the dimension K ≥ k. This eBCH code is used to derive freezing constraints for n - K dynamic frozen bits. Then K - k additional freezing constraints are imposed to obtain an (n, k, ≥ d) code, with k ≤ K, called a polar subcode of the eBCH code. Here the notation (n, k, ≥ d) is used to denote a code of length n, dimension k, and minimum distance at least d.
Polar subcodes of eBCH codes outperform pure polar codes under SCL decoding with a sufficiently large list size. The SC decoding error probability for a code obtained in this way is not less than that of the original polar code constructed for the same channel.
Minimum Distance
It should be understood that the minimum distance denotes the minimum Hamming distance throughout this disclosure. The minimum distance of pure polar codes scales at most roughly as √n. Since their minimum distance is relatively low, the original polar codes are unable to compete with eBCH codes and Reed-Muller codes under ML decoding. The performance of codes in the high signal-to-noise ratio (SNR) region is mostly determined by the minimum distance. A Reed-Muller code can be represented as a pure polar code designed to maximize the minimum distance. Thus, it is possible to construct pure polar codes with a higher minimum distance than that of the original polar codes, i.e. the ML decoding error probability may be reduced by an appropriate selection of the frozen bit-subchannels. So, there are pure polar codes able to outperform the original polar codes under SCL decoding. However, the minimum distance of a pure polar code, as well as of a Reed-Muller code, may only be equal to a power of two. Moreover, even if there exists an (n = 2^m, k, d = 2^w) binary linear block code, there is no guarantee of the existence of a pure polar code with the same parameters; e.g. there is a (64, 24, 16) eBCH code, but the highest dimension of a pure polar code of length 64 with minimum distance 16 is equal to 22. So, polar codes with dynamic frozen bits are able to outperform pure polar codes under SCL decoding.
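The dimension 22 cited above for length 64 and minimum distance 16 can be reproduced by counting the indices whose polarization-transform row weight 2^wt(i) is at least the target distance (a sketch, not code from the disclosure):

```python
def max_pure_polar_dimension(m: int, d: int) -> int:
    """Largest dimension of a pure polar code of length n = 2^m with minimum
    distance >= d: only rows with weight 2^wt(i) >= d may carry information."""
    return sum(1 for i in range(2 ** m) if 2 ** bin(i).count("1") >= d)
```

For m = 6 and d = 16 this counts the indices with wt(i) ≥ 4, of which there are C(6,4) + C(6,5) + C(6,6) = 22, matching the text.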
It should be understood that in the case of polar code design, the price for performance improvement under ML decoding is performance degradation under SC decoding, since the increase in the minimum distance is achieved by changing the set of frozen bits of the original polar code, which is optimal for the SC decoder. A modification of the set of frozen bits of the original polar code leads to a number of unreliable bit-subchannels being unfrozen.
The present disclosure focuses on the performance of polar codes under SCL decoding with various list sizes.
According to an embodiment of the present disclosure, a problem of optimization of precoded polar codes for SCL decoding is considered. To reduce the search space, precoding matrices with a specific structure are provided. A code design that minimizes a function of both SC and ML decoding error probabilities for short-length codes is introduced afterwards.
Precoded Polar Codes
A method of representing any binary linear block code of length n = 2^m as a polar code with dynamic frozen symbols is described above. According to that method, a polar code with dynamic frozen symbols is specified by a freezing constraint matrix, which is derived from the check matrix. This code representation is used to perform decoding through SCL and its variations.
In general, one can also represent any linear block code C of length n = 2^m as a polar code precoded by a k × n matrix Z = G · M^{-1}, where G is a generator matrix of code C. Such a precoding matrix Z always exists, since the polarization transformation matrix M is invertible. Note that one can perform linear operations over rows of the precoding matrix Z without affecting code C, i.e. use any precoding matrix T · Z, where T is a k × k invertible matrix. Such codes are referred to as precoded polar codes, since their encoding process includes precoding followed by polar encoding. It is desirable to identify precoding matrices which minimize the SCL decoding error probability. There is no analytical expression for the SCL decoding error probability, so a search for (n, k, ≥ d) precoded polar codes with both low SC and low ML decoding error probabilities is proposed. The SC decoding error probability P_e^{(SC)} can be estimated as described above.
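The relation Z = G · M^{-1} can be illustrated on a toy length-8 example. Over GF(2) the Arikan transform F^{(x)m} is its own inverse, so M^{-1} = M. The following sketch uses plain Python; the (8, 4) generator matrix G is a hypothetical example chosen by the editor, not a code from the present disclosure.

```python
def kron(A, B):
    """Kronecker product of 0/1 matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul2(A, B):
    """Matrix product over GF(2)."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def polar_transform_matrix(m):
    """M = F^{(x)m}, F = [[1,0],[1,1]] (Arikan kernel, bit reversal omitted)."""
    M = [[1]]
    for _ in range(m):
        M = kron(M, [[1, 0], [1, 1]])
    return M

m = 3
n = 1 << m                          # n = 2^m = 8
M = polar_transform_matrix(m)

# Over GF(2) the transform is an involution: M * M = I, hence M^{-1} = M.
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert matmul2(M, M) == identity

# Hypothetical (8, 4) generator matrix G of some linear block code C.
G = [[1, 1, 1, 1, 0, 0, 0, 0],
     [1, 1, 0, 0, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1, 1, 1, 1]]

Z = matmul2(G, M)                   # precoding matrix Z = G * M^{-1} = G * M
assert matmul2(Z, M) == G           # polar-encoding Z recovers G
```

The self-inverse property holds because F^2 = I over GF(2) and (A ⊗ B)(A ⊗ B) = A^2 ⊗ B^2.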
An estimate of the ML decoding error probability P_e^{(ML)} can be obtained through simulations or as an upper/lower bound value. There are a number of methods to estimate the ML decoding error probability. Most of the bounds on the ML decoding error probability for linear block codes are functions of their weight enumerator polynomial (spectrum), where either all components of the code spectrum are employed or only a part of them (partial spectrum). According to one implementation, the tangential-sphere upper bound is employed, which is one of the tightest bounds established for binary linear block codes and is also based on the code spectrum. In most cases, codes are designed for sufficiently high SNRs. The focus is code optimization for target P_e^{(ML)} in the range 10^{-4} to 10^{-2}, which corresponds to a relatively high SNR. In the high-SNR regime, P_e^{(ML)} depends mostly on the code minimum distance and to a lesser degree on the precise values of the non-zero components of the code spectrum.
Since the number of (n, k, ≥ d) precoded polar codes is equal to the total number of (n, k, ≥ d) codes, it is computationally infeasible to select an (n, k, ≥ d) precoded polar code with the lowest SC decoding error probability P_e^{(SC)} by performing a full search and computing P_e^{(SC)} for each code. However, one can at first select a set of frozen bits F. Recall that P_e^{(SC)} depends only on the set of frozen bits and the channel properties. But there is no guarantee that a code with minimum distance at least d exists for the preselected F. This is in contrast to designing a pure polar code, where exactly one code exists for any given set F, length n = 2^m and SNR. Recall that in the case of near-ML decoding, the low minimum distance of pure polar codes leads to inferior performance compared to extended BCH codes, quadratic residue codes and other codes with a high minimum distance.
In the present disclosure, it is assumed that the SNR is fixed to some specific value and that all codes are optimized for that SNR.
Polar Codes as Generalized Concatenated Codes
In some instances, any pure polar code can be represented as a generalized concatenated code (GCC), since the generator matrix G of an (n = 2^m, k) pure polar code can be represented as G = Z · (M^{(m-l)} ⊗ M^{(l)}), where M^{(j)} denotes the 2^j × 2^j polarization transformation matrix and Z is a full-rank k × n precoding matrix with at most one non-zero element in each column. Thus, the overall polar code of length 2^m is decomposed into a number of outer codes of length 2^{m-l} and inner codes of length 2^l, where the outer and inner codes are also pure polar codes. Note that if l = 1, then the above expression corresponds to a Plotkin sum of two linear codes C_0 and C_1, resulting in a linear code C given by C = {(c_0 + c_1 | c_1) : c_0 ∈ C_0, c_1 ∈ C_1}.
If the parameters of codes C_0 and C_1 are (n/2, k_0, d_0) and (n/2, k_1, d_1), respectively, then the resulting code C has parameters (n, k = k_0 + k_1, d = min(d_0, 2 · d_1)).
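The parameter relation can be checked directly on a small example. The sketch below (plain Python; the component codes are the editor's illustrative choices, not taken from the disclosure) forms the Plotkin sum of a (4, 1, 4) repetition code and a (4, 3, 2) single-parity-check code, yielding an (8, 4) code with d = min(d_0, 2·d_1) = min(4, 4) = 4.

```python
import itertools

def plotkin_sum(C0, C1):
    """C = {(c0 + c1 | c1) : c0 in C0, c1 in C1}, addition over GF(2)."""
    return {tuple(a ^ b for a, b in zip(c0, c1)) + c1
            for c0 in C0 for c1 in C1}

def min_distance(code):
    """Minimum Hamming weight over the non-zero codewords of a linear code."""
    return min(sum(c) for c in code if any(c))

# C0: (4, 1, 4) repetition code; C1: (4, 3, 2) single-parity-check code.
C0 = {(0, 0, 0, 0), (1, 1, 1, 1)}
C1 = {m + (sum(m) % 2,) for m in itertools.product((0, 1), repeat=3)}

C = plotkin_sum(C0, C1)
assert len(C) == 16                        # k = k0 + k1 = 1 + 3 = 4
assert min_distance(C) == min(min_distance(C0), 2 * min_distance(C1))
```

The resulting code here is the (8, 4, 4) extended Hamming code, i.e. the Reed-Muller code RM(1, 3).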
The code performance could be improved by replacing outer polar codes with eBCH codes, Reed-Solomon codes, convolutional codes, LDPC codes or any other codes with high error correction capability. The cost of better performance is the necessity to use appropriate decoders for the outer codes, such as an ordered statistics decoder or a box-and-match decoder in the case of eBCH codes, which leads to a substantial increase in the overall computational complexity of decoding.
SC Aimed Precoding Matrix
Consideration is given to designing a precoding matrix for a polar code with dynamic frozen bits, assuming that a set of frozen bits F is already selected. The presence of a fixed set F restricts the possible dependencies between input bits of the polarizing transformation. Dynamic frozen bits are equal to linear combinations of bits with lower indices only. Instead of designing a code by optimizing dynamic freezing constraints, rows of the precoding matrix are optimized.
Generator matrix G is represented as a product of the k × n precoding matrix Z and the n × n polarization transformation matrix M, that is, G = Z · M. By multiplying both sides by the transposed check matrix H, one obtains Z · M · H^T = 0. Recall that the dynamic freezing constraint matrix V = Q · H · M^T, where Q is an (n − k) × (n − k) invertible matrix, so Z · V^T = 0. Thus, precoding matrix Z provides an alternative definition of a polar code with dynamic frozen bits. A polar code with dynamic frozen bits may be specified by set F together with either freezing constraint matrix V or precoding matrix Z. The k × k identity matrix is denoted by I^{(k×k)}.
It can be shown that, given dynamic freezing constraint matrix V, there exists a precoding matrix Z whose columns with indices from the information set A form an identity matrix, i.e. Z_{·,A} = I^{(k×k)}, and whose q-th row begins at a position s(q) such that |{t : t < s(q), t ∈ A}| = q. Here, "the q-th row begins at position s" means that Z_{q,s} = 1 and Z_{q,s'} = 0 for all s' < s.
Precoding matrices Z satisfying this condition are referred to as SC aimed precoding matrices. It follows from this condition that it is sufficient to perform the search only over SC aimed precoding matrices Z. The number of SC aimed precoding matrices Z depends on the selected set F. This can be illustrated by two extreme cases. In the case of F = {0, ..., n − k − 1}, there is only one SC aimed precoding matrix, namely Z = [0^{(k×(n−k))} I^{(k×k)}], where 0^{(k×(n−k))} denotes a k × (n − k) zero matrix; an SC/SCL decoder processes at first all frozen bits and only then starts to make decisions on the values of the information bits. The other extreme case is F = {k, ..., n − 1}. The number of precoding matrices in this case is equal to 2^{k×(n−k)} and Z = [I^{(k×k)} D], where D is an arbitrary k × (n − k) binary matrix. In general, bit subchannels with higher indices have better reliabilities, so frozen sets with low P_e^{(SC)} are closer to the first extreme case, and the number of SC aimed precoding matrices is moderate for code lengths 32 and 64, meaning that it is possible to perform a full search over these matrices.
Fixed and Variable Rows of Precoding Matrix
The structure of an SC aimed precoding matrix Z is such that the i-th row of generator matrix G = Z · M is given by a linear combination of a single information row of M and some frozen rows of M with higher indices. Here, M_{I,J} denotes the submatrix of M given by rows with indices i ∈ I and columns with indices j ∈ J. Rows of M with indices in the information set A and in the frozen set F are referred to as information rows and frozen rows, respectively. Thus, all information rows of M with indices higher than max(F) are included in generator matrix G in unaltered form. These rows will be referred to as fixed rows, since any SC aimed precoding matrix Z corresponding to F must include them. The number of fixed rows is equal to n − 1 − max(F). If the weight of at least one fixed row is lower than d, then it is impossible to construct a matrix G such that the code minimum distance is at least d, and the corresponding frozen set F should be dismissed. Recall that the minimum distance of a pure polar code is equal to the smallest row weight of its generator submatrix M_{A,·}. The remaining non-fixed rows of the generator matrix G are referred to as variable rows.
Optimization Criteria
The precoded polar code design problem is a problem of constrained multi-objective optimization, where the constraint is given by the desired minimum distance d, the first objective function is P_e^{(SC)} and the second objective function is P_e^{(ML)}. Observe that construction of the whole Pareto-optimal set is infeasible due to the large number of codes. The use of the weighted sum method to produce a single objective by weighting P_e^{(SC)} and P_e^{(ML)} is cumbersome, since the relative importance of P_e^{(SC)} and P_e^{(ML)} varies depending on the list size of the SCL decoder. The present disclosure focuses on SCL decoding with moderate list sizes and, therefore, the highest priority is given to the first objective P_e^{(SC)}.
The code design algorithm finds a code C of length n and dimension k satisfying the following criteria:
1) its minimum distance is at least d;
2) its frozen set F* ∈ 𝔽 satisfies the condition P_e^{(SC)}(F*) ≤ P_e^{(SC)}(F) for all frozen sets F ∈ 𝔽 corresponding to (n, k, ≥ d) codes with (optional) subcode C^− and (optional) supercode C^+;
3) (optional) its SC aimed precoding matrix Z* satisfies the condition P_e^{(ML)}(Z* · M) ≤ P_e^{(ML)}(G) for generator matrices G of all (n, k, ≥ d) codes with frozen sets F such that P_e^{(SC)}(F) = P_e^{(SC)}(F*);
4) (optional) its SC aimed precoding matrix Z* satisfies P_e^{(ML)}(Z* · M) ≤ P_e^{(ML)}([Z; z] · M) for all rows z such that [Z; z] is an SC aimed precoding matrix specifying an (n, k, ≥ d) code with the same frozen set as Z*, where Z denotes the matrix of previously selected rows;
5) (optional) code C includes a preselected subcode C^−;
6) (optional) code C is included in a preselected supercode C^+.
Here, 𝔽 is a set of preselected frozen sets F ⊆ {0, ..., n − 1}, and P_e^{(ML)}(G) is an estimate of the maximum-likelihood decoding error probability computed for a linear block code with generator matrix G. The semicolon in [Z; z] stands for the vertical concatenation of two matrices.
P_e^{(SC)}(F) is calculated as described above. As mentioned above, the tangential-sphere bound is used to obtain the estimate P_e^{(ML)} and numerical results for codes of length 32 and 64. As provided below, two optimization modes are considered:
• global, codes satisfy criteria 1, 2, 3 and automatically 4; and
• local, codes satisfy criteria 1, 2, 4.
In contrast to the local mode, the global optimization mode guarantees compliance with the 3rd Criterion, which stands for finding the global minimum of P_e^{(ML)} over all (n, k, ≥ d) codes with the lowest possible P_e^{(SC)}. Criteria 3 to 6 are marked as "optional", since a criterion may not be satisfied by the proposed code design algorithm. In the case of the local mode, the 3rd Criterion is relaxed to reduce the computational complexity, resulting in the 4th Criterion. The proposed code design approach in the local mode is an instance of a greedy algorithm.
It should be recognized that the proposed approach in both modes ensures the global minimum of P_e^{(SC)} under the constraint on the minimum distance d, while the words "local" and "global" characterize the minimization of P_e^{(ML)}. According to the simulation results, the global mode is suitable for all codes of length 32 and for codes of length 64 with a low number of variable rows. Therefore, the local mode is employed for the remaining codes of length 64.
Optional Criteria 5 and 6 are used to reduce the computational complexity. Criterion 6 ensures fast computation of the minimum distance, which is required to meet Criterion 1. Criterion 5 ensures a reduction of the search space size, which is the space of all row candidates for the generator matrix. In case of precoded polar codes, the search for rows of the generator matrix is equivalent to the search for rows of precoding matrix.
The complexity of the proposed code design approach is dominated by the minimum distance computation. In the case of codes of length up to 64, an exhaustive search over all codewords of a code C or its dual code may be performed to find the minimum distance and spectrum of code C. In the case of codes of length higher than 64, codes are designed in accordance with Criterion 6, where a preselected supercode C^+ is used together with a list of all its codewords of weight up to d − 1. The presence of this list ensures low-complexity computation of the minimum distance d' and the partial spectrum of code C if C ⊂ C^+ and d' < d.
According to one implementation, each of C^− and C^+ is obtained using the proposed code design algorithm, or is given by the Plotkin sum of two codes of length n/2, where the two codes of length n/2 are constructed using the proposed code design algorithm and a permutation is optionally applied to codewords of the resulting code.
Example Algorithm
An example algorithm for the design of short-length precoded polar codes that ensures a specified minimum distance and the lowest possible SC decoding error probability P_e^{(SC)} is provided. This is a first algorithm that is able to minimize P_e^{(SC)} for given code parameters (n, k, ≥ d).
Variable rows of the k × n precoding matrix Z are constructed one by one. A row Z_{i,·} specifies the corresponding row of generator matrix G, 0 ≤ i < k. Among all row candidates for parameters (n, k, ≥ d), eligible candidates are selected by checking the minimum distance of the code generated by the already constructed rows of G.
The number of candidates for row Z_{i,·} is equal to N(i) = 2^{|{t : t ∈ F, t > s(i)}|}. It can be seen that N(i) is a non-increasing function of i, since s(i) increases with i. Rows of Z are therefore generated in decreasing order of their indices i, 0 ≤ i < k, to reduce the total number of row candidates to be considered.
A partially constructed matrix Z is referred to as a path. A score is assigned to each path, and pairs (path, score) are stored in a priority queue. Observe that it is sufficient to store only the variable rows of paths constructed for the same frozen set F, since the fixed rows are the same for these paths. At each iteration of the proposed algorithm, a path with the best score is selected from the queue, and a search for the next variable row is performed. The priority queue contains paths of various lengths. The score is a function of the SC decoding error probability P_e^{(SC)}, the estimate of the ML decoding error probability P_e^{(ML)} and the path length, equal to the number of already selected rows, i.e. Score(path) = f(P_e^{(SC)}, P_e^{(ML)}, RowNum(path)). As mentioned above, the tangential-sphere bound (TSB) is employed to obtain the estimate P_e^{(ML)} for codes of length 32 and 64. The lowest value of P_e^{(SC)} is guaranteed by employing such score functions. During simulations, score functions for global and local optimization of P_e^{(ML)} were considered. Here, global optimization means finding the global minimum of P_e^{(ML)} among all (n, k, ≥ d) codes with the lowest possible P_e^{(SC)}. In the case of global optimization, Score(path) is used independently of RowNum(path). It is suitable for all codes of length 32 and some codes of length 64. In the case of length 64 and a large number of variable rows, local optimization is used, where priority is given to longer paths, meaning that the best path in the priority queue is the path with the lowest value of P_e^{(ML)} among paths with the highest value of RowNum(path) and the lowest possible P_e^{(SC)}. In the case of local optimization, the proposed approach is an instance of a greedy algorithm. It should be recognized that in both considered cases, the proposed approach ensures the global minimum of P_e^{(SC)}, while "local optimization" and "global optimization" relate only to P_e^{(ML)}.
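The (path, score) priority-queue mechanics can be sketched generically (plain Python `heapq`; the toy score and extension functions below are editorial stand-ins, not the actual P_e estimates of Algorithm 1). Scores are tuples compared lexicographically, matching the ordered pair/triplet scores described later.

```python
import heapq

def best_first_search(initial_paths, score, extend, is_complete):
    """Pop the best-scored path; stop if it is complete, else push extensions.
    Returns None if the queue empties (e.g. a barren frozen set)."""
    queue = [(score(p), p) for p in initial_paths]
    heapq.heapify(queue)
    while queue:
        _, path = heapq.heappop(queue)
        if is_complete(path):
            return path
        for child in extend(path):
            heapq.heappush(queue, (score(child), child))
    return None

# Toy usage: grow a bit tuple to length 3 while minimizing the number of ones
# (a stand-in for the error-probability estimates); the second score component
# gives priority to longer paths, as in the local optimization mode.
result = best_first_search(
    initial_paths=[()],
    score=lambda p: (sum(p), -len(p)),
    extend=lambda p: [p + (0,), p + (1,)],
    is_complete=lambda p: len(p) == 3,
)
assert result == (0, 0, 0)
```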
A disclosed optimization algorithm is summarized by Algorithm 1, shown below. It computes a k × n precoding matrix Z to specify a precoded polar code of length n, dimension k and minimum distance at least d. A set 𝔽 of preselected frozen sets is an additional input argument, which reduces the search space. A code being generated is allowed to have only frozen sets belonging to 𝔽. If there is at least one fruitful F ∈ 𝔽, then the SCML optimization algorithm will return a precoding matrix Z; otherwise, an empty queue will be observed at some iteration. A frozen set F is referred to as fruitful for code length n and minimum distance d if there exists an (n, k, ≥ d) code corresponding to F, and as barren otherwise. It is sufficient to take a set 𝔽 consisting of all sets of frozen bits with sufficiently low P_e^{(SC)}.
Lines 3 - 7 describe the initialization of fixed rows of the precoding matrix, while lines 8 - 23 describe the search for variable rows. Fixed rows of precoding matrix Z are uniquely defined by a set of frozen bits. More precisely, the set consisting of the highest indices of frozen bits is used to identify all possible fixed parts of precoding matrices for frozen sets from 𝔽 on line 5. Only barren frozen sets F may be affected by the intersection performed on line 4. Observe that the corresponding rows of the generator matrix are given by the fixed rows of M. Queue Q is initialized by paths given by partially constructed SC aimed precoding matrices Z together with corresponding scores. As described above, Score() is a function of the SC decoding error probability P_e^{(SC)}, the estimate of the ML decoding error probability P_e^{(ML)} and the number of selected rows. P_e^{(SC)} is computed for an AWGN channel by using a method based on a Gaussian approximation.
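The disclosure evaluates P_e^{(SC)} via Gaussian-approximation density evolution. As a lighter-weight editorial illustration only (not the disclosed method), the sketch below uses the Bhattacharyya-parameter recursion as a stand-in to rank bit subchannels and combine their error estimates into P_e^{(SC)} = 1 − Π_{i∈A}(1 − p_i).

```python
import math

def subchannel_error_probs(m, ebn0_db, rate):
    """Rough per-subchannel error estimates via the Bhattacharyya recursion
    (z -> 2z - z^2 for the degraded branch, z -> z^2 for the upgraded one);
    an editorial stand-in for Gaussian-approximation density evolution."""
    ebn0 = 10 ** (ebn0_db / 10)
    z = [math.exp(-rate * ebn0)]          # BI-AWGN Bhattacharyya parameter
    for _ in range(m):
        z = [2 * zi - zi * zi for zi in z] + [zi * zi for zi in z]
    return [zi / 2 for zi in z]           # p_i <= Z_i / 2

def sc_error_prob(info_set, p):
    """P_e^(SC) = 1 - prod_{i in A} (1 - p_i)."""
    prob = 1.0
    for i in info_set:
        prob *= 1.0 - p[i]
    return 1.0 - prob

m, k = 5, 16                              # illustrative (32, 16) code at 4.75 dB
p = subchannel_error_probs(m, 4.75, k / (1 << m))
info_set = sorted(range(1 << m), key=lambda i: p[i])[:k]   # k most reliable
# Freezing the least reliable subchannels beats freezing the most reliable ones.
assert sc_error_prob(info_set, p) < sc_error_prob(list(range(k)), p)
```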
At each iteration of the while loop, function GetBest returns the path with the lowest score value, that is, the lowest error probability estimate. The search for new variable rows continues until a fully initialized path is extracted from the queue on line 9. A path Z is fully initialized if it consists of k rows (line 10), where k is the required code dimension. In this case, the path represents a precoding matrix Z (line 11).
Possible extensions of path Z are considered on lines 13 - 21; these are the possible ways to increase the code dimension by one. Function GetFrozenSet returns the frozen set F corresponding to the code specified by SC aimed precoding matrix Z. According to the structure of an SC aimed precoding matrix, the 1s of a variable row correspond to a single information row of M and a number of frozen rows with higher indices. Here, B is a set of the corresponding information row indices. Set B consists of the highest values among indices of not yet utilized information rows of matrix M. Frozen set F and each b ∈ B specify a number of possible row candidates z for the precoding matrix. It can be seen that the index of the information row decreases with the increasing number of rows of precoding matrix Z.
Possible row candidates z for the precoding matrix are considered on lines 17 - 20, where the set of row candidates for F and each b is given by all binary vectors with a fixed zero subvector and a 1 at position b. The number of such candidates is equal to 2^{|{t : t ∈ F, t > b}|}.
Function MinDistance on line 18 returns the code minimum distance, where the code is specified by generator matrix Z · M and, optionally, a supercode with a list of its low-weight codewords. If the code minimum distance is at least d, then candidate z specifies an eligible row and the corresponding precoding matrix is stored in the queue on line 19.
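For short lengths, the minimum distance check can be done by exhaustive codeword enumeration, as the text notes for lengths up to 64. A minimal editorial sketch (the supercode shortcut is omitted here):

```python
import itertools

def min_distance_exhaustive(G):
    """Minimum distance of the binary code generated by G (rows over GF(2)),
    found by enumerating all 2^k - 1 non-zero codewords; feasible for short n."""
    k, n = len(G), len(G[0])
    best = n
    for msg in itertools.product((0, 1), repeat=k):
        if not any(msg):
            continue
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [a ^ b for a, b in zip(cw, row)]
        best = min(best, sum(cw))
    return best

# (8, 4, 4) extended Hamming / RM(1, 3) generator matrix (one standard choice).
G = [[1, 1, 1, 1, 0, 0, 0, 0],
     [1, 1, 0, 0, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1, 1, 1, 1]]
assert min_distance_exhaustive(G) == 4
```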
The obtained precoding matrix Z may be further used to calculate the generator matrix G = Z · M. For decoding purposes, the code is represented as a polar code with dynamic frozen bits.
If the set of frozen sets 𝔽 is given by a single F, then the SCML optimization algorithm simplifies, since each of the sets G and B also consists of a single element. If the input frozen set is barren, then an empty queue, i.e. Q = ∅, would be observed at some iteration. One can arrange frozen sets F in ascending order of P_e^{(SC)} and call the SCML optimization algorithm individually for each F until a precoding matrix Z is obtained on line 11, meaning that a fruitful frozen set has been discovered.
The Score
The score computation is summarized by Algorithm 2. Function Score(Z, 𝔽) receives a partially constructed precoding matrix Z and a set of frozen sets 𝔽, where Z specifies a subcode of the desired (n, k, ≥ d) precoded polar code. A macro parameter mode distinguishes between the global and local optimization modes defined above. The frozen set F defined by Z is used to obtain an estimate of the SC decoding error probability for the (n, k, ≥ d) code being designed. On line 3, P_e^{(SC)}(F) is calculated according to expression (3), where a Gaussian approximation of the density evolution is used to compute the bit-subchannel error probabilities for an AWGN channel. The ML decoding error probability is estimated on line 4 as follows. The spectrum is computed for a code specified by generator matrix Z · M. According to one implementation, the desired estimate P_e^{(ML)} is then obtained by applying the tangential-sphere bound of Poltyrev, where the recently computed spectrum and code parameters (n, k) are used as input arguments.
Algorithm 2 returns a score given by an ordered pair (P_e^{(SC)}, P_e^{(ML)}) in the case of the global mode (line 5), or an ordered triplet additionally containing the path length in the case of the local mode (line 6). Scores and corresponding paths are stored in a priority queue by Algorithm 1. Sorting of scores in the priority queue is performed according to the lexicographical ordering, which is defined for ordered pairs by (a_0, a_1) < (b_0, b_1) iff (a_0 < b_0) or ((a_0 = b_0) and (a_1 < b_1)).
Algorithm 1 employs function GetBest, which chooses a path with the best score from the priority queue. The "best" score is defined as a score preceding all other scores according to the lexicographical ordering, where the score is given by an ordered pair or an ordered triplet. Therefore, in the case of the global optimization mode, function GetBest returns a path with the lowest value of P_e^{(ML)} among paths with the lowest P_e^{(SC)}. In the case of the local optimization mode, priority is given to longer paths, meaning that GetBest returns a path with the lowest value of P_e^{(ML)} among paths with the highest path length, among paths with the lowest P_e^{(SC)}. Here, the path length is equal to the number of rows of matrix Z.
Preselected Sets of Frozen Bits
As described above, it was assumed that the input set 𝔽 of frozen sets F is known. Let 𝔽 be a set consisting of all frozen sets with SC decoding error probability P_e^{(SC)} lower than a certain threshold T. If T is sufficiently high, then an (n, k, ≥ d) precoded polar code will be found by Algorithm 1.
As described above, the SC decoding error probability for a polar code with information set A is given by P_e^{(SC)} = 1 − Π_{i∈A}(1 − p_i), where p_i is the error probability of the i-th bit subchannel. There are a number of ways to find all subsets A of cardinality k in a set of cardinality n such that the condition P_e^{(SC)} < T is satisfied. No particular algorithm is specified, since it does not affect the result.
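One straightforward (brute-force) way, feasible only for very small n, is sketched below with made-up illustrative reliabilities; the values of p, n, k and T are the editor's toy choices, not data from the disclosure.

```python
import itertools

def sc_error_prob(info_set, p):
    """P_e^(SC) = 1 - prod_{i in A}(1 - p_i) for information set A."""
    prob = 1.0
    for i in info_set:
        prob *= 1.0 - p[i]
    return 1.0 - prob

# Made-up subchannel error probabilities for n = 8, roughly improving with
# index, as is typical for the polarizing transformation.
p = [0.40, 0.25, 0.20, 0.05, 0.15, 0.03, 0.02, 0.001]
n, k, T = 8, 4, 0.15

# Keep every frozen set F (complement of A) whose information set meets the threshold.
candidates = [set(range(n)) - set(a)
              for a in itertools.combinations(range(n), k)
              if sc_error_prob(a, p) < T]
best = min(candidates,
           key=lambda f: sc_error_prob([i for i in range(n) if i not in f], p))
assert best == {0, 1, 2, 4}       # freeze the four least reliable subchannels
```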
There are many frozen sets with similar values of P_e^{(SC)}, so the cardinality of set 𝔽 is large. Let F* be a fruitful frozen set with the lowest P_e^{(SC)}. That is, there exists an (n, k, ≥ d) code for F*, and P_e^{(SC)}(F*) is as low as possible.
It is desirable to identify frozen set F* among the sets of 𝔽 beforehand, i.e. prior to the search for variable rows. This is equivalent to excluding from consideration all barren frozen sets.
Note that in the case of code generation for all values of dimension k, barren frozen sets for (n, K < k, ≥ d) codes are available prior to the generation of the (n, k, ≥ d) code.
Thus, given the set B^{(n,k−1,≥d)} of barren frozen sets, all frozen sets F contained in some barren set from ∪_{K<k} B^{(n,K,≥d)} may be excluded from consideration during the design of the (n, k, ≥ d) code. Note that F* may vary with SNR, while the set B^{(n,k,≥d)} is independent of SNR and is determined by the polarizing transformation.
Complexity Reduction: Suboptimal Approach
The complexity of the proposed code design approach is acceptable for all codes of length 32. However, the search space is excessively large for lengths greater than 32.
One possible way for low-complexity generation of an (n, k, ≥ d) code is to construct at first an (n, k' < k, d' ≥ d) code and then generate k − k' additional rows for its SC aimed precoding matrix, that is, to construct the (n, k, ≥ d) code with a preselected (n, k', d') subcode. This approach shows promising results even though the existence of these k − k' additional rows is uncertain.
Encoding Example
Fig. 4 shows an example of the encoding process for a (64, 25, 16) SCML precoded polar code, in accordance with the channel encoder illustrated by Fig. 3. The (64, 25, 16) SCML precoded polar code is generated by the SCML optimization algorithm (Algorithm 1), and the obtained code specification is further stored within the channel encoder. Encoding of data 401 begins with splitting the data 401 (step 402) into information vectors 403 (a_0, ..., a_24). Encoding step 405 is parameterized by the code selector, which provides parameters 404: code length n = 64, dimension k = 25, minimum distance d = 16, and precoding matrix Z. Encoding with the precoding matrix Z in step 405 results in an input vector (u_0, ..., u_63) 406, which consists mostly of information bits a_i and zeros and includes only a few sums of information bits, since precoding matrices generated by the SCML optimization algorithm are extremely sparse. The input vector 406 is further encoded with the polarization transformation matrix (step 407) to obtain a codeword 408.
The codeword 408 is modulated and transmitted by other components of a transmission system as illustrated by Fig. 1.
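The two-step encoding of Fig. 4 (sparse precoding, then polarization) can be sketched in miniature. The (8, 4) matrices below are the editor's toy stand-ins for the actual (64, 25, 16) code specification, chosen only to show the data flow u = a · Z, c = u · M over GF(2).

```python
def polar_matrix(m):
    """M = F^{(x)m} over GF(2)."""
    M = [[1]]
    for _ in range(m):
        M = [[a * b for a in ra for b in rb]
             for ra in M for rb in ([1, 0], [1, 1])]
    return M

def vecmat2(v, A):
    """Row vector times matrix over GF(2)."""
    return [sum(vi * aij for vi, aij in zip(v, col)) % 2 for col in zip(*A)]

def encode(a, Z, M):
    """Step 405: u = a * Z (sparse precoding); step 407: c = u * M."""
    return vecmat2(vecmat2(a, Z), M)

M = polar_matrix(3)

# Toy sparse precoding matrix: identity on information positions {3, 5, 6, 7},
# plus one dynamic-frozen dependency placing a0 also at frozen position 4.
Z = [[0] * 8 for _ in range(4)]
for row, col in enumerate((3, 5, 6, 7)):
    Z[row][col] = 1
Z[0][4] = 1

a = [1, 0, 1, 1]                      # information vector (a0, ..., a3)
c = encode(a, Z, M)
assert len(c) == 8 and set(c) <= {0, 1}
```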
Numerical Results
Results are provided for the case of an AWGN channel with BPSK modulation, according to an embodiment of the present disclosure. Codes of length 32 and 64 are considered. Precoded polar codes constructed by the proposed SCML optimization algorithm are referred to as SCML precoded polar codes. A comparison is provided with the original polar codes and polar subcodes of eBCH codes. The original polar codes are constructed by freezing the n − k worst bit subchannels; polar subcodes of eBCH are obtained from narrow-sense primitive eBCH codes by freezing a number of worst bit subchannels. Note that a comparison with eBCH codes and their polar subcodes is provided, since in the case of short code lengths, eBCH codes are one of the most powerful classes of binary linear codes with high minimum distances. Since binary narrow-sense primitive eBCH codes exist only for some values of the code dimension, e.g. k = 6, 11, 16, 21, 26 for code length n = 32 and k = 7, 10, 16, 18, 24, 30, 36, 39, 45, 51, 57 for code length n = 64, codes of the present invention are compared with polar subcodes of eBCH codes for intermediate values of the code dimension.
Tables I and II show performance estimates P_e^{(SC)} and P_e^{(ML)} characterizing the SC decoding error probability and the ML decoding error probability, respectively, where P_e^{(SC)} is computed according to the expression described above using a Gaussian approximation and P_e^{(ML)} is given by the tangential-sphere bound according to the known art as described above. Estimates in Table I are provided for code length n = 32 and signal-to-noise ratio E_b/N_0 = 4.75 dB. Estimates in Table II are provided for code length n = 64 and E_b/N_0 = 3.75 dB. Values of E_b/N_0 are selected so that FER estimates are in the range 10^{-4} to 10^{-2}.
Columns n, k and d specify code length, dimension and minimum distance, respectively. The dimension changes with a step of 1. Tables I and II also show the lower and the upper bounds on the minimum distances of binary linear block codes provided by the code tables. A code with a minimum distance meeting the upper bound is called optimal, while a code with a minimum distance meeting the lower bound is called best-known. It can be seen that in most cases, the proposed codes, as well as polar subcodes of eBCH, have the minimum distance of the best-known codes.
TABLE I
PERFORMANCE ESTIMATES FOR CODES OF LENGTH n = 32 CONSTRUCTED FOR E_b/N_0 = 4.75 dB
Only ten values of k are shown for n = 32, since for the other k, the values of d, P_e^{(SC)} and P_e^{(ML)} coincide for the considered 3 types of codes. Note that in almost all cases, the minimum distance of the proposed SCML precoded polar codes coincides with the minimum distance of polar subcodes of eBCH, while the proposed codes provide a gain in the minimum distance for k = 25 and k = 26. It can be seen that the proposed code construction substantially outperforms polar subcodes of eBCH in terms of P_e^{(SC)} without sacrificing P_e^{(ML)}. In the case of n = 32, the weight enumerator polynomials coincide, so the estimates P_e^{(ML)} are the same. In the case of n = 64, the weight enumerator polynomials coincide occasionally. The difference between them is characterized by the relative number of low-weight codewords provided in Table II, where A_prop[w] and A_eBCH[w] are the numbers of codewords of weight w in the proposed code and the polar subcode of the eBCH code, respectively. So, in the case of length n = 64, the proposed SCML precoded polar codes and polar subcodes of eBCH codes demonstrate the same or similar P_e^{(ML)}.
TABLE II
PERFORMANCE ESTIMATES FOR CODES OF LENGTH n = 64 CONSTRUCTED FOR E_b/N_0 = 3.75 dB
The code construction as disclosed in the present disclosure enables one to balance the trade-off between FER under SC decoding and ML decoding by selecting an appropriate value for the target minimum distance. The higher the list size of the SCL-like decoder, the higher the minimum distance that should be selected. Simulation results for codes with minimum distances similar to eBCH codes are provided to enable comparison with eBCH codes and their polar subcodes.
Two score functions were considered, as described above. Both of them guarantee a global minimum of the SC decoding error probability P_e^{(SC)}. A score function ensuring a global minimum of P_e^{(ML)} for a given frozen set F was employed for all codes of length n = 32 and codes of length n = 64 with 16 values of k. For n = 64 and the remaining values of k, a score function corresponding to local minimization of P_e^{(ML)} (greedy approach) was employed. According to the obtained results, if one considers all existing codes with parameters (n, k, ≥ d) corresponding to the best frozen set F*, then these codes have the same or only slightly different P_e^{(ML)}. One might therefore expect the same values of P_e^{(ML)} to be found by a greedy approach as in the case of global minimization.
An (n, k, ≥ d) SCML precoded polar code is an (n, k, ≥ d) binary linear block code which demonstrates the lowest possible SC decoding error probability. One may also be interested in minimizing P_e^{(ML)} at the price of a slight increase in P_e^{(SC)}. By considering an extended set of frozen sets in Algorithm 1, in addition to the results of Table I, a (32, 17, 6) code was also found with P_e^{(SC)} = 0.0109 and P_e^{(ML)} = 0.00039, and a (32, 18, 6) code with P_e^{(SC)} = 0.0158 and P_e^{(ML)} = 0.00048. According to the simulation results, SCML precoded polar codes may be improved only negligibly, so results for precoded polar codes with the second, third, etc. lowest values of P_e^{(SC)} are not necessarily presented.
It is understood that many eBCH codes have the highest possible minimum distance, i.e. there are many optimal/best-known linear block codes among eBCH codes. The eBCH codes include Reed-Muller codes as their subcodes, while any Reed-Muller code may be represented as a pure polar code. Thus, according to the simulation results for codes of length 32, the obtained SCML precoded polar codes include Reed-Muller codes as their subcodes. The simulation results also confirmed the same property for the SCML precoded polar codes of length 64 and dimensions 8, 11, 12, 19, 20, 40, while the computational complexity is too high for the other dimensions. Parameters of the Reed-Muller codes are as follows. In the case of length 32, the SCML precoded polar codes with minimum distances 12 and 6 contain the (32, 6, 16) and (32, 16, 8) Reed-Muller codes, respectively. In the case of length 64, the SCML precoded polar codes with minimum distances higher than 16 and higher than 8 contain the (64, 7, 32) and (64, 22, 16) Reed-Muller codes, respectively. One can assume that the same property holds also for the SCML precoded polar codes of length 64 and the remaining dimensions. Thus, to reduce the computational complexity of the code design for length n = 64, the search is performed only over precoded polar codes which include Reed-Muller codes as their subcodes. The computational complexity for parameters (64, 17, 24) and (64, 27, 14) was further reduced by designing codes with preselected (64, 16, 24) and (64, 26, 14) subcodes, respectively. As described above, in the general case, SCML precoded polar codes with preselected subcodes are suboptimal. Results for codes of length 64 and dimensions 18 and 35 - 39 are not necessarily presented due to excessively high computational complexity. The SCML precoded polar codes and polar subcodes of eBCH codes with dimension lower than 8 or higher than 51 result in pure polar codes, so there is no interest in results for them.
Figs. 5 and 6 present simulation results illustrating the performance in terms of FER over a wide range of the energy-per-bit to noise power spectral density ratio Eb/N0, which is a normalized SNR measure known as the SNR per bit. Decoding of all codes was performed using a stack successive cancellation algorithm, which was shown to have a lower computational complexity than the Tal-Vardy SCL decoding algorithm at the cost of slightly inferior performance. Results are provided for the original polar codes, polar subcodes of eBCH codes and SC-ML precoded polar codes of length n = 64 and dimensions k = 25 and k = 34 under sequential decoding with list sizes L = 4, 8, 16, 32, 64. Minimum distances of the codes are provided in Table II. The SC-ML precoded polar codes outperform the original polar codes starting from L = 4 in the high-SNR region. It can be seen that the proposed codes significantly outperform polar subcodes of eBCH codes up to L = 16, while the performance becomes equal for higher L. The gain is ensured by the reduced SC decoding error probability (see Tables I and II). It should be understood that the SC-ML precoded polar codes may also be exploited by an adaptive SCL decoder, which progressively increases the list size L until a packet is successfully decoded or a maximum L is reached. In case of dimension k = 25, the code demonstrates an additional gain owing to a higher minimum distance. In case of
dimension k = 34, the codes have the same minimum distances and slightly different weight enumerator polynomials. SCL decoding with an infinite list size corresponds to ML decoding. Sequential decoding with L = 32 provides near-ML performance for both the (64, 25) and (64, 34) codes. It can be seen that list size L = 64 provides a negligible performance improvement compared to L = 32, so there is no need to further increase the list size.
It should be understood that in the case of the original polar codes, curves coincide even for L = 4 and L = 8, and higher values of L do not provide any gain.
Decoding complexity
Performance and decoding complexity were investigated using a software implementation in the C++ programming language. Results are provided for the case of an AWGN channel with BPSK modulation. Decoding complexity is measured as the average number of iterations required to perform decoding of a polar code using the SCL decoding algorithm or its variation. For example, decoding of a polar code of length n using the SC algorithm requires n iterations, while the SCL algorithm with list size L requires L·n iterations. Figs. 7 and 8 illustrate results for the (64, 25) and (64, 34) codes, respectively, under the stack SC decoding algorithm with list size L according to a known art. Figs. 9 and 10 illustrate results for the (64, 25) and (64, 34) codes, respectively, under the sequential decoding algorithm with list size L according to a known art. The sequential decoding algorithm reduces decoding complexity compared to the stack SC algorithm at the cost of negligible performance degradation. Figs. 7-10 show that decoding of the SC-ML precoded polar codes requires a lower number of iterations than polar subcodes of eBCH codes, and that the sequential algorithm reduces the average number of iterations. The stack SC and sequential decoding involve operations over floating point numbers and binary numbers, as well as operations related to a priority queue. The decoding complexity is dominated by the floating point operations (FLOPs). An iteration of the stack SC algorithm involves the following FLOPs: multiplications, additions and comparisons, while an iteration of the sequential algorithm involves only additions and comparisons. The total number of FLOPs is lower for the sequential algorithm compared to the stack SC algorithm.
It should be understood that the encoding complexity of a precoded polar code is given by the sum of the complexity of encoding with the precoding matrix and the complexity of encoding with the polarization transformation matrix. Multiplication of a binary vector by the nxn polarization transformation matrix involves n·log2(n) binary operations, where n is the code length. The precoding matrix is extremely sparse, therefore multiplication by the precoding matrix leads to only a minor increase in encoding complexity.
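The n·log2(n) operation count follows from the butterfly structure of the polarization transform. The following Python sketch is illustrative only and is not part of the disclosure: the precoding matrix Z passed to encode is a placeholder supplied by the caller, not the optimized SC-ML matrix constructed by the algorithms herein.

```python
import numpy as np

def polar_transform(u):
    """Multiply a binary row vector u of length n = 2^m by the polarization
    transformation matrix A^{(tensor)m} over GF(2) using the butterfly
    structure: log2(n) stages of n/2 XORs each, i.e. O(n log2 n) operations."""
    x = np.array(u, dtype=np.uint8) % 2
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of 2"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            # butterfly step: left half receives the XOR of both halves
            x[i:i + h] ^= x[i + h:i + 2 * h]
        h *= 2
    return x

def encode(info_bits, Z):
    """Encode k information bits with a k x n binary precoding matrix Z,
    then apply the polarization transform (a sketch of the claimed
    two-step encoding; Z here is an arbitrary illustrative matrix)."""
    v = np.mod(np.array(info_bits, dtype=np.uint8) @ Z, 2)  # input vector
    return polar_transform(v)
```

For instance, with Z equal to the 4x4 identity matrix, the sketch reduces to plain polar encoding of length 4.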
According to an embodiment of the present disclosure, a code design algorithm configured to minimize both successive cancellation (SC) and maximum-likelihood (ML) decoding error probabilities is introduced. The algorithm performs optimization of a precoding matrix for the polarization transformation matrix to guarantee a preselected minimum distance and the lowest possible SC decoding error probability for the resulting code. Constructed short-length precoded polar codes, decoded as polar codes with dynamic frozen bits, demonstrate a significant advantage compared to the original polar codes, extended Bose-Chaudhuri-Hocquenghem (BCH) codes and their polar subcodes under SCL-like decoding.
According to an embodiment of the present disclosure, an SC-ML algorithm was introduced to generate (n = 2^m, k, ≥ d) precoded polar codes, which have the lowest possible SC decoding error probability under the constraint on the minimum distance specified by the preselected value d. The constructed precoded polar codes are decoded as polar codes with dynamic frozen bits. Experimental results show that the proposed codes of lengths 32 and 64 significantly reduce the FER compared to the original polar codes under the stack/list SC decoding starting with the list size 4. The gain increases with the list size. The codes have a much lower SC decoding error probability than eBCH codes and their polar subcodes without sacrificing the error probability under maximum-likelihood decoding. In case of the code length 64, the constructed precoded polar codes demonstrate a significant advantage compared to the polar subcodes of eBCH codes under stack/list SC decoding with the list size up to 16 and demonstrate comparable performance starting with the list size 32.
According to an embodiment of the present disclosure, the SC-ML precoded polar codes may be used individually, as well as serve as component codes in some concatenated constructions. For example, they may be employed to design polar codes with large kernels. The code length of the SC-ML precoded polar codes with the 2x2 kernel can be reduced by an arbitrary number of bits by applying puncturing or shortening techniques. Alternatively, the SC-ML optimization algorithm can be employed to design precoded multi-kernel polar codes, where 2x2, 3x3 and 5x5 kernels are used to obtain codes of lengths 15, 16, 18, 20, 24, 27, 30, 32, 36, 40, 45, 48, 54, 60, 64 and so on.
According to an embodiment of the present disclosure, the SC-ML precoded polar codes can be represented as a concatenation of an outer code and an inner polar code. The outer code is specified by the precoding matrix. The joint optimization of the precoding matrix together with the set of frozen bits is performed to minimize both the SC decoding error probability and the ML decoding error probability.
According to an embodiment of the present disclosure, the SC-ML precoded polar codes may be employed in various data transmission systems and storage systems. For example, they may be employed for channel coding in 5G and beyond-5G networks with a focus on ultra-reliable low-latency communications (URLLC), as a data encoding method for Redundant Arrays of Independent Disks (RAID) and for Flash memory, as well as in satellite communications, digital television and Wireless Local Area Networks (WLAN).
Example Algorithm for Polar Codes of Moderate Lengths
In some embodiments, a method may be used to recursively construct a collection of precoded polar codes of moderate lengths, e.g. 128 and 256. The proposed code design method determines a collection C of precoded polar codes of various lengths, where each (n, k) code C* ∈ C satisfies the following criteria:
1) The minimum distance of C* is at least D_{n,k};
2) There exists a supercode C+ of code C*, i.e. C+ ⊇ C*, such that a) the number of codewords of weight below D_{n,k} in code C+ is less than a specified limit λ_max; and b) either C+ has rate 1 and n < log2(λ_max), or C+ can be represented as the Plotkin sum of two nested codes of length n/2 belonging to C;
3) Either k < τ_max, or there exists a subcode C~ of code C*, i.e. C~ ⊆ C*, such that a) C~ ∈ C; and b) the dimension of code C~ is at least k − τ_max;
4) Frozen set optimality: the frozen set T* of C* satisfies the condition P̃_e^(SC)(T*) ≤ P̃_e^(SC)(T) for all frozen sets T corresponding to (n, k, ≥ D_{n,k}) codes with subcode C~ and supercode C+;
5) Code C* satisfying Criteria 1-4 can be constructed with the computational complexity limited by ψ_max, assuming that C~ and C+ are known; and
6) Code C* ensures the lowest number of minimum weight codewords among a group of ζ_max codes generated in accordance with Criteria 1-5.
The minimum distance D_{n,k} is selected prior to the code design, and the resource limits τ_max, λ_max, ψ_max and ζ_max have constant values.
It can be seen from Criteria 1-6 that the code design problem is a constrained optimization problem. Since the frame error rate (FER) for the SCL decoding cannot be calculated analytically, the main objective function is the FER estimate for the SC decoding, P̃_e^(SC) (see Criterion 4), which is calculated in the same way as P_e^(SC) above. In addition, minimization of the number of minimum weight codewords is specified by Criterion 6. Criterion 1 provides a constraint on the minimum distance. An appropriately selected value of the minimum distance provides a tradeoff between the SC and ML performance.
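For the short lengths considered here, the minimum-distance constraint of Criterion 1 can in principle be verified by exhaustive codeword enumeration. The following is a minimal illustrative sketch, not the disclosure's method, and assumes a dimension k small enough that the 2^k enumeration is feasible.

```python
import itertools
import numpy as np

def min_distance(G):
    """Exhaustively compute the minimum distance of the binary linear code
    generated by the k x n matrix G (feasible only for small k, as in the
    short-code regime targeted by the construction)."""
    k, n = G.shape
    best = n + 1
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        w = int(np.mod(np.array(msg) @ G, 2).sum())
        best = min(best, w)
    return best
```

For example, the single-row repetition code of length 5 yields minimum distance 5.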
The proposed choice of supercodes and subcodes leads to a recursive structure and many nested codes in the resulting code collection C. It is known that the original polar codes are naturally nested. In Criterion 3, a proper difference between the dimensions of C* and C~ provides a significant reduction in the code design complexity, and sufficient freedom to achieve the required minimum distance and a low P̃_e^(SC). Criterion 5 specifies a limit on the acceptable computational complexity. Criterion 2 emerges due to a similarity between the polar code structure and the Plotkin sum, as well as an opportunity to enumerate low-weight codewords, in accordance with Proposition 1 below.
This section provides a detailed description of the proposed code design method. The codes are constructed recursively, i.e. codes of length n = 2^m are based on codes of length 2^(m-1), where codes of length 2^(m-1) are based on codes of length 2^(m-2), and so on. As a starting point, m = 4 is used, since the number of codewords in any binary code of length 16 is low.
The proposed code design approach is summarized by Algorithms 3-6 and Proposition 1 discussed below. Each code C is characterized by a quartet (Z, d, ω_last, Λ), where Z is the SC-aimed precoding matrix, d is the minimum distance, and Λ is a list of all codewords of weight lower than ω_last. Note that code C is fully specified by Z, while the other elements of the quartet are needed for code design purposes. Given code C, the elements of the corresponding quartet are addressed as Z(C), d(C), ω_last(C) and Λ(C). Similarly, the length, dimension and frozen set of code C are addressed as n(C), k(C) and T(C), respectively.
1) The Main Loop: The main constructive steps are performed by function CodeCollection depicted in Algorithm 3 below. It generates a collection of (n = 2^m, k, d = D_{n,k}) precoded polar codes for 1 ≤ k ≤ n and n_init ≤ n ≤ n_max, where the initial length n_init, the maximum length n_max and the minimum distances D_{n,k} are selected prior to code design together with the resource limits λ_max, τ_max, ψ_max and ζ_max. The resource limits restrict memory consumption and computational complexity.
Lines 3-11 of Algorithm 3 below correspond to designing codes of the initial length n_init. The rate-1 supercode is used to construct these codes of length n_init, and thus precoding matrix Z+ is initialized by the identity matrix at line 5, and the list Λ+ of codewords of C+ is given by all binary vectors of length n_init. The absence of a subcode C~ is indicated by precoding matrix Z~ = ∅ at line 4. A family of codes of length n_init is initialized at line 8, and then enlarged at line 10, where additional precoded polar codes are generated by function Precoding depicted in Algorithm 4 below.
Codes of length n = 2·n_init, ..., n_max are generated at lines 12-34. First, a set of codes equal to the Plotkin sum of codes from C_{n/2} is computed at lines 14-18, where pairs of codes from C_{n/2} are selected by function SelectOuterCodes and concatenated by function Plotkin described below. Second, new (n, k, D_{n,k}) codes are iteratively introduced into set C_n at lines 19-33. A set of possible supercodes is formed at line 20, where a check is performed that the dimension of each supercode is at least k and that the supercode codeword list contains all codewords of weight up to D_{n,k} − 1. A set of subcodes corresponding to a particular supercode is constructed at line 23, where compliance with the minimum distance and nested conditions is ensured, and the dimension of the subcodes is lower bounded by k − τ_max to limit the computational complexity of function Precoding. Note that this set of subcodes includes the absent subcode as an element, which is needed to consider the case of an absent subcode. The selection of a subcode with the lowest dimension at line 26 corresponds to maximization of the number of precoding matrix rows to be generated by function Precoding, i.e. a high degree of freedom at the cost of a high computational complexity. At most one (n, k, D_{n,k}) code C is constructed for each supercode at lines 24-29, where attempts to generate code C are performed by function Precoding. The obtained codes C are united into a set at line 30, and a single code minimizing the SC decoding error probability is selected at line 32.
2) Increasing the Code Dimension: Function Precoding of Algorithm 4 generates an (n, k, d) code C for a given (n, k~ ≤ k, d~ ≥ d) subcode C~ and (n, k+ ≥ k, d+) supercode C+, C~ ⊂ C+. If code C does not exist or the code design complexity is prohibitively high, then a failure to generate a code is declared by returning an empty set at line 35. The computational complexity is measured as the number ψ of iterations of the while loop, and it is limited by ψ_max, which is provided as an input argument. Note that ψ also gives the number of calls of GetEligibleRows, which is the most computationally expensive function in Algorithm 4. At line 4, the k+ x n precoding matrix Z+ of code C+ is decomposed into the vertical concatenation (S; Z~) of the k~ x n precoding matrix Z~ of code C~ and a (k+ − k~) x n matrix S. Note that such a decomposition always exists, since C~ ⊂ C+. The obtained matrix S is transformed into the SC-aimed precoding matrix by applying Gaussian elimination at line 5. Matrix Z~ gives k~ rows of the precoding matrix for code C, while k − k~ additional rows of Z are generated as linear combinations of rows of S. A set R of non-eligible linear combinations is constructed from a list Λ of supercode codewords with weight below d. More specifically, function ExtractNonEquivPrecodingRows at line 8 returns set R consisting of all distinct binary vectors u' of length (k+ − k~) satisfying the following condition:
Note that the cardinality of set R is much less than that of list Λ, and the difference increases with k. Obviously, the cardinality of R is upper bounded by 2^(k+ − k~).
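The decomposition above and the transformation into the SC-aimed form both rely on Gaussian elimination over GF(2). The following sketch shows plain GF(2) row reduction only; it is illustrative and does not reproduce the additional SC-aimed constraints on where the leading ones of the rows must fall.

```python
import numpy as np

def gf2_row_reduce(M):
    """Bring a binary matrix to reduced row echelon form over GF(2).
    Each surviving nonzero row then begins with a leading 1 in a distinct
    column, the property exploited by the SC-aimed precoding structure."""
    A = np.array(M, dtype=np.uint8) % 2
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]  # move the pivot row up
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]           # clear column c in all other rows
        r += 1
        if r == rows:
            break
    return A
```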
At lines 14-33 of Algorithm 4, the k − k~ additional rows of the precoding matrix are constructed, where the rows are generated in reverse order of their indices to reduce the code design complexity. A partially constructed matrix Z', together with the information index β of the next row, is referred to as a path. Paths and their scores are kept in a priority queue Q.
Lines 9-12 specify the initialization of Q by paths (∅, b) with their respective scores for all b in the information set corresponding to the code defined by precoding matrix S. This information set is constructed by function GetInformationSet(S) at line 6. Note that any code is characterized by a single information set, as well as a single frozen set.
The path score is essentially an ordered pair. A path with the lexicographically lowest score is identified by function PopMin, called at line 15. The lexicographical ordering of ordered pairs is defined by (a_0, a_1) < (b_0, b_1) iff a_0 < b_0, or a_0 = b_0 and a_1 < b_1.
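This lexicographical ordering is exactly the comparison Python applies to tuples, so a priority queue of (score, path) entries can serve as a toy model of PopMin. The sketch below is illustrative only; the path labels and numeric score values are hypothetical.

```python
import heapq

# Paths are kept in a priority queue keyed by an ordered-pair score.
# Python compares tuples lexicographically, so heapq.heappop plays
# the role of PopMin from Algorithm 4.
queue = []
heapq.heappush(queue, ((0.20, 5), "path A"))
heapq.heappush(queue, ((0.10, 9), "path B"))
heapq.heappush(queue, ((0.10, 7), "path C"))

score, path = heapq.heappop(queue)  # lowest first component wins,
                                    # ties broken by the second component
```

Here "path C" is popped first: it ties "path B" on the first component and wins on the second.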
The score for a partially constructed precoding matrix Z' and parameter b is computed by function Score as an ordered pair (p̃^(SC), |T'|), where set T' is obtained from the frozen set of precoding matrix Z' by excluding b, and p̃^(SC) is an estimate of the FER under the SC decoding for an (n, k, d) code, computed as described above. Therefore, function PopMin explores paths minimizing p̃^(SC) and selects among them the longest path to ensure fast convergence of function Precoding.
The search for new rows continues until ζ_max matrices Z' consisting of k − k~ rows are found, i.e. ζ_max matrices Z' pass the if condition at line 21, or the condition of the while loop is violated at line 14. Note that ζ_max is provided as an input argument to function Precoding. At line 21, function RowNum returns the number of rows in a given matrix; RowNum(Z~) is always equal to k~. At line 22, a (k − k~) x n matrix Z' is vertically concatenated with Z~, and Gaussian elimination is further applied to obtain the SC-aimed precoding matrix Z. The new matrix Z is added to the set Z of candidate precoding matrices. As soon as the cardinality of set Z reaches ζ_max, the function SelectSingleCode, shown in Algorithm 5, is called to identify a single resulting (n, k, d) code. The resulting code may alternatively be selected at line 34 from a set Z of smaller cardinality, which happens in case of excessively high computational complexity or an empty priority queue.
A path (Z', β) is extended at lines 16-32. The path extensions are constructed in accordance with the structure of the SC-aimed precoding matrix. Therefore, the first "1" in a row of Z' corresponds to an information row of A^{⊗m}, and the remaining "1"s correspond to frozen rows of A^{⊗m} with higher indices, where the i-th row of A^{⊗m} is referred to as an information row if its index belongs to the information set returned by GetInformationSet, and as a frozen row otherwise. Obviously, the SC-aimed structure is not violated at the first iteration of the while loop. The function GetEligibleRows at line 17 generates a set of possible new rows z satisfying the following conditions:
1) Row z complies with the SC-aimed structure described above;
2) Row z is a linear combination of rows of S; and
3) Any linear combination of the rows of (z; Z') does not belong to R.
The first condition ensures the SC-aimed structure of the resulting matrix. The second condition guarantees that the resulting code is a subcode of C+. The third condition is needed to achieve minimum distance d. It can be seen that the precoding matrix Z' is characterized by the information set constructed at line 16.
The next row, which could be appended to matrix Z' at subsequent iterations of the while loop, should begin at a column with an index b lower than β to preserve the structure of the SC-aimed precoding matrix. It is also required that b belong to the information set of S, since the desirable code must be a subcode of C+. All possible such indices b are united into a set at line 26. The path extended by (z, b) is saved into the priority queue at line 29.
Function SelectSingleCode(Z, d, ω_last, Λ+), depicted in Algorithm 5, selects a code with precoding matrix Z* ∈ Z such that it minimizes the number of codewords of weight d, i.e. the number of minimum weight codewords. At first, the list of supercode codewords of weight d is extracted at line 3. Then, codewords of weight d belonging to the code defined by Z ∈ Z are identified using a check matrix constructed at lines 6-7. The desirable precoding matrix Z* is gradually improved at line 10. As soon as the final Z* is found, a whole list Λ* of low-weight codewords is computed at lines 13-14, and function SelectSingleCode successfully completes its work by returning a full specification of the resulting code.
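The selection performed by SelectSingleCode can be modeled, for tiny parameters, by brute force. This sketch is illustrative only: it enumerates codewords exhaustively instead of using the check-matrix technique of Algorithm 5, and the candidate matrices are hypothetical.

```python
import itertools
import numpy as np

def count_weight_d(G, d):
    """Count codewords of Hamming weight exactly d in the code generated
    by binary matrix G (exhaustive, so only for small dimensions)."""
    k = G.shape[0]
    total = 0
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg) and int(np.mod(np.array(msg) @ G, 2).sum()) == d:
            total += 1
    return total

def select_single_code(candidates, d):
    """Among candidate generator matrices, return the one minimizing the
    number of weight-d codewords, mirroring the role of SelectSingleCode."""
    return min(candidates, key=lambda G: count_weight_d(G, d))
```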
3) Doubling the Code Length: This section describes functions SelectOuterCodes and Plotkin, called at lines 14 and 17 of Algorithm 3, respectively. Function Plotkin employs the Plotkin sum to obtain a code C of length n = 2^m based on codes C^(0) and C^(1) of length 2^(m-1). The Plotkin sum is applied in accordance with the GCC representation of precoded polar codes. Let Z^(0) and Z^(1) be the precoding matrices of codes C^(0) and C^(1), respectively. Then the precoding matrix of code C can be computed from Z^(0) and Z^(1), since the generator matrix of the Plotkin sum of C^(0) and C^(1) is given by the vertical concatenation of (G^(0) | 0) and (G^(1) | G^(1)), where G^(0) and G^(1) are the generator matrices of the component codes. Note that Z^(0)·A^{⊗(m-1)}, Z^(1)·A^{⊗(m-1)} and Z·A^{⊗m} are the generator matrices of C^(0), C^(1) and C, respectively. Low-weight codewords of the resulting code C are enumerated in accordance with Proposition 1.
Note that known methods to enumerate the minimum weight codewords of polar codes using the Plotkin sum have recently been proposed. As opposed to these known methods, Proposition 1 of this disclosure enables a processor to explicitly enumerate not only the minimum weight codewords, but all codewords of weight up to a certain limit. Moreover, the known methods assume that the component codes of the Plotkin sum are pure polar codes, while Proposition 1 disclosed herein requires only that the component codes are nested.
Proof: It is shown that any codeword c of weight lower than ω_last can be represented via elements of the low-weight lists Λ^(0) and Λ^(1) of the component codes, where the zero codeword is included in Λ^(0). According to the Plotkin structure above, codeword c can be represented as (c^(0) ⊕ c^(1) | c^(1)) with c^(0) ∈ C^(0) and c^(1) ∈ C^(1), so that wt(c) = wt(c^(0) ⊕ c^(1)) + wt(c^(1)). Since wt(c) < ω_last, it follows that wt(c^(1)) < ω_last, and by the triangle inequality wt(c^(0)) ≤ wt(c^(0) ⊕ c^(1)) + wt(c^(1)) = wt(c) < ω_last, meaning that c^(0) and c^(1) belong to the respective low-weight lists. This concludes the proof.
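The Plotkin sum can be sketched at the generator-matrix level. This illustrative code builds the stacked matrix with rows (G^(0) | 0) and (G^(1) | G^(1)), producing codewords of the form (u ⊕ v | v); its minimum distance obeys min(d(C^(0)), 2·d(C^(1))), as discussed below.

```python
import numpy as np

def plotkin_generator(G0, G1):
    """Generator matrix of the Plotkin sum of two codes of equal length,
    with codewords (u + v | v), u from C0 and v from C1. The result
    stacks (G0 | 0) over (G1 | G1)."""
    k0, n = G0.shape
    k1, _ = G1.shape
    left = np.vstack([G0, G1])
    right = np.vstack([np.zeros((k0, n), dtype=np.uint8), G1])
    return np.hstack([left, right]) % 2
```

For example, taking both component codes as the length-2 repetition code yields a (4, 2, 2) code with codewords 1100, 0011 and 1111.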
Function Plotkin, called at line 17 of Algorithm 3, returns the quartet (Z, d, ω_last, Λ) for the code C, where matrix Z is calculated as described above and Λ is computed in accordance with Proposition 1. Function Plotkin generates all distinct codewords c satisfying Proposition 1 to produce the list Λ of low-weight codewords for code C. Whenever the cardinality of Λ approaches λ_max, the macro parameter ω_last is decremented, and then Λ^(0) and Λ^(1) are updated accordingly to include only codewords of weight below the new ω_last. Therefore, the amount of memory required to store the low-weight codewords of C is upper bounded by n·λ_max bits. The minimum distance of C is computed as described above.
Input data for function Plotkin is provided by function SelectOuterCodes, which is depicted in Algorithm 6. Specifically, function SelectOuterCodes selects a subset of C_{n/2}, where C_{n/2} is the family of codes of length n/2. Pairs of component codes for the Plotkin sum with parameters (n, k) are selected independently for all k = 1, ..., n. Code pairs for each particular k are considered at lines 5-7 of Algorithm 6. At line 5, a set of all pairs of nested component codes with the overall dimension k is selected. As shown above, the minimum distance of the Plotkin sum of codes C^(0) and C^(1) can be computed as min(d(C^(0)), 2·d(C^(1))). In accordance with this rule, a set of code pairs maximizing the minimum distance of the Plotkin sum is identified at line 6, and this set is further employed at line 7 to obtain the pair minimizing the SC decoding error probability estimate. The resulting set of selected code pairs for all k is provided as the output at line 9.
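The pair-selection rule of SelectOuterCodes can be modeled on toy (dimension, distance) descriptors. This sketch is illustrative only: it applies just the distance rule min(d0, 2·d1), omitting the nesting requirement and the SC error-probability tie-break of Algorithm 6.

```python
def best_pairs(codes, k):
    """Among ordered pairs (C0, C1) of codes with dim0 + dim1 == k, keep
    those maximizing the Plotkin-sum distance min(d0, 2*d1). Codes are
    represented as hypothetical (dimension, distance) tuples."""
    pairs = [(c0, c1) for c0 in codes for c1 in codes
             if c0[0] + c1[0] == k]
    if not pairs:
        return []
    dist = lambda p: min(p[0][1], 2 * p[1][1])  # Plotkin distance rule
    best = max(dist(p) for p in pairs)
    return [p for p in pairs if dist(p) == best]
```

For instance, with codes (1, 4), (2, 2) and (3, 1) and overall dimension k = 4, the rule keeps the pairs achieving distance 2 and discards the pair ((3, 1), (1, 4)), whose Plotkin-sum distance is only 1.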
Numerical Results for the Example Algorithm for Polar Codes of Moderate Lengths
Binary phase-shift keying (BPSK) modulation over the AWGN channel has been used to investigate the performance of the codes discussed above. Precoded polar codes of lengths 128 and 256 are constructed as described above with fixed macro parameters. The eBCH polar subcodes and polar codes with CRC from the 5G New Radio standard are used for comparison. An uplink scenario was considered for the 5G New Radio polar codes, where the length of the CRC code is 11. Polar codes with CRC constructed for a downlink scenario demonstrate poor performance, since the CRC length 24 is too high for polar code lengths 128 and 256. The rate of polar codes with CRC was computed as the ratio of the difference between the polar code dimension and the CRC length to the polar code length.
The eBCH polar subcodes are constructed as described above. Note that eBCH codes with different minimum distances could be employed when designing (n, k) polar subcodes. The eBCH codes were selected to minimize the FER of the resulting polar subcodes under SCL decoding with list size L = 64.
Figs. 11 and 12 illustrate the FER performance of the proposed codes and the eBCH polar subcodes listed in Table III, as well as the 5G New Radio polar codes with CRC. Values of the minimum distance d for the proposed codes have been chosen for SCL decoding with L = 64 by considering that increasing d leads to a decreasing ML decoding error probability at the expense of an increasing SC decoding error probability. Minimum distances of the original polar codes are provided in Table III for comparison. Decoding of all codes was performed using the SCL algorithm. It can be seen from Figs. 11 and 12 that the proposed codes outperform the New Radio polar codes with CRC on average by 0.2 dB at FER 10^-5 for SCL decoding with L = 8, 16, 32, 64, 128.
It should be understood that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
Claims
1. A method of data encoding comprising: dividing payload data to be transmitted into a set of information vectors; multiplying each information vector of the set of information vectors by a kxn binary precoding matrix to obtain a corresponding input vector, wherein the information vector includes k bits, n is a power of 2, n>k, and the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under a constraint on a minimum distance d; and multiplying each input vector including n bits by an nxn polarization transformation matrix to obtain a corresponding codeword, wherein the codeword has a Hamming weight of 0 or at least d.
2. The method of claim 1, wherein the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under two constraints.
3. The method of claim 2, wherein the two constraints include a first constraint on the minimum distance d and a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode.
4. The method of claim 1, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for parameters n, k, and d.
5. The method of claims 1 or 4, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for a fixed successive cancellation decoding error probability.
6. The method of claims 1 or 5, wherein the kxn binary precoding matrix includes a kxk identity matrix in k columns corresponding to k nonfrozen bits of a polar code and each row of the kxn binary precoding matrix begins in a column corresponding to a nonfrozen bit.
7. The method of claims 1 or 6, wherein the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under three constraints.
8. The method of claim 7, wherein the three constraints include a first constraint on the minimum distance d, a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode, and a third constraint that the code is a subcode of a preselected supercode.
9. The method of claim 8, wherein the kxn binary precoding matrix is such that the number of lowweight codewords is minimized for parameters n, k, d, a preselected subcode, a preselected supercode, and a fixed successive cancellation decoding error probability.
10. An encoding apparatus for a wireless communication system comprising: a memory storage configured to store instructions; and one or more processors in communication with the memory storage, wherein the one or more processors are configured to execute the instructions to: divide payload data to be transmitted into a set of information vectors, multiply each information vector of the set of information vectors by a kxn binary precoding matrix to obtain a corresponding input vector, wherein the information vector includes k bits, n is a power of 2, n>k, and the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under a constraint on a minimum distance d, and multiply each input vector including n bits by an nxn polarization transformation matrix to obtain a corresponding codeword, wherein the codeword has a Hamming weight of 0 or at least d.
11. The apparatus of claim 10, wherein the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under two constraints, and wherein the two constraints include a first constraint on the minimum distance d and a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode.
12. The apparatus of claim 10, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for parameters n, k, and d.
13. The apparatus of claims 10 or 12, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for a fixed successive cancellation decoding error probability.
14. The apparatus of claim 12, wherein the kxn binary precoding matrix includes a kxk identity matrix in k columns corresponding to k nonfrozen bits of a polar code and each row of the kxn binary precoding matrix begins in a column corresponding to a nonfrozen bit.
15. The apparatus of claims 10 or 14, wherein the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under three constraints.
16. The apparatus of claim 15, wherein the three constraints include a first constraint on the minimum distance d, a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode, and a third constraint that the code is a subcode of a preselected supercode.
17. The apparatus of claim 16, wherein the kxn binary precoding matrix is such that the number of lowweight codewords is minimized for parameters n, k, d, a preselected subcode, a preselected supercode, and a fixed successive cancellation decoding error probability.
Applications Claiming Priority (2)
- AU2020901517 (AU), priority date 2020-05-12: Channel encoding method and a channel encoder
- AU2020901517A0 (AU), priority date 2020-05-12: Channel encoding method and a channel encoder
Publications (1)
- WO2021226665A1, published 2021-11-18
Family ID: 78525844
Family Applications (1)
- PCT/AU2021/050438 (WO2021226665A1), priority date 2020-05-12, filed 2021-05-12, active Application Filing
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

US20190081731A1 (en) *  20160512  20190314  Huawei Technologies Co., Ltd.  Devices and methods implementing polar codes 

2021
 20210512 WO PCT/AU2021/050438 patent/WO2021226665A1/en active Application Filing
Non-Patent Citations (5)
Title

BOLCAR, LAUREN: "On the weights of linear codes and their dual", Fall 2019, pages 1-28, XP055872403 *
TRIFONOV, PETER: "Star Polar Subcodes", 2017 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), IEEE, 19 March 2017, pages 1-6, XP033093163, DOI: 10.1109/WCNCW.2017.7919043 *
TRIFONOV, PETER; MILOSLAVSKAYA, VERA: "Polar codes with dynamic frozen symbols and their decoding by directed search", 2013 IEEE Information Theory Workshop (ITW), IEEE, 9 September 2013, pages 1-5, XP032536315, ISBN: 9781479913213, DOI: 10.1109/ITW.2013.6691213 *
TRIFONOV, PETER; MILOSLAVSKAYA, VERA: "Polar Subcodes", IEEE Journal on Selected Areas in Communications, vol. 34, no. 2, 1 February 2016, pages 254-266, XP011593857, ISSN: 0733-8716, DOI: 10.1109/JSAC.2015.2504269 *
TRIFONOV, PETER; TROFIMIUK, GRIGORII: "A randomized construction of polar subcodes", 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, 25 June 2017, pages 1863-1867, XP033140413, DOI: 10.1109/ISIT.2017.8006852 *
Legal Events
Date  Code  Title  Description

121  Ep: the EPO has been informed by WIPO that EP was designated in this application  (Ref document number: 21804738; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase  (Ref country code: DE)
122  Ep: PCT application non-entry in European phase  (Ref document number: 21804738; Country of ref document: EP; Kind code of ref document: A1)