WO2021226665A1 - "channel encoding method and a channel encoder" - Google Patents


Info

Publication number
WO2021226665A1
Authority
WO
WIPO (PCT)
Prior art keywords
code
codes
precoding matrix
kxn
polar
Prior art date
Application number
PCT/AU2021/050438
Other languages
French (fr)
Inventor
Vera MILOSLAVSKAYA
Branka VUCETIC
Original Assignee
University Of Sydney
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2020901517A external-priority patent/AU2020901517A0/en
Application filed by University Of Sydney filed Critical University Of Sydney
Publication of WO2021226665A1 publication Critical patent/WO2021226665A1/en

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/033 Theoretical methods to calculate these checking codes
    • H03M13/05 Error detection or forward error correction by redundancy in data representation using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 Linear codes
    • H03M13/15 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/151 Cyclic codes using error location or error correction polynomials
    • H03M13/152 Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/29 Combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2903 Methods and arrangements specifically for encoding, e.g. parallel encoding of a plurality of constituent codes
    • H03M13/2945 Combining codes using at least three error correction codes
    • H03M13/63 Joint error correction and other techniques
    • H03M13/6312 Error control coding in combination with data compression

Abstract

A channel encoding method and a channel encoder are disclosed. In an example, a method of data encoding includes dividing payload data to be transmitted into a set of information vectors and multiplying each information vector by a k x n binary precoding matrix to obtain a corresponding input vector. Each information vector includes k bits, and n is a power of 2. Additionally, n is greater than k, and the k x n binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on a minimum distance d. The method also includes multiplying each input vector, including n bits, by an n x n polarization transformation matrix to obtain a corresponding codeword that has a Hamming weight of 0 or at least d. The method minimizes both successive cancellation (SC) and maximum-likelihood (ML) decoding error probabilities.

Description

CHANNEL ENCODING METHOD AND A CHANNEL ENCODER
TECHNICAL FIELD
The present disclosure generally relates to wireless communication technologies, and in particular, to a channel encoding method and a channel encoder.
BACKGROUND
In a communication system, channel encoding is usually performed to improve data transmission reliability and ensure communication quality. Use cases for the next generation of cellular services are divided into three main categories: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communications (URLLC). eMBB and mMTC are designed for services with high bandwidth requirements and communications between delay-tolerant low-throughput machine-type devices, respectively. URLLC focuses on services with strict delay constraints, such as industrial automation, self-driving cars, smart grids, tele-surgery, tactile Internet and intelligent transportation systems. Short block length channel codes with high error-correction capabilities are needed to meet the stringent latency and reliability requirements of URLLC. However, most of the channel codes are designed to approach the channel capacity for long block lengths.
SUMMARY
According to one non-limiting aspect of the present disclosure, there is provided a channel encoding method. In one embodiment, the method includes dividing payload data to be transmitted into information vectors; multiplying each information vector, which includes k bits, by a k x n binary precoding matrix to obtain a corresponding input vector, where n is a power of 2, n > k, and the k x n binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on the minimum distance d; and multiplying each input vector, which includes n bits, by an n x n polarization transformation matrix to obtain a corresponding codeword, where each codeword has a Hamming weight of 0 or at least d.
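The two-step encoding described above (precoding followed by the polarization transform) can be sketched as follows. The matrices in the usage example below are small illustrative placeholders, not matrices constructed by the disclosed optimization method:

```python
def gf2_vecmat(v, m):
    """Multiply a bit vector v by a binary matrix m over GF(2)."""
    n_cols = len(m[0])
    return [sum(v[i] & m[i][j] for i in range(len(v))) % 2 for j in range(n_cols)]

def encode(info_bits, Z, M):
    """Precode a k-bit information vector with the k x n matrix Z,
    then apply the n x n polarization transformation matrix M."""
    input_vector = gf2_vecmat(info_bits, Z)   # u = a * Z  (k bits -> n bits)
    return gf2_vecmat(input_vector, M)        # c = u * M  (codeword)
```

For example, with a placeholder 2 x 4 precoding matrix Z and the 4 x 4 matrix A^⊗2, `encode([1, 0], Z, M)` produces a length-4 codeword.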
According to another non-limiting aspect of the present disclosure, there is provided an encoding device for a wireless communication system. In one embodiment, the encoding device includes a memory storage configured to store instructions, and one or more processors in communication with the memory storage. The one or more processors execute the instructions to divide payload data to be transmitted into information vectors; multiply each information vector, which includes k bits, by a k x n binary precoding matrix to obtain a corresponding input vector, where n is a power of 2, n > k, and the k x n binary precoding matrix is configured to minimize the successive cancellation decoding error probability under a constraint on the minimum distance d; and multiply each input vector, which includes n bits, by an n x n polarization transformation matrix to obtain a corresponding codeword, where each codeword has a Hamming weight of 0 or at least d.
According to the present disclosure, a preferred outcome is that the precoded polar codes can be decoded with much lower computational complexity, their construction is more flexible (the code dimension can take any value), and they have a higher error-correction capability.
It should be understood that the outcomes described herein are not limited, and may be any of or different from the outcomes described in the present disclosure.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgement or admission or any form of suggestion that the prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavor to which this specification relates.
BRIEF DESCRIPTION OF THE FIGURES
Features of a channel encoding method and a channel encoder described herein may be better understood by reference to the accompanying drawings in which:
FIG. 1 is a flowchart illustrating a communication system, according to an embodiment of the present disclosure.
FIG. 2 is a schematic diagram illustrating an example of wireless communication system, according to an embodiment of the present disclosure.
FIG. 3 is a flowchart illustrating a channel encoder, which is a component of a communication system, according to an embodiment of the present disclosure.
FIG. 4 is a flowchart illustrating an example of an encoding process for a (64, 25, 16) precoded polar code, according to an embodiment of the present disclosure.
FIG. 5 is a diagram illustrating performance of (64, 25) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 6 is a diagram illustrating performance of (64, 34) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 7 is a diagram illustrating decoding complexity of (64, 25) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 8 is a diagram illustrating performance of (64, 34) codes under stack successive cancellation decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 9 is a diagram illustrating decoding complexity of (64, 25) codes under sequential decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIG. 10 is a diagram illustrating performance of (64, 34) codes under sequential decoding algorithm with list size L, according to an embodiment of the present disclosure.
FIGS. 11 and 12 are diagrams illustrating FER performance of codes listed in Table III and 5G New Radio polar codes with CRC, according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
The present disclosure generally relates to wireless communication technologies, and in particular, to a channel encoding method and a channel encoder. Embodiments are described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Various code surveys show that the original polar codes, turbo codes, and low-density parity-check (LDPC) codes perform far from the theoretical bounds in the short block length regime, while the best results under optimal decoding are demonstrated by Bose-Chaudhuri-Hocquenghem (BCH) codes, followed by high-memory-order tail-biting convolutional codes. However, convolutional codes are unable to compete with BCH codes in the high signal-to-noise ratio (SNR) regime.
Polar codes can achieve the Shannon capacity of a wide class of data communication channels for large block lengths. These codes have a simple recursive structure, which allows low-complexity decoding methods such as the successive cancellation (SC) algorithm, SC list (SCL) decoding, and their variations.
The original polar codes can be constructed by freezing the least reliable bit-subchannels. This design principle is appropriate for long codes and SC decoding. The original polar codes of short and moderate lengths demonstrate an inferior frame error rate (FER) compared to BCH, LDPC and turbo codes. The reasons are the suboptimality of SC decoding and the poor minimum distance of polar codes. It is possible to fully use the error-correction capability of polar codes by using SCL decoding instead of SC decoding. Several modifications of the polar code construction can be introduced to improve the minimum distance of polar codes.
Polar codes with frozen bit-subchannels selected to maximize the minimum distance result in Reed-Muller codes. Reed-Muller codes have a smaller FER than the original polar codes under maximum-likelihood (ML) decoding, but a higher FER under SC decoding. Hybrid codes, combining the constructions of Reed-Muller codes and the original polar codes, use a set of frozen bit-subchannels that "interpolates" between the set of frozen bit-subchannels of a Reed-Muller code and that of the original polar code. It should be understood that Reed-Muller codes are channel-independent, while polar codes are fine-tuned to the channel polarization phenomenon. Minimum distance properties may also be improved by concatenation of an inner polar code with an outer cyclic redundancy check (CRC) code.
It is possible to generalize the polar code construction by introducing dynamic frozen bits. As opposed to the original frozen bits, dynamic frozen bits are functions of information bits. These functions are called dynamic freezing constraints. The appropriate selection of dynamic freezing constraints leads to a further increase in the minimum distance. Code performance under SCL decoding depends on both the indices of frozen bits and the dynamic freezing constraints, while only the indices of frozen bits are important for SC decoding. Dynamic freezing constraints can be derived from high-rate extended narrow-sense primitive BCH (e-BCH) codes; the resulting codes are called polar subcodes of e-BCH codes. The tradeoff between the SC and ML decoding error probabilities depends on the selected rate of the e-BCH code. BCH codes are known for their high minimum distance. Binary narrow-sense primitive BCH codes have length 2^m - 1 and form the most well-studied subclass of BCH codes. Randomized polar subcodes give a performance improvement under SCL due to a reduced number of low-weight codewords.
A heuristic construction of parity-check-concatenated polar codes exists. It should be understood that parity checks may be viewed as freezing constraints. An algorithm to generate connections between information symbols and parity checks was developed, but no results on the achieved minimum distance were provided.
In the present disclosure, new short length codes configured to minimize the SC decoding error probability of precoded polar codes with a specified minimum distance are provided. The short length codes may be decoded through SCL and its variations. Numerical results show that the codes of lengths 32 and 64 provide a significant reduction of the decoding error probability under stack SC and sequential decoding with moderate list sizes compared to polar subcodes of e-BCH codes. The choice of polar subcodes of e-BCH codes is explained by their high minimum distance. The codes have the same lengths and dimensions as known codes, hence the same code rates. The obtained weight enumerator polynomials and the tangential-sphere bound indicate that the reduction of the SC decoding error probability is achieved without sacrificing the error probability under ML decoding.
The present disclosure also provides a method to recursively construct a collection of precoded polar codes that are optimized for SCL-based decoding. The proposed code design minimizes the SC decoding error probability estimate for each precoded polar code under three constraints. The first constraint is the minimum Hamming distance requirement, which improves the ML performance of the resulting code. The other two constraints specify that the resulting code belongs to a preselected code, called a supercode, and includes another preselected code, called a subcode. The subcode ensures a reduction of the search space size, which is the space of all row candidates for the generator matrix. In the case of precoded polar codes, the search for rows of a generator matrix is equivalent to the search for rows of a precoding matrix. The subcode is selected among codes of lower dimension constructed by the same disclosed method. The supercode is obtained as the Plotkin sum of codes of half the length constructed earlier by the same method, where the component codes are nested to simplify the computation of low-weight codewords. These low-weight codewords are used to ensure fast computation of the minimum Hamming distance. Note that not only the exact number of minimum-weight codewords is computed, but also the exact values of the first several terms of the code spectrum. The proposed choice of supercodes and subcodes leads to a recursive structure and many nested codes in the resulting code collection. The simulation results indicate that the proposed precoded polar codes of lengths 128 and 256 provide a much lower block error probability than the original polar codes, polar codes with CRC from 5G New Radio, and e-BCH polar subcodes under stack SC decoding.
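The Plotkin sum referred to above is the classical (u | u + v) construction: it combines two length-n codewords into one length-2n codeword by concatenating u with the GF(2) sum u + v. A minimal sketch at the codeword level (the input vectors below are illustrative):

```python
def plotkin_sum(u, v):
    """(u | u + v) construction: concatenate u with the GF(2) sum u + v."""
    assert len(u) == len(v), "component codewords must have equal length"
    return u + [a ^ b for a, b in zip(u, v)]
```

Applied to whole codes, taking u from an (n, k1, d1) code and v from an (n, k2, d2) code yields a (2n, k1 + k2, min(2*d1, d2)) code, which is why the component codes' low-weight codewords determine the supercode's low-weight codewords.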
Fig. 1 is a flowchart illustrating a communication system, which can employ the new codes for channel encoding according to embodiments of the present disclosure. Data, produced by an information source 104, is processed by a transmission system 101. The transmission system 101 comprises a source encoder 105 to compress data, a channel encoder 106 to ensure reliability by introducing redundancy into data, a modulator 107 to convert symbols into signals, and a transmitter 108 to send communication signals into a transmission/storage medium 102, which may introduce noise into the signals. A reception system 103 comprises a receiver 109 to detect and receive the noisy signals, a demodulator 110 to produce soft or hard symbol estimates, a channel decoder 111 to reconstruct information from the soft/hard symbol estimates, and a source decoder 112 to decompress information, which is further delivered to an information destination 113. It will readily be appreciated that the demodulator 110, channel decoder 111, and source decoder 112 perform the inverse of the operations performed by the modulator 107, the channel encoder 106, and the source encoder 105, respectively, subject to limitations imposed by noise effects and other non-idealities in the system. A properly designed and operated communication system should deliver data from the information source 104 to the information destination 113 in unaltered form with high probability.
It is to be understood that the present principles may be applied to various transmission systems and media. For example, the transmission medium can be represented by space or the atmosphere, in which case the communication system in Fig. 1 is a wireless communication system. If the transmission medium is represented by a wire, e.g. a fiber optic cable, then the communication system in Fig. 1 is a wired communication system. Hard disk drives, solid state disks, optical disk drives and magnetic tapes are examples of storage media employed in storage systems. Embodiments of the present disclosure can be implemented for both wireless and wired transmission systems, as well as for storage systems. The present disclosure focuses on wireless communication systems; in such a case, the transmission system 101 and the reception system 103 include antenna(s) coupled respectively to the transmitter 108 and the receiver 109.
Fig. 2 shows an example of a wireless communication system to which embodiments of the present disclosure can be applied. A network device 201 communicates with one or more terminal devices 206. Examples of the terminal devices 206 are a mobile device 202, an automated car 203, a drone 204, and an industrial robot 205, which are related to URLLC application scenarios. The network device 201 may be a base station or another device with similar communication functions. The network device 201 and the terminal devices 206 in Fig. 2 can include both the transmission system 101 and the reception system 103 from Fig. 1. The devices 201-206 are transmission systems when sending signals, and reception systems when receiving signals. As transmission systems, the devices 201-206 include channel encoders and perform encoding as described in the present disclosure.
Fig. 3 is a flowchart illustrating a channel encoder, which is a component of the communication system from Fig. 1. The channel encoder 307 includes components related to offline code design and an online encoding process. The online encoding process includes splitting input data into information vectors 304, encoding of the information vectors with the precoding matrix 305, and further encoding with the polarization transformation matrix 306 according to embodiments of the present disclosure. Encoding with the precoding matrix 305 is performed in accordance with system settings provided by a code selector 303. The code selector 303 is configured to choose code parameters and extract the code specification from precoding matrix storage 302, which contains precoding matrices described in embodiments of the present disclosure. The precoding matrices are generated in advance, offline, by a precoding matrix constructor 301, operating according to the SC-ML optimization algorithm described in embodiments of the present disclosure.
Polar Codes
An (n = 2^m, k) pure polar code with a set of frozen bits T ⊆ U, |T| = n − k, is a binary linear block code generated by the k rows M_i of the matrix M with indices i ∈ U \ T, where M = B · A^⊗m is the n x n polarization transformation matrix, A = [[1, 0], [1, 1]] is the 2 x 2 kernel of polarization, ⊗m denotes the m-fold Kronecker product of a matrix with itself, B is a bit-reversal permutation matrix, and U = {0, ..., n − 1}.
Alternatively, an (n = 2^m, k) pure polar code can be specified by the set of information bits 𝒜 = U \ T, |𝒜| = k. Any codeword c = (c_0, ..., c_{n−1}) of a polar code can be represented as c = u · M, where u = (u_0, ..., u_{n−1}) is an input vector consisting of k information bits and n − k frozen bits fixed to predefined values known to the decoder, the positions of the frozen bits being specified by T. For simplicity, it is assumed that the frozen bits of a pure polar code are zero. The minimum distance of a pure polar code is given by the lowest row weight among the rows of the matrix M with indices from the information set 𝒜, since the minimum distance is lower bounded by the minimum distance of a Reed-Muller code, which is a supercode of the polar code. More precisely, the minimum distance of a pure polar code is given by
d = min{ 2^wt(i) | i ∈ 𝒜 },
where wt(i) is the Hamming weight of the binary representation of the integer i. Thus, the minimum distance is equal to a power of two and is defined by the set T, i.e. by the indices of the frozen bits.
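The polarization transformation matrix described above can be built directly from its definition; a sketch assuming the standard Arikan kernel A = [[1, 0], [1, 1]] and the bit-reversal permutation (the row weights then satisfy the 2^wt(i) rule):

```python
def kron_gf2(a, b):
    """Kronecker product of two binary matrices over GF(2)."""
    return [[a[i][j] & b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

def bit_reverse(i, m):
    """Reverse the m-bit binary representation of the integer i."""
    return int(format(i, f'0{m}b')[::-1], 2)

def polarization_matrix(m):
    """M = B * A^(x)m: rows of the m-fold Kronecker power of the kernel,
    permuted by bit reversal."""
    A = [[1, 0], [1, 1]]
    Am = A
    for _ in range(m - 1):
        Am = kron_gf2(Am, A)
    n = 1 << m
    return [Am[bit_reverse(i, m)] for i in range(n)]
```

For example, `polarization_matrix(2)` returns a 4 x 4 matrix in which row i has Hamming weight 2^wt(i), consistent with the minimum distance formula above.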
An (n, k) pure polar code has a set of frozen bits T given by the indices of the n − k "worst" bit-subchannels. More precisely, they are chosen as the n − k bit-subchannels with the highest Bhattacharyya bound on the probability of a decision error at the i-th input bit. Likewise, bit-subchannels are commonly ranked according to their probabilities of error, which is a more straightforward criterion. Polar codes constructed according to this rule are referred to as the original polar codes. The original polar codes are designed to minimize an upper bound on the SC decoding error probability.
In the general case of binary-input memoryless output-symmetric channels, estimating the quality of the bit-subchannels requires employing density evolution. An efficient implementation of density evolution was developed in the prior art. A low-complexity alternative based on a Gaussian approximation of density evolution was developed for the AWGN channel in the prior art.
SC and SCL Decoding
Consider a binary-input memoryless output-symmetric channel with output alphabet Y and transition probabilities P(y | 0) and P(y | 1), y ∈ Y. The SC decoding of a pure polar code of length n = 2^m is performed in n phases. At the φ-th phase of the SC algorithm, an estimate û_φ of the input bit u_φ ∈ {0, 1} is computed, where φ = 0, ..., n − 1. Since the frozen bits u_φ, φ ∈ T, have predefined fixed values known to the decoder, their estimates û_φ are set to these values. The estimate û_φ of an information bit u_φ, φ ∈ 𝒜, is defined by the log-likelihood ratio (LLR)
L_φ = ln( P(y, û_0, ..., û_{φ−1} | u_φ = 0) / P(y, û_0, ..., û_{φ−1} | u_φ = 1) ),
where y = (y_0, ..., y_{n−1}) is the received noisy vector. If L_φ ≥ 0, then û_φ ← 0; otherwise û_φ ← 1.
The SC decoding error probability can be estimated as
P_e^(SC) = 1 − ∏_{i ∈ 𝒜} P_i,
where P_i is the probability of correct decoding of the i-th bit-subchannel given that a genie has revealed the true values of the previous bits u_j, 0 ≤ j < i. Here correct decoding corresponds to the event û_i = u_i. The error-correction capability can be improved using SCL decoding, which constructs a list of vectors (u_0, ..., u_{n−1}) instead of a single one. Observe that SCL decoding with list size L = 1 reduces to SC decoding. The performance of SCL decoding with a sufficiently large list size approaches the performance of maximum-likelihood (ML) decoding. Similar performance, with a much lower complexity compared to SCL, can be obtained by employing the sequential decoding algorithm.
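The LLRs above are computed recursively from two combining operations, often called the f (check-node) and g (variable-node) updates. A sketch using the common min-sum approximation of the exact f update (the exact formula uses tanh; this approximation is standard practice, not specific to the disclosure):

```python
import math

def f_update(l1, l2):
    """Check-node (upper-branch) LLR combine, min-sum approximation."""
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))

def g_update(l1, l2, u):
    """Variable-node (lower-branch) LLR combine, given the partial-sum bit u."""
    return l2 + (1 - 2 * u) * l1
```

A full SC decoder applies these two updates over the m stages of the transform, descending to single-bit decisions and propagating partial sums back up.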
Polar-Like Codes: Dynamic Frozen Bits
It is possible to generalize the polar code construction by setting frozen bits to some linear functions of the information bits instead of fixed values. This does not affect the behaviour of the SC decoder and the performance of bit-subchannels of the polarization transformation.
Since the n x n matrix M is invertible, any (n = 2^m, k, d) binary linear code C with check matrix H can be obtained as an appropriate subspace of the row space of the matrix M. Constraints on the input vector u may be specified by the equation u · M · H^T = 0. By applying Gaussian elimination, one can construct a freezing constraint matrix V = Q · H · M^T such that the values p(i) = max{ t | V_{i,t} = 1 } are distinct for i = 0, ..., n − k − 1, where Q is an (n − k) x (n − k) invertible matrix. That is, the i-th row of the matrix V ends in the distinct column p(i). The set of frozen bits is given by T = { p(i) | 0 ≤ i < n − k }. SC/SCL decoding is enabled by computing the values of the frozen bits as
u_p(i) = Σ_{j=0}^{p(i)−1} V_{i,j} · u_j (mod 2).
A frozen bit u_p(i) equal to a linear combination of the preceding input bits is referred to as a dynamic frozen bit.
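Given a row of the freezing constraint matrix V as described above, the value of a dynamic frozen bit is a GF(2) linear combination of the preceding input bits. A minimal sketch (the row and bit values in the example are illustrative):

```python
def dynamic_frozen_value(v_row, p, u):
    """Compute u[p] as the XOR of v_row[j] * u[j] over j < p,
    i.e. the value of a dynamic frozen bit at position p."""
    val = 0
    for j in range(p):
        val ^= v_row[j] & u[j]
    return val
```

During SC/SCL decoding this is evaluated on the fly: once bits u_0, ..., u_{p(i)-1} have been decided, the frozen bit at position p(i) is fixed to the returned value rather than to zero.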
A construction of polar codes with dynamic frozen bits based on narrow-sense primitive e-BCH codes was introduced in the prior art. According to it, a code of length n = 2^m and dimension k is constructed as follows. First, an e-BCH code with parameters (n, K, d) is selected such that the minimum distance d is sufficiently high and the dimension K ≥ k. This e-BCH code is used to derive freezing constraints for n − K dynamic frozen bits. Then K − k additional freezing constraints are imposed to obtain an (n, k, ≥ d) code, k < K, called a polar subcode of the e-BCH code. Here the notation (n, k, ≥ d) denotes a code of length n, dimension k and minimum distance at least d.
Polar subcodes of e-BCH codes outperform the pure polar codes under the SCL decoding with a sufficiently large list size. The SC decoding error probability for a code obtained in this way is not less than that of the original polar codes constructed for the same channel.
Minimum Distance
It should be understood that the minimum distance denotes the minimum Hamming distance throughout this disclosure. The minimum distance of pure polar codes scales at most roughly as √n. Since their minimum distance is relatively low, the original polar codes are unable to compete with e-BCH codes and Reed-Muller codes under ML decoding. The performance of codes in the high signal-to-noise ratio (SNR) region is mostly defined by the minimum distance. A Reed-Muller code can be represented as a pure polar code designed to maximize the minimum distance. Thus, it is possible to construct pure polar codes with a higher minimum distance than that of the original polar codes, i.e. the ML decoding error probability may be reduced by an appropriate selection of the frozen bit-subchannels. So, there are pure polar codes able to outperform the original polar codes under SCL decoding. However, the minimum distance of a pure polar code, as well as of a Reed-Muller code, may only be equal to a power of two. Moreover, even if there exists an (n = 2^m, k, d = 2^w) binary linear block code, there is no guarantee of the existence of a pure polar code with the same parameters; e.g., there is a (64, 24, 16) e-BCH code, but the highest dimension of a pure polar code of length 64 with minimum distance 16 is 22. So, polar codes with dynamic frozen bits are able to outperform pure polar codes under SCL decoding.
It should be understood that, in the case of polar code design, the price for a performance improvement under ML decoding is a performance degradation under SC decoding, since the increase in the minimum distance is achieved by changing the set of frozen bits of the original polar code, which is optimal for the SC decoder. A modification of the set of frozen bits of the original polar code leads to a number of unreliable bit-subchannels being unfrozen.
The present disclosure focuses on the performance of polar codes under SCL decoding with various list sizes.
According to an embodiment of the present disclosure, a problem of optimization of precoded polar codes for SCL decoding is considered. To reduce the search space, precoding matrices with a specific structure are provided. A code design that minimizes a function of both SC and ML decoding error probabilities for short-length codes is introduced afterwards.
Precoded Polar Codes
A method of representing any binary linear block code of length n = 2^m as a polar code with dynamic frozen symbols is described above. According to that method, a polar code with dynamic frozen symbols is specified by a freezing constraint matrix, which is derived from the check matrix. This code representation is used to perform decoding through SCL and its variations.
In general, one can also represent any linear block code C of length n = 2^m as a polar code precoded by a k × n matrix Z = G · M^-1, where G is a generator matrix of code C. Such a precoding matrix Z always exists, since the polarization transformation matrix M is invertible. Note that one can perform linear operations over rows of the precoding matrix Z without affecting code C, i.e. use any precoding matrix T · Z, where T is a k × k invertible matrix. Such codes are referred to as precoded polar codes, since their encoding process includes precoding followed by polar encoding. It is desirable to identify precoding matrices which minimize the SCL decoding error probability. There is no analytical expression for the SCL decoding error probability, so a search for (n, k, ≥ d) precoded polar codes with both low SC and low ML decoding error probabilities is proposed. The SC decoding error probability P_e^(SC) can be estimated as described above.
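The per-subchannel reliabilities underlying P_e^(SC) can be illustrated with a short sketch. The disclosure uses a Gaussian approximation for the AWGN channel; as a simpler stand-in, the sketch below tracks Bhattacharyya parameters of a binary erasure channel through the polarization recursion and combines the information-subchannel values into a union-style estimate. The function names and the erasure probability 0.5 are illustrative assumptions, not part of the disclosure.

```python
def bhattacharyya(m, z0):
    # BEC Bhattacharyya parameters of the 2^m polarized bit-subchannels
    z = [z0]
    for _ in range(m):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)  # "minus" (less reliable) subchannel
            nxt.append(zi * zi)           # "plus" (more reliable) subchannel
        z = nxt
    return z

def p_sc(z, info_set):
    # union-style estimate: P_e^(SC) <= 1 - prod(1 - Z_i) over info subchannels
    prod = 1.0
    for i in info_set:
        prod *= 1.0 - z[i]
    return 1.0 - prod

z = bhattacharyya(3, 0.5)
est_good = p_sc(z, [3, 5, 6, 7])  # the four most reliable subchannels
est_bad = p_sc(z, [0, 1, 2, 4])   # the four least reliable subchannels
assert abs(z[7] - 0.5 ** 8) < 1e-12  # all-"plus" subchannel: z0^(2^m)
assert 0 < est_good < est_bad < 1
```

The same union-style combination applies when the per-subchannel error probabilities come from the Gaussian approximation instead of the BEC recursion.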
An estimate of the ML decoding error probability P_e^(ML) can be obtained through simulations or as an upper/lower bound value. There are a number of methods to estimate the ML decoding error probability. Most of the bounds on the ML decoding error probability for linear block codes are functions of the weight enumerator polynomial (spectrum), where either all components of the code spectrum are employed or only a part of them (partial spectrum). According to one implementation, the tangential-sphere upper bound is employed, which is one of the tightest bounds established for binary linear block codes and is also based on the code spectrum. In most cases, codes are designed for sufficiently high SNRs. The focus is code optimization for a target P_e^(ML) in the range 10^-4 - 10^-2, which corresponds to a relatively high SNR. In the high SNR regime, P_e^(ML) mostly depends on the code minimum distance and, to a lesser degree, on the precise values of the non-zero components of the code spectrum.
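As a lightweight alternative to the tangential-sphere bound, a spectrum-based union bound conveys the same dependence of P_e^(ML) on the code spectrum. The sketch below is an illustration under stated assumptions (BPSK over AWGN, the (8, 4, 4) first-order Reed-Muller spectrum as sample input), not the bound used in the disclosure.

```python
from math import erfc, sqrt

def q_func(x):
    # Gaussian tail function Q(x)
    return 0.5 * erfc(x / sqrt(2))

def union_bound_ml(spectrum, n, k, ebno_db):
    # spectrum: {weight w: number of codewords A_w}; BPSK over AWGN
    ebno = 10 ** (ebno_db / 10)
    rate = k / n
    # sum of pairwise error probabilities over all non-zero weights
    return sum(a_w * q_func(sqrt(2 * w * rate * ebno))
               for w, a_w in spectrum.items())

# (8, 4, 4) first-order Reed-Muller spectrum: A_4 = 14, A_8 = 1
p = union_bound_ml({4: 14, 8: 1}, 8, 4, 4.0)
assert 0 < p < 1
assert union_bound_ml({4: 14, 8: 1}, 8, 4, 6.0) < p  # tighter at higher SNR
```

At high SNR the dominant term is the one with w = d, which is why P_e^(ML) is governed mostly by the minimum distance.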
Since the number of (n, k, ≥ d) precoded polar codes is equal to the total number of (n, k, ≥ d) codes, it is computationally infeasible to select an (n, k, ≥ d) precoded polar code with the lowest SC decoding error probability P_e^(SC) by performing a full search and computing P_e^(SC) for each code. However, one can at first select a set of frozen bits T. Recall that P_e^(SC) depends only on the set of frozen bits and channel properties. But there is no guarantee that a code with minimum distance at least d exists for the preselected T. This is in contrast to designing a pure polar code, where exactly one code exists for any given set T, length n = 2^m and SNR. Recall that in case of near-ML decoding, the low minimum distance of pure polar codes leads to inferior performance compared to extended BCH codes, quadratic residue codes and other codes with a high minimum distance. In the present disclosure, it is assumed that the SNR is fixed to some specific value and that all codes are optimized for that SNR.
Polar Codes as Generalized Concatenated Codes
In some instances, any pure polar code can be represented as a generalized concatenated code (GCC). Since the polarization transformation matrix has a recursive Kronecker-product structure, the generator matrix G of an (n = 2^m, k) pure polar code can be represented as G = Z · M, where Z is a full-rank k × n precoding matrix with at most one non-zero element in each column. Thus, the overall polar code of length 2^m is decomposed into a number of outer codes of length 2^(m-l) and inner codes of length 2^l, where the outer and inner codes are also pure polar codes. Note that if l = 1, then the above expression corresponds to a Plotkin sum of two linear codes C0 and C1, resulting in a linear code C given by C = {(c0 ⊕ c1 | c1) : c0 ∈ C0, c1 ∈ C1}. If the parameters of codes C0 and C1 are (n/2, k0, d0) and (n/2, k1, d1), respectively, then the resulting code C has parameters (n, k = k0 + k1, d = min(d0, 2 · d1)).
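The Plotkin decomposition above can be checked exhaustively for small codes. The following sketch builds the Plotkin sum of a (4, 1, 4) repetition code and a (4, 3, 2) single parity check code and verifies the parameters (n, k, d) = (8, 4, min(d0, 2·d1)); the helper names are illustrative.

```python
from itertools import product

def codewords(gen):
    # enumerate the code spanned by generator rows over GF(2)
    n = len(gen[0])
    words = set()
    for bits in product([0, 1], repeat=len(gen)):
        w = [0] * n
        for b, row in zip(bits, gen):
            if b:
                w = [x ^ y for x, y in zip(w, row)]
        words.add(tuple(w))
    return words

def plotkin(c0, c1):
    # C = {(c0 + c1 | c1)}: left half carries the sum, right half c1 alone
    return {tuple(a ^ b for a, b in zip(u, v)) + v for u in c0 for v in c1}

rep = codewords([[1, 1, 1, 1]])                              # (4, 1, 4)
spc = codewords([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])  # (4, 3, 2)
c = plotkin(rep, spc)
dmin = min(sum(w) for w in c if any(w))
assert len(c) == 16 and dmin == 4  # (8, 4, 4): d = min(4, 2 * 2)
```

The resulting (8, 4, 4) code is the first-order Reed-Muller code of length 8, consistent with the GCC view of polar codes.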
The code performance could be improved by replacing outer polar codes with e-BCH codes, Reed-Solomon codes, convolutional codes, LDPC codes or any other codes with high error correction capability. The cost of better performance is the necessity to use appropriate decoders for outer codes, such as an ordered statistics decoder or a box and match decoder in case of e-BCH codes, which lead to a substantial increase in the overall computational complexity of decoding.
SC Aimed Precoding Matrix
Consideration is given to designing a precoding matrix for a polar code with dynamic frozen bits assuming that a set of frozen bits T is already selected. The presence of a fixed set T restricts possible dependencies between input bits of the polarizing transformation. Dynamic frozen bits are equal to linear combinations of bits with lower indices only. Instead of designing a code by optimizing dynamic freezing constraints, rows of the precoding matrix are optimized.
Generator matrix G is represented as a product of the k × n precoding matrix Z and the n × n polarization transformation matrix M, that is, G = Z · M. By multiplying both sides by the transposed check matrix H^T, one obtains Z · M · H^T = 0. Recall that the dynamic freezing constraint matrix V = Q · H · M^T, where Q is an (n - k) × (n - k) invertible matrix, so Z · V^T = 0. Thus, precoding matrix Z provides an alternative definition of a polar code with dynamic frozen bits. A polar code with dynamic frozen bits may be specified by set T together with either the freezing constraint matrix V or the precoding matrix Z. The k × k identity matrix is denoted by I^(k×k).
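The relation G = Z · M can be verified directly over GF(2). Since the polarization kernel [[1, 0], [1, 1]] squares to the identity modulo 2, M is its own inverse, so Z = G · M. The sketch below computes Z for an example (8, 4, 4) generator matrix; the helper names are illustrative.

```python
def kron_f(m):
    # M = m-fold Kronecker power of F = [[1, 0], [1, 1]] over GF(2)
    M = [[1]]
    for _ in range(m):
        M = [r + [0] * len(r) for r in M] + [r + r for r in M]
    return M

def matmul2(A, B):
    # matrix product over GF(2)
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

M = kron_f(3)
I8 = [[1 if i == j else 0 for j in range(8)] for i in range(8)]
assert matmul2(M, M) == I8  # M is self-inverse over GF(2)

# example generator matrix of the (8, 4, 4) first-order Reed-Muller code
G = [[1, 1, 1, 1, 1, 1, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1]]
Z = matmul2(G, M)          # Z = G * M^-1 = G * M over GF(2)
assert matmul2(Z, M) == G  # the precoding matrix reproduces G
```

Any row-equivalent matrix T · Z with invertible T specifies the same code, as noted above.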
It can be shown that, given dynamic freezing constraint matrix V, there exists a precoding matrix Z such that the columns with indices from the information set A form an identity matrix, i.e. Z_{·,A} = I^(k×k), and the q-th row begins at a position s(q) such that |{t | t < s(q) & t ∈ A}| = q. Here, "the q-th row begins at position s" means that Z_{q,s} = 1 and Z_{q,s′} = 0 for all s′ < s.
Precoding matrices Z satisfying this condition are referred to as SC aimed precoding matrices. It follows from this condition that it is sufficient to perform the search only over SC aimed precoding matrices Z. The number of SC aimed precoding matrices Z depends on the selected set T. This can be illustrated by two extreme cases. In case of T = {0, ..., n - k - 1}, there is only one SC aimed precoding matrix Z, more precisely, Z = [0^(k×(n-k)) | I^(k×k)], where 0^(k×(n-k)) denotes a k × (n - k) zero matrix; a SC/SCL decoder processes at first all frozen bits and only then starts to make decisions on the values of the information bits. The other extreme case is T = {k, ..., n - 1}. The number of precoding matrices in this case is equal to 2^(k×(n-k)) and Z = [I^(k×k) | D], where D is an arbitrary k × (n - k) binary matrix. In general, bit-subchannels with higher indices have better reliabilities, so frozen sets with low P_e^(SC) are closer to the first extreme case, and the number of SC aimed precoding matrices is moderate for code lengths 32 and 64, meaning that it is possible to perform a full search over these matrices.
Fixed and Variable Rows of Precoding Matrix
The structure of a SC aimed precoding matrix Z is such that the i-th row of generator matrix G = Z · M is given by a linear combination of the i-th information row of matrix M and some frozen rows of M with indices j > i. Rows of M with indices in the information set and in the frozen set are referred to as information rows and frozen rows, respectively. Thus, all information rows of M with indices higher than max(T) are included in generator matrix G in unaltered form. These rows are referred to as fixed rows, since any SC aimed precoding matrix Z corresponding to T must include them. The number of fixed rows is equal to n - 1 - max(T). If the weight of at least one fixed row is lower than d, then it is impossible to construct a matrix G such that the code minimum distance is at least d, and the corresponding frozen set T should be dismissed. Recall that the minimum distance of a pure polar code is equal to the smallest row weight of its generator matrix, which consists of the information rows of M. The remaining non-fixed rows of generator matrix G are referred to as variable rows.
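The fixed-row weight check described above can be sketched as follows. Row i of M has Hamming weight 2^wt(i), where wt(i) is the number of ones in the binary expansion of i, so the check reduces to inspecting the fixed-row indices; the example frozen set below is illustrative.

```python
def kron_f(m):
    # polarization transformation matrix over GF(2)
    M = [[1]]
    for _ in range(m):
        M = [r + [0] * len(r) for r in M] + [r + r for r in M]
    return M

M = kron_f(3)
weights = [sum(row) for row in M]
# row i of M has Hamming weight 2^wt(i)
assert weights == [1, 2, 2, 4, 2, 4, 4, 8]

frozen, d = {0, 1, 2, 4}, 4
fixed = [i for i in range(8) if i not in frozen and i > max(frozen)]
assert all(weights[i] >= d for i in fixed)      # frozen set is kept for d = 4
assert not all(weights[i] >= 8 for i in fixed)  # it would be dismissed for d = 8
```

A frozen set failing this check can be discarded before any variable rows are searched.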
Optimization Criteria
The precoded polar code design problem is a problem of constrained multi-objective optimization, where the constraint is given by the desired minimum distance d, the first objective function is P_e^(SC) and the second objective function is P_e^(ML). Observe that construction of the whole Pareto-optimal set is infeasible due to the large number of codes. The use of the weighted sum method to produce a single objective by weighting P_e^(SC) and P_e^(ML) is cumbersome, since the relative importance of P_e^(SC) and P_e^(ML) varies depending on the list size of the SCL decoder. The present disclosure focuses on SCL decoding with moderate list sizes and, therefore, the highest priority is given to the first objective P_e^(SC).
The code design algorithm finds a code C of length n and dimension k satisfying the following criteria:
1) its minimum distance is at least d;
2) its frozen set T* ∈ F satisfies the condition P_e^(SC)(T*) ≤ P_e^(SC)(T) for all frozen sets T ∈ F corresponding to (n, k, ≥ d) codes with (optional) subcode C- and (optional) supercode C+;
3) (optional) its SC aimed precoding matrix Z* satisfies the condition P_e^(ML)(Z* · M) ≤ P_e^(ML)(G) for generator matrices G of all (n, k, ≥ d) codes with frozen sets T such that P_e^(SC)(T) = P_e^(SC)(T*);
4) (optional) its SC aimed precoding matrix Z* satisfies P_e^(ML)([Z*_i; ...; Z*_{k-1}] · M) ≤ P_e^(ML)([z; Z*_{i+1}; ...; Z*_{k-1}] · M) for all rows z such that [z; Z*_{i+1}; ...; Z*_{k-1}] is a SC aimed precoding matrix specifying a code with the same frozen set as [Z*_i; ...; Z*_{k-1}];
5) (optional) C contains a preselected code C- as a subcode, i.e. C- ⊆ C; and
6) (optional) C is contained in a preselected code C+ as in a supercode, i.e. C ⊆ C+;
where F is a set of preselected frozen sets T ⊂ {0, ..., n - 1}, and P_e^(ML)(G) is an estimate of the maximum-likelihood decoding error probability computed for a linear block code with generator matrix G. The semicolon in [z; Z′] stands for the vertical concatenation of two matrices. P_e^(SC)(T) is calculated as described above. As mentioned above, the tangential-sphere bound is used to obtain the estimate P_e^(ML) and the numerical results for codes of length 32 and 64. As provided below, two optimization modes are considered:
• global, codes satisfy criteria 1, 2, 3 and automatically 4; and
• local, codes satisfy criteria 1, 2, 4.
In contrast to the local mode, the global optimization mode guarantees compliance with the 3rd criterion, which stands for finding the global minimum of P_e^(ML) over all (n, k, ≥ d) codes with the lowest possible P_e^(SC). Criteria 3 to 6 are marked as "optional", since a criterion may not be satisfied by the proposed code design algorithm. In case of the local mode, the 3rd criterion is relaxed to reduce the computational complexity, resulting in the 4th criterion. The proposed code design approach in the local mode is an instance of a greedy algorithm.
It should be recognized that the proposed approach in both modes ensures the global minimum of P_e^(SC) under the constraint on the minimum distance d, while the words "local" and "global" characterize the minimization of P_e^(ML). According to the simulation results, the global mode is suitable for all codes of length 32, and for codes of length 64 with a low number of variable rows. Therefore, the local mode is employed for the remaining codes of length 64. Optional criteria 5 and 6 are used to reduce the computational complexity. Criterion 6 ensures fast computation of the minimum distance, which is required to meet criterion 1. Criterion 5 ensures a reduction of the search space size, which is the space of all row candidates for the generator matrix. In case of precoded polar codes, the search for rows of the generator matrix is equivalent to the search for rows of the precoding matrix.
The complexity of the proposed code design approach is dominated by the minimum distance computation. In case of codes of length up to 64, an exhaustive search over all codewords of a code C or its dual code may be performed to find the minimum distance and spectrum of code C. In case of codes of length higher than 64, codes are designed in accordance with criterion 6, where a preselected supercode C+ is used together with a list of all its codewords of weight up to (d - 1). The presence of this list ensures low-complexity computation of the minimum distance d′ and the partial spectrum of code C if C ⊂ C+ and d′ < d.
According to one implementation, each of C- and C+ is obtained using the proposed code design algorithm, or is given by the Plotkin sum of two codes of length n/2, where the two codes of length n/2 are constructed using the proposed code design algorithm and a permutation is optionally applied to the codewords of the resulting code.
Example Algorithm
An example algorithm for the design of short-length precoded polar codes that ensures a specified minimum distance and the lowest possible SC decoding error probability P_e^(SC) is provided. This is the first algorithm able to minimize P_e^(SC) for given code parameters (n, k, ≥ d).
Variable rows of the k × n precoding matrix Z are constructed one by one. A row Z_i specifies the corresponding row G_i of the generator matrix, 0 ≤ i < k. Among all row candidates for parameters (n, k, ≥ d), eligible candidates are selected by checking the minimum distance of the code generated by the rows selected so far. The number of candidates for row Z_i is equal to N(i) = 2^|{t : t ∈ T & t > s(i)}|. It can be seen that N(i) is a non-increasing function of i, since s(i) increases with i. Rows of Z are generated in decreasing order of their indices i to reduce the total number of row candidates to be considered, 0 ≤ i < k. A partially constructed matrix Z is referred to as a path. A score is assigned to each path, and pairs (path, score) are stored in a priority queue. Observe that it is sufficient to store only the variable rows of paths constructed for the same frozen set T, since the fixed rows are the same for these paths. At each iteration of the proposed algorithm, a path with the best score is selected from the queue, and the search for the next variable row is performed. The priority queue contains paths of various lengths. The score is a function of the SC decoding error probability P_e^(SC), the estimate of the ML decoding error probability P_e^(ML), and the path length equal to the number of already selected rows, i.e. Score(path) is a function of P_e^(SC), P_e^(ML) and RowNum(path). As mentioned above, the tangential-sphere bound (TSB) is employed to obtain the estimate P_e^(ML) for codes of length 32 and 64. The lowest value of P_e^(SC) is guaranteed by employing such score functions.
During simulations, score functions for global and local optimization of P_e^(ML) were considered. Here, global optimization means finding the global minimum of P_e^(ML) among all (n, k, ≥ d) codes with the lowest possible P_e^(SC). In case of global optimization, Score(path) is used independently of RowNum(path). It is suitable for all codes of length 32 and some codes of length 64. In case of length 64 and a large number of variable rows, local optimization is used, where priority is given to longer paths, meaning that the best path in the priority queue is the path with the lowest value of P_e^(ML) among paths with the highest value of RowNum(path) and the lowest possible P_e^(SC). In the case of local optimization, the proposed approach is an instance of a greedy algorithm. It should be recognized that in both considered cases, the proposed approach ensures the global minimum of P_e^(SC), while "local optimization" and "global optimization" relate only to P_e^(ML).
The disclosed optimization algorithm is summarized by Algorithm 1, shown below. It computes a k × n precoding matrix Z to specify a precoded polar code of length n, dimension k and minimum distance at least d. A set F of preselected frozen sets is an additional input argument, which reduces the search space. A code being generated is allowed to have only frozen sets belonging to F. If there is at least one fruitful T ∈ F, then the SC-ML optimization algorithm will return a precoding matrix Z; otherwise, an empty queue will be observed at some iteration. A frozen set T is referred to as fruitful for code length n and minimum distance d if there exists an (n, k, ≥ d) code corresponding to T, and as barren otherwise. It is sufficient to take a set F consisting of all sets of frozen bits with sufficiently low P_e^(SC).
Lines 3 - 7 describe the initialization of the fixed rows of the precoding matrix, while lines 8 - 23 describe the search for variable rows. Fixed rows of precoding matrix Z are uniquely defined by a set of frozen bits. More precisely, the set consisting of the highest indices of frozen bits is used to identify all possible fixed parts of precoding matrices for frozen sets from F on line 5. Only barren frozen sets T may be affected by the intersection performed on line 4. Observe that the corresponding rows of the generator matrix are given by the corresponding information rows of M. Queue Q is initialized with paths given by partially constructed SC aimed precoding matrices Z together with the corresponding scores. As described above, Score(·) is a function of the SC decoding error probability P_e^(SC), the estimate of the ML decoding error probability P_e^(ML), and the number of selected rows. P_e^(SC) is computed for an AWGN channel by using a method based on a Gaussian approximation.
[Algorithm 1: SC-ML optimization algorithm, presented as a figure in the original document]
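Since Algorithm 1 itself appears as a figure, the following hedged Python sketch reconstructs its skeleton from the surrounding description: fixed rows are taken from M, variable rows are searched in decreasing index order, candidate rows are filtered by minimum distance, and paths are ranked in a priority queue. The score here is a simple proxy (path length first, then the number of weight-d codewords) standing in for the P_e^(SC)/P_e^(ML) estimates of the disclosure; all identifiers and the (8, 4, ≥ 4) example are illustrative assumptions.

```python
import heapq
import itertools

def kron_f(m):
    # polarization transformation matrix M over GF(2)
    M = [[1]]
    for _ in range(m):
        M = [r + [0] * len(r) for r in M] + [r + r for r in M]
    return M

def xor_rows(rows):
    return [sum(col) % 2 for col in zip(*rows)]

def min_distance(rows):
    # exhaustive search over all non-zero linear combinations of the rows
    return min(sum(xor_rows(c))
               for r in range(1, len(rows) + 1)
               for c in itertools.combinations(rows, r))

def count_weight(rows, w):
    return sum(1 for r in range(1, len(rows) + 1)
               for c in itertools.combinations(rows, r)
               if sum(xor_rows(c)) == w)

def sc_ml_search(n, k, d, frozen):
    M = kron_f(n.bit_length() - 1)
    info = [i for i in range(n) if i not in frozen]
    assert len(info) == k                            # dimension implied by T
    fixed = [i for i in info if i > max(frozen)]     # rows of M used as-is
    variable = [i for i in info if i < max(frozen)]  # rows to be designed
    base = tuple(tuple(M[i]) for i in fixed)
    if any(sum(r) < d for r in base):
        return None  # frozen set dismissed: some fixed row is too light
    # heap entry: (score, tie-break, position in `variable`, generator rows)
    heap = [((len(variable), 0), 0, len(variable) - 1, base)]
    tick = 1
    while heap:
        _, _, vpos, rows = heapq.heappop(heap)
        if vpos < 0:
            return [list(r) for r in rows]  # fully initialized path: k rows
        b = variable[vpos]
        higher = [t for t in sorted(frozen) if t > b]
        # candidates: row b of M plus any subset of frozen rows above b
        for mask in itertools.product([0, 1], repeat=len(higher)):
            g = M[b]
            for bit, t in zip(mask, higher):
                if bit:
                    g = [x ^ y for x, y in zip(g, M[t])]
            new_rows = rows + (tuple(g),)
            if min_distance(new_rows) >= d:
                # proxy score: longer paths first, then fewer weight-d words
                score = (len(variable) - (len(new_rows) - len(base)),
                         count_weight(new_rows, d))
                heapq.heappush(heap, (score, tick, vpos - 1, new_rows))
                tick += 1
    return None  # empty queue: the frozen set is barren

G = sc_ml_search(8, 4, 4, {0, 1, 2, 4})
assert G is not None and len(G) == 4 and min_distance(G) == 4
```

With the real score functions of the disclosure plugged in, the same skeleton yields the SC-ML optimization behavior described in the text.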
At each iteration of the while loop, function GetBest returns the path with the lowest score value, that is, with the lowest error probability estimate. The search for new variable rows continues until a fully initialized path is extracted from the queue on line 9. A path Z is fully initialized if it consists of k rows (line 10), where k is the required code dimension. In this case, the path represents a precoding matrix Z (line 11).
Possible extensions of path Z are considered on lines 13 - 21; these are the possible ways to increase the code dimension by one. Function GetFrozenSet returns the frozen set T corresponding to the code specified by SC aimed precoding matrix Z. According to the structure of a SC aimed precoding matrix, the 1s of a variable row correspond to a single information row of M and a number of frozen rows with higher indices. Here, B is a set of the corresponding information row indices. Set B consists of the highest values among the indices of the not yet utilized information rows of matrix M. Frozen set T and each b ∈ B specify a number of possible row candidates z for the precoding matrix. It can be seen that the index of the information row decreases with the increasing number of rows of precoding matrix Z.
Possible row candidates z for the precoding matrix are considered on lines 17 - 20, where the set of row candidates for T and each b is given by all binary vectors with a fixed zero subvector and a 1 at position b; the number of such candidates is equal to 2^|{t : t ∈ T & t > b}|. Function MinDistance on line 18 returns the code minimum distance, where the code is specified by generator matrix Z · M and, optionally, a supercode with a list of its low-weight codewords. If the code minimum distance is at least d, then candidate z specifies an eligible row, and the corresponding precoding matrix is stored in the queue on line 19.
The obtained precoding matrix Z may be further used to calculate the generator matrix G ← Z · M. For decoding purposes, the code is represented as a polar code with dynamic frozen bits.
If the set of frozen sets F is given by a single T, then the SC-ML optimization algorithm simplifies, since each of the auxiliary sets, including B, also consists of a single element. If the input frozen set is barren, then an empty queue, i.e. |Q| = 0, is observed at some iteration. One can arrange frozen sets T in ascending order of P_e^(SC) and call the SC-ML optimization algorithm individually for each T until a precoding matrix Z is obtained on line 11, meaning that a fruitful frozen set has been discovered.
The Score
The score computation is summarized by Algorithm 2. Function Score(Z, F) receives a partially constructed precoding matrix Z and a set of frozen sets F, where Z specifies a subcode of the desired (n, k, ≥ d) precoded polar code. A macro parameter mode distinguishes between the global and local optimization modes defined above. The frozen set defined by Z is used to obtain an estimate of the SC decoding error probability P_e^(SC) for the (n, k, ≥ d) code being designed. On line 3, P_e^(SC) is calculated according to expression (3), where a Gaussian approximation of density evolution is used to compute the bit-subchannel error probabilities for an AWGN channel. The ML decoding error probability is estimated on line 4 as follows. The spectrum is computed for the code specified by generator matrix Z · M. According to one implementation, the desired estimate P_e^(ML) is then obtained by applying the tangential-sphere bound of Poltyrev, where the computed spectrum and the code parameters (n, k) are used as input arguments.
Algorithm 2 returns a score given by an ordered pair (P_e^(SC), P_e^(ML)) in case of the global mode (line 5), or an ordered triplet that additionally contains the path length, taken with reversed ordering so that longer paths precede shorter ones, in case of the local mode (line 6). Scores and the corresponding paths are stored in a priority queue by Algorithm 1. Sorting of scores in the priority queue is performed according to the lexicographical ordering, which is defined for ordered pairs by (a0, a1) < (b0, b1) iff (a0 < b0) or ((a0 = b0) & (a1 < b1)).
Algorithm 1 employs function GetBest, which chooses the path with the best score from the priority queue. The "best" score is defined as a score preceding all other scores according to the lexicographical ordering, where a score is given by an ordered pair or an ordered triplet. Therefore, in case of the global optimization mode, function GetBest returns a path with the lowest value of P_e^(ML) among paths with the lowest P_e^(SC). In case of the local optimization mode, priority is given to longer paths, meaning that GetBest returns a path with the lowest value of P_e^(ML) among paths with the highest path length among paths with the lowest P_e^(SC). Here, the path length is equal to the number of rows of matrix Z.
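The lexicographic ranking used by GetBest can be illustrated with plain tuples, where Python's tuple comparison implements exactly the ordering defined above. Encoding the path length with a negated sign so that longer paths sort first is an illustrative choice, not notation from the disclosure.

```python
# global mode: (P_sc, P_ml)
global_scores = [(0.010, 4e-4), (0.010, 3e-4), (0.012, 1e-4)]
# lowest P_sc wins first; P_ml breaks ties
assert min(global_scores) == (0.010, 3e-4)

# local mode: (P_sc, -path_length, P_ml)
local_scores = [(0.010, -5, 4e-4), (0.010, -7, 9e-4), (0.012, -7, 1e-4)]
# among lowest-P_sc paths, the longer path (more negative length) wins,
# even though its P_ml estimate is currently worse
assert min(local_scores) == (0.010, -7, 9e-4)
```

A heap keyed on such tuples therefore pops paths in exactly the order required by the global and local modes.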
Preselected Sets of Frozen Bits
As described above, it was assumed that the input set F of frozen sets T is known. Let F be the set consisting of all frozen sets with SC decoding error probability P_e^(SC) lower than a certain threshold τ. If τ is sufficiently high, then an (n, k, ≥ d) precoded polar code will be found by Algorithm 1.
As described above, the SC decoding error probability for a polar code with information set A is given by P_e^(SC) = 1 - ∏_{i∈A}(1 - P_i), where P_i is the error probability of the i-th bit-subchannel. There are a number of ways to find all subsets A of cardinality k of a set of cardinality n such that the condition P_e^(SC) < τ is satisfied. No particular algorithm is specified, since it does not affect the result.
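Assuming the union-style expression above, the enumeration of candidate information sets can be done by brute force at short lengths. The per-subchannel error probabilities and the threshold below are hypothetical illustrative numbers.

```python
from itertools import combinations
from math import prod

# hypothetical per-subchannel error probabilities for n = 8 (illustrative)
p = [0.40, 0.25, 0.20, 0.05, 0.15, 0.04, 0.03, 0.005]
k, tau = 4, 0.2
# keep information sets A with P_e^(SC) = 1 - prod(1 - p_i) below tau
candidates = [A for A in combinations(range(8), k)
              if 1 - prod(1 - p[i] for i in A) < tau]
assert (3, 5, 6, 7) in candidates  # the most reliable subchannels qualify
```

For lengths 32 and 64 the same filtering is practical because only the sets with the lowest P_e^(SC) need to be retained.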
There are many frozen sets with similar values of P_e^(SC), so the cardinality of the set F is large. Let T* be a fruitful frozen set with the lowest P_e^(SC). That is, there exists an (n, k, ≥ d) code for T*, and P_e^(SC)(T*) is as low as possible. It is desirable to identify frozen set T* among the sets in F beforehand, i.e. prior to the search for variable rows. This is equivalent to excluding from consideration all barren frozen sets. Note that in case of code generation for all values of dimension k, the barren frozen sets for (n, K < k, ≥ d) codes are available prior to the generation of the (n, k, ≥ d) code. Thus, given the sets B^(n,K,≥d) of barren frozen sets, all frozen sets T contained in some element of ∪_{K<k} B^(n,K,≥d) may be excluded from consideration during the design of the (n, k, ≥ d) code. Note that T* may vary with SNR, while the set B^(n,k,≥d) is independent of SNR and is determined by the polarizing transformation.
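The exclusion of barren frozen sets can be sketched as a containment test. The direction of containment follows the subcode argument reconstructed above (a frozen set contained in a known barren set is itself barren, since a fruitful set would yield fruitful supersets via subcodes); the sets used below are hypothetical.

```python
def may_skip(frozen_set, barren_sets):
    # a frozen set contained in a known barren frozen set is itself barren
    return any(frozen_set <= barren for barren in barren_sets)

# hypothetical barren set recorded while designing the dimension-(k-1) code
barren_sets = [frozenset({0, 1, 2, 3, 4})]
assert may_skip(frozenset({0, 1, 2, 4}), barren_sets)
assert not may_skip(frozenset({0, 1, 2, 5}), barren_sets)
```

This pruning is SNR-independent, matching the observation that the barren sets are determined by the polarizing transformation alone.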
Complexity Reduction: Suboptimal Approach
Complexity of the proposed code design approach is acceptable for all codes of length 32. However, the search space is excessively large in case of a length that is greater than 32.
One possible way for low-complexity generation of an (n, k, ≥ d) code is to first construct an (n, k′ < k, ≥ d) code and then generate k − k′ additional rows for its SC aimed precoding matrix; that is, to construct an (n, k, ≥ d) code with a preselected (n, k′, ≥ d) subcode. This approach shows promising results even though the existence of these k − k′ additional rows is uncertain.
Encoding Example
Fig. 4 shows an example of the encoding process for a (64, 25, 16) SC-ML precoded polar code, in accordance with the channel encoder illustrated by Fig. 3. The (64, 25, 16) SC-ML precoded polar code is generated by the SC-ML optimization algorithm (Algorithm 1), and the obtained code specification is further stored within the channel encoder. Encoding of data 401 begins with splitting the data 401 (step 402) into information vectors 403 (a0, ..., a24). Encoding step 405 is parameterized by the code selector, which provides parameters 404: code length n = 64, dimension k = 25, minimum distance d = 16, and precoding matrix Z. Encoding with the precoding matrix Z in step 405 results in an input vector (u0, ..., u63) 406, which consists mostly of information bits a_i and zeros and includes only a few sums of information bits, since precoding matrices generated by the SC-ML optimization algorithm are extremely sparse. The input vector 406 is further encoded with the polarization transformation matrix (step 407) to obtain a codeword 408. The codeword 408 is modulated and transmitted by other components of a transmission system as illustrated by Fig. 1.
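The two-stage encoding of Fig. 4 can be sketched at toy scale. The example below uses length 8 with a purely diagonal precoding matrix for clarity, whereas actual SC-ML precoding matrices additionally contain a few sums of information bits; all concrete values are illustrative.

```python
def kron_f(m):
    # polarization transformation matrix over GF(2)
    M = [[1]]
    for _ in range(m):
        M = [r + [0] * len(r) for r in M] + [r + r for r in M]
    return M

def vecmat2(v, A):
    # vector-matrix product over GF(2)
    return [sum(vi * aij for vi, aij in zip(v, col)) % 2 for col in zip(*A)]

n, k, d = 8, 4, 4
M = kron_f(3)
info = [3, 5, 6, 7]  # information bit-subchannel indices
# toy precoding matrix: identity columns on the information positions
Z = [[1 if j == i else 0 for j in range(n)] for i in info]
a = [1, 0, 1, 1]     # information vector (steps 402/403)
u = vecmat2(a, Z)    # sparse input vector (steps 405/406)
x = vecmat2(u, M)    # codeword (steps 407/408)
assert u.count(1) == 3  # mostly zeros and information bits
assert sum(x) >= d      # codeword weight meets the design distance
```

The codeword would then be modulated and transmitted, mirroring the pipeline of Fig. 1.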
Numerical Results
Results are provided for the case of an AWGN channel with BPSK modulation, according to an embodiment of the present disclosure. Codes of length 32 and 64 are considered. Precoded polar codes constructed by the proposed SC-ML optimization algorithm are referred to as SC-ML precoded polar codes. A comparison is provided with the original polar codes and with polar subcodes of e-BCH codes. The original polar codes are constructed by freezing the n − k worst bit-subchannels; polar subcodes of e-BCH codes are obtained from narrow-sense primitive e-BCH codes by freezing a number of the worst bit-subchannels. Note that a comparison with e-BCH codes and their polar subcodes is provided since, in case of short code lengths, e-BCH codes are one of the most powerful classes of binary linear codes with high minimum distances. Since binary narrow-sense primitive e-BCH codes exist only for some values of the code dimension, e.g. k = 6, 11, 16, 21, 26 for code length n = 32 and k = 7, 10, 16, 18, 24, 30, 36, 39, 45, 51, 57 for code length n = 64, codes of the present invention are compared with polar subcodes of e-BCH codes in case of intermediate values of the code dimension.
Tables I and II show performance estimates P_e^(SC) and P_e^(ML) characterizing the SC decoding error probability and the ML decoding error probability, respectively, where P_e^(SC) is computed according to the expression described above using a Gaussian approximation and P_e^(ML) is given by the tangential-sphere bound as described above. Estimates in Table I are provided for code length n = 32 and signal-to-noise ratio Eb/N0 = 4.75 dB. Estimates in Table II are provided for code length n = 64 and Eb/N0 = 3.75 dB. Values of Eb/N0 are selected so that FER estimates are in the range 10^-4 - 10^-2.
Columns n, k and d specify the code length, dimension and minimum distance, respectively. The dimension changes with a step of 1. Tables I and II also show the lower and the upper bounds on the minimum distances of binary linear block codes provided by the code tables. A code with a minimum distance meeting the upper bound is called optimal, while a code with a minimum distance meeting the lower bound is called best-known. It can be seen that in most cases the proposed codes, as well as polar subcodes of e-BCH codes, have the minimum distance of the best-known codes.
TABLE I: PERFORMANCE ESTIMATES FOR CODES OF LENGTH n = 32 CONSTRUCTED FOR Eb/N0 = 4.75 dB (table presented as a figure in the original document)
Only ten values of k are shown for n = 32, since in the case of other k, the values of d, P_e^(SC) and P_e^(ML) coincide for the three considered types of codes. Note that in almost all cases, the minimum distance of the proposed SC-ML precoded polar codes coincides with the minimum distance of polar subcodes of e-BCH codes, while the proposed codes provide a gain in the minimum distance for k = 25 and k = 26. It can be seen that the proposed code construction substantially outperforms polar subcodes of e-BCH codes in terms of P_e^(SC) without sacrificing P_e^(ML). In case of n = 32, the weight enumerator polynomials coincide, so the estimates P_e^(ML) are the same. In case of n = 64, the weight enumerator polynomials coincide occasionally. The difference between them is characterized by the relative number of low-weight codewords provided in Table II, where A_prop[w] and A_eBCH[w] are the numbers of codewords of weight w in the proposed code and in the polar subcode of the e-BCH code, respectively. So, in case of length n = 64, the proposed SC-ML precoded polar codes and polar subcodes of e-BCH codes demonstrate the same or similar P_e^(ML).
TABLE II: PERFORMANCE ESTIMATES FOR CODES OF LENGTH n = 64 CONSTRUCTED FOR Eb/N0 = 3.75 dB (table presented as a figure in the original document)
The code construction as disclosed in the present disclosure enables one to balance the tradeoff between FER under SC decoding and under ML decoding by selecting an appropriate value for the target minimum distance. The higher the list size of an SCL-like decoder, the higher the minimum distance that should be selected. Simulation results for codes with minimum distances similar to those of e-BCH codes are provided to enable comparison with e-BCH codes and their polar subcodes.
Two score functions were considered as described above. Both of them guarantee a global minimum of the SC decoding error probability P_e^(SC). A score function ensuring a global minimum of P_e^(ML) for a given frozen set T was employed for all codes of length n = 32 and for codes of length n = 64 with 16 values of k. For n = 64 and the remaining values of k, a score function corresponding to local minimization of P_e^(ML) (a greedy approach) was employed. According to the obtained results, if one considers all existing codes with parameters (n, k, ≥ d) corresponding to the best frozen set T*, then these codes have the same or only slightly different values of P_e^(ML). One might expect the same values of P_e^(ML) to be found by the greedy approach as in the case of global minimization.
An (n, k, ≥ d) SC-ML precoded polar code is an (n, k, ≥ d) binary linear block code which demonstrates the lowest possible SC decoding error probability. One may also be interested in minimizing P_e^(ML) at the price of a slight increase in P_e^(SC). By considering an extended set of frozen sets T in Algorithm 1, in addition to the results of Table I, a (32, 17, 6) code was found with P_e^(SC) = 0.0109 and P_e^(ML) = 0.00039, and a (32, 18, 6) code with P_e^(SC) = 0.0158 and P_e^(ML) = 0.00048. According to the simulation results, P_e^(ML) of the SC-ML precoded polar codes may be improved only negligibly in this way, so results for precoded polar codes with the second, third, etc. lowest values of P_e^(SC) are not presented.
It is understood that many e-BCH codes have the highest possible minimum distance, i.e., there are many optimal/best-known linear block codes among e-BCH codes. The e-BCH codes include Reed-Muller codes as their subcodes, while any Reed-Muller code may be represented as a pure polar code. Thus, according to the simulation results for codes of length 32, the obtained SC-ML precoded polar codes include Reed-Muller codes as their subcodes. The simulation results also confirmed the same property for the SC-ML precoded polar codes of length 64 and dimensions 8, 11, 12, 19, 20, 40, while the computational complexity is too high for the other dimensions. The parameters of the Reed-Muller codes are as follows. In the case of length 32, the SC-ML precoded polar codes with minimum distances 12 and 6 contain the (32, 6, 16) and (32, 16, 8) Reed-Muller codes, respectively. In the case of length 64, the SC-ML precoded polar codes with minimum distances higher than 16 and higher than 8 contain the (64, 7, 32) and (64, 22, 16) Reed-Muller codes, respectively. One can assume that the same property also holds for the SC-ML precoded polar codes of length 64 and the remaining dimensions. Thus, to reduce the computational complexity of the code design for length n = 64, the search is performed only for precoded polar codes which include Reed-Muller codes as their subcodes. The computational complexity for parameters (64, 17, 24) and (64, 27, 14) was further reduced by designing codes with preselected (64, 16, 24) and (64, 26, 14) subcodes, respectively. As described above, in the general case, SC-ML precoded polar codes with preselected subcodes are suboptimal. Results for codes of length 64 and dimensions 18 and 35-39 are not presented due to excessively high computational complexity. The SC-ML precoded polar codes and polar subcodes of e-BCH codes with dimension lower than 8 or higher than 51 result in pure polar codes, so there is no interest in results for them.
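The relationship between Reed-Muller codes and pure polar codes mentioned above can be illustrated with a short sketch. It uses the standard textbook construction (an illustration, not the code design of the present disclosure; function names are hypothetical): RM(r, m) is spanned by the rows of the Kronecker power F^(x)m of the 2x2 kernel F = [[1,0],[1,1]] whose Hamming weight is at least 2^(m-r), and row i of F^(x)m has weight 2^popcount(i).

```python
def kron(a, b):
    """Kronecker product of two 0/1 matrices (lists of lists)."""
    return [[ae * be for ae in arow for be in brow]
            for arow in a for brow in b]

def kron_power(m):
    """F^(x)m over GF(2), with F = [[1, 0], [1, 1]] (the 2x2 Arikan kernel)."""
    f = [[1]]
    for _ in range(m):
        f = kron([[1, 0], [1, 1]], f)
    return f

def rm_generator(r, m):
    """Generator matrix of RM(r, m) as a 'pure polar' row selection:
    keep rows i of F^(x)m with popcount(i) >= m - r (weight >= 2^(m-r))."""
    f = kron_power(m)
    return [f[i] for i in range(2 ** m) if bin(i).count("1") >= m - r]

def code_min_distance(gen):
    """Brute-force minimum distance; feasible only for tiny dimensions."""
    k, n = len(gen), len(gen[0])
    best = None
    for msg in range(1, 2 ** k):
        c = [0] * n
        for j in range(k):
            if (msg >> j) & 1:
                c = [(a + b) % 2 for a, b in zip(c, gen[j])]
        w = sum(c)
        best = w if best is None or w < best else best
    return best
```

For example, `rm_generator(2, 5)` yields a 16 x 32 matrix with minimum row weight 8, matching the (32, 16, 8) Reed-Muller code mentioned above.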
Figs. 5 and 6 present simulation results illustrating the performance in terms of FER over a wide range of the energy-per-bit to noise power spectral density ratio Eb/N0, which is a normalized SNR measure known as the SNR per bit. Decoding of all codes was performed using a stack successive cancellation algorithm, which was shown to have a lower computational complexity than the Tal-Vardy SCL decoding algorithm at the cost of slightly inferior performance. Results are provided for the original polar codes, polar subcodes of e-BCH codes and SC-ML precoded polar codes of length n = 64 and dimensions k = 25 and k = 34 under sequential decoding with list sizes L = 4, 8, 16, 32, 64. Minimum distances of the codes are provided in Table II. The SC-ML precoded polar codes outperform the original polar codes starting from L = 4 in the high-SNR region. It can be seen that the proposed codes significantly outperform the polar subcodes of e-BCH codes up to L = 16; the performance becomes equal for higher L. The gain is ensured by the reduced SC decoding error probability (see Tables I and II). It should be understood that the SC-ML precoded polar codes may also be exploited by an adaptive SCL decoder, which progressively increases the list size L until a packet is successfully decoded or a maximum L is reached. In the case of dimension k = 25, the code demonstrates an additional gain owing to a higher minimum distance. In the case of dimension k = 34, the codes have the same minimum distances and slightly different weight enumerator polynomials. SCL decoding with an infinite list size corresponds to ML decoding. Sequential decoding with L = 32 approaches near-ML decoding for both the (64, 25) and (64, 34) codes. It can be seen that list size L = 64 provides a negligible performance improvement compared to L = 32; thus, there is no need to further increase the list size.
It should be understood that, in the case of the original polar codes, the curves coincide even for L = 4 and L = 8, and higher values of L do not provide any gain.
Decoding complexity
Performance and decoding complexity were investigated using a software implementation in the C++ programming language. Results are provided for the case of an AWGN channel with BPSK modulation. Decoding complexity is measured as the average number of iterations required to perform decoding of a polar code using the SCL decoding algorithm or its variations. For example, decoding of a polar code of length n using the SC algorithm requires n iterations, while the SCL algorithm with list size L requires L·n iterations. Figs. 7 and 8 illustrate results for the (64, 25) and (64, 34) codes, respectively, under the stack SC decoding algorithm with list size L according to a known art. Figs. 9 and 10 illustrate results for the (64, 25) and (64, 34) codes, respectively, under the sequential decoding algorithm with list size L according to a known art. The sequential decoding algorithm reduces the decoding complexity compared to the stack SC at the cost of negligible performance degradation. Figs. 7-10 show that decoding of the SC-ML precoded polar codes requires a lower number of iterations than decoding of the polar subcodes of e-BCH codes, and that the sequential algorithm reduces the average number of iterations. The stack SC and sequential decoding involve operations over floating point numbers and binary numbers, as well as operations related to a priority queue. The decoding complexity is dominated by the floating point operations (FLOPs). An iteration of the stack SC involves the following FLOPs: multiplications, additions and comparisons, while an iteration of the sequential algorithm involves only additions and comparisons. The total number of FLOPs is lower for the sequential algorithm compared to the stack SC.
It should be understood that the encoding complexity of a precoded polar code is given by the sum of the complexity of encoding with the precoding matrix and the complexity of encoding with the polarization transformation matrix. Multiplication of a binary vector by the nxn polarization transformation matrix involves n·log2(n) binary operations, where n is the code length. The precoding matrix is extremely sparse; therefore, multiplication by the precoding matrix leads to only a minor increase in encoding complexity.
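The n·log2(n) operation count arises from the butterfly structure of the Kronecker power. The following sketch illustrates both encoding steps (an illustration, not the encoder of the present disclosure; the per-row column-index representation of the sparse precoding matrix is a hypothetical choice):

```python
# Sketch: encode by (1) multiplying the info vector by a sparse k x n
# precoding matrix and (2) applying the polarization transform F^(x)m,
# F = [[1, 0], [1, 1]], via the standard butterfly over GF(2).

def polar_transform(u):
    """Return u * F^(x)m over GF(2); n = len(u) must be a power of 2.
    Uses log2(n) stages of n/2 XORs each."""
    x = list(u)
    n = len(x)
    half = 1
    while half < n:                      # log2(n) stages
        for start in range(0, n, 2 * half):
            for i in range(start, start + half):
                x[i] ^= x[i + half]      # butterfly XOR
        half *= 2
    return x

def sparse_precode(info_bits, rows, n):
    """Multiply a k-bit info vector by a sparse k x n precoding matrix,
    given as per-row lists of column indices (hypothetical representation).
    Cost is proportional to the number of ones in the matrix."""
    v = [0] * n
    for bit, cols in zip(info_bits, rows):
        if bit:
            for c in cols:
                v[c] ^= 1
    return v
```

For instance, `polar_transform(sparse_precode([1, 1], [[0], [1, 3]], 4))` chains the two steps for a toy 2x4 precoding matrix with ones in columns {0} and {1, 3}.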
According to an embodiment of the present disclosure, a code design algorithm configured to minimize both the successive cancellation (SC) and maximum-likelihood (ML) decoding error probabilities is introduced. The algorithm performs optimization of a precoding matrix for the polarization transformation matrix to guarantee a preselected minimum distance and the lowest possible SC decoding error probability for the resulting code. The constructed short-length precoded polar codes, decoded as polar codes with dynamic frozen bits, demonstrate a significant advantage compared to the original polar codes, extended Bose-Chaudhuri-Hocquenghem (BCH) codes and their polar subcodes under SCL-like decoding.
According to an embodiment of the present disclosure, an SC-ML algorithm was introduced to generate (n = 2^m, k, ≥ d) precoded polar codes, which have the lowest possible SC decoding error probability under the constraint on the minimum distance specified by the preselected value d. The constructed precoded polar codes are decoded as polar codes with dynamic frozen bits. Experimental results show that the proposed codes of lengths 32 and 64 significantly reduce the FER compared to the original polar codes under stack/list SC decoding starting with list size 4. The gain increases with the list size. The codes have a much lower SC decoding error probability than e-BCH codes and their polar subcodes without sacrificing the error probability under maximum-likelihood decoding. In the case of code length 64, the constructed precoded polar codes demonstrate a significant advantage compared to the polar subcodes of e-BCH codes under stack/list SC decoding with list size up to 16 and demonstrate comparable performance starting with list size 32.
According to an embodiment of the present disclosure, the SC-ML precoded polar codes may be used individually, as well as component codes in concatenated constructions. For example, they may be employed to design polar codes with large kernels. The code length of the SC-ML precoded polar codes with the 2x2 kernel can be reduced by an arbitrary number of bits by applying puncturing or shortening techniques. Alternatively, the SC-ML optimization algorithm can be employed to design precoded multi-kernel polar codes, where 2x2, 3x3 and 5x5 kernels are used to obtain codes of lengths 15, 16, 18, 20, 24, 27, 30, 32, 36, 40, 45, 48, 54, 60, 64 and so on. According to an embodiment of the present disclosure, the SC-ML precoded polar codes can be represented as a concatenation of an outer code and an inner polar code. The outer code is specified by the precoding matrix. The joint optimization of the precoding matrix together with the set of frozen bits is performed to minimize both the SC decoding error probability and the ML decoding error probability.
According to an embodiment of the present disclosure, the SC-ML precoded polar codes may be employed in various data transmission systems and storage systems. For example, they may be employed for channel coding in 5G and beyond-5G networks with a focus on ultra-reliable low-latency communications (URLLC), as a data encoding method for Redundant Arrays of Independent Disks (RAID) and for Flash memory, and in satellite communications, digital television and Wireless Local Area Networks (WLAN).
Example Algorithm for Polar Codes of Moderate Lengths
In some embodiments, a method may be used to recursively construct a collection of precoded polar codes of moderate lengths, e.g., 128 and 256. The proposed code design method determines a collection C of precoded polar codes of various lengths, where each (n, k) code C* in C satisfies the following criteria:
1) The minimum distance of C* is at least D_{n,k};
2) There exists a supercode C+ of code C*, i.e., C+ ⊇ C*, such that a) the number of codewords of weight below D_{n,k} in code C+ is less than a specified limit Λmax; and b) either C+ has rate 1 and n < log2(Λmax), or C+ can be represented as the Plotkin sum of two nested codes of length n/2 belonging to C;
3) Either k < τmax, or there exists a subcode C- of code C*, i.e., C- ⊂ C*, such that a) C- is in C; and b) the dimension of code C- is at least k − τmax;
4) Frozen set optimality: the frozen set T* of C* satisfies the condition P~_e^(SC)(T*) ≤ P~_e^(SC)(T) for all frozen sets T corresponding to (n, k, ≥ D_{n,k}) codes with subcode C- and supercode C+;
5) Code C* satisfying Criteria 1-4 can be constructed with the computational complexity limited by ψmax, assuming that C- and C+ are known; and
6) Code C* ensures the lowest number of minimum weight codewords among a group of ζmax codes generated in accordance with Criteria 1-5.
The minimum distance D_{n,k} is selected prior to the code design, and the resource limits τmax, Λmax, ψmax and ζmax have constant values.
It can be seen from Criteria 1-6 that the code design problem is a constrained optimization problem. Since the frame error rate (FER) for SCL decoding cannot be calculated analytically, the main objective function is the FER estimate for SC decoding, P~_e^(SC) (see Criterion 4), which is calculated in the same way as P_e^(SC) above. In addition, minimization of the number of minimum weight codewords is specified by Criterion 6. Criterion 1 provides a constraint on the minimum distance. An appropriately selected value of the minimum distance provides a tradeoff between the SC and ML performance.
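The exact formula for the estimate P~_e^(SC) is given by an equation in the original filing. As a standalone illustration of how such an SC error estimate can be computed, the sketch below uses the binary erasure channel, for which the synthesized bit-channel erasure probabilities obey the exact recursion z- = 2z - z^2, z+ = z^2, and bounds the SC FER by a union bound over the information positions. This is a standard textbook estimate, not necessarily the formula of the present disclosure:

```python
# Illustrative only: SC frame-error estimate for a polar code over a BEC
# with erasure probability eps, via the exact bit-channel recursion.

def bec_bit_channels(m, eps):
    """Erasure probabilities of the 2^m synthesized bit-channels.
    Each stage maps z -> (2z - z^2, z^2): the 'minus' (worse) channel
    first, then the 'plus' (better) channel."""
    z = [eps]
    for _ in range(m):
        z = [f(x) for x in z for f in (lambda x: 2 * x - x * x,
                                       lambda x: x * x)]
    return z

def sc_fer_union_bound(m, eps, info_set):
    """Union bound on the SC frame error rate: sum of bit-channel erasure
    probabilities over the information (non-frozen) positions."""
    z = bec_bit_channels(m, eps)
    return min(1.0, sum(z[i] for i in info_set))
```

A code designer would then pick the information set as the positions with the smallest bit-channel error probabilities, which is exactly what makes such an estimate usable as a score during frozen-set optimization.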
The proposed choice of supercodes and subcodes leads to a recursive structure and many nested codes in the resulting code collection C. It is known that the original polar codes are naturally nested. In Criterion 3, a proper difference between the dimensions of C* and C- provides a significant reduction in the code design complexity and sufficient freedom to achieve the required minimum distance and a low P~_e^(SC). Criterion 5 specifies a limit on the acceptable computational complexity. Criterion 2 emerges due to a similarity between the polar code structure and the Plotkin sum, as well as an opportunity to enumerate low-weight codewords in accordance with Proposition 1 below.
This section provides a detailed description of the proposed code design method. The codes are constructed recursively, i.e., codes of length n = 2^m are based on codes of length 2^(m-1), where codes of length 2^(m-1) are based on codes of length 2^(m-2), and so on. As a starting point, m = 4 is used, since the number of codewords in any binary code of length 16 is low. The proposed approach is summarized by Algorithms 3-6 and Proposition 1 discussed below. Each code C is characterized by a quartet (Z, d, ω_last, L), where Z is the SC aimed precoding matrix, d is the minimum distance, and L is a list of all codewords of weight lower than ω_last. Note that code C is fully specified by Z, while the other elements of the quartet are needed for code design purposes. Given code C, the elements of the corresponding quartet are addressed as Z(C), d(C), ω_last(C) and L(C). Similarly, the length, dimension and frozen set of code C are addressed as n(C), k(C) and T(C), respectively.
1) The Main Loop: The main constructive steps are performed by function CodeCollection depicted in Algorithm 3 below. It generates a collection of (n = 2^m, k, d = D_{n,k}) precoded polar codes for 1 ≤ k ≤ n and n_init ≤ n ≤ n_max, where the initial length n_init, the maximum length n_max and the minimum distances D_{n,k} are selected prior to code design together with the resource limits Λmax, τmax, ψmax, ζmax. The resource limits restrict memory consumption and computational complexity.
Lines 3-11 of Algorithm 3 below correspond to designing codes of the initial length n_init. A supercode is used to construct these codes of length n_init, and thus precoding matrix Z+ is initialized by the identity matrix at line 5 and list L+ of codewords of C+ is given by all binary vectors of length n_init. The absence of subcode C- is indicated by precoding matrix Z- = 0 at line 4. A family of codes of length n_init is initialized at line 8 and then enlarged at line 10, where additional precoded polar codes are generated by function Precoding depicted in Algorithm 4 below.
Codes of length n = 2·n_init, ..., n_max are generated at lines 12-34. First, a set of codes equal to the Plotkin sum of codes of length n/2 is computed at lines 14-18, where pairs of component codes are selected by function SelectOuterCodes and concatenated by function Plotkin described below. Second, new (n, k, D_{n,k}) codes are iteratively introduced into the set of codes of length n at lines 19-33. A set of possible supercodes is formed at line 20, where a check is performed that the dimension of each supercode is at least k and that the supercode codeword list contains all codewords of weight up to D_{n,k} − 1.
A set of subcodes corresponding to a particular supercode C+ is constructed at line 23, where compliance with the minimum distance and nesting conditions is ensured, and the dimension of the subcodes is lower bounded to limit the computational complexity of function Precoding. Note that this set includes the empty code as an element, which is needed to consider the case of an absent subcode. The selection of a subcode with the lowest dimension at line 26 corresponds to maximization of the number of precoding matrix rows to be generated by function Precoding, i.e., a high degree of freedom at the cost of a high computational complexity. At most one (n, k, D_{n,k}) code C is constructed for each subcode C- at lines 24-29, where attempts to generate code C are performed by function Precoding. The obtained codes C are united into a set at line 30, and a single code minimizing the SC decoding error probability is selected at line 32.
2) Increasing the Code Dimension: Function Precoding of Algorithm 4 generates an (n, k, d) code C for a given (n, k- ≤ k, d- ≥ d) subcode C- and an (n, k+ ≥ k, d+) supercode C+, C- ⊂ C+. If code C does not exist or the code design complexity is prohibitively high, then a failure to generate a code is declared by returning an empty set at line 35. The computational complexity is measured as the number ψ of iterations of the while loop, and it is limited by ψmax, which is provided as an input argument. Note that ψ also gives the number of calls of GetEligibleRows, which is the most computationally expensive function in Algorithm 4. At line 4, the k+ x n precoding matrix Z+ of code C+ is decomposed into the vertical concatenation (S; Z-) of the k- x n precoding matrix Z- of code C- and a (k+ − k-) x n matrix S. Note that such a decomposition always exists, since C- ⊂ C+. The obtained matrix S is transformed into SC aimed precoding matrix form by applying Gaussian elimination at line 5. Matrix Z- gives k- rows of the precoding matrix for code C, while k − k- additional rows of Z are generated as linear combinations of rows of S. A set R of non-eligible linear combinations is constructed from a list L of supercode codewords with weight below d. More specifically, function ExtractNonEquivPrecodingRows at line 8 returns the set R consisting of all distinct binary vectors u' of length (k+ − k-) satisfying the following condition: [equation not recoverable from this extraction]. Note that the cardinality of set R is much less than that of set L, and the difference increases with k-. Obviously, the cardinality of R is upper bounded by 2^(k+ − k-).
At lines 14-33 of Algorithm 4, the k − k- additional rows of precoding matrix Z are constructed, where the rows are generated in reverse order of their indices to reduce the code design complexity. A partially constructed matrix Z', together with the information index β of the next row, is referred to as a path. Paths and their scores are kept in a priority queue Q. Lines 9-12 specify the initialization of Q by paths (∅, b) with their respective scores for all b belonging to the information set corresponding to a code defined by precoding matrix S. This information set is constructed by function GetInformationSet(S) at line 6. Note that any code is characterized by a single information set, as well as a single frozen set.
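A minimal sketch of how an information set and frozen set can be read off an SC aimed precoding matrix (assuming, as described elsewhere in the disclosure, that each row's leading 1 marks a non-frozen position; the function names are hypothetical):

```python
# Illustrative only: extract the information/frozen sets of a code given
# its SC aimed precoding matrix as a list of 0/1 rows.

def get_information_set(z):
    """Column index of the leading 1 of each row, sorted ascending."""
    return sorted(row.index(1) for row in z)

def get_frozen_set(z):
    """All column indices that are not information positions."""
    n = len(z[0])
    info = set(get_information_set(z))
    return [i for i in range(n) if i not in info]
```

For the toy 2x4 matrix [[1,0,1,0],[0,1,0,1]], the information set is {0, 1} and the frozen set is {2, 3}.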
The path score is essentially an ordered pair. A path with the lexicographically lowest score is identified by function PopMin, called at line 15. The lexicographical ordering for ordered pairs is defined by (a0, a1) < (b0, b1) iff a0 < b0, or a0 = b0 and a1 < b1. The score for a partially constructed precoding matrix Z' and parameter b is computed by function Score as an ordered pair (P~^(SC), |T'|), where set T' is obtained from the frozen set of precoding matrix Z' by excluding b, and P~^(SC) is an estimate of the FER under SC decoding for an (n, k, d) code, computed as described above. Therefore, function PopMin explores paths minimizing P~^(SC) and selects among them the longest path to ensure fast convergence of function Precoding.
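The PopMin behavior over lexicographically ordered pair scores can be sketched with a standard binary heap. Python tuples already compare lexicographically, so a heap of (score, ...) entries pops the lexicographically lowest score first; the class below is illustrative, not the implementation of the present disclosure:

```python
import heapq

# Sketch of a path priority queue with ordered-pair scores,
# e.g. (fer_estimate, frozen_set_size). heappop acts as PopMin.

class PathQueue:
    def __init__(self):
        self._heap = []
        self._tie = 0                    # stable tie-break; avoids comparing paths

    def push(self, score, path):
        heapq.heappush(self._heap, (score, self._tie, path))
        self._tie += 1

    def pop_min(self):
        """Return (score, path) with the lexicographically lowest score."""
        score, _, path = heapq.heappop(self._heap)
        return score, path
```

For example, among scores (0.02, 5), (0.01, 7) and (0.01, 3), the queue first yields (0.01, 3): the first component dominates, and the second component breaks ties between paths with equal FER estimates.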
The search for new rows continues until ζmax matrices Z' consisting of k − k- rows are found, i.e., ζmax matrices Z' pass the if condition at line 21, or the condition of the while loop is violated at line 14. Note that ζmax is provided as an input argument to function Precoding. At line 21, function RowNum returns the number of rows in a given matrix; RowNum(Z-) is always equal to k-. At line 22, a (k − k-) x n matrix Z' is vertically concatenated with Z-, and Gaussian elimination is further applied to obtain the SC aimed precoding matrix Z. The new matrix Z is added to the set of candidate precoding matrices. As soon as the cardinality of this set reaches ζmax, the function SelectSingleCode, shown above in Algorithm 5, is called to identify a single resulting (n, k, d) code. The resulting code may alternatively be selected at line 34 from a set of smaller size, which happens in the case of excessively high computational complexity or an empty priority queue.
A path (Z', β) is extended at lines 16-32. The path extensions are constructed in accordance with the structure of the SC aimed precoding matrix. Therefore, the first "1" in a row of Z' corresponds to an information row of A^(x)m, and the remaining "1"s correspond to frozen rows of A^(x)m with higher indices, where the i-th row of A^(x)m is referred to as an information row if i belongs to the information set returned by GetInformationSet, and as a frozen row otherwise. Obviously, the SC aimed structure is not violated at the first iteration of the while loop.
The function GetEligibleRows at line 17 generates a set of possible new rows z satisfying the following conditions:
1) Row z begins in column β and z_i = 0 for i in GetInformationSet(Z');
2) Row z is a linear combination of rows of S; and
3) No linear combination of rows of (z; Z') belongs to R.
The first condition ensures the SC aimed structure, defined by the equation above, of the matrix (z; Z'). The second condition guarantees that the resulting code is a subcode of C+. The third condition is needed to achieve the minimum distance d. It can be seen that precoding matrix Z' is characterized by the information set constructed at line 16.
The next row, which could be appended to matrix Z' at subsequent iterations of the while loop, should begin at a column with index b lower than β to preserve the structure of the SC aimed precoding matrix. It is also required that b belongs to the information set of matrix S, since the desired code must be a subcode of C+. All such possible indices b are united into a set at line 26. The path extended by (z, b) is saved into the priority queue at line 29.
Function SelectSingleCode(Z, d, ω_last, L+), depicted in Algorithm 5, selects a code with a precoding matrix Z* from the candidate set such that it minimizes the number of codewords of weight d, i.e., the number of minimum weight codewords. At first, the list of supercode codewords of weight d is extracted at line 3. Then, codewords of weight d belonging to the code defined by each candidate matrix are identified using a check matrix H constructed at lines 6-7. The desired precoding matrix Z* is gradually improved at line 10. As soon as the final Z* is found, a whole list of low-weight codewords is computed at lines 13-14 and function SelectSingleCode successfully completes its work by returning a full specification of the resulting code.
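The selection rule of SelectSingleCode — keep the candidate with the fewest minimum weight codewords — can be sketched by brute-force enumeration, which is feasible only for very small dimensions (illustrative only; the actual algorithm works from the supercode codeword list and a check matrix rather than full enumeration):

```python
from itertools import product

# Brute-force sketch of a SelectSingleCode-style rule: among candidate
# generator matrices, pick the one with the fewest weight-d codewords.

def codewords(gen):
    """Yield all 2^k codewords of the binary code with generator matrix gen."""
    k, n = len(gen), len(gen[0])
    for msg in product((0, 1), repeat=k):
        c = [0] * n
        for bit, row in zip(msg, gen):
            if bit:
                c = [a ^ b for a, b in zip(c, row)]
        yield c

def count_weight(gen, d):
    """Number of codewords of Hamming weight exactly d."""
    return sum(1 for c in codewords(gen) if sum(c) == d)

def select_single_code(candidates, d):
    """Candidate generator matrix minimizing the number of weight-d codewords."""
    return min(candidates, key=lambda g: count_weight(g, d))
```

For two toy (4, 2) candidates with 2 and 1 codewords of weight 2, respectively, the second candidate is selected.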
3) Doubling the Code Length: This section describes functions SelectOuterCodes and Plotkin, called at lines 14 and 17 of Algorithm 3, respectively. Function Plotkin employs the Plotkin sum (6) to obtain a code C of length n = 2^m based on nested component codes C^(0) and C^(1) of length 2^(m-1). The Plotkin sum is applied in accordance with the GCC representation of precoded polar codes. Let Z^(0) and Z^(1) be the precoding matrices of codes C^(0) and C^(1), respectively. Then the precoding matrix Z of code C can be computed from Z^(0) and Z^(1) [equation not recoverable from this extraction], since the generator matrix of the Plotkin sum of C^(0) and C^(1) is given by the corresponding combination of their generator matrices. Note that Z^(0)·A^(x)(m-1), Z^(1)·A^(x)(m-1) and Z·A^(x)m are the generator matrices of C^(0), C^(1) and C, respectively. Low-weight codewords of the resulting code C are enumerated in accordance with Proposition 1.
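For reference, the classical Plotkin |u|u+v| construction can be sketched as follows; for component codes Ca and Cb it yields length 2n, dimension ka + kb and minimum distance min(2·d(Ca), d(Cb)). (This is an illustration of the underlying sum only; the present disclosure applies it in a GCC/precoding-matrix form with nested components and states the distance rule as min(d(C^(0)), 2·d(C^(1))) for its ordering.)

```python
from itertools import product

# Classical Plotkin |u|u+v| sum over GF(2), with codes given as codeword lists.

def plotkin_sum(ca, cb):
    """All codewords (u, u XOR v) for u in ca, v in cb."""
    return [u + [a ^ b for a, b in zip(u, v)] for u in ca for v in cb]

def min_distance(code):
    """Minimum nonzero Hamming weight of a linear code (codeword list)."""
    return min(sum(c) for c in code if any(c))

# Tiny example: Ca = length-4 repetition code (d = 4),
# Cb = length-4 even-weight code (d = 2).
ca = [[0, 0, 0, 0], [1, 1, 1, 1]]
cb = [list(t) for t in product((0, 1), repeat=4) if sum(t) % 2 == 0]
code = plotkin_sum(ca, cb)
```

Here the sum has 2·8 = 16 codewords of length 8 and minimum distance min(2·4, 2) = 2, e.g. the codeword (0000, 0011) obtained from u = 0000 and v = 0011.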
Note that a known method to enumerate the minimum weight codewords of polar codes using the Plotkin sum has been proposed recently. As opposed to these known methods, Proposition 1 of the present disclosure enables a processor to explicitly enumerate not only the minimum weight codewords but all codewords of weight up to a certain limit. Moreover, the known methods assume that the component codes of the Plotkin sum are pure polar codes, while Proposition 1 disclosed herein requires only that the component codes are nested.
Proposition 1: [statement given by an equation in the original publication; not recoverable from this extraction].
Proof: It is shown that any codeword c of weight lower than ω_last can be represented via elements of the low-weight codeword lists of the component codes, where the zero codeword is included in these lists. [The remaining steps of the proof rely on equations that are not recoverable from this extraction.] This concludes the proof.
Function Plotkin, called at line 17 of Algorithm 3, returns the quartet (Z, d, ω_last, L) for the code C, where matrix Z is calculated as described above and L is computed in accordance with Proposition 1. Function Plotkin generates all distinct codewords c satisfying Proposition 1 to produce the list L of low-weight codewords for code C. Whenever the cardinality of L approaches Λmax, the macro parameter ω_last is decremented, and then L, L^(0) and L^(1) are updated accordingly to include only codewords of weight below the corresponding thresholds. Therefore, the amount of memory required to store the low-weight codewords of C is upper bounded by n·Λmax bits. The minimum distance of C is computed as described above.
Input data for function Plotkin is provided by function SelectOuterCodes, which is depicted in Algorithm 6. Specifically, function SelectOuterCodes selects a subset of pairs of codes from the family of codes of length n/2. Pairs of component codes for the Plotkin sum with parameters (n, k) are selected independently for all k = 1, ..., n. Code pairs for each particular k are considered at lines 5-7 of Algorithm 6. At line 5, a set of all pairs of nested component codes with the overall dimension k is selected. As shown above, the minimum distance of the Plotkin sum of codes C^(0) and C^(1) can be computed as min(d(C^(0)), 2·d(C^(1))). In accordance with this rule, a set of code pairs maximizing the minimum distance of the Plotkin sum is identified at line 6 and is further employed at line 7 to obtain the pair minimizing the SC decoding error probability estimate. The resulting set of selected code pairs for all k is provided as the output at line 9.
Numerical Results for the Example Algorithm for Polar Codes of Moderate Lengths
Binary phase-shift keying (BPSK) modulation over an AWGN channel has been used to investigate the performance of the codes discussed above. Precoded polar codes of lengths 128 and 256 are constructed as described above with the macro parameters specified in the original publication [values not recoverable from this extraction]. The e-BCH polar subcodes and the polar codes with CRC from the 5G New Radio standard are used for comparison. An uplink scenario was considered for the 5G New Radio polar codes, where the CRC length is 11. Polar codes with CRC constructed for a downlink scenario demonstrate poor performance, since the CRC length of 24 is too high for the polar code lengths 128 and 256. The rate of the polar codes with CRC was computed as the ratio of the difference between the polar code dimension and the CRC length to the polar code length.
The e-BCH polar subcodes are constructed as described above. Note that e-BCH codes with different minimum distances could be employed when designing (n, k) polar subcodes. The e-BCH codes were selected to minimize the FER of the resulting polar subcodes under SCL decoding with list size L = 64.
TABLE III: Code Parameters
[Table III is reproduced as an image in the original publication and is not recoverable from this extraction.]
Figs. 11 and 12 illustrate the FER performance of the proposed codes, the e-BCH subcodes listed in Table III and the 5G New Radio polar codes with CRC. Values of the minimum distance d for the proposed codes have been chosen for SCL decoding with L = 64 by considering that increasing d leads to a decreasing ML decoding error probability at the expense of an increasing SC decoding error probability. Minimum distances of the original polar codes are provided in Table III for comparison. Decoding of all codes was performed using the SCL algorithm. It can be seen from Figs. 11 and 12 that the proposed codes outperform the New Radio polar codes with CRC on average by 0.2 dB at FER 10^-5 for SCL with L = 8, 16, 32, 64, 128. It should be understood that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.

Claims

The invention is claimed as follows:
1. A method of data encoding comprising: dividing payload data to be transmitted into a set of information vectors; multiplying each information vector of the set of information vectors by a kxn binary precoding matrix to obtain a corresponding input vector, wherein the information vector includes k bits, n is a power of 2, n>k, and the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under a constraint on a minimum distance d; and multiplying each input vector including n bits by an nxn polarization transformation matrix to obtain a corresponding codeword, wherein the codeword has a Hamming weight of 0 or at least d.
2. The method of claim 1, wherein the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under two constraints.
3. The method of claim 2, wherein the two constraints include a first constraint on the minimum distance d and a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode.
4. The method of claim 1, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for parameters n, k, and d.
5. The method of claims 1 or 4, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for a fixed successive cancellation decoding error probability.
6. The method of claims 1 or 5, wherein the kxn binary precoding matrix includes a kxk identity matrix in k columns corresponding to k non-frozen bits of a polar code and each row of the kxn binary precoding matrix begins in a column corresponding to a non- frozen bit.
7. The method of claims 1 or 6, wherein the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under three constraints.
8. The method of claim 7, wherein the three constraints include a first constraint on the minimum distance d, a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode, and a third constraint that the code is a subcode of a preselected supercode.
9. The method of claim 8, wherein the kxn binary precoding matrix is such that the number of low-weight codewords is minimized for parameters n, k, d, a preselected subcode, a preselected supercode, and a fixed successive cancellation decoding error probability.
10. An encoding apparatus for a wireless communication system comprising:
a memory storage configured to store instructions; and
one or more processors in communication with the memory storage, wherein the one or more processors are configured to execute the instructions to:
divide payload data to be transmitted into a set of information vectors,
multiply each information vector of the set of information vectors by a kxn binary precoding matrix to obtain a corresponding input vector, wherein the information vector includes k bits, n is a power of 2, n>k, and the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under a constraint on a minimum distance d, and
multiply each input vector including n bits by an nxn polarization transformation matrix to obtain a corresponding codeword, wherein the codeword has a Hamming weight of 0 or at least d.
11. The apparatus of claim 10, wherein the kxn binary precoding matrix is configured to minimize successive cancellation decoding error probability under two constraints, and wherein the two constraints include a first constraint on the minimum distance d and a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode.
12. The apparatus of claim 10, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for parameters n, k, and d.
13. The apparatus of claim 10 or 12, wherein the kxn binary precoding matrix is such that a maximum likelihood decoding error probability is minimized for a fixed successive cancellation decoding error probability.
14. The apparatus of claim 12, wherein the kxn binary precoding matrix includes a kxk identity matrix in k columns corresponding to k non-frozen bits of a polar code and each row of the kxn binary precoding matrix begins in a column corresponding to a non-frozen bit.
15. The apparatus of claim 10 or 14, wherein the kxn binary precoding matrix is configured to minimize the successive cancellation decoding error probability under three constraints.
16. The apparatus of claim 15, wherein the three constraints include a first constraint on the minimum distance d, a second constraint that a code, defined by the precoding matrix and the polarization transformation matrix, has a preselected subcode, and a third constraint that the code is a subcode of a preselected supercode.
17. The apparatus of claim 16, wherein the kxn binary precoding matrix is such that the number of low-weight codewords is minimized for parameters n, k, d, a preselected subcode, a preselected supercode, and a fixed successive cancellation decoding error probability.
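As a concrete illustration of the encoding pipeline recited in claims 1 and 10, the following Python sketch divides a payload into k-bit information vectors, multiplies each by a kxn binary precoding matrix, and then applies the nxn polarization transformation matrix (the Kronecker power of the Arikan kernel), with all arithmetic over GF(2). The precoding matrix W below is a hypothetical identity placement on a non-frozen set, chosen only for illustration; it does not reproduce the patent's optimized precoding matrix, whose construction minimizes the successive cancellation decoding error probability under a minimum-distance constraint.

```python
import numpy as np

def polarization_transform(m):
    """Return the n x n polarization transformation matrix, n = 2^m,
    built as the m-th Kronecker power of the kernel F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, F)
    return G

def encode(payload_bits, W, m):
    """Divide the payload into k-bit information vectors, multiply each by the
    kxn binary precoding matrix W, then by the nxn polarization transformation
    matrix, all over GF(2), as in claims 1 and 10."""
    k, n = W.shape
    assert n == 2 ** m and n > k
    G = polarization_transform(m)
    codewords = []
    for i in range(0, len(payload_bits), k):
        u = np.array(payload_bits[i:i + k], dtype=np.uint8)  # information vector
        x = (u @ W) % 2  # input vector of n bits
        c = (x @ G) % 2  # codeword of n bits
        codewords.append(c)
    return codewords

# Hypothetical toy parameters: n = 8, k = 4, with W placing a 4x4 identity in the
# columns of a typical non-frozen set {3, 5, 6, 7} (cf. claim 6). The resulting
# code is the (8, 4) extended Hamming code with minimum distance d = 4.
m, k = 3, 4
W = np.zeros((k, 2 ** m), dtype=np.uint8)
for row, col in enumerate([3, 5, 6, 7]):
    W[row, col] = 1
codewords = encode([1, 0, 1, 1, 0, 0, 1, 0], W, m)
```

In this toy construction every nonzero codeword has Hamming weight at least d = 4, consistent with the weight condition of claim 1; the patent's claimed matrices additionally optimize the successive cancellation decoding error probability, which this sketch does not attempt.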
PCT/AU2021/050438 2020-05-12 2021-05-12 "channel encoding method and a channel encoder" WO2021226665A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2020901517 2020-05-12
AU2020901517A AU2020901517A0 (en) 2020-05-12 Channel encoding method and a channel encoder

Publications (1)

Publication Number Publication Date
WO2021226665A1 true WO2021226665A1 (en) 2021-11-18

Family

ID=78525844

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2021/050438 WO2021226665A1 (en) 2020-05-12 2021-05-12 "channel encoding method and a channel encoder"

Country Status (1)

Country Link
WO (1) WO2021226665A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190081731A1 (en) * 2016-05-12 2019-03-14 Huawei Technologies Co., Ltd. Devices and methods implementing polar codes

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BOLCAR LAUREN: "On the weights of linear codes and their dual", FALL, 1 January 2019 (2019-01-01), pages 1 - 28, XP055872403 *
TRIFONOV PETER: "Star Polar Subcodes", 2017 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE WORKSHOPS (WCNCW), IEEE, 19 March 2017 (2017-03-19), pages 1 - 6, XP033093163, DOI: 10.1109/WCNCW.2017.7919043 *
TRIFONOV PETER; MILOSLAVSKAYA VERA: "Polar codes with dynamic frozen symbols and their decoding by directed search", 2013 IEEE INFORMATION THEORY WORKSHOP (ITW), IEEE, 9 September 2013 (2013-09-09), pages 1 - 5, XP032536315, ISBN: 978-1-4799-1321-3, DOI: 10.1109/ITW.2013.6691213 *
TRIFONOV PETER; MILOSLAVSKAYA VERA: "Polar Subcodes", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 34, no. 2, 1 February 2016 (2016-02-01), pages 254 - 266, XP011593857, ISSN: 0733-8716, DOI: 10.1109/JSAC.2015.2504269 *
TRIFONOV PETER; TROFIMIUK GRIGORII: "A randomized construction of polar subcodes", 2017 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), IEEE, 25 June 2017 (2017-06-25), pages 1863 - 1867, XP033140413, DOI: 10.1109/ISIT.2017.8006852 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21804738; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21804738; Country of ref document: EP; Kind code of ref document: A1)