WO2020122749A1 - Apparatus and method for obtaining concatenated code structures and associated computer program product - Google Patents

Apparatus and method for obtaining concatenated code structures and associated computer program product

Info

Publication number
WO2020122749A1
WO2020122749A1 (PCT/RU2018/000819)
Authority
WO
WIPO (PCT)
Prior art keywords
vector
code
processor
codes
concatenated
Prior art date
Application number
PCT/RU2018/000819
Other languages
English (en)
Inventor
Ruslan Failevich GILIMYANOV
Mikhail Sergeevich KAMENEV
Jie Jin
Vladimir Vitalievich GRITSENKO
Aleksei Eduardovich MAEVSKIY
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN201880100216.1A priority Critical patent/CN113196671B/zh
Priority to PCT/RU2018/000819 priority patent/WO2020122749A1/fr
Publication of WO2020122749A1 publication Critical patent/WO2020122749A1/fr

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2906Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using block codes

Definitions

  • the present disclosure relates generally to data encoding and decoding techniques, and in particular, to an apparatus and method for obtaining a concatenated code structure used in the data encoding and decoding techniques, as well as to a corresponding computer program product.
  • Polar codes are known for their ability to achieve the capacity of symmetric discrete memoryless channels with an explicit construction and a computationally efficient Successive Cancellation (SC), SC List (SCL), or Cyclic Redundancy Check (CRC)-aided SCL decoding algorithm.
  • the basic idea of the polar codes is to present a physical communication channel as a plurality of polarized bit channels, and transmit information bits only on those bit channels which are almost noiseless, i.e. have channel capacities tending to 1 as a code length increases, while frozen bits are transmitted on the remaining (noisy) bit channels having capacities tending to 0 as the code length increases. Given this, the construction of the polar codes involves finding such almost noiseless channels based on the channel capacities.
  • an apparatus for obtaining a concatenated code structure comprises at least one processor and a memory coupled to the at least one processor.
  • the memory stores processor-executable instructions which, when executed by the at least one processor, cause the at least one processor to receive input data comprising: an outer code length T, where T is a positive integer; a desired length N of a concatenated code consisting of outer codes and inner polar codes, where N = T·2^n ≤ N_max, N_max is a maximal length of the concatenated code, 2^n is a polar code length, and n is a positive integer; and a concatenated code dimension K indicative of a number of information bits.
  • the at least one processor is instructed to calculate a vector of outer code dimensions based on the input data, and in accordance with (i) a predefined bit index sequence corresponding to the outer code length T or (ii) capacities of polarized bit channels.
  • Each vector component K_i of the vector of outer code dimensions represents a part of the information bits intended for the respective outer code.
  • the at least one processor is instructed to determine generator matrices for the inner polar codes in accordance with the input data. After that, the at least one processor is instructed to obtain the concatenated code structure based on the lengths T and N, the vector of outer code dimensions, and the generator matrices for the inner polar codes.
  • the concatenated code structure thus obtained enables the construction of concatenated codes having a code length not necessarily equal to a power of two, a low decoding latency and better error correction performance, as well as the simplification of the code construction process itself, thereby saving system resources.
  • the outer codes comprise linear block codes.
  • the at least one processor is further configured to determine generator matrices for the outer codes based on the length T and the vector of outer code dimensions, and to obtain the concatenated code structure based on the lengths T and N, the vector of outer code dimensions, the generator matrices for the outer codes, and the generator matrices for the inner polar codes.
  • the concatenated code dimension K is set by using Cyclic Redundancy Check (CRC) bits. This provides better error correction performance and increased flexibility in the use of the concatenated code structure obtained by the apparatus according to the first aspect.
  • the predefined bit index sequence has a length N_sequence = T·2^(n_max) ≤ N_max, where 2^(n_max) is the maximal polar code length. The predefined bit index sequence of type I comprises a permutation of the bit indices (1, 2, ..., N_sequence).
  • the at least one processor is configured to calculate the vector of outer code dimensions by: removing, from the predefined bit index sequence, the bit indices greater than N to obtain a reduced bit index sequence; initializing an information bit masking vector u with N zero vector components; assigning one to each vector component u_j of the information bit masking vector u whose index occurs among the first K bit indices of the reduced bit index sequence; and calculating the part K_i of the information bits intended for each outer code as the sum of the components u_((i-1)T+1), ..., u_(iT) of the information bit masking vector (equation (1)).
  • the at least one processor is configured to calculate the vector of outer code dimensions by: removing, from the predefined bit index sequence, the bit indices greater than N/T to obtain a reduced bit index sequence; initializing the vector of outer code dimensions with N/T zero vector components K_i; and increasing each vector component K_i by one each time its index occurs among the first K bit indices of the reduced bit index sequence. This allows allocating the information bits to the outer codes of the concatenated code quickly and efficiently.
  • the at least one processor is configured to calculate the vector of outer code dimensions by: estimating the capacities of the polarized bit channels; calculating, based on the estimated capacities, a part K_i of the information bits intended for each of the N/T outer codes as K_i = round(K_i'), where K_i' is a preliminary capacity-based value; obtaining a sum of all parts K_i of the information bits intended for the outer codes; determining whether the obtained sum is equal to K; and, if the obtained sum is less/greater than K, respectively: a) finding the index i such that the difference (K_i' − K_i) is maximal/minimal, b) adding one to/subtracting one from the respective part K_i while ensuring that the part K_i after said adding/subtracting satisfies the condition 0 ≤ K_i ≤ T, and c) performing operations a)-b) until the obtained sum is equal to K.
  • This allows allocating the information bits to the outer codes of the concatenated code quickly and efficiently.
  • the concept disclosed in this paragraph is suited to calculate the predefined bit index sequence itself, as will be explained further in the detailed description.
  • the memory is configured to store the predefined bit index sequence in advance, and the at least one processor is further configured to retrieve the predefined bit index sequence from the memory after receiving the input data.
  • the predefined bit index sequence is generated by: taking, as the number of information bits, all possible values K from the range (1, 2, ..., N_sequence); calculating the vectors of outer code dimensions by using each K; changing the vectors of outer code dimensions such that, for each pair of neighboring values K and K − 1, the vectors of outer code dimensions differ from each other only by one vector component; and generating the predefined bit index sequence by using the indices of such vector components. This allows reducing the time and system resources required to obtain the concatenated code structure.
  • an information encoding apparatus comprises at least one processor and a memory coupled to the at least one processor.
  • the memory stores processor-executable instructions which, when executed by the at least one processor, cause the at least one processor to: receive a vector of K information bits; receive the concatenated code structure obtained by the apparatus according to the first aspect; and encode the vector of K information bits by using the concatenated code structure.
  • an information decoding apparatus comprises at least one processor and a memory coupled to the at least one processor.
  • the memory stores processor-executable instructions which, when executed by the at least one processor, cause the at least one processor to: receive a channel output comprising information bits encoded by using the concatenated code structure obtained by the apparatus according to the first aspect; receive the concatenated code structure itself; and retrieve the information bits from the received channel output by using the concatenated code structure.
  • the at least one processor is further configured to retrieve the information bits by decoding the inner polar codes in parallel. This allows significantly reducing the decoding latency.
  • the method further comprises the step of calculating a vector of outer code dimensions based on the input data, and in accordance with (i) a predefined bit index sequence corresponding to the outer code length T or (ii) capacities of polarized bit channels.
  • the method further comprises the step of determining generator matrices for the inner polar codes in accordance with the input data.
  • the next step of the method consists in obtaining the concatenated code structure based on the lengths T and N, the vector of outer code dimensions, and the generator matrices for the inner polar codes.
  • the concatenated code structure thus obtained enables the construction of concatenated codes having a code length not necessarily equal to a power of two, a low decoding latency and better error correction performance, as well as the simplification of the code construction process itself, thereby saving system resources.
  • the outer codes comprise linear block codes.
  • the method further comprises the step of determining generator matrices for the outer codes based on the length T and the vector of outer code dimensions. Given this, the step of obtaining the concatenated code structure is performed based on the lengths T and N, the vector of outer code dimensions, the generator matrices for the outer codes, and the generator matrices for the inner polar codes. This provides increased flexibility in the use of the method according to the fourth aspect because it allows using different types of the outer codes.
  • the concatenated code dimension K is set by using Cyclic Redundancy Check (CRC) bits. This provides better error correction performance and increased flexibility in the use of the concatenated code structure obtained by the method according to the fourth aspect.
  • the step of calculating the vector of outer code dimensions comprises: removing, from the predefined bit index sequence, the bit indices greater than N to obtain a reduced bit index sequence; initializing an information bit masking vector u with N zero vector components; assigning one to each vector component u_j of the information bit masking vector u whose index occurs among the first K bit indices of the reduced bit index sequence; and calculating the part K_i of the information bits intended for each outer code as the sum of the components u_((i-1)T+1), ..., u_(iT) of the information bit masking vector (equation (1)).
  • the step of calculating the vector of outer code dimensions comprises: removing, from the predefined bit index sequence, the bit indices greater than N/T to obtain a reduced bit index sequence; initializing the vector of outer code dimensions with N/T zero vector components K_i; and increasing each vector component K_i by one each time its index occurs among the first K bit indices of the reduced bit index sequence. This allows allocating the information bits to the outer codes of the concatenated code quickly and efficiently.
  • the method further comprises the step of obtaining the predefined bit index sequence after said receiving the input data.
  • the predefined bit index sequence is generated by: taking, as the number of information bits, all possible values K from the range (1, 2, ..., N_sequence); calculating the vectors of outer code dimensions by using each K; changing the vectors of outer code dimensions such that, for each pair of neighboring values K and K − 1, the vectors of outer code dimensions differ from each other only by one vector component; and generating the predefined bit index sequence by using the indices of such vector components. This allows reducing the time and system resources required to obtain the concatenated code structure.
  • an information encoding method comprises the steps of: receiving a vector of K information bits; receiving the concatenated code structure obtained by the method according to the fourth aspect; and encoding the vector of K information bits by using the concatenated code structure. This allows making the encoding process more efficient and less resource-intensive, as well as providing better error correction performance.
  • an information decoding method comprises the steps of: receiving a channel output comprising information bits encoded by using the concatenated code structure obtained by the method according to the fourth aspect; receiving the concatenated code structure itself; and retrieving the information bits from the received channel output by using the concatenated code structure.
  • the step of retrieving comprises decoding the inner polar codes in parallel. This allows significantly reducing the decoding latency.
  • a computer program product comprises a computer readable storage medium storing computer executable instructions which, when executed by at least one processor, cause the at least one processor to perform the steps of the method according to the fourth aspect.
  • the method according to the fourth aspect can be embodied in the form of computer instructions or codes, thereby providing flexibility in the use thereof.
  • Fig. 1 shows a block-scheme of an apparatus for obtaining a concatenated code structure in accordance with an aspect of the present disclosure
  • Fig. 2 shows a flowchart for a method of obtaining the concatenated code structure in accordance with another aspect of the present disclosure
  • Fig. 3 illustrates one embodiment involving using a predefined bit index sequence of type I in the method shown in Fig. 2;
  • Fig. 4 illustrates another embodiment involving using a predefined bit index sequence of type II in the method shown in Fig. 2;
  • Fig. 5 shows a flowchart for a method for calculating the predefined bit index sequence of type I or type II
  • Fig. 6 shows one more embodiment involving using capacities of polarized bit channels in the method shown in Fig. 2;
  • Fig. 7 shows a database comprising generator matrices for linear outer codes
  • Fig. 8 shows a flowchart for an information encoding method in accordance with one more aspect of the present disclosure
  • Fig. 9 shows a flowchart for an information decoding method in accordance with one more aspect of the present disclosure
  • Fig. 10 represents one example in which the apparatus shown in Fig. 1 is used in a communication system
  • Figs. 11-14 demonstrate the results of a code performance comparison between the concatenated codes constructed based on the concatenated code structures obtained by the method shown in Fig. 2 and the conventional polar codes constructed by using the prior art rate matching scheme.
  • the term “concatenated code” refers to an error-correcting code that is derived by concatenating or combining two or more simpler codes in order to achieve good performance with reasonable complexity. More specifically, the concatenated code consists of inner codes and outer codes. Furthermore, the present disclosure implies that the inner codes are represented by polar codes only, while the outer codes are represented by any type of linear block error-correcting codes (or linear outer codes for short) or non-linear outer codes (such as, for example, Goethals, Kerdock and Preparata non-linear codes, non-linear codes with an explicitly defined list of codewords, etc.).
  • the polar code itself is one type of the linear outer codes, which allows one to “redistribute” a probability of errors among polarized bit channels representative of a physical communication channel of interest. Some bit channels have a lower probability of errors than other bit channels. The bit channels having a lower probability of errors, which are also referred to as noiseless bit channels, are then used to transmit information bits. The other bit channels are “frozen” in the sense that they are used merely to transmit frozen bits. Since both a transmitting side and a receiving side know which of the bit channels are frozen, an arbitrary value, such as a binary zero, for example, can be allocated to each of the frozen bit channels.
  • the polar code allows one to deliver desired information bits by using highly reliable bit channels, thereby minimizing the occurrence of errors.
  • the bit channels generated by using the polar code are limited in number to 2^n.
  • different approaches have been proposed, including the ones involving using the concatenated codes.
  • the prior art solutions relating to the concatenated codes mainly rely on the linear outer codes having an outer code length similar to P.
  • the prior art solutions use an outer code length that is also equal to a power of two.
  • an available range of lengths N for the concatenated codes is much wider. For example, assuming that the maximal length N of the concatenated code is limited to 1024, there will be a different number of outer code lengths in the power-of-two case and an arbitrary case, namely:
  • N ∈ {T·2^n ≤ N_max}, where T ∈ {5, 6, 7, 8} is the arbitrary length of the linear outer code.
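  • For instance (a minimal sketch that only assumes the N_max = 1024 limit from the example above), the available concatenated code lengths can be enumerated and compared with the pure power-of-two lengths as follows:

      # Enumerate concatenated code lengths N = T * 2**n that do not exceed N_max = 1024.
      N_max = 1024
      power_of_two_lengths = sorted(2 ** n for n in range(1, 11) if 2 ** n <= N_max)
      arbitrary_lengths = sorted({T * 2 ** n for T in (5, 6, 7, 8)
                                  for n in range(1, 11) if T * 2 ** n <= N_max})
      print(len(power_of_two_lengths), power_of_two_lengths)  # 10 lengths: 2, 4, ..., 1024
      print(len(arbitrary_lengths), arbitrary_lengths)        # noticeably more available lengths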
  • the construction of the concatenated codes essentially comes down to the construction of the outer codes, taking into account that the inner codes are represented by the conventional polar codes only.
  • the construction of the outer codes is based on the selection of code rates therefor, or in other words, an allocation of information bits thereto, since the code rate and the number of information bits are mutually dependent parameters, as should be obvious to those skilled in the art.
  • the number of the information bits allocated to each outer code is also known as an outer code dimension, and the outer code dimensions for all outer codes are combined into a vector of outer code dimensions.
  • the present disclosure provides a new solution for obtaining a concatenated code structure that enables the construction of concatenated codes having a code length not necessarily equal to a power of two, a low decoding latency and better error correction performance compared to the conventional polar codes, as well as the simplification of the code construction process itself.
  • the term“concatenated code structure” refers to a combination of parameters required to construct the concatenated code of a desired length.
  • the concatenated code structure comprises the lengths T and N, the vector of outer code dimensions, and generator matrices for the inner polar codes.
  • the concatenated code structure comprises, in addition to the parameters listed above, generator matrices for the outer codes.
  • the concatenated code structure thus defined may be used in both the encoding and decoding processes, as will be discussed later.
  • Fig. 1 shows a block-scheme of an apparatus 100 for obtaining a concatenated code structure in accordance with an aspect of the present disclosure.
  • the apparatus 100 comprises a storage 102 and at least one processor 104 coupled to the storage 102.
  • the storage 102 stores processor executable instructions 106 to be executed by the at least one processor 104 to obtain the concatenated code structure in a proper manner.
  • the storage 102 may be implemented as a nonvolatile or volatile memory used in modern electronic computing machines.
  • the nonvolatile memory may include Read-Only Memory (ROM), ferroelectric Random-Access Memory (RAM), Programmable ROM (PROM), Electrically Erasable PROM (EEPROM), solid state drive (SSD), flash memory, magnetic disk storage (such as hard drives and magnetic tapes), optical disc storage (such as CD, DVD and Blu-ray discs), etc.
  • As for the volatile memory, examples thereof include Dynamic RAM, Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Static RAM, etc.
  • the processor 104 may be implemented as a central processing unit (CPU), general-purpose processor, single-purpose processor, microcontroller, microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), digital signal processor (DSP), complex programmable logic device, or the like. It is worth noting that the processor 104 may be implemented as any combination of the aforesaid. As an example, the processor 104 may be a combination of two or more CPUs, general-purpose processors, etc.
  • the processor executable instructions 106 stored in the storage 102 may be configured as a computer executable code which causes the processor 104 to perform the aspects of the present disclosure.
  • the computer executable code for carrying out operations or steps for the aspects of the present disclosure may be written in any combination of one or more programming languages, such as Java, C, C++, Python or the like.
  • the computer executable code may be in the form of a high-level language or in a pre-compiled form, and be generated by an interpreter (also pre-stored in the storage 102) on the fly.
  • Fig. 2 shows a flowchart for a method 200 of obtaining the concatenated code structure in accordance with another aspect of the present disclosure.
  • the method 200 is intended to be performed by the processor 104 of the apparatus 100 when the processor 104 executes the processor executable instructions 106.
  • the method 200 starts with step S202, in which the processor 104 receives the following input data: an outer code length T, where T is a positive integer; a desired length N of the concatenated code, where N = T·2^n ≤ N_max, N_max is a maximal length of the concatenated code which depends, for example, on a communication or storage system where the concatenated code is intended to be used, 2^n is a polar code length, and n is a positive integer; and a concatenated code dimension K indicative of a number of information bits.
  • in step S204, the processor 104 calculates a vector of outer code dimensions based on the input data, as well as in accordance with a predefined bit index sequence corresponding to the outer code length T in one embodiment, or the capacities of the polarized bit channels in another embodiment.
  • in step S206, the processor 104 determines generator matrices for the inner polar codes based on the input data.
  • the method 200 ends in step S208, in which the processor 104 obtains the concatenated code structure based on the lengths T and N, the vector of outer code dimensions, and the generator matrices for the inner polar codes.
  • the method 200 comprises one additional step (not shown), in which the processor 104 also determines generator matrices for the outer codes based on the length T and the vector of outer code dimensions. The additional step may be performed directly after the step S204 or after the step S206.
  • the concatenated code structure is obtained by the processor 104 in the step S208 based on the lengths T and N, the vector of outer code dimensions, the generator matrices for the outer codes, and the generator matrices for the inner polar codes.
  • the concatenated code dimension K received in the step S202 of the method 200 may be set by taking into account Cyclic Redundancy Check (CRC) bits.
  • the concatenated code dimension K may indicate the number of the information bits plus the CRC bits.
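  • The following Python sketch illustrates, for instance, how such a dimension could be formed; the generator polynomial and the information bits used here are hypothetical examples and are not prescribed by the present disclosure:

      def crc_bits(info_bits, poly):
          """CRC remainder of the information bits, computed by polynomial division mod 2.
          poly is given MSB-first, e.g. [1, 0, 0, 1, 1] for x^4 + x + 1 (no initial value
          or bit reflection, for simplicity)."""
          r = len(poly) - 1
          reg = list(info_bits) + [0] * r          # message shifted left by the CRC length
          for i in range(len(info_bits)):
              if reg[i]:
                  for j, p in enumerate(poly):
                      reg[i + j] ^= p
          return reg[-r:]

      info = [1, 0, 1, 1, 0, 0, 1, 0]              # hypothetical information bits
      crc = crc_bits(info, [1, 0, 0, 1, 1])        # hypothetical 4-bit CRC polynomial
      K = len(info) + len(crc)                     # concatenated code dimension including the CRC bits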
  • Figs. 3 and 4 show flowcharts for calculating the vector of outer code dimensions in accordance with the predefined bit index sequence of type I and type II, respectively.
  • the processor 104 is configured to calculate the vector of outer code dimensions by performing substeps S302-S308 constituting the step S204. In particular, in substep S302, the processor 104 removes, from the predefined bit index sequence, the bit indices greater than N to obtain a reduced bit index sequence.
  • Then, in substep S304, the processor 104 initializes an information bit masking vector u of length N with zero vector components. Next, the processor 104 assigns, in substep S306, one to each vector component u_j of the information bit masking vector u whose index occurs among the first K bit indices of the reduced bit index sequence.
  • the processor 104 calculates, in the last substep S308, the part K_i of the information bits intended for each outer code from the information bit masking vector according to equation (1), i.e. as the sum of the components u_((i-1)T+1), ..., u_(iT).
  • in an illustrative example, it is assumed that T = 5, N = 20 (so that N/T = 4), K = 10, and the predefined bit index sequence comprises N_sequence = 40 items.
  • the processor 104 removes, from the predefined bit index sequence (which comprises 40 items), the bit indices greater than 20 to obtain the reduced bit index sequence.
  • in this example, the reduced bit index sequence comprises 20 items (the bit indices from 1 to 20, in the order defined by the predefined bit index sequence), and the information bit masking vector u is initially an all-zero vector of length 20.
  • the processor 104 is instructed, in the substep S306, to assign one to each vector component u_j whose index occurs among the first K bit indices of the reduced bit index sequence, i.e. (20, 19, 18, 15, 10, 17, 14, 9, 16, 13). That is, the information bit masking vector is transformed into u = (0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1).
  • the processor 104 calculates, in the substep S308, the parts for the outer codes according to said equation (1), thereby obtaining the vector of outer code dimensions in the form of [0, 2, 3, 5].
  • the processor 104 may be configured, at first, to reshape the modified vector u row-by-row into a matrix of size N/T × T and calculate the parts K_i simply by counting how many ones there are in each row of the matrix; in the above example, the four rows of the 4 × 5 matrix contain 0, 2, 3 and 5 ones, respectively.
  • the embodiment involving using the predefined bit index sequence of type I in the step S204 may be written as the following pseudocode:
  • Pseudocode 1: Calculation of the vector of outer code dimensions based on the type I sequence S^I.
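  • A minimal Python sketch of this calculation, consistent with the description above, might look as follows; the bit index sequence at the end is hypothetical and is chosen only so that its reduction reproduces the worked example (T = 5, N = 20, K = 10):

      def dims_from_type1_sequence(sequence, N, T, K):
          """Vector of outer code dimensions from a type I bit index sequence (a sketch)."""
          reduced = [j for j in sequence if j <= N]      # drop bit indices greater than N
          u = [0] * N                                    # information bit masking vector
          for j in reduced[:K]:
              u[j - 1] = 1                               # sequence indices are 1-based
          # reshape u row by row into an (N/T) x T matrix and count the ones in each row
          return [sum(u[i * T:(i + 1) * T]) for i in range(N // T)]

      # Hypothetical type I sequence whose entries not exceeding 20 begin with the ten
      # indices of the worked example above.
      seq_type1 = [20, 19, 18, 15, 10, 17, 14, 9, 16, 13, 12, 8, 11, 7, 6, 5, 4, 3, 2, 1]
      print(dims_from_type1_sequence(seq_type1, N=20, T=5, K=10))   # -> [0, 2, 3, 5]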
  • the predefined bit index sequence of type II has the same length N_sequence = T·2^(n_max) ≤ N_max, but comprises bit indices represented by integer numbers from the set (1, 2, ..., N_sequence/T). This means that the sequence items can be repeated within the sequence, as opposed to the sequence of type I.
  • the processor 104 is configured to calculate the vector of outer code dimensions by performing substeps S402-S406 constituting the step S204. In particular, in substep S402, the processor 104 removes, from the predefined bit index sequence, the bit indices greater than N /T to obtain the reduced bit index sequence.
  • in substep S404, the processor 104 initializes the vector of outer code dimensions with N/T zero vector components K_i, and in substep S406, the processor 104 increases each vector component K_i by one each time its index i occurs among the first K bit indices of the reduced bit index sequence (equation (2)).
  • in the same illustrative example, the reduced bit index sequence obtained by the processor 104 comprises items from the set (1, 2, 3, 4), since N/T = 4.
  • the processor 104 then initializes the vector of outer code dimensions with N/T = 4 zero vector components K_i, i.e. [0, 0, 0, 0].
  • the processor 104 is instructed, in the substep S406, to add one to K_i each time the index i occurs among the first K (i.e. 10) bit indices of the reduced bit index sequence according to said equation (2), thereby yielding the resulting vector of outer code dimensions for this example.
  • the embodiment involving using the predefined bit index sequence of type II in the step S204 may be written as the following pseudocode:
  • Pseudocode 2: Calculation of the vector of outer code dimensions based on the type II sequence S^II.
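  • A corresponding minimal Python sketch for the type II sequence might look as follows; the sequence used in the usage line is again hypothetical and merely produces the same vector [0, 2, 3, 5] for T = 5, N = 20, K = 10:

      def dims_from_type2_sequence(sequence, N, T, K):
          """Vector of outer code dimensions from a type II bit index sequence (a sketch)."""
          reduced = [i for i in sequence if i <= N // T]   # drop indices greater than N/T
          dims = [0] * (N // T)                            # N/T zero vector components K_i
          for i in reduced[:K]:
              dims[i - 1] += 1                             # add one each time index i occurs
          return dims

      # Hypothetical type II sequence (repetitions allowed); only the first K reduced items matter.
      seq_type2 = [4, 4, 4, 3, 4, 2, 3, 4, 2, 3, 1, 2, 1, 3, 2, 4, 1, 3, 1, 2]
      print(dims_from_type2_sequence(seq_type2, N=20, T=5, K=10))   # -> [0, 2, 3, 5]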
  • both of the predefined bit index sequences of type I and type II are calculated in advance, i.e. prior to starting the method 200, either by the apparatus 100 itself, i.e. by the processor 104, or by a remote device.
  • the apparatus 100 may be configured to connect, with a wireless or wired connection, to the remote device in order to download the predefined bit index sequence and store it in the storage 102 for further use. This allows reducing resource costs during the execution of the method 200.
  • the calculation of the predefined bit index sequence may be performed as discussed below with reference to Fig. 5.
  • Fig. 5 shows a method 500 for calculating the predefined bit index sequence of type I or type II.
  • the vectors of outer code dimensions are calculated by using each of the possible values K.
  • the calculated vectors of outer code dimensions are subjected to changes such that, for each pair of neighboring values K and K − 1, the vectors of outer code dimensions differ from each other only by one vector component.
  • the predefined bit index sequence of type I or type II is generated by using indices of such vector components.
  • the method 500 is performed differently depending on which of the predefined bit index sequences of type I and type II should be calculated.
  • the method 500 may be written as the following pseudocode for both types of the bit index sequences:
  • Pseudocode 3: Construction of the bit index sequence.
  • The other half of the bit index sequence may be calculated online.
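  • A schematic Python sketch of one possible interpretation of this construction is given below; it assumes a helper dims_for_K(K) returning the vector of outer code dimensions for a given K (for example, the capacity-based calculation of Pseudocode 4 below), and records, for each K, the index of the single component that is increased:

      def build_type2_sequence(dims_for_K, N_sequence, T):
          """Type II bit index sequence: the K-th item is the index of the outer code whose
          dimension grows by one when K - 1 information bits become K (an interpretation)."""
          current = [0] * (N_sequence // T)
          sequence = []
          for K in range(1, N_sequence + 1):
              target = dims_for_K(K)                 # "ideal" dimensions for this K
              # increase the single component that lags behind its target the most,
              # so that neighbouring vectors differ in exactly one component
              i_star = max(range(len(current)),
                           key=lambda i: (target[i] - current[i], -i))
              current[i_star] += 1                   # (the guard current[i] <= T is omitted here)
              sequence.append(i_star + 1)            # 1-based outer code index
          return sequence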
  • Fig. 6 shows a flowchart for calculating the vector of outer code dimensions by performing steps S602- S614 constituting the step S204.
  • the processor 104 estimates the capacities of the polarized bit channels provided by each inner polar code by using one of the following: the density evolution, approximation formulas or tabular data. All of these techniques are well known from the prior art (see, for example, the above-indicated work of S. ten Brink, G. Kramer, and A. Ashikhmin).
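  • As one deliberately simple illustration of such an estimate, the sketch below uses the closed-form recursion of the binary erasure channel, whose polarized Bhattacharyya parameters satisfy Z− = 2Z − Z² and Z+ = Z² with capacity 1 − Z; for AWGN channels, the density evolution or approximation techniques cited above would be used instead:

      def bec_bit_channel_capacities(n, erasure_prob):
          """Capacities of the 2**n polarized bit channels of a binary erasure channel
          (one common ordering convention; a simple closed-form special case)."""
          z = [erasure_prob]                         # Bhattacharyya parameter of the raw channel
          for _ in range(n):
              z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]   # (Z-, Z+) for every channel
          return [1.0 - zi for zi in z]              # for the BEC, capacity = 1 - Z

      caps = bec_bit_channel_capacities(n=3, erasure_prob=0.5)   # eight bit-channel capacities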
  • the processor 104 then calculates, in step S604, the part K_i of the information bits intended for each outer code by rounding a preliminary capacity-based value K_i', i.e. K_i = round(K_i').
  • the processor 104 is instructed, in step S606, to obtain a sum of all parts K_i of the information bits intended for the outer codes, and, in step S608, to determine whether the sum is equal to K. If the determination result is “Yes”, then the processor 104 calculates the vector of outer code dimensions by using the parts K_i in step S610. In the meantime, if the determination result is “No”, the processor 104 proceeds to further processing of the parts K_i. In particular, if it is determined that the sum is less than K, the processor 104 finds, in step S612, the index i such that the difference (K_i' − K_i) is maximal, and adds one to the respective part K_i in step S614.
  • Otherwise, if the sum is greater than K, the processor 104 finds, in the step S612, the index i such that the difference (K_i' − K_i) is minimal, and subtracts one from the respective part K_i in the step S614. Irrespective of whether the sum of the parts K_i is less or greater than K, the processor 104 should ensure that the respective part K_i satisfies the condition 0 ≤ K_i ≤ T after said adding or subtracting.
  • the processor 104 then returns to the step S606 in order to obtain the sum of all parts K_i again, and then to the step S608 in order to recheck whether the sum is equal to K. If the result is still “No”, the processor 104 repeats the steps S612 and S614. In other words, the steps S606, S608, S612 and S614 are performed repeatedly until the processor 104 obtains the determination result “Yes” in the step S608.
  • Pseudocode 4: Calculation of the vector of outer code dimensions based on channel capacities.
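  • A minimal Python sketch consistent with the steps S602–S614 above is shown below; it takes the already estimated bit-channel capacities as input, and the proportional rule used for the preliminary parts K_i' is an assumption of this sketch rather than the formula of step S604:

      def dims_from_capacities(capacities, T, K):
          """capacities[i]: estimated capacity of the i-th polarized bit channel of the inner
          polar codes, i.e. one value per outer code.  Returns the vector of outer code
          dimensions (a sketch; the proportional rule below is an assumption)."""
          total = sum(capacities)
          ideal = [K * c / total for c in capacities]        # preliminary real-valued parts K_i'
          dims = [min(T, max(0, round(x))) for x in ideal]   # K_i = round(K_i'), kept within [0, T]
          while sum(dims) != K:                              # steps S606-S614: adjust until the sum is K
              if sum(dims) < K:
                  i = max((i for i in range(len(dims)) if dims[i] < T),
                          key=lambda i: ideal[i] - dims[i])
                  dims[i] += 1                               # add to the most under-allocated part
              else:
                  i = min((i for i in range(len(dims)) if dims[i] > 0),
                          key=lambda i: ideal[i] - dims[i])
                  dims[i] -= 1                               # subtract from the most over-allocated part
          return dims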
  • Pseudocode 4 may be also used in the step S504 of the method 500 to calculate the vectors of outer code dimensions for all possible values K.
  • the generator matrices for the linear outer codes may be determined as follows.
  • the storage 102 may further comprise a database comprising the generator matrices corresponding to a wide range of values for the outer code length T, and the processor 104 may be further configured to access the database and find those of the generator matrices which correspond to the given values T and K t .
  • such a database may be stored in a remote device, such as a remote server, and the processor 104 may be configured to communicate with the remote device to retrieve the respective generator matrices.
  • the columns of the database are represented by different generator matrices G_T(K_i), while the rows of the database are represented by different values K_i.
  • the structure of the database shown in Fig. 7 is illustrative and may be replaced with any other structure depending on particular applications.
  • Given the concatenated code structure, i.e. the lengths N and T, the vector of outer code dimensions, the generator matrices G_T(K_i) (in case of the linear outer codes), and the generator matrices for the inner polar codes, one may easily construct the concatenated code itself.
  • Fig. 8 shows a flowchart for an information encoding method 800 in accordance with one other aspect of the present disclosure.
  • the method 800 comprises steps S802-S806 and is intended to be performed by a concatenated encoder. Similar to the apparatus 100, the concatenated encoder may be implemented as the combination of a memory storing computer-executable instructions and at least one processor executing the computer- executable instructions to perform the method 800.
  • the method 800 starts with the step S802 consisting in receiving a vector of K information bits.
  • the difference between the concatenated code dimension K and the vector of K information bits is that the former is just indicative of a number of information bits to be encoded, while the latter is indicative of a certain arrangement of the K information bits.
  • the method 800 proceeds to the step S804 consisting in receiving the concatenated code structure obtained in the method 200.
  • the last step S806 of the method 800 consists in encoding the vector of K information bits by using the concatenated code structure.
  • the step S806 of the method 800 will be now described in more detail.
  • the concatenated code structure comprises the following parameters: the lengths T and N, the vector of outer code dimensions, the generator matrices G_T(K_i) for the outer codes (if the outer codes are linear), and the generator matrices for the inner polar codes.
  • the vector v(K) is divided into N/T sub-vectors, each having a length equal to the corresponding K_i.
  • Each of the sub-vectors v(K_i) is intended for a respective one of the N/T outer codes.
  • the sub-vectors v(K_i) are encoded with the outer codes, thereby obtaining an outer-code matrix in which each row comprises an outer-code codeword (c_i1, ..., c_iT), where 1 ≤ i ≤ N/T.
  • the outer-code codewords are obtained by respectively applying the generator matrices G_T(K_i) to the sub-vectors v(K_i), i.e. (c_i1, ..., c_iT) = v(K_i)·G_T(K_i).
  • If K_i = 0, the respective outer-code codeword (c_i1, ..., c_iT) is an all-zero codeword, as noted earlier.
  • each j-th column of the outer-code matrix (c_ij) is encoded by using a respective one of the inner polar codes to obtain a concatenated-code codeword in the form of a resulting matrix (d_ij).
  • the polar encoding itself is a well-known process, for which reason its details are omitted herein.
  • the initial vector of K information bits is encoded into the resulting matrix (d_ij) representing the concatenated-code codeword.
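  • A compact sketch of this encoding chain is shown below; it uses numpy for the mod-2 matrix products, builds the inner polar code generator as the n-fold Kronecker power of the kernel [[1, 0], [1, 1]] (bit-reversal and other implementation-specific conventions are omitted), and assumes that the dimensions K_i and the generator matrices G_T(K_i) come from the concatenated code structure; it illustrates the data flow rather than defining a reference encoder:

      import numpy as np

      def polar_generator(n):
          """n-fold Kronecker power of the polar kernel [[1, 0], [1, 1]] (no bit reversal)."""
          F = np.array([[1, 0], [1, 1]], dtype=int)
          G = np.array([[1]], dtype=int)
          for _ in range(n):
              G = np.kron(G, F)
          return G

      def concatenated_encode(info_bits, dims, outer_generators, T, n):
          """info_bits: the K information bits; dims: vector of outer code dimensions K_i;
          outer_generators[i]: K_i x T generator matrix of the i-th outer code (may be None
          when K_i = 0).  Returns the concatenated-code codeword as a T x 2**n matrix whose
          rows are the inner polar codewords (a sketch of the data flow)."""
          rows, pos = [], 0
          for K_i, G_T in zip(dims, outer_generators):
              v_i = np.array(info_bits[pos:pos + K_i], dtype=int)
              pos += K_i
              # outer encoding: an all-zero codeword when K_i = 0, otherwise v_i * G_T(K_i) mod 2
              rows.append(np.zeros(T, dtype=int) if K_i == 0 else (v_i @ G_T) % 2)
          C = np.vstack(rows)                   # outer-code matrix (c_ij), size (N/T) x T
          G_inner = polar_generator(n)          # generator of the inner polar codes, size 2**n x 2**n
          return (C.T @ G_inner) % 2            # each column of C is encoded by an inner polar code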
  • Fig. 9 shows a flowchart for an information decoding method 900 in accordance with one more aspect of the present disclosure.
  • the method 900 comprises steps S902-S906 and is intended to be performed by a concatenated decoder. Similar to the apparatus 100, the concatenated decoder may be implemented as the combination of a memory storing computer-executable instructions and at least one processor executing the computer- executable instructions to perform the method 900.
  • the method 900 starts with the step S902 consisting in receiving a channel output comprising the information bits encoded into the concatenated-code codeword by using the method 800.
  • the channel output represents a signal which the concatenated decoder receives from outside, for example, from a communication channel, and therefore comprises different noises in addition to the encoded information bits.
  • the method 900 proceeds to the step S904 consisting in receiving the concatenated code structure itself.
  • the last step S906 of the method 900 consists in retrieving the information bits from the received channel output by using the concatenated code structure. The details of such retrieval can be found, for example, in the following paper: H. Saber and I. Marsland, “Design of Generalized
  • Fig. 10 shows one example in which the apparatus 100 is used in a communication system 1000.
  • the communication system 1000 comprises a transmitting side and a receiving side.
  • the transmitting side comprises the apparatus 100 and a concatenated encoder 1002 comprising an outer encoder 1004 and an inner encoder 1006.
  • the receiving side is connected to the transmitting side via a communication channel 1008, and comprises the apparatus 100 and a concatenated decoder 1010 comprising an inner decoder 1012 and an outer decoder 1014.
  • the operation principle of the communication system 1000 is described below.
  • the concatenated encoder 1002 receives the vector of K information bits, i.e. v(K), and the concatenated code structure from the apparatus 100. By using the concatenated code structure, the concatenated encoder 1002 performs the above-described method 800 with regard to the information bits of the vector v(K).
  • the outer-code codewords constituting the outer-code matrix (c_ij) are generated by the outer encoder 1004 of the concatenated encoder 1002, while the concatenated-code codeword in the form of the resulting matrix (d_ij) is generated by the inner encoder 1006 of the concatenated encoder 1002.
  • the concatenated encoder 1002 provides the concatenated-code codeword to the communication channel 1008.
  • the concatenated-code codeword should be properly modulated onto a carrier wave prior to entering the communication channel 1008.
  • any suitable well-known modulation schemes may be used, and all of them are intended to be within the scope of the present disclosure.
  • the modulated carrier wave is subjected to different noises when propagating over the communication channel 1008, for which reason a channel output comprises the combination of the modulated carrier wave and the noises, as discussed earlier.
  • the channel output should first be demodulated by a suitable demodulation scheme, as should be again obvious to those skilled in the art.
  • the demodulated channel output is provided to the concatenated decoder 1010, which also receives the same concatenated code structure from the apparatus 100.
  • the concatenated decoder 1010 performs the above-described method 900 with regard to the demodulated channel output.
  • the inner decoder 1012 and the outer decoder 1014 operate jointly to retrieve the information bits from the demodulated channel output.
  • Such joint operation is schematically shown as a double-headed arrow in Fig. 10.
  • the concatenated decoder 1010 may be implemented as any combination of the SC/SCL/CRC-aided SCL decoder as the inner decoder 1012 and a Maximum Likelihood (ML) decoder as the outer decoder 1014.
  • One other embodiment is possible, in which the inner decoder 1012 decodes the T inner polar codes in parallel.
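  • Since the T inner polar codes are independent of one another, their decoding can be dispatched concurrently, for example along the lines of the sketch below, which assumes a user-supplied single-code decoder sc_decode (e.g. an SC, SCL or CRC-aided SCL decoder):

      from concurrent.futures import ThreadPoolExecutor

      def decode_inner_codes_in_parallel(llr_columns, sc_decode, max_workers=None):
          """llr_columns: T arrays of channel LLRs, one per inner polar code.
          sc_decode: user-supplied decoder for a single inner polar code (assumed here).
          Returns the T decoded inner codewords in their original order."""
          with ThreadPoolExecutor(max_workers=max_workers) as pool:
              return list(pool.map(sc_decode, llr_columns))   # the T inner polar codes are decoded in parallel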
  • Figs. 11-14 illustrate the results of a code performance comparison between the concatenated codes constructed based on the concatenated code structure obtained by the method 200 (hereinafter referred to as the concatenated code for short), and the conventional polar codes constructed by using the rate matching scheme disclosed in 3GPP TS 38.212, “Multiplexing and channel coding,” Release 15, 2017.
  • the results have been obtained by using an Additive White Gaussian Noise (AWGN) channel as the communication channel 1008, a Quadrature Phase Shift Keying (QPSK) modulation scheme, and the above-indicated combination of the CRC-aided SCL decoder and the ML decoder.
  • the following parameters of the CRC-aided SCL decoder have been used: the list size equal to 8, and the CRC set to 19.
  • such values of the parameters do not in any way limit the possibility of using the present disclosure, and any other values of the parameters may be used depending on particular applications, as should be apparent to those skilled in the art.
  • In Figs. 11-14, SNR denotes the signal-to-noise ratio and FER denotes the Frame Error Rate.
  • the gain is calculated as a difference in dB between the SNRs obtained by using the conventional polar code and the concatenated code for each K.
  • the insert shown on the right side of both dependences demonstrates a distribution of gain values over all values K.
  • both dependences and the insert are obtained taking into account the bit index sequence of type I or type II (i.e. Pseudocode 1 or 2) in the step S204 of the method 200.
  • the SNR-on-K dependence for the concatenated code is smoother than that for the conventional polar codes (see the dashed curve). This means that the concatenated code provides a gradual change in the code performance, i.e. the SNR, when changing K.
  • Fig. 12 may be considered as being composed by combining multiple distributions of gain values (each for a different N) which are similar to that shown in the insert in Fig. 11.
  • Fig. 12 is obtained by using Pseudocode 1 or 2 in the step S204 of the method 200.
  • At some values N, such, for example, as 160, 192, 640, and 768, the gain values change over a relatively wide range of dB, thereby meaning that the difference between the SNRs obtained by using the concatenated code and the conventional polar code is significant at such values N.
  • the concatenated code provides better code performance compared to the conventional polar code.
  • Fig. 13 shows the dependences of the SNR (the upper one) and gain (the bottom one) on different values K, taking into account the same parameters T, N, and FER as discussed above with reference to Fig. 11. However, the dependences and the insert alongside thereof are now calculated taking into account the capacities of polarized bit channels (i.e. Pseudocode 4).
  • Fig. 14 is also obtained by using Pseudocode 4 in the step S204 of the method 200.
  • At some values N, such, for example, as 160, 192, 640, and 768, the gain values change over a relatively wide range of dB, thereby meaning that the difference between the SNRs obtained by using the concatenated code and the conventional polar code is significant at such values N.
  • the dependences obtained by using Pseudocode 1 or 2 and those obtained by using Pseudocode 4 are similar in nature, i.e. they are all monotonic dependences, without any abrupt jumps.
  • Pseudocode 1 or 2 and Pseudocode 4 may be equally used when obtaining the concatenated code structure used further to construct the concatenated code of the desired length N.
  • Pseudocode 4 may be more preferable when it is required to employ less memory resources, while accepting more calculations.
  • Pseudocode 1 or 2 may be used when it is necessary to provide less computational complexity, while accepting the use of more memory resources.
  • each block or step of the methods described herein, or any combinations of the blocks or steps can be implemented by various means, such as hardware, firmware, and/or software.
  • one or more of the blocks or steps described above can be embodied by computer executable instructions, data structures, program modules, and other suitable data representations.
  • the computer executable instructions which embody the blocks or steps described above can be stored on a corresponding data carrier and executed by at least one processor like the processor 104 of the apparatus 100.
  • This data carrier can be implemented as any computer-readable storage medium configured to be readable by said at least one processor to execute the computer executable instructions.
  • Such computer-readable storage media can include both volatile and nonvolatile media, removable and non-removable media.
  • the computer-readable media comprise media implemented in any method or technology suitable for storing information.
  • the practical examples of the computer-readable media include, but are not limited to, information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic tape, magnetic cassettes, magnetic disk storage, and other magnetic storage devices.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure relates generally to data encoding and decoding techniques, and in particular to an apparatus and method for obtaining a concatenated code structure, as well as to a corresponding computer program product. The proposed apparatus and method enable the construction of concatenated codes characterized by better code length adaptation and reduced decoding latency, as well as low complexity, while retaining error correction performance similar to or even better than that of conventional concatenated codes based solely on polar codes or on linear outer codes having an outer code length equal to a power of two.
PCT/RU2018/000819 2018-12-13 2018-12-13 Appareil et procédé d'obtention de structures de code concaténé et produit programme informatique associé WO2020122749A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880100216.1A CN113196671B (zh) 2018-12-13 2018-12-13 用于获得级联码结构的装置和方法及其计算机程序产品
PCT/RU2018/000819 WO2020122749A1 (fr) 2018-12-13 2018-12-13 Appareil et procédé d'obtention de structures de code concaténé et produit programme informatique associé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/RU2018/000819 WO2020122749A1 (fr) 2018-12-13 2018-12-13 Appareil et procédé d'obtention de structures de code concaténé et produit programme informatique associé

Publications (1)

Publication Number Publication Date
WO2020122749A1 true WO2020122749A1 (fr) 2020-06-18

Family

ID=65278441

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2018/000819 WO2020122749A1 (fr) 2018-12-13 2018-12-13 Appareil et procédé d'obtention de structures de code concaténé et produit programme informatique associé

Country Status (2)

Country Link
CN (1) CN113196671B (fr)
WO (1) WO2020122749A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11677500B2 (en) * 2020-09-30 2023-06-13 Polaran Haberlesme Teknolojileri Anonim Sirketi Methods and apparatus for encoding and decoding of data using concatenated polarization adjusted convolutional codes

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140208183A1 (en) * 2013-01-23 2014-07-24 Samsung Electronics Co., Ltd. Method and system for encoding and decoding data using concatenated polar codes
US20180026663A1 (en) * 2016-07-19 2018-01-25 Mediatek Inc. Low complexity rate matching for polar codes
EP3407519A1 (fr) * 2016-08-11 2018-11-28 Huawei Technologies Co., Ltd. Procédé, dispositif et équipement à utiliser dans un codage de polarisation

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106452460B (zh) * 2016-09-21 2018-02-02 华中科技大学 一种极化码与重复码级联的纠错编码方法
CN105811998B (zh) * 2016-03-04 2019-01-18 深圳大学 一种基于密度演进的极化码构造方法及极化码编译码系统
CN106888025B (zh) * 2017-01-19 2018-03-20 华中科技大学 一种基于极化码的级联纠错编译码方法和系统
CN108574561B (zh) * 2017-03-14 2020-11-17 华为技术有限公司 极化码编码的方法和装置
CN107017892B (zh) * 2017-04-06 2019-06-11 华中科技大学 一种校验级联极化码编码方法及系统
CN107395324B (zh) * 2017-07-10 2020-04-14 北京理工大学 一种基于qup方法的低译码复杂度速率匹配极化码传输方法
CN108023679B (zh) * 2017-12-07 2020-06-16 中国电子科技集团公司第五十四研究所 基于并行级联系统极化码的迭代译码缩放因子优化方法
CN108055044A (zh) * 2018-01-19 2018-05-18 中国计量大学 一种基于ldpc码和极化码的级联系统
CN108462560A (zh) * 2018-03-26 2018-08-28 西安电子科技大学 一种用于极化码级联的编译码方法
CN108847850A (zh) * 2018-06-13 2018-11-20 电子科技大学 一种基于crc-sscl的分段极化码编译码方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140208183A1 (en) * 2013-01-23 2014-07-24 Samsung Electronics Co., Ltd. Method and system for encoding and decoding data using concatenated polar codes
US20180026663A1 (en) * 2016-07-19 2018-01-25 Mediatek Inc. Low complexity rate matching for polar codes
EP3407519A1 (fr) * 2016-08-11 2018-11-28 Huawei Technologies Co., Ltd. Procédé, dispositif et équipement à utiliser dans un codage de polarisation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Design of Low-Density Parity-Check Codes for Modulation and Detection", IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 52, no. 4, April 2004 (2004-04-01), pages 670 - 678
H. SABER; I. MARSLAND: "Design of Generalized Concatenated Codes Based on Polar Codes With Very Short Outer Codes", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY,, vol. 66, no. 4, April 2017 (2017-04-01), pages 3103 - 3115, XP011645880, DOI: doi:10.1109/TVT.2016.2591584

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11677500B2 (en) * 2020-09-30 2023-06-13 Polaran Haberlesme Teknolojileri Anonim Sirketi Methods and apparatus for encoding and decoding of data using concatenated polarization adjusted convolutional codes

Also Published As

Publication number Publication date
CN113196671A (zh) 2021-07-30
CN113196671B (zh) 2023-10-13

Similar Documents

Publication Publication Date Title
US20180248567A1 (en) Method for error-correction coding
JP5705106B2 (ja) ユークリッド空間リード−マラー符号の軟判定復号を実行する方法
TWI768295B (zh) 用於音訊/視訊樣本向量之錐型向量量化檢索/解檢索之方法及裝置
KR101856416B1 (ko) 극 부호를 위한 저 복잡도 scl 복호 방법 및 장치
US9059746B2 (en) Data sharing method, transmitter, receiver and data sharing system
US11990921B2 (en) List decoding of polarization-adjusted convolutional codes
CN109075805B (zh) 实现极化码的设备和方法
Doan et al. Neural dynamic successive cancellation flip decoding of polar codes
WO2018030910A1 (fr) Codage et décodage de codes polaires étendus à des longueurs qui ne sont pas des puissances de deux
WO2020122749A1 (fr) Appareil et procédé d'obtention de structures de code concaténé et produit programme informatique associé
CN110768748B (zh) 回旋码解码器及回旋码解码方法
WO2020122748A1 (fr) Appareil et procédé pour obtenir des structures de code concaténé et produit programme informatique associé
CN114499544A (zh) 一种极化码的译码方法
JP2004201323A (ja) 複雑度を減らしたコードテーブルを使用する復調装置及びその方法
CN115529104B (zh) 基于最大互信息的极化码量化译码方法及装置
CN114499548B (zh) 一种译码方法、装置及存储介质
CN116783826A (zh) 基于自同构的极化编码和解码
US11695430B1 (en) Method for decoding polar codes and apparatus thereof
US12047094B2 (en) Decoding method and decoding device
JP5132738B2 (ja) 誤り訂正復号器及び受信機
US12119847B2 (en) Noniterative entropy coding
CN110768747B (zh) 回旋码解码器及回旋码解码方法
CN116015313A (zh) 一种编解码方法、装置、设备及存储介质
JP3345698B2 (ja) 誤り訂正復号化回路
CN105474548B (zh) 一种信道编码方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18842858

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18842858

Country of ref document: EP

Kind code of ref document: A1