WO2020052537A1 - Polar code decoding method and device - Google Patents

Polar code decoding method and device

Info

Publication number
WO2020052537A1
Authority
WO
WIPO (PCT)
Prior art keywords
decoding
layer
node
preset
sequence
Prior art date
Application number
PCT/CN2019/105033
Other languages
English (en)
French (fr)
Inventor
童佳杰
邱鹏程
刘小成
张其蕃
王俊
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020052537A1 publication Critical patent/WO2020052537A1/zh

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13: Linear codes

Definitions

  • the present application relates to the field of communications, and in particular, to a method and a device for decoding a polar code.
  • Polar code decoding usually uses successive cancellation list (Successive Cancellation List, SCL) decoding.
  • In the SCL decoding process, L decoding results are generated at each decoding stage, together with pointers associating successive decoding stages.
  • In this process, L copies of the decoding results and L copies of the decoding pointers are stored. After decoding is completed, the decoding results are recovered one by one from back to front, using the last decoding pointer as the entry point.
  • the embodiments of the present application provide a method and a device for decoding a polar code to save storage space during the decoding process.
  • an embodiment of the present application provides a method for decoding a polar code, including:
  • the receiving device receives a log-likelihood ratio LLR sequence corresponding to the sequence to be decoded
  • the receiving device obtains, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer, where the number of decoding layers is log2(N), N is the number of bits in the sequence to be decoded, and N is an integer;
  • Psum is an intermediate result used in the G operation in Polar code decoding, Psum = u·G_N, where u is the decision result of the LLRs of the end nodes and G_N is the decoding matrix; the positions of all preset nodes together cover the positions of all the decoded bits;
  • the receiving device obtains the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer. The decoded sequence can be recovered from the Psum of the preset nodes, so there is no need to store decoding pointers, which saves storage space.
  • In a possible design, the receiving device obtaining the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer includes:
  • the receiving device obtains the decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, where the decoding matrix corresponding to the Mth decoding layer is a K×K matrix, the number of preset nodes in the Mth decoding layer is N/2^M, 1 ≤ M ≤ log2(N), K = N/2^M, and M is an integer; through u = Psum·G_N, the decoding result corresponding to each decoding layer can be obtained, where G_N is a K×K matrix and Psum is a row vector of length K;
  • the receiving device obtains the decoded sequence according to the decoding result corresponding to each decoding layer, where the decoding length corresponding to the decoding result of the Mth layer is N/2^M; that is, after the decoding result of each decoding layer is obtained, the decoded sequence is obtained by concatenation.
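As an illustration of this per-layer recovery, the following minimal sketch builds G_N as the Kronecker power of the standard 2×2 polar kernel (F_2 = [[1, 0], [1, 1]] is an assumption here, since the text above only names G_N), multiplies each layer's Psum row vector by the matching K×K matrix over GF(2), and concatenates the per-layer results in layer order. Function names are illustrative and not the application's implementation.

```python
# Minimal sketch: recover each layer's decoding result from its Psum and splice.
# Assumes the standard polar kernel F2 = [[1,0],[1,1]]; arithmetic is over GF(2).

def kron_gf2(a, b):
    """Kronecker product of two 0/1 matrices, reduced mod 2."""
    return [[(a_ij * b_kl) % 2 for a_ij in a_row for b_kl in b_row]
            for a_row in a for b_row in b]

def build_gn(n):
    """G_N = F2 kron ... kron F2 (log2(N) factors)."""
    f2 = [[1, 0], [1, 1]]
    g = [[1]]
    while len(g) < n:
        g = kron_gf2(g, f2)
    return g

def recover_layer(psum, gn):
    """u = Psum * G_N over GF(2): one decoded bit per preset node of the layer."""
    k = len(psum)
    return [sum(psum[i] * gn[i][j] for i in range(k)) % 2 for j in range(k)]

def splice_layers(psums_per_layer):
    """Recover each layer's bits and concatenate them in layer order
    (layer M contributes N / 2**M bits, matching its preset-node count)."""
    decoded = []
    for psum in psums_per_layer:
        decoded.extend(recover_layer(psum, build_gn(len(psum))))
    return decoded
```

For the N = 16 example discussed later, psums_per_layer would hold the four row vectors [P1, ..., P8], [P9, ..., P12], [P13, P14], and [P15].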
  • The node position of a preset node in any decoding layer is different from the node positions of the preset nodes in the other decoding layers.
  • The receiving device obtaining the decoded sequence according to the decoding result corresponding to each decoding layer includes:
  • the receiving device determines the decoding position, in the decoded sequence, of the decoding result corresponding to each decoding layer according to the node positions of the preset nodes in each decoding layer, where the preset nodes in each decoding layer are arranged consecutively and the decoding position corresponds to the positions of those consecutively arranged preset nodes in the decoding layer;
  • the receiving device obtains the decoded sequence according to the decoding result and the decoding position corresponding to each decoding layer. That is, for each decoding layer, the decoding position of its decoding result in the decoded sequence is the same as the positions of the consecutively arranged preset nodes in that decoding layer.
  • In the SCL decoding process, after the decision on the LLR of an end node, there is a path expansion process; to avoid excessive computation, the number of expanded paths is generally limited by the search width L.
  • The search width L is the maximum number of paths retained during path expansion.
  • When pruning paths, a path metric (Path Metric, PM) can be used to determine which paths are retained or deleted.
  • In a possible design, before the receiving device obtains the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, the method further includes:
  • after obtaining the decision result of each end node of the log2(N)-th decoding layer, the receiving device updates the CRC operation result according to that decision result, until the CRC operation result is updated according to the decision result of the last end node, yielding the updated CRC operation result;
  • the receiving device obtains a check result that passes the check according to the updated CRC operation result.
  • With this CRC check, once the CRC operation result has been updated with the decision result of an end node, that decision result and its LLR can be discarded and need not be stored. That is, this implementation does not first recover a check sequence, check it, and take the sequence that passes the check as the decoded sequence; instead, the check is performed bit by bit during the decision process, and the final decoded sequence is recovered only after the decision bit of the last end node passes the check. Only one decoded sequence is recovered; compared with the prior art, which needs to recover and store multiple check sequences, this greatly saves storage space.
  • The receiving device obtaining, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer includes:
  • the receiving device recursively obtains, according to the LLR sequence, the Psum of the preset nodes in each decoding layer, in order from the log2(N)-th layer to the first layer.
  • an embodiment of the present application provides a receiving device.
  • the receiving device may be a terminal device or a network device.
  • the receiving device includes:
  • a receiving module for receiving a log-likelihood ratio LLR sequence corresponding to a sequence to be decoded
  • a processing module, configured to obtain, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer, where the number of decoding layers is log2(N), N is the number of bits in the sequence to be decoded, and N is an integer;
  • the processing module is further configured to obtain a decoded sequence according to a Psum of a preset node in each of the decoding layers and a decoding matrix corresponding to each of the decoding layers.
  • the processing module is further specifically configured to:
  • obtain the decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, where the decoding matrix corresponding to the Mth decoding layer is a K×K matrix, the number of preset nodes in the Mth decoding layer is N/2^M, 1 ≤ M ≤ log2(N), K = N/2^M, and M is an integer;
  • obtain the decoded sequence according to the decoding result corresponding to each decoding layer, where the decoding length corresponding to the decoding result of the Mth layer is N/2^M.
  • The node position of a preset node in any decoding layer is different from the node positions of the preset nodes in the other decoding layers.
  • the processing module is further specifically configured to:
  • a decoded sequence is obtained according to a decoding result and a decoding position corresponding to each of the decoding layers.
  • the processing module is further configured to: before the decoded sequence is obtained according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, after the decision result of each end node of the log2(N)-th decoding layer is obtained, update the CRC operation result according to the decision result of that end node, until the CRC operation result is updated according to the decision result of the last end node and the updated CRC operation result is obtained;
  • the processing module is specifically configured to:
  • the Psum of the preset nodes in each decoding layer is recursively obtained, in order from the log2(N)-th layer to the first layer, according to the LLR sequence.
  • an embodiment of the present application provides a receiving device, including: a memory, a processor, and a computer program.
  • the computer program is stored in the memory, and the processor runs the computer program to perform the method according to the first aspect or the various possible designs of the first aspect.
  • an embodiment of the present application provides a storage medium, where the storage medium includes a computer program, and the computer program is configured to implement the method according to the first aspect or various possible designs of the first aspect.
  • an embodiment of the present application provides a computer program product, where the computer program product includes computer program code, and when the computer program code is run on a computer, the computer is caused to perform the method according to the first aspect or the various possible designs of the first aspect.
  • an embodiment of the present application provides a chip, including a memory and a processor, where the memory is used to store a computer program and the processor is used to call and run the computer program from the memory, so that the processor performs the method according to the first aspect or the various possible designs of the first aspect.
  • In the decoding method and device for a polar code provided in the embodiments, a receiving device receives a log-likelihood ratio LLR sequence corresponding to a sequence to be decoded, the receiving device obtains, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer, and the receiving device obtains the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer.
  • Because the decoded sequence can be recovered from the Psum of the preset nodes, there is no need to store decoding pointers, which saves storage space.
  • FIG. 1 is a schematic diagram of a system architecture of a sending device and a receiving device provided in this application;
  • FIG. 2 is a signaling flowchart of a method for decoding a polar code according to an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a decoding diagram according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for decoding a polar code according to an embodiment of the present application
  • FIG. 5 is a first schematic diagram of a decoding process according to an embodiment of the present application.
  • FIG. 6 is a second schematic diagram of a decoding process according to an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of a receiving device according to an embodiment of the present application.
  • FIG. 8 is a hardware schematic diagram of a receiving device according to an embodiment of the present application.
  • FIG. 9 is a hardware schematic diagram of a terminal device or a network device provided in an embodiment of the application.
  • The network architecture and service scenarios described in the embodiments of the present invention are intended to illustrate the technical solutions of the embodiments more clearly and do not constitute a limitation on the technical solutions provided by the embodiments. Those of ordinary skill in the art will appreciate that, as the network architecture evolves and new service scenarios emerge, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems.
  • the embodiments of the present application can be applied to wireless communication systems.
  • the wireless communication systems mentioned in the embodiments of the present application include, but are not limited to: the Narrowband Internet of Things (NB-IoT) system, the Global System for Mobile Communications (GSM), the Enhanced Data Rate for GSM Evolution (EDGE) system, the Wideband Code Division Multiple Access (WCDMA) system, the Code Division Multiple Access 2000 (CDMA2000) system, the Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) system, the Long Term Evolution (LTE) system, and next-generation 5G mobile communication systems.
  • the communication device involved in this application mainly includes a network device or a terminal device.
  • If the sending device in this application is a network device, the receiving device is a terminal device; if the sending device is a terminal device, the receiving device is a network device.
  • In the embodiments of the present application, the terminal device includes, but is not limited to, a mobile station (MS), a mobile terminal, a mobile telephone, a handset, portable equipment, and the like.
  • the terminal device may communicate with one or more core networks via a radio access network (RAN, Radio Access Network).
  • For example, the terminal device may be a mobile phone (or "cellular" phone) or a computer with wireless communication capability.
  • the terminal device may also be a portable, pocket-sized, handheld, built-in computer or vehicle-mounted mobile device or device.
  • the network device may be a device for communicating with a terminal device.
  • For example, the network device may be a base transceiver station (Base Transceiver Station, BTS) in a GSM or CDMA system, or a NodeB (NB) in a WCDMA system.
  • It may also be an evolved NodeB (Evolved Node B, eNB or eNodeB) in an LTE system, or the network device may be a relay station, an access point, an in-vehicle device, a wearable device, a network-side device in a future 5G network, or a network device in a future evolved public land mobile network (Public Land Mobile Network, PLMN).
  • the communication system of the present application may include a transmitting device and a receiving device.
  • FIG. 1 is a schematic diagram of a system architecture of a transmitting device and a receiving device provided in the present application.
  • The transmitting device is the encoding end and can be used to perform polar encoding and output the encoded sequence; the encoded sequence is transmitted over the channel to the decoding side.
  • The receiving device is the decoding end and can be used to receive the sequence to be decoded (that is, the encoded sequence) sent by the transmitting device and to decode that sequence.
  • a network device is used as an encoding terminal, and a terminal device is used as a decoding terminal.
  • The implementation in which the encoding end is a terminal device and the decoding end is a network device is similar and is not repeated here.
  • The Polar code is a linear block code. Its generator matrix is G_N, and the encoding process is u_N·G_N = x_N, where u_N = (u_1, u_2, ..., u_N) is a binary row vector of length N (that is, the mother code length), and G_N is an N×N matrix defined as the Kronecker product of log2(N) copies of the matrix F_2. The addition and multiplication operations mentioned above are all addition and multiplication operations over the binary Galois field.
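For reference, the kernel matrix and the Kronecker-power construction referred to above can be written out explicitly. The value of F_2 appears only as an image placeholder in the original description, so it is taken here from the standard polar-code definition:

```latex
F_2 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\qquad
G_N = F_2^{\otimes \log_2 N},
\qquad
\text{e.g. } G_4 = F_2 \otimes F_2 =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1
\end{pmatrix}.
```

Because F_2·F_2 = I over GF(2), G_N·G_N = I_N as well; the later recovery of u from Psum relies on this self-inverse property.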
  • In the encoding process of the Polar code, some bits in u are used to carry information and are called information bits; the set of indexes of these bits is denoted A. The other bits are set to fixed values agreed in advance by the transmitting and receiving ends and are called frozen bits (fixed bits); the set of their indexes is represented by the complement A^c of A. Without loss of generality, the frozen bits are usually set to 0; they can be set arbitrarily as long as the transmitting and receiving ends agree in advance.
  • The Polar code is decoded based on a successive cancellation (Successive Cancellation, SC) decoding algorithm, a successive cancellation list (SC List, SCL) decoding algorithm, or the like.
  • The SC decoding algorithm decodes sequentially starting from the first bit. The successive cancellation list decoding algorithm is an improvement on the SC decoding algorithm: multiple candidate decoding paths are retained for each bit, and after all bits have been decoded, the candidate decoding paths in the list are selected according to certain criteria to obtain the final decoding result.
  • The SCL decoder generates L sets of decoding results at each decoding stage and generates pointers associating successive decoding stages. After decoding is completed, the decoding results are recovered one by one from back to front, using the last decoding pointer as the entry point.
  • The disadvantage of this is that L copies of the decoding results and L copies of the decoding pointers need to be stored. If the mother code length is N, the required storage space is N·L + N·L·log2(L).
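As a worked check of this storage formula under the example parameters used later in the document (search width L = 8, mother code length N = 1024): N·L + N·L·log2(L) = 1024·8 + 1024·8·3 = 8192 + 24576 = 32768 storage units for the decoding results and decoding pointers, before any LLR storage or quantization width is accounted for; the unit of measure follows the formula as stated.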
  • an embodiment of the present application provides a decoding method for a polar code.
  • the technical solution of the present application will be described in detail in the following specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
  • FIG. 2 is a signaling flowchart of a method for decoding a polar code according to an embodiment of the present application. As shown in Figure 2, the method includes:
  • the sending device sends a sequence to be decoded to the receiving device.
  • the receiving device receives a log-likelihood ratio LLR sequence corresponding to the sequence to be decoded.
  • the transmitting device encodes the information bits and the frozen bits, and obtains an encoded sequence after encoding, where the encoded sequence is a binary sequence.
  • the encoding structure used for encoding the information bits and the frozen bits is not particularly limited.
  • The transmitting device sends the encoded sequence to the receiving device through the channel. After being transmitted over the channel, the encoded sequence is transformed into a log-likelihood ratio (Log Likelihood Ratio, LLR) sequence. Specifically, whether the transmitting device sends bit 1 or bit 0, the receiving device may misjudge it; after receiving a signal, the ratio of the probability that the receiving device correctly decides 0 to the probability that it correctly decides 1 is the likelihood ratio, and taking its natural logarithm gives the log-likelihood ratio.
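As a concrete illustration of this definition (not taken from the application): for BPSK transmission over an AWGN channel, with bit 0 mapped to +1, bit 1 mapped to −1, and noise variance sigma², the LLR defined this way reduces to 2y/sigma². The channel model and mapping below are assumptions used only for this sketch.

```python
import math

def llr_awgn_bpsk(y, sigma2):
    """LLR = ln(P(bit=0 | y) / P(bit=1 | y)) for BPSK over AWGN,
    assuming bit 0 -> +1, bit 1 -> -1 and noise variance sigma2.
    The Gaussian likelihood ratio simplifies to 2*y/sigma2."""
    return 2.0 * y / sigma2

def llr_direct(y, sigma2):
    """Equivalent direct computation from the two conditional densities."""
    p0 = math.exp(-(y - 1.0) ** 2 / (2.0 * sigma2))
    p1 = math.exp(-(y + 1.0) ** 2 / (2.0 * sigma2))
    return math.log(p0 / p1)
```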
  • The receiving device obtains the partial sum (Partial Sum, Psum) of the preset nodes in each decoding layer according to the LLR sequence, where the number of decoding layers is log2(N), N is the number of bits in the sequence to be decoded, and N is an integer power of two.
  • After receiving the LLR sequence, the receiving device performs the F operation and the G operation according to the LLR sequence to obtain the Psum of the preset nodes in each decoding layer.
  • the decoding layer is a decoding layer in a decoding diagram proposed for SCL decoding.
  • The number of decoding layers in this decoding diagram is log2(N), where N is the number of bits in the sequence to be decoded and N is an integer power of two. Those skilled in the art can understand that N is the mother code length.
  • Psum is an intermediate result that must be used in the G operation during the decoding of Polar codes. Its value is Psum = u·G_N, where u is the decoding result and G_N is the above-mentioned generator matrix.
  • Those skilled in the art can understand that, on the decoding side, G_N can also be called the decoding matrix.
  • The value of Psum must be available in the Polar code decoder during the decoding process. Specific implementations of the F operation and the G operation are described in detail in subsequent embodiments.
  • Optionally, the total number of preset nodes is equal to the number of bits in the sequence to be decoded, and the node position of a preset node in any decoding layer is different from the node positions of the preset nodes in every other decoding layer. Therefore, the positions of all the preset nodes together cover the positions of all the decoded bits.
  • Optionally, the number of preset nodes in the Mth decoding layer is N/2^M, where 1 ≤ M ≤ log2(N) and M is an integer; the consecutively arranged preset nodes in the Mth decoding layer occupy the Fth to the (H + N/2^M)th positions, where F = H + 1 and H is the position of the last preset node among the consecutively arranged preset nodes in the (M−1)th decoding layer.
  • Taking N = 16 as an example, the number of decoding layers in the decoding diagram is 4, and the numbers of preset nodes in the first-, second-, third-, and fourth-layer decoding layers are 8, 4, 2, and 1, respectively.
  • The positions of the preset nodes continuously set in the first-layer decoding layer are the 1st to 8th bits;
  • the positions of the preset nodes continuously set in the second-layer decoding layer are the 9th to 12th bits;
  • the positions of the preset nodes continuously set in the third-layer decoding layer are the 13th to 14th bits;
  • the position of the preset node in the fourth-layer decoding layer is the 15th bit.
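The layer-by-layer counts and positions just listed can be generated programmatically. The sketch below uses illustrative helper names and reproduces the N = 16 example (layer 1 occupies bits 1-8, layer 2 bits 9-12, layer 3 bits 13-14, layer 4 bit 15).

```python
import math

def preset_node_positions(n):
    """For each decoding layer M (1..log2(n)), return the positions of its
    consecutively arranged preset nodes: layer M holds n / 2**M nodes,
    starting right after the last node of layer M-1 (1-based positions)."""
    layers = []
    start = 1
    for m in range(1, int(math.log2(n)) + 1):
        count = n // (2 ** m)
        layers.append((m, list(range(start, start + count))))
        start += count
    return layers

# For n = 16 this yields:
#   layer 1 -> positions 1..8, layer 2 -> 9..12, layer 3 -> 13..14, layer 4 -> 15
```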
  • the Psum of the preset node in each decoding layer can be obtained, that is, the Psum of the gray node shown in FIG. 3 is obtained.
  • the receiving device obtains a decoded sequence according to a Psum of a preset node in each decoding layer and a decoding matrix corresponding to each decoding layer.
  • After obtaining the Psum of the preset nodes in each decoding layer, the receiving device obtains the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer.
  • Optionally, the receiving device obtains the decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, where the decoding matrix corresponding to the Mth decoding layer is a K×K matrix and the number of preset nodes in the Mth decoding layer is N/2^M.
  • the receiving device obtains the decoded sequence according to the decoding result corresponding to each decoding layer.
  • the decoded sequence is a sequence obtained by splicing multiple decoding results.
  • Optionally, the receiving device determines the decoding position, in the decoded sequence, of the decoding result corresponding to each decoding layer according to the node positions of the preset nodes in each decoding layer, where the preset nodes in each decoding layer are arranged consecutively and the decoding position corresponds to the positions of those consecutively arranged preset nodes in the decoding layer; the receiving device then obtains the decoded sequence according to the decoding result and the decoding position corresponding to each decoding layer.
  • That is, for each decoding layer, the decoding position of its decoding result in the decoded sequence is the same as the positions of the consecutively arranged preset nodes in that decoding layer.
  • There are 8 preset nodes in the first layer; their Psums form a row vector of length 8, [P1, P2, P3, P4, P5, P6, P7, P8], called the first-layer row vector. Multiplying the first-layer row vector by the 8×8 G_N matrix gives the decoding results of the 1st to 8th bits, namely [u1, u2, u3, u4, u5, u6, u7, u8].
  • There are 4 preset nodes in the second layer; their Psums form a row vector of length 4, [P9, P10, P11, P12], called the second-layer row vector. Multiplying the second-layer row vector by the 4×4 G_N matrix gives the decoding results of the 9th to 12th bits, namely [u9, u10, u11, u12].
  • There are 2 preset nodes in the third layer; their Psums form a row vector of length 2, [P13, P14], called the third-layer row vector. Multiplying the third-layer row vector by the 2×2 G_N matrix gives the decoding results of the 13th to 14th bits, namely [u13, u14].
  • There is 1 preset node in the fourth layer; its Psum is a row vector of length 1, [P15], called the fourth-layer row vector. Multiplying the fourth-layer row vector by the 1×1 G_N matrix gives the decoding result of the 15th bit, namely [u15].
  • The 8×8 G_N matrix, 4×4 G_N matrix, 2×2 G_N matrix, and 1×1 G_N matrix above are the decoding matrices corresponding to each layer.
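These per-layer multiplications rely on G_N being its own inverse over GF(2), so multiplying a Psum row vector by G_N undoes the step that produced it. A self-contained numeric check for the 8×8 case is sketched below; the explicit G_8 is the Kronecker power of the standard kernel and is an assumption in the same sense as in the earlier sketch.

```python
# Round-trip check: for any u, (u * G_8) * G_8 == u over GF(2),
# so multiplying the first-layer Psum [P1..P8] by G_8 recovers [u1..u8].
G8 = [
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def row_times_matrix_gf2(v, m):
    return [sum(v[i] * m[i][j] for i in range(len(v))) % 2 for j in range(len(v))]

u = [1, 0, 1, 1, 0, 0, 1, 0]          # example decision results u1..u8
psum = row_times_matrix_gf2(u, G8)    # Psum = u * G_8 (what the decoder stores)
assert row_times_matrix_gf2(psum, G8) == u   # u is recovered from Psum
```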
  • a receiving device receives a log-likelihood ratio LLR sequence corresponding to a sequence to be decoded, and the receiving device obtains the partial sum (Psum) of the preset nodes in each decoding layer according to the LLR sequence.
  • the receiving device obtains the decoded sequence according to the Psum of the preset node in each decoding layer and the decoding matrix corresponding to each decoding layer.
  • In this embodiment, the decoded sequence can be recovered through the Psum of the preset nodes; there is no need to store decoding pointers, which saves storage space.
  • In the above, N = 16 is taken as an example for description; the same applies when N takes other values, which is not described again here in this embodiment.
  • FIG. 4 is a schematic flowchart of a method for decoding a polar code according to an embodiment of the present application.
  • FIG. 5 is a first schematic diagram of a decoding process provided by an embodiment of the present application
  • FIG. 6 is a second schematic diagram of a decoding process provided by an embodiment of the present application.
  • the receiving device first performs an F operation after receiving the LLR sequence.
  • the F operation is to calculate the LLR of each node.
  • The LLR of a node includes the LLR value of the node and the sign of the node's LLR.
  • In the F operation, the LLR of a node in each layer is determined by the LLRs of the two nodes in the upper decoding layer that have a connection relationship with that node.
  • Of the two nodes having a connection relationship with a given node, one is the node located directly above it and the other is a node located to its right, as shown by the connections in FIG. 5.
  • the LLR of node 1.1 is obtained through LLR1 and LLR9
  • the LLR of node 2.1 is obtained through LLR of node 1.1 and node 1.5
  • the LLR of node 3.1 is obtained through LLR of node 2.1 and node 2.3.
  • LLRs of nodes 1.1 to 1.8, nodes 2.1 to 2.4, nodes 3.1 to 3.2, and nodes 4.1 can be obtained.
  • the LLRs of other nodes cannot be obtained.
  • At this point, the G operation is needed to obtain the LLRs of some of the nodes.
  • Meanwhile, in the SCL decoding process there is a path expansion process; to avoid excessive computation, the number of expanded paths is generally limited by the search width L.
  • The search width L is the maximum number of paths retained during path expansion.
  • When pruning paths, a path metric (Path Metric, PM) can be used to determine which paths are retained or deleted.
  • The PM is determined according to the LLRs of the nodes on the path. In the following embodiments, L = 8 is used as an example for description.
  • In the G operation, if the Psum value of node 4.1 is 0, the LLRs of node 3.2 and node 3.1 are added to obtain the LLR of node 4.2; if the Psum value of node 4.1 is 1, the LLRs of node 3.2 and node 3.1 are subtracted to obtain the LLR of node 4.2.
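The two node updates just described can be sketched compactly: the F operation combines the two upper-layer LLRs by taking the smaller absolute value and XOR-ing the signs (the min-sum form described in the original-language description), and the G operation adds or subtracts depending on the Psum of the already-decided node, as in the node 4.2 example above. Function names are illustrative.

```python
def f_op(llr_a, llr_b):
    """F operation: magnitude is the smaller of the two absolute values,
    sign is the XOR (product) of the two signs (min-sum form)."""
    sign = 1.0 if (llr_a >= 0) == (llr_b >= 0) else -1.0
    return sign * min(abs(llr_a), abs(llr_b))

def g_op(llr_a, llr_b, psum):
    """G operation: add the two LLRs if the already-decided partial sum is 0,
    otherwise subtract llr_a from llr_b."""
    return llr_b + llr_a if psum == 0 else llr_b - llr_a
```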
  • Through the above manner, two copies of the decoding diagram are obtained.
  • For each decoding diagram, an end-point decision is made on node 4.2.
  • Each decoding diagram again corresponds to two decoding paths, so there are 4 decoding paths at node 4.2.
  • the path extension is continued through the above-mentioned endpoint decision process, and thus there are eight paths at node 4.3.
  • the LLR of node 4.4 can be obtained.
  • the path extension is continued through the above-mentioned endpoint decision process, and thus there are 16 paths at node 4.4.
  • eight paths are selected according to the PM.
  • PM is determined according to the LLR of each node on the path.
  • Those skilled in the art can understand that an end-point decision is made at each node of the fourth-layer decoding layer, and the nodes of the fourth-layer decoding layer can be called end nodes.
  • Path expansion is performed at each end-point decision; if there are more than 8 paths, they are pruned based on the PM so that 8 paths are always retained.
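A minimal sketch of this pruning step: after each end-node decision the surviving paths are expanded (decide 0 and decide 1), and if more than L = 8 candidates result they are cut back to the L best according to their path metric. The PM update rule used here (add |LLR| when the decision disagrees with the hard decision of the LLR) is a common, assumed choice, not the application's specific rule; `llr_of_end_node` and `hard_decision` are hypothetical helpers supplied by the caller.

```python
def expand_and_prune(paths, llr_of_end_node, hard_decision, search_width=8):
    """paths: list of (bits, pm). Each path is expanded with u = 0 and u = 1 for the
    current end node; if more than `search_width` candidates result, the ones with
    the smallest path metric (PM) are kept."""
    candidates = []
    for bits, pm in paths:
        llr = llr_of_end_node(bits)           # LLR of the end node along this path
        for u in (0, 1):
            penalty = 0.0 if u == hard_decision(llr) else abs(llr)
            candidates.append((bits + [u], pm + penalty))
    candidates.sort(key=lambda c: c[1])
    return candidates[:search_width]
```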
  • For node 2.5, node 2.6, node 2.7, and node 2.8: theoretically, from the decision results u1, u2, u3, and u4 of the fourth-layer nodes, the Psums of node 2.1, node 2.2, node 2.3, and node 2.4 can be obtained. According to the Psums of node 2.1, node 2.2, node 2.3, and node 2.4, the LLRs of node 2.5, node 2.6, node 2.7, and node 2.8 can then be obtained through the G operation. Therefore, after multiple F operations and G operations, the LLRs of all nodes in the left half can be obtained.
  • the decision results of nodes 3.1 and 3.3 are XORed to obtain the Psum of node 2.1
  • the decision results of node 3.2 and 3.4 are XORed to obtain the Psum of node 2.2.
  • the Psum of node 2.3 is equal to the Psum of node 3.3
  • the Psum of node 2.4 is equal to the Psum of node 3.4.
  • Similarly, the Psums of nodes 1.1 to 1.8 of the first layer can be obtained from the decision results of the first 8 nodes of the fourth layer and the 8×8 G_N.
  • Optionally, they can also be obtained by XOR.
  • the decision results of node 2.1 and node 2.5 are XORed to obtain the Psum of node 1.1
  • the decision results of node 2.2 and node 2.6 are XORed to obtain the Psum of node 1.2
  • the node 2.3 and node 2.7 are XORed to obtain the Psum of node 1.3
  • Node 2.4 is exclusive-ORed with node 2.8 to get the Psum of node 1.4.
  • the Psum of Node 1.5 is equal to the Psum of Node 2.5
  • the Psum of Node 1.6 is equal to the Psum of Node 2.6
  • the Psum of Node 1.7 is equal to the Psum of Node 2.7
  • the Psum of Node 1.8 is equal to the Psum of Node 2.8.
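The XOR relations above are one stage of a partial-sum recursion: within each group, the first half of this layer's Psums is the element-wise XOR of the two halves of the lower layer's values, and the second half is copied unchanged (nodes 3.1-3.2 versus 3.3-3.4 in the first example, nodes 2.1-2.4 versus 2.5-2.8 in the second). A minimal sketch, with illustrative names:

```python
def psum_stage(first_half_bits, second_half_bits):
    """One partial-sum stage: given the decision results (or Psums) of the two
    halves of the lower layer, the first half of this layer's Psums is the
    element-wise XOR and the second half is a copy of the second-half values."""
    first = [a ^ b for a, b in zip(first_half_bits, second_half_bits)]
    second = list(second_half_bits)
    return first + second
```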
  • the LLR of node 1.9 to node 1.16 can be obtained through the G operation. So far, the right half of FIG. 6 can be processed by the same algorithm as the left half to obtain the LLR of each node. In this process, the Psum of each preset node is obtained.
  • As can be seen from the above description, Psum is an intermediate result that must be used in the G operation during Polar code decoding. According to the LLR sequence received from the transmitting device, the receiving device recursively obtains the Psum of the preset nodes in each decoding layer, from the log2(N)-th layer down to the first layer. At the same time, the Psum of each preset node is stored during the decoding process.
  • In the SCL decoding process there are multiple decoding paths; in the embodiments of the present application, a cyclic redundancy code (Cyclic Redundancy Code, CRC) check is used to determine the finally output decoded sequence.
  • In this embodiment, the CRC check is further improved.
  • After obtaining the decision result of each end node of the log2(N)-th decoding layer, the receiving device in the embodiment of the present application updates the CRC operation result according to the decision result of that end node, until the CRC operation result is updated according to the decision result of the last end node and the updated CRC operation result is obtained; the receiving device then obtains a check result that passes the check according to the updated CRC operation result.
  • In the descriptions above involving decisions and end-point decisions, a decision is made on the LLR of a node, deciding the LLR to 0 or 1. For example, if the LLR is greater than or equal to 1, the corresponding decision result is 1, and if the LLR is less than 1, the decision result is 0.
  • FIG. 6 can be understood as a decoding diagram corresponding to one decoding path among eight decoding paths.
  • After the decision result of node 4.1 is obtained, a CRC operation is performed on it to obtain an operation result.
  • After the decision result of node 4.2 is obtained, the operation result already obtained is updated according to the decision result of node 4.2, giving an updated operation result.
  • The specific implementation of the CRC operation is not particularly limited in this embodiment; it may be any operation that can be updated iteratively, for example a(n) = f(a(n−1), x), where a(n−1) is the previous CRC operation result and x is the decision result of the current end node. The CRC operation result is thus updated continually with the end-node decision results until the last end node completes the update, and whether the check passes is then determined from the updated result.
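A bit-serial CRC register of this iterative form can be sketched as follows: after each end-node decision the register is updated with that single bit, and the bit itself can then be discarded, as described below. The polynomial, register width, and zero initial value are illustrative assumptions, not the values used by the application.

```python
def crc_update_bit(state, bit, poly=0x07, width=8):
    """One iteration a(n) = f(a(n-1), x): fold one decision bit x into a CRC
    register of `width` bits (MSB-first; generator polynomial given without its
    leading 1). Feeding the decision bits one by one, with no extra padding,
    leaves the conventional zero-initialized CRC remainder in the register."""
    top = (state >> (width - 1)) & 1
    state = (state << 1) & ((1 << width) - 1)
    if top ^ (bit & 1):
        state ^= poly
    return state

state = 0
for decision_bit in (1, 0, 1, 1, 0):     # example end-node decisions
    state = crc_update_bit(state, decision_bit)
    # the decision bit (and its LLR) can be discarded here, as described above
```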
  • With the CRC check of this embodiment, once the CRC operation result has been updated with the decision result of an end node, that decision result and its LLR can be discarded and need not be stored. That is, this implementation does not first recover a check sequence, check it, and take the sequence that passes the check as the decoded sequence; instead, the check is performed bit by bit during the decision process, and the final decoded sequence is recovered only after the decision bit of the last end node passes the check.
  • This embodiment of the application recovers a single decoded sequence; compared with the prior art, which needs to recover and store 8 check sequences, this greatly saves storage space.
  • In summary, as can be seen from FIG. 4, for the LLRs received from the transmitting device and the intermediate-variable LLRs, new intermediate-variable LLRs can be obtained through the F operation.
  • When these intermediate-variable LLRs are the LLRs of end nodes, they are LLRs that can be decided.
  • After the bit decision, the U-part decoding corresponding to the end bit can be obtained.
  • According to these U-part decodings and the decoding matrix, the Psum can also be obtained.
  • Through the G operation on the Psum and the LLRs, new intermediate-variable LLRs can again be obtained, and this loop iterates until decoding is completed.
  • In this process, whenever a U-part decoding is obtained, a CRC check is performed on it; after the CRC check of the last bit passes, partial decoding is performed according to the multiple Psums, and the results are then spliced into the decoded sequence.
  • The following specific example further illustrates how the embodiments of the application save storage space. Related parameters: search width L = 8, LLRs quantized with 6-bit quantization, code length N = 1024.
  • The stored content has three parts: the LLRs, the Psums, and the decoding result.
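Under these parameters, the LLR part of the storage can be reproduced programmatically; the total of 55,200 bits below matches the figure given in the original-language description that follows (input LLRs plus the intermediate-variable LLRs of each stage, for the L = 8 retained paths). This is a sketch of the document's own arithmetic, not an independent estimate.

```python
# LLR storage under: N = 1024, 6-bit LLR quantization, search width L = 8.
N, Q, L = 1024, 6, 8

input_llr_bits = N * Q                                       # received LLR sequence
intermediate_counts = [N // (2 ** k) for k in range(1, 10)]  # 512 + 256 + ... + 2
intermediate_llr_bits = sum(intermediate_counts) * Q * L

total_llr_bits = input_llr_bits + intermediate_llr_bits      # 6144 + 49056 = 55200
```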
  • FIG. 7 is a schematic block diagram of a receiving device according to an embodiment of the present application. As shown in FIG. 7, the receiving device 70 includes a receiving module 701 and a processing module 702;
  • a receiving module 701 configured to receive a log-likelihood ratio LLR sequence corresponding to a sequence to be decoded
  • a processing module 702, configured to obtain, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer, where the number of decoding layers is log2(N), N is the number of bits in the sequence to be decoded, and N is an integer;
  • the processing module 702 is further configured to obtain a decoded sequence according to a Psum of a preset node in each of the decoding layers and a decoding matrix corresponding to each of the decoding layers.
  • processing module 702 is further specifically configured to:
  • obtain the decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, where the decoding matrix corresponding to the Mth decoding layer is a K×K matrix, the number of preset nodes in the Mth decoding layer is N/2^M, 1 ≤ M ≤ log2(N), K = N/2^M, and M is an integer;
  • obtain the decoded sequence according to the decoding result corresponding to each decoding layer, where the decoding length corresponding to the decoding result of the Mth layer is N/2^M.
  • a node position of a preset node in any decoding layer in the any decoding layer is different from a node position of a preset node in other decoding layers in the other decoding layer.
  • processing module 702 is further specifically configured to:
  • a decoded sequence is obtained according to a decoding result and a decoding position corresponding to each of the decoding layers.
  • In a possible design, the consecutively arranged preset nodes in the Mth decoding layer occupy the Fth to the (H + N/2^M)th positions in the decoding layer, where F = H + 1 and H is the position of the last preset node among the consecutively arranged preset nodes in the (M−1)th decoding layer.
  • the processing module 702 is further configured to: before the decoded sequence is obtained according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, after the decision result of each end node of the log2(N)-th decoding layer is obtained, update the CRC operation result according to the decision result of that end node, until the CRC operation result is updated according to the decision result of the last end node and the updated CRC operation result is obtained;
  • processing module 702 is specifically configured to:
  • the Psum of the preset nodes in each decoding layer is recursively obtained, in order from the log2(N)-th layer to the first layer, according to the LLR sequence.
  • the polarization code decoding device provided in the embodiment of the present application can be used to perform the foregoing polarization code decoding method, and the implementation manner and technical effect thereof are similar, which will not be repeated here in this embodiment.
  • processing module in the foregoing receiving device may be implemented as a processor, and the receiving module may be implemented as a receiver.
  • FIG. 8 is a schematic diagram of a hardware structure of a receiving device provided by the present application. As shown in FIG. 8, the receiving device 80 includes: a processor 801 and a memory 802;
  • a memory 802 configured to store a computer program
  • the processor 801 is configured to execute a computer program stored in a memory to implement each step in the foregoing decoding method. For details, refer to related descriptions in the foregoing method embodiments.
  • the memory 802 may be independent or integrated with the processor 801.
  • the receiving device 80 may further include:
  • the bus 803 is configured to connect the memory 802 and the processor 801.
  • the receiving device in FIG. 8 may further include a receiver 804 for receiving a log-likelihood ratio LLR sequence corresponding to the sequence to be decoded.
  • The receiving device may be a terminal device or a network device.
  • Whether the receiving device is a terminal device or a network device, this embodiment provides a schematic diagram of the receiving device, which is described in detail below with reference to FIG. 9.
  • FIG. 9 is a hardware schematic diagram of a terminal device or a network device provided in an embodiment of the application.
  • the terminal device 90 or the network device 90 includes a transmitter 91, a receiver 92, and a processor 93.
  • the processor 93 may also be a controller, which is shown as "controller / processor 93" in FIG. 9.
  • The terminal device 90 or the network device 90 may further include a modem processor 95, where the modem processor 95 may include an encoder 96, a modulator 97, a decoder 98, and a demodulator 92.
  • the transmitter 91 is configured to send an encoded sequence.
  • the receiver 92 adjusts (eg, filters, amplifies, downconverts, and digitizes, etc.) the sequence to be decoded received from the antenna.
  • the encoder 96 encodes data to be transmitted.
  • the modulator 97 further processes (e.g., symbol maps and modulates) the encoded data.
  • the demodulator 92 processes (e.g., demodulates) a sequence to be decoded.
  • the decoder 98 processes (e.g., deinterleaving and decoding) to obtain a decoded sequence.
  • the encoder 96, the modulator 97, the demodulator 92, and the decoder 98 may be implemented by a synthesized modem processor 95. It should be noted that when the terminal device or the network device does not include the modem processor 95, the above functions of the modem processor 95 may also be completed by the processor 93.
  • the processor 93 performs control and management, and is configured to execute the decoding process in the foregoing embodiment of the present invention.
  • the memory 94 is used to store program code and data.
  • An embodiment of the present application further provides a storage medium, where the storage medium includes a computer program, and the computer program is used to implement the decoding method described above.
  • An embodiment of the present application further provides a chip, including: a memory and a processor;
  • the memory is used to store program instructions
  • the processor is configured to call the program instructions stored in the memory to implement the decoding method as described above.
  • An embodiment of the present application further provides a program product, where the program product includes a computer program, and the computer program is stored in a storage medium, and the computer program is used to implement the foregoing decoding method.
  • the steps of the method or algorithm described in connection with the disclosure of the embodiments of the present invention may be implemented in a hardware manner, or may be implemented in a manner that a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules.
  • Software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may be located in a base station or a terminal.
  • the processor and the storage medium may also exist in the receiving device as discrete components.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the invention can be directly embodied as being executed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the memory may include high-speed RAM memory and may also include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like.
  • the bus in the drawings of the present application is not limited to only one bus or one type of bus.
  • the above storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • "At least one" means one or more, and "multiple" means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate that A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" or a similar expression refers to any combination of these items, including any combination of single items or plural items.
  • For example, at least one of a, b, or c may represent: a, b, c, ab, ac, bc, or abc, where a, b, and c may each be single or multiple.
  • the functions described in the embodiments of the present invention may be implemented by hardware, software, firmware, or any combination thereof.
  • the functions may be stored on a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules is only a logical function division.
  • multiple modules may be combined or integrated.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or modules, which may be electrical, mechanical or other forms.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each module may exist separately physically, or two or more modules may be integrated into one unit.
  • the units formed by the above modules may be implemented in the form of hardware or in the form of hardware plus software functional units.

Abstract

A decoding method and device for a polar code. The method includes: a receiving device receives a log-likelihood ratio (LLR) sequence corresponding to a sequence to be decoded (S202); the receiving device obtains, according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer, where the number of decoding layers is log2(N), N is the number of bits in the sequence to be decoded, and N is an integer (S203); and the receiving device obtains a decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer (S204). The method can save decoding storage space.

Description

极化码的译码方法及设备
本申请要求于2018年9月14日提交中国专利局、申请号为2018110715013、申请名称为“极化码的译码方法及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信领域,尤其涉及一种极化码的译码方法及设备。
背景技术
通信系统通常采用信道编译码提高数据传输的可靠性,以保证通信的质量。土耳其教授Arikan提出的极化码(Polar codes)是第一个理论上可以达到香农容量且具有低编译码复杂度的好码。因此,Polar码在5G中具有很大的发展和应用前景。
目前常用的Polar码译码通常采用顺序消除列表(Successive Cancellation list,SCL)译码。在SCL译码过程中,在每个译码阶段会产生L组译码结果,并且产生各译码阶段之间的关联指针。在此过程中,存储L份的译码结果,以及L份的译码指针。在译码完成以后,通过最后一次的译码指针作为入口从后往前逐个恢复译码结果。
然而,在SCL译码过程中,由于需要存储译码结果和译码指针,导致大量的存储空间被占用。
发明内容
本申请实施例提供一种极化码的译码方法及设备,以在译码过程中节省存储空间。
第一方面,本申请实施例提供一种极化码的译码方法,包括:
接收设备接收待译码序列对应的对数似然比LLR序列;
所述接收设备根据所述LLR序列,得到每个译码层中的预设节点的部分和Psum,其中,所述译码层的层数为log 2N,所述N为待译码序列中的比特的数量,所述N为整数;Psum是Polar码译码中G运算用到的中间结果,
Figure PCTCN2019105033-appb-000001
该u为终点节点的LLR的判决结果,G N为译码矩阵,所有的预设节点的位置,覆盖了整个译码比特所处的位置;
所述接收设备根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的译码矩阵,得到译码后序列,通过预设节点的Psum,就可以恢复出译码后序列,不需要存储译码指针,节省了存储空间。
在一种可能的设计中,所述接收设备根据所述每个译码层中的预设节点的Psum以及每个译码层对应的译码矩阵,得到译码后序列,包括:
所述接收设备根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的 译码矩阵,得到每个所述译码层对应的译码结果;其中,第M层译码层对应的译码矩阵为K×K矩阵,所述第M层译码层中的预设节点的数量为N/2 M,所述1≤M≤log 2N,所述K=N/2 M,所述M为整数;通过
Figure PCTCN2019105033-appb-000002
可以得到每层译码层对应的译码结果,该G N为K×K矩阵,该Psum为K×1的行向量;
所述接收设备根据每个所述译码层对应的译码结果,得到译码后序列,其中,第M层译码结果对应的译码长度为N/2 M,即在得到每个译码层的译码结果后,拼接得到译码后序列。
在一种可能的设计中,任一译码层中的预设节点在所述任一译码层中的节点位置与其它译码层中的预设节点在所述其它译码层中的节点位置不同。
在一种可能的设计中,所述接收设备根据每个所述译码层对应的译码结果,得到译码后序列,包括:
所述接收设备根据每个所述译码层中的预设节点所处的节点位置,确定每个所述译码层对应的译码结果在所述译码后序列中的译码位置,其中,每个所述译码层中的预设节点连续设置,所述译码位置对应所述连续设置的预设节点在译码层中所处的位置;
所述接收设备根据每个所述译码层对应的译码结果和译码位置,得到译码后序列。即对于各译码层而言,各译码层对应的译码结果在译码后序列中的译码位置与连续设置的预设节点在译码层中所处的位置相同。
在一种可能的设计中,所述第M层译码层中连续设置的预设节点在译码层中所处的位置为第F位至第(H+N/2 M)位;其中,F=H+1,所述H为第M-1层译码层中连续设置的预设节点中最后一个预设节点的位置。
在SCL译码过程中,在进行终点节点的LLR的判决后,存在路径扩展的过程,为了避免计算量过大,一般通过搜索宽度L来限制路径扩展的条数。其中,搜索宽度L为路径扩展保留的最大路径条数。在对路径进行删减时,可以借助路径度量(Path Metric,PM)来确定保留或删减的路径。
在一种可能的设计中,所述接收设备根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的译码矩阵,得到译码后序列之前,还包括:
所述接收设备在得到第log 2N层译码层的每个终点节点的判决结果后,根据所述终点节点的判决结果更新CRC运算结果,直至根据最后一个终点节点的判决结果更新所述CRC运算结果,得到更新完成的CRC运算结果;该CRC运算可以为a(n)=f(a(n–1),x),其中,a(n–1)为上一次CRC运算结果,x为当前终点结果的判决结果;
所述接收设备根据所述更新完成的CRC运算结果,得到校验通过的校验结果。
通过本实施例的CRC校验,在根据每个终点节点的判决结果进行CRC运算结果的更新操作后,就可以将该终点节点的判决结果以及LLR进行丢弃,不需要再进行存储,即本实施例不需要先恢复出校验序列,然后对校验序列进行校验,将通过校验的校验序列作为译码后序列,而是在判决过程中,逐比特进行校验,在得到最后一个终点节点的判决比特校验通过后,才恢复出最终的译码后序列。本申请实施例恢复出一条译码后序列,相对于现有技术需要恢复并存储多条校验序列而言,大大节省了存储空间。
在一种可能的设计中,所述接收设备根据所述LLR序列,得到每个译码层中的预设节点的部分和Psum,包括:
所述接收设备根据所述LLR序列,从第log 2N层至第一层依次递推得到每个译码层中的预设节点的Psum。
第二方面,本申请实施例提供一种接收设备,该接收设备可以为终端设备或网络设备,该接收设备包括:
接收模块,用于接收待译码序列对应的对数似然比LLR序列;
处理模块,用于根据所述LLR序列,得到每个译码层中的预设节点的部分和Psum,其中,所述译码层的层数为log 2N,所述N为待译码序列中的比特的数量,所述N为整数;
所述处理模块还用于:根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的译码矩阵,得到译码后序列。
在一种可能的设计中,所述处理模块还具体用于:
根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的译码矩阵,得到每个所述译码层对应的译码结果;其中,第M层译码层对应的译码矩阵为K×K矩阵,所述第M层译码层中的预设节点的数量为N/2 M,所述1≤M≤log 2N,所述K=N/2 M,所述M为整数;
根据每个所述译码层对应的译码结果,得到译码后序列,其中,第M层译码结果对应的译码长度为N/2 M
在一种可能的设计中,任一译码层中的预设节点在所述任一译码层中的节点位置与其它译码层中的预设节点在所述其它译码层中的节点位置不同。
在一种可能的设计中,所述处理模块还具体用于:
根据每个所述译码层中的预设节点所处的节点位置,确定每个所述译码层对应的译码结果在所述译码后序列中的译码位置,其中,每个所述译码层中的预设节点连续设置,所述译码位置对应所述连续设置的预设节点在译码层中所处的位置;
根据每个所述译码层对应的译码结果和译码位置,得到译码后序列。
在一种可能的设计中,所述第M层译码层中连续设置的预设节点在译码层中所处的位置为第F位至第(H+N/2 M)位;其中,F=H+1,所述H为第M-1层译码层中连续设置的预设节点中最后一个预设节点的位置。
在一种可能的设计中,所述处理模块还用于:在根据每个所述译码层中的预设节点的Psum以及每个所述译码层对应的译码矩阵,得到译码后序列之前,在得到第log 2N层译码层的每个终点节点的判决结果后,根据所述终点节点的判决结果更新CRC运算结果,直至根据最后一个终点节点的判决结果更新所述CRC运算结果,得到更新完成的CRC运算结果;
根据所述更新完成的CRC运算结果,得到校验通过的校验结果。
在一种可能的设计中,所述处理模块具体用于:
根据所述LLR序列,从第log 2N层至第一层依次递推得到每个译码层中的预设节点的Psum。
第三方面,本申请实施例提供一种接收设备,包括:存储器、处理器以及计算机程序,所述计算机程序存储在所述存储器中,所述处理器运行所述计算机程序执行如上第一方面或第一方面各种可能的设计所述的方法。
第四方面,本申请实施例提供一种存储介质,所述存储介质包括计算机程序,所述计算机程序用于实现如上第一方面或第一方面各种可能的设计所述的方法。
第五方面,本申请实施例提供一种计算机程序产品,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行如上第一方面或第一方面各种可能的设计所述的方法。
第六方面,本申请实施例提供一种芯片,包括存储器和处理器,所述存储器用于存储计算机程序,所述处理器用于从所述存储器中调用并运行所述计算机程序,使得所述处理器执行如上第一方面或第一方面各种可能的设计所述的方法。
本实施例提供的极化码的译码方法及设备,接收设备接收待译码序列对应的对数似然比LLR序列,接收设备根据LLR序列,得到每个译码层中的预设节点的部分和Psum,接收设备根据每个译码层中的预设节点的Psum以及每个译码层对应的译码矩阵,得到译码后序列,本实施例通过预设节点的Psum,就可以恢复出译码后序列,不需要存储译码指针,节省了存储空间。
附图说明
图1为本申请提供的一种发送设备和接收设备的系统架构示意图;
图2为本申请实施例提供的极化码的译码方法的信令流程图;
图3为本申请实施例提供的译码图的结构示意图;
图4为本申请实施例提供的极化码的译码方法的流程示意图;
图5为本申请实施例提供的译码过程示意图一;
图6为本申请实施例提供的译码过程示意图二;
图7为本申请实施例提供的接收设备的模块示意图;
图8为本申请实施例提供的接收设备的硬件示意图;
图9为申请实施例提供的终端设备或网络设备的硬件示意图。
具体实施方式
本发明实施例描述的网络架构以及业务场景是为了更加清楚的说明本发明实施例的技术方案,并不构成对于本发明实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本发明实施例提供的技术方案对于类似的技术问题,同样适用。
本申请实施例可以应用于无线通信系统,需要说明的是,本申请实施例提及的无线通信系统包括但不限于:窄带物联网系统(Narrow Band-Internet of Things,NB-IoT)、全球移动通信系统(Global System for Mobile Communications,GSM)、增强型数据速率GSM演进系统(Enhanced Data rate for GSM Evolution,EDGE)、宽带码分多址系统(Wideband Code Division Multiple Access,WCDMA)、码分多址2000系统(Code Division Multiple Access,CDMA2000)、时分同步码分多址系统(Time Division-Synchronization Code Division Multiple Access,TD-SCDMA),长期演进系统(Long Term Evolution,LTE)以及下一代5G移动通信系统,例如5G的三大应用场景增强型移动宽带(Enhanced Mobile Broad Band,eMBB)、 URLLC以及大规模机器通信(Massive Machine-Type Communications,mMTC)。
本申请涉及的通信装置主要包括网络设备或者终端设备。本申请中的发送设备为网路设备,则接收设备为终端设备;本申请中的发送设备为终端设备,则接收设备为网络设备。
在本申请实施例中,终端设备(terminal device)包括但不限于移动台(MS,Mobile Station)、移动终端(Mobile Terminal)、移动电话(Mobile Telephone)、手机(handset)及便携设备(portable equipment)等,该终端设备可以经无线接入网(RAN,Radio Access Network)与一个或多个核心网进行通信,例如,终端设备可以是移动电话(或称为“蜂窝”电话)、具有无线通信功能的计算机等,终端设备还可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置或设备。
在本申请实施例中,网络设备可以是用于与终端设备进行通信的设备,例如,可以是GSM系统或CDMA中的基站(Base Transceiver Station,BTS),也可以是WCDMA系统中的基站(NodeB,NB),还可以是LTE系统中的演进型基站(Evolutional Node B,eNB或eNodeB),或者该网络设备可以为中继站、接入点、车载设备、可穿戴设备以及未来5G网络中的网络侧设备或未来演进的公共陆地移动网络(Public Land Mobile Network,PLMN)中的网络设备等。需要说明的是,当本发明实施例的方案应用于5G系统或未来可能出现的其他系统时,基站、终端的名称可能发生变化,但这并不影响本发明实施例方案的实施。
本申请的通信系统可以包括发送设备和接收设备,图1为本申请提供的一种发送设备和接收设备的系统架构示意图,如图1所示,其中,发送设备为编码端,可以用于polar编码和输出编码后序列,编码后序列在信道上传输至译码侧;接收设备为译码端,可以用于接收发送设备发送的待译码序列(即编码后序列),并对该待译码序列进行译码。在图1所示的实施例中,以网络设备为编码端,终端设备为译码端为例进行说明;对于编码端为终端设备,译码端为网络设备的实现方式类似,本实施例此处不再赘述。
其中,Polar码是一种线性块码,其生成矩阵为G N,编码过程为u NG N=x N,其中u N=(u 1,u 2,...,u N)是一个二进制的行矢量,长度为N(即母码长度);G N是一个N×N的矩阵,且
Figure PCTCN2019105033-appb-000003
这里矩阵
Figure PCTCN2019105033-appb-000004
定义为log 2N个矩阵F 2的克罗内克(Kronecker)乘积;以上涉及的加法、乘法操作均为二进制伽罗华域(Galois Field)上的加法、乘法操作。
Polar码的编码过程中,u中的一部分比特用来携带信息,称为信息比特,这些比特的索引的集合记作A;另外的一部分比特置为收发端预先约定的固定值,称之为冻结比特(固定比特),其索引的集合用A的补集A c表示。不失一般性,这些冻结比特通常被设为0,只需要收发端预先约定,冻结比特可以被任意设置。
Polar码基于串行抵消(Successive Cancellation,SC)译码算法或串行抵消列表(SC List,SCL)译码算法等进行译码。其中,SC译码算法,即从第1个比特开始顺序译码。串行抵消列表译码算法是对SC译码算法的改进,在每个比特保留多个候选译码路径,完成全部比特的译码后根据一定准则对列表中所有候选译码路径进行选择,得到最终译码结果。
SCL译码器在每个译码阶段会产生L组译码结果,并且产生每个译码阶段之间的关联指针。在译码完成以后,通过最后一次的译码指针作为入口从后往前逐个恢复译码结果。这 样做的缺点是需要存L份的译码结果,以及L份的译码指针。若母码长度是N,则需要存储的空间是N*L+N*L*log2(L)。
为了解决SCL译码过程中占用存储空间过大的问题,本申请实施例提供一种极化码的译码方法。下面以具体地实施例对本申请的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
图2为本申请实施例提供的极化码的译码方法的信令流程图。如图2所示,该方法包括:
S201、发送设备向接收设备发送待译码序列。
S202、接收设备接收待译码序列对应的对数似然比LLR序列。
发送设备对信息比特和冻结比特进行编码,编码后得到编码后序列,其中,编码后序列为二进制的序列。本实施例对信息比特和冻结比特进行编码所采用的编码构造方式不做特别限制。
发送设备通过信道将编码后序列发送给接收设备。该编码后序列经过信道传输后,变换为对数似然比(Log Likehood Ratio,LLR)序列。具体地,发送设备发比特1还是比特0,接收设备都可能误判,接收设备在接收到一个信号后,正确判为0的概率与正确判为1的概率的比值就是似然比,再取个自然对数就是对数似然比。
S203、接收设备根据LLR序列,得到每个译码层中的预设节点的部分和(Partial Sum,Psum,其中,译码层的层数为log 2N,N为待译码序列中的比特的数量,N为2的整数次方。
接收设备在接收到LLR序列后,根据LLR序列进行F运算和G运算,以得到每个译码层中的预设节点的Psum。其中,该译码层是针对SCL译码提出的译码图中的译码层。该译码图中的译码层的层数为log 2N,N为待译码序列中的比特的数量,N为2的整数次方。本领域技术人员可以理解,该N也即母码长度。Psum是Polar码译码过程中G运算必须要用到的中间结果,其值
Figure PCTCN2019105033-appb-000005
其中u是译码结果,G N为上述的生成矩阵,本领域技术人员可以理解,在译码侧G N也可称为译码矩阵。Psum的值在译码过程中需存在于Polar码的译码器中。对于F运算和G运算的具体实现方式,在后续实施例中会进行详细说明。
可选地,预设节点的总数与待译码序列中的比特的数量相等,任一译码层中的预设节点在任一译码层中的节点位置与其它译码层中的预设节点在其它译码层中的节点位置不同。由此,所有的预设节点的位置,覆盖了整个译码比特所处的位置。
可选地,第M层译码层中的预设节点的数量为N/2 M,1≤M≤log 2N,M为整数;
第M层译码层中连续设置的预设节点在译码层中所处的位置为第F位至第(H+N/2 M)位;其中,F=H+1,所述H为第M-1层译码层中连续设置的预设节点中最后一个预设节点的位置。
下面以N=16为例进行详细说明,对于N为其他长度的实施例,其实现方式类似,本实施例此处不再赘述。图3为本申请实施例提供的译码图的结构示意图。如图3所示,N=16,在该译码图中译码层数为4。其中,第一层上方的16个黑色节点对应从发送设备接收的长度为16的LLR序列。
第1层、第2层、第3层、第4层译码层中的预设节点的数量为依次为8、4、2、1。
第1层译码层中连续设置的预设节点所处的位置为第1位至第8位;
第2层译码层中连续设置的预设节点所处的位置为第9位至第12位;
第3层译码层中连续设置的预设节点所处的位置为第13位至第14位。
第4层译码层中预设节点所处的位置为第15位。
通过F运算和G运算,可以得到每个译码层中的预设节点的Psum,即得到了如图3所示灰色节点的Psum。
S204、接收设备根据每个译码层中的预设节点的Psum以及每个译码层对应的译码矩阵,得到译码后序列。
接收设备在得到每个译码层中的预设节点的Psum之后,根据每个译码层中的预设节点的PSUM以及每个译码层对应的译码矩阵,得到译码后序列。
可选地,接收设备根据每个译码层中的预设节点的Psum以及每个译码层对应的译码矩阵,得到每个译码层对应的译码结果,第M层译码层对应的译码矩阵为K×K矩阵,所述第M层译码层中的预设节点的数量为N/2 M,接收设备根据每个译码层对应的译码结果,得到译码后序列。
具体地,通过
Figure PCTCN2019105033-appb-000006
可以从Psum中恢复出U。其中,
Figure PCTCN2019105033-appb-000007
为单位矩阵,所以
Figure PCTCN2019105033-appb-000008
上述的K×K矩阵即为此处的译码矩阵G N
在本实施例中,译码后序列是多个译码结果进行拼接得到的序列。可选地,接收设备根据每个译码层中的预设节点所处的节点位置,确定每个译码层对应的译码结果在译码后序列中的译码位置,其中,每个译码层中的预设节点连续设置,译码位置对应连续设置的预设节点在译码层中所处的位置;接收设备根据每个译码层对应的译码结果和译码位置,得到译码后序列。即对于各译码层而言,各译码层对应的译码结果在译码后序列中的译码位置与连续设置的预设节点在译码层中所处的位置相同。
请继续参照图3,第1层中有8个预设节点,该8个预设节点的Psum,即长度为8的行向量[P1、P2、P3、P4、P5、P6、P7、P8],称为第一层行向量,将该第一层行向量乘以8×8的G N矩阵,可以得到第1位至第8位的译码结果,即[u1、u2、u3、u4、u5、u6、u7、u8]。
第2层中有4个预设节点,该4个预设节点的PUSM,即长度为4的行向量[P9、P10、P11、P12],称为第二层行向量,将该第二层行向量乘以4×4的G N矩阵,可以得到第9位至第12位的译码结果,即[u9、u10、u11、u12]。
第3层中有2个预设节点,该2个预设节点的PUSM,即长度为2的行向量[P13、P14],称为第三层行向量,将该第三层行向量乘以2×2的G N矩阵,可以得到第13位至第14位的译码结果,即[u13、u14]。
第4层中有1个预设节点,该1个预设节点的PUSM,即长度为1的行向量[P15],称为第四层行向量,将该第四层行向量乘以1×1的G N矩阵,可以得到第15位的译码结果,即[u15]。
其中,上述的8×8的G N矩阵、4×4的G N矩阵、2×2的G N矩阵以及1×1的G N矩阵即为每层对应的译码矩阵。
由此,根据[u1、u2、u3、u4、u5、u6、u7、u8]、[u9、u10、u11、u12]、[u13、u14]、[u15],再结合最后一个译码比特,就可以恢复出全部的译码比特,最终得到译码后序列为[u1、u2、u3、u4、u5、u6、u7、u8、u9、u10、u11、u12、u13、u14、u15、u16]。
本实施例提供的极化码的译码方法,接收设备接收待译码序列对应的对数似然比LLR序列,接收设备根据LLR序列,得到每个译码层中的预设节点的部分和Psum,接收设备根据每个译码层中的预设节点的Psum以及每个译码层对应的译码矩阵,得到译码后序列,本实施例通过预设节点的Psum,就可以恢复出译码后序列,不需要存储译码指针,节省了存储空间。
下面结合图4至图6,对本实例提供的极化码的译码方法的实现过程进行详细说明。在本实施例中,以N=16为例进行说明,对于N为其它取值时类似,本实施例此处不再赘述。
图4为本申请实施例提供的极化码的译码方法的流程示意图。图5为本申请实施例提供的译码过程示意图一,图6为本申请实施例提供的译码过程示意图二。
As shown in FIG. 4 and FIG. 5, after receiving the LLR sequence, the receiving device first performs F operations. The F operation computes the LLR of each node, where the LLR of a node comprises its LLR magnitude and its sign. In the F operation, the LLR of a node in a layer is determined from the LLRs of the two nodes in the layer above that are connected to it: the node directly above it and the node to its right, i.e., the nodes shown as connected in FIG. 5.
For example, the LLR of node 1.1 is obtained from LLR1 and LLR9, the LLR of node 2.1 is obtained from the LLRs of node 1.1 and node 1.5, and the LLR of node 3.1 is obtained from the LLRs of node 2.1 and node 2.3. Taking the LLR of node 2.1 as an example: the absolute values of the LLRs of node 1.1 and node 1.5 are compared, the LLR with the smaller absolute value is taken as the LLR magnitude of node 2.1, and the XOR of the signs of the LLRs of node 1.1 and node 1.5 is taken as the sign of the LLR of node 2.1.
Thus, after the F operations, as shown in FIG. 5, the LLRs of nodes 1.1 to 1.8, nodes 2.1 to 2.4, nodes 3.1 and 3.2, and node 4.1 can be obtained. The LLRs of the other nodes cannot yet be obtained; the G operation is needed to obtain the LLRs of some of those nodes.
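A minimal sketch of the F operation described above (the widely used min-sum form, which matches the magnitude and sign rule in the example); the function name is illustrative.

def f_operation(llr_a: float, llr_b: float) -> float:
    """Min-sum F operation: magnitude is the smaller of the two input
    magnitudes, sign is the XOR (product) of the two input signs."""
    sign = 1.0 if (llr_a >= 0) == (llr_b >= 0) else -1.0
    return sign * min(abs(llr_a), abs(llr_b))

print(f_operation(2.5, -0.7))   # -> -0.7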
Meanwhile, the SCL decoding process involves path expansion. To avoid an excessive amount of computation, the number of expanded paths is usually limited by a search width L, where the search width L is the maximum number of paths retained during path expansion. When pruning paths, a path metric (PM) can be used to determine which paths to keep or prune. The PM is determined from the LLRs of the nodes on the path; this embodiment places no particular restriction on how the PM is obtained or how paths are retained or pruned. In the following embodiments, L = 8 is used as an example for detailed description.
As shown in FIG. 4 and FIG. 6, after the LLR of node 4.1 is obtained, an end-point decision is made at node 4.1, and path expansion takes place: the content obtained so far is copied and stored, yielding two decoding graphs corresponding to two decoding paths. In one decoding graph the decision result u1 of the LLR of node 4.1 is 0, and in the other decoding graph the decision result u1 of the LLR of node 4.1 is 1. The Psum of node 4.1 is Psum = u1 · G_N, where G_N is a 1×1 matrix. In the G operation, if the Psum value of node 4.1 is 0, the LLRs of node 3.2 and node 3.1 are added to obtain the LLR of node 4.2; if the Psum value of node 4.1 is 1, the LLR of node 3.1 is subtracted from the LLR of node 3.2 to obtain the LLR of node 4.2.
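A minimal sketch of the G operation described above; the parameter naming is an illustrative assumption (llr_keep is the input kept as-is, here node 3.2, and llr_flip is the input whose contribution depends on the partial sum, here node 3.1).

def g_operation(llr_keep: float, llr_flip: float, psum_bit: int) -> float:
    """G operation: add the two input LLRs when the partial-sum bit is 0,
    subtract the second from the first when it is 1."""
    return llr_keep + llr_flip if psum_bit == 0 else llr_keep - llr_flip

print(g_operation(1.5, -0.4, 0))   # -> 1.1
print(g_operation(1.5, -0.4, 1))   # -> 1.9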
In the above manner, two decoding graphs are obtained. For each decoding graph, an end-point decision is made at node 4.2; each decoding graph again corresponds to two decoding paths, so there are 4 decoding paths at node 4.2.
From the decision results u1 and u2 of node 4.1 and node 4.2, the Psum of node 3.1 and node 3.2 can be obtained according to [Psum(3.1), Psum(3.2)] = [u1, u2] · G_N, with G_N here the 2×2 decoding matrix. From the Psum of node 3.1 and the LLRs of node 2.1 and node 2.3, the LLR of node 3.3 can be obtained through the G operation; from the Psum of node 3.2 and the LLRs of node 2.2 and node 2.4, the LLR of node 3.4 can be obtained through the G operation. After the LLRs of node 3.3 and node 3.4 are obtained, the LLR of node 4.3 can be obtained through the F operation.
At node 4.3, path expansion continues through the end-point decision process described above, so there are 8 paths at node 4.3. Similarly, the LLR of node 4.4 can be obtained by performing the G operation with the Psum of node 4.3.
At node 4.4, path expansion continues through the end-point decision process described above, so there are 16 paths at node 4.4. With the search width L = 8, however, 8 paths are selected according to the PM, where the PM is determined from the LLR of each node on the path. Those skilled in the art will understand that an end-point decision is made at every node in the 4th decoding layer, so the nodes in the 4th decoding layer can be called end-point nodes; path expansion takes place at each end-point decision, and whenever the number of paths exceeds 8, they are pruned according to the PM so that 8 paths are always kept.
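A minimal sketch of this pruning step, assuming the convention that a smaller path metric is better; the path contents are placeholders.

def prune_paths(paths, list_size):
    """Keep the list_size candidate paths with the best (here: smallest)
    path metric after an expansion step; each path is a (PM, state) pair."""
    return sorted(paths, key=lambda p: p[0])[:list_size]

expanded = [(3.1, "p0"), (0.4, "p1"), (5.7, "p2"), (1.2, "p3")]
print(prune_paths(expanded, list_size=2))   # keeps p1 and p3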
Thus, the LLRs of node 3.3, node 3.4, node 4.2, node 4.3 and node 4.4 are all obtained.
For node 2.5, node 2.6, node 2.7 and node 2.8, in theory the Psum of node 2.1, node 2.2, node 2.3 and node 2.4 can be obtained from the decision results u1, u2, u3 and u4 of the 4th-layer nodes according to [Psum(2.1), Psum(2.2), Psum(2.3), Psum(2.4)] = [u1, u2, u3, u4] · G_N, with G_N here the 4×4 decoding matrix. From the Psum of node 2.1, node 2.2, node 2.3 and node 2.4, the LLRs of node 2.5, node 2.6, node 2.7 and node 2.8 can be obtained through the G operation. Thus, through repeated F operations and G operations, the LLRs of all the nodes in the left half can be obtained.
Optionally, the Psum can also be obtained by XOR. Specifically, the XOR of the decision results of node 3.1 and node 3.3 gives the Psum of node 2.1, and the XOR of the decision results of node 3.2 and node 3.4 gives the Psum of node 2.2. The Psum of node 2.3 equals the Psum of node 3.3, and the Psum of node 2.4 equals the Psum of node 3.4.
Similarly, the Psum of nodes 1.1 to 1.8 in the first layer can be obtained from the decision results of the first 8 nodes of the fourth layer and the 8×8 G_N. Optionally, it can also be obtained by XOR.
Specifically, the XOR of the decision results of node 2.1 and node 2.5 gives the Psum of node 1.1, the XOR of the decision results of node 2.2 and node 2.6 gives the Psum of node 1.2, the XOR of node 2.3 and node 2.7 gives the Psum of node 1.3, and the XOR of node 2.4 and node 2.8 gives the Psum of node 1.4. The Psum of node 1.5 equals the Psum of node 2.5, the Psum of node 1.6 equals that of node 2.6, the Psum of node 1.7 equals that of node 2.7, and the Psum of node 1.8 equals that of node 2.8.
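A minimal sketch of this XOR-based update, combining the partial sums of two sibling node groups in one layer to obtain the partial sums one layer closer to the channel (layer 3 to layer 2 in the example); names are illustrative.

def propagate_psum(left, right):
    """Combine the decision/partial-sum bits of two sibling groups: the first
    half is the element-wise XOR of the pair, the second half is a copy of the
    right-hand group (nodes 2.1-2.4 from nodes 3.1-3.4 in the example)."""
    return [a ^ b for a, b in zip(left, right)] + list(right)

# Psum of nodes 3.1/3.2 (left) and 3.3/3.4 (right) -> Psum of nodes 2.1-2.4
print(propagate_psum([1, 0], [1, 1]))   # -> [0, 1, 1, 1]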
After the Psum of nodes 1.1 to 1.8 is obtained, the LLRs of nodes 1.9 to 1.16 can be obtained through the G operation. From this point on, the right half of FIG. 6 can be processed with the same algorithm as the left half to obtain the LLR of each node, and the Psum of each preset node is obtained in the process.
As can be seen from the above description, Psum is an intermediate result that the G operation must use during polar code decoding. According to the LLR sequence received from the sending device, the receiving device recursively obtains the Psum of the preset nodes in each decoding layer, from the log2N-th layer down to the first layer, and stores the Psum of each preset node during decoding.
Those skilled in the art will understand that multiple decoding paths exist during SCL decoding. In the embodiments of the present application, a cyclic redundancy check (CRC) is used to determine the decoded sequence that is finally output. Those skilled in the art understand that during encoding, the sending device encodes according to the information bits, the frozen bits and the CRC check code.
In this embodiment, the CRC check is further improved. After obtaining the decision result of each end-point node of the log2N-th decoding layer, the receiving device of the embodiments of the present application updates the CRC operation result according to the decision result of that end-point node, until the CRC operation result has been updated according to the decision result of the last end-point node, yielding the fully updated CRC operation result; the receiving device then obtains, from the fully updated CRC operation result, a check result indicating that the check passes.
In the descriptions above involving decisions and end-point decisions, a decision is made on the LLR of a node, deciding it to 0 or 1. For example, if the LLR is greater than or equal to 1, the decision result is 1; if the LLR is less than 1, the decision result is 0.
Continuing with the embodiment of FIG. 6, which can be understood as the decoding graph corresponding to one of the 8 decoding paths: after the decision result of node 4.1 is obtained, a CRC operation is performed on that decision result to obtain an operation result. After the decision result of node 4.2 is obtained, the previously obtained operation result is updated according to the decision result of node 4.2 to obtain an updated operation result.
This embodiment places no particular restriction on the specific implementation of the CRC operation; the CRC operation may be any operation that can be updated iteratively. For example, the CRC operation may be a(n) = f(a(n-1), x), where a(n-1) is the previous CRC operation result and x is the decision result of the current end-point node. The CRC operation result can thus be updated continuously with the decision results of the end-point nodes until the last end-point node finishes updating it; the fully updated CRC operation result then determines whether the check passes, and if it does, the decoded sequence is recovered from the Psum of the preset nodes in the decoding graph of FIG. 6.
With the CRC check of this embodiment, once the CRC operation result has been updated with the decision result of an end-point node, that node's decision result and LLR can be discarded and no longer need to be stored. That is, this embodiment does not first recover candidate check sequences, check them, and take the sequence that passes as the decoded sequence; instead, the check is performed bit by bit during the decision process, and the final decoded sequence is recovered only after the decision bit of the last end-point node passes the check. The embodiments of the present application recover a single decoded sequence, which greatly saves storage space compared with the prior art, in which 8 check sequences have to be recovered and stored.
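A minimal sketch of such a bit-by-bit CRC register update in Python; the polynomial, register width and initial value are placeholders, not those of any particular standard.

def crc_update_bit(state: int, bit: int, poly: int = 0x07, width: int = 8) -> int:
    """Fold one newly decided bit into the running CRC register
    (a(n) = f(a(n-1), x) in the text), so the bit itself can be
    discarded immediately after the update."""
    feedback = ((state >> (width - 1)) & 1) ^ bit
    state = (state << 1) & ((1 << width) - 1)
    if feedback:
        state ^= poly
    return state

state = 0
for decided_bit in [1, 0, 1, 1, 0]:   # hypothetical end-node decisions
    state = crc_update_bit(state, decided_bit)
print(hex(state))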
Those skilled in the art will understand that in the above embodiments, for clarity of description, the decoding layers in the decoding graph are ordered from top to bottom; in specific implementations other orderings may be used, and as long as the essence of the implementation is the same, they all fall within the protection scope of the present application.
In summary, as can be seen from FIG. 4, new intermediate-variable LLRs are obtained through the F operation from the received LLRs and the existing intermediate-variable LLRs. When these intermediate-variable LLRs are the LLRs of end-point nodes, they can be decided; after the bit decisions, the partial decoding U corresponding to the end-point bits is obtained, and from these partial decodings U and the decoding matrix, the Psum can also be obtained. From the Psum and the LLRs, new intermediate-variable LLRs are again obtained through the G operation, and this iteration continues until decoding is completed. In this process, when a partial decoding U is obtained, it is checked with the CRC; after the last bit passes the CRC check, partial decoding is performed from the multiple Psums and the results are concatenated into the decoded sequence.
A specific example is given below to illustrate how the embodiments of the present application save storage space.
Relevant parameters: search width L = 8, LLRs quantized to 6 bits, code length N = 1024.
Three kinds of content are stored: LLRs, Psum, and decoding results.
For the LLRs and the Psum, the present application stores the same content as the prior art:
LLR part = input LLRs + intermediate-variable LLRs = 1024×6 + (512+256+128+64+32+16+8+4+2)×6×8 = 55200 bits
Psum = (512+256+128+64+32+16+8+4+2+1)×8 = 8184 bits
Result part (prior art): 1024×8×(1+log2(8)) = 32768 bits
Result part (embodiment of the present application): 1024 bits
The total storage of the embodiment of the present application and of the prior art is therefore:
Prior art: 55200+8184+32768 = 96152 bits
Embodiment of the present application: 55200+8184+1024 = 64408 bits
Storage saved: (96152-64408)/96152 = 33.01%.
It follows that, with a code length of 1024 and a search width of 8, the embodiment of the present application saves 33.01% of storage compared with the prior art.
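The figures above can be reproduced with the following short Python sketch; the function and parameter names are illustrative only.

import math

def storage_comparison(n=1024, list_size=8, q_bits=6):
    """Reproduce the storage comparison above: LLR buffer, partial sums, and
    the result buffer with and without per-path copies and pointers."""
    stages = int(math.log2(n))
    llr = n * q_bits + sum(n // 2 ** k for k in range(1, stages)) * q_bits * list_size
    psum = sum(n // 2 ** k for k in range(1, stages + 1)) * list_size
    result_old = n * list_size * (1 + int(math.log2(list_size)))
    result_new = n
    old, new = llr + psum + result_old, llr + psum + result_new
    return old, new, (old - new) / old

print(storage_comparison())   # (96152, 64408, 0.3301...)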
FIG. 7 is a schematic block diagram of the receiving device provided by an embodiment of the present application. As shown in FIG. 7, the receiving device 70 includes a receiving module 701 and a processing module 702, where:
the receiving module 701 is configured to receive the log-likelihood ratio (LLR) sequence corresponding to the sequence to be decoded;
the processing module 702 is configured to obtain, according to the LLR sequence, the partial sum Psum of the preset nodes in each decoding layer, where the number of decoding layers is log2N, N is the number of bits in the sequence to be decoded, and N is an integer;
the processing module 702 is further configured to obtain the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer.
Optionally, the processing module 702 is further specifically configured to:
obtain the decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, where the decoding matrix corresponding to the M-th decoding layer is a K×K matrix, the number of preset nodes in the M-th decoding layer is N/2^M, 1 ≤ M ≤ log2N, K = N/2^M, and M is an integer;
obtain the decoded sequence according to the decoding result corresponding to each decoding layer, where the decoding length corresponding to the M-th-layer decoding result is N/2^M.
Optionally, the node positions of the preset nodes in any decoding layer are different from the node positions of the preset nodes in the other decoding layers.
Optionally, the processing module 702 is further specifically configured to:
determine, according to the node positions of the preset nodes in each decoding layer, the decoding position of the decoding result corresponding to each decoding layer in the decoded sequence, where the preset nodes in each decoding layer are placed consecutively and the decoding position corresponds to the positions occupied by the consecutively placed preset nodes in the decoding layer;
obtain the decoded sequence according to the decoding result and the decoding position corresponding to each decoding layer.
Optionally, the consecutively placed preset nodes in the M-th decoding layer occupy the F-th to the (H+N/2^M)-th positions in the decoding layer, where F = H+1 and H is the position of the last preset node among the consecutively placed preset nodes in the (M-1)-th decoding layer.
Optionally, the processing module 702 is further configured to: before the decoded sequence is obtained according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, after the decision result of each end-point node of the log2N-th decoding layer is obtained, update the CRC operation result according to the decision result of the end-point node, until the CRC operation result is updated according to the decision result of the last end-point node, yielding the fully updated CRC operation result;
obtain, according to the fully updated CRC operation result, a check result indicating that the check passes.
Optionally, the processing module 702 is specifically configured to:
recursively obtain, according to the LLR sequence, the Psum of the preset nodes in each decoding layer from the log2N-th layer down to the first layer.
The polar code decoding device provided by the embodiments of the present application can be used to perform the above polar code decoding method; its implementation and technical effects are similar and are not repeated here.
It should be understood that the processing module in the above receiving device may be implemented as a processor, and the receiving module may be implemented as a receiver.
FIG. 8 is a schematic diagram of the hardware structure of the receiving device provided by the present application. As shown in FIG. 8, the receiving device 80 includes a processor 801 and a memory 802, where:
the memory 802 is configured to store a computer program;
the processor 801 is configured to execute the computer program stored in the memory to implement the steps of the above decoding method; for details, refer to the related descriptions in the foregoing method embodiments.
Optionally, the memory 802 may be independent or integrated with the processor 801.
When the memory 802 is a device independent of the processor 801, the receiving device 80 may further include:
a bus 803 configured to connect the memory 802 and the processor 801. The receiving device of FIG. 8 may further include a receiver 804 configured to receive the log-likelihood ratio (LLR) sequence corresponding to the sequence to be decoded.
In the embodiments of the present application, the receiving device may be a terminal or a network device. For the case in which the receiving device is a terminal or a network device, this embodiment gives a schematic diagram of such a receiving device, described in detail below with reference to FIG. 9.
FIG. 9 is a hardware schematic diagram of the terminal device or network device provided by an embodiment of the present application. The terminal device 90 or network device 90 includes a transmitter 91, a receiver 92 and a processor 93. The processor 93 may also be a controller, denoted as "controller/processor 93" in FIG. 9. Optionally, a modem processor 95 may also be included, where the modem processor 95 may include an encoder 96, a modulator 97, a decoder 98 and a demodulator 92.
In one example, the transmitter 91 sends the encoded sequence. The receiver 92 conditions (e.g., filters, amplifies, down-converts and digitizes) the sequence to be decoded received from the antenna. In the modem processor 95, the encoder 96 encodes the data to be sent, the modulator 97 further processes (e.g., symbol-maps and modulates) the encoded data, the demodulator 92 processes (e.g., demodulates) the sequence to be decoded, and the decoder 98 processes (e.g., de-interleaves and decodes) it to obtain the decoded sequence. The encoder 96, the modulator 97, the demodulator 92 and the decoder 98 may be implemented by a combined modem processor 95. It should be noted that when the terminal device or network device does not include the modem processor 95, the above functions of the modem processor 95 may also be performed by the processor 93.
The processor 93 performs control and management and is configured to execute the decoding process in the above embodiments of the present invention. The memory 94 is configured to store program code and data.
An embodiment of the present application further provides a storage medium, where the storage medium includes a computer program, and the computer program is used to implement the decoding method described above.
An embodiment of the present application further provides a chip, including a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to invoke the program instructions stored in the memory to implement the decoding method described above.
An embodiment of the present application further provides a program product, where the program product includes a computer program, the computer program is stored in a storage medium, and the computer program is used to implement the decoding method described above.
The steps of the method or algorithm described in connection with the disclosure of the embodiments of the present invention may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a base station or a terminal. Of course, the processor and the storage medium may also exist as discrete components in the receiving device.
It should be understood that the above processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the invention may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
The memory may include high-speed RAM, and may also include non-volatile memory (NVM), such as at least one magnetic disk memory, or a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disc, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, the bus in the drawings of the present application is not limited to only one bus or one type of bus.
The above storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc. The storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
In the present application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may indicate: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or multiple.
Those skilled in the art should be aware that in one or more of the above examples, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for example, the division of the modules is merely a division by logical function, and there may be other divisions in actual implementation, for example, multiple modules may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the various embodiments of the present invention may be integrated into one processing unit, or each module may exist physically alone, or two or more modules may be integrated into one unit. The unit composed of the above modules may be implemented in the form of hardware, or in the form of hardware plus software functional units.

Claims (18)

  1. A method for decoding a polar code, comprising:
    receiving, by a receiving device, a log-likelihood ratio (LLR) sequence corresponding to a sequence to be decoded;
    obtaining, by the receiving device according to the LLR sequence, a partial sum (Psum) of preset nodes in each decoding layer, wherein the number of decoding layers is log2N, N is the number of bits in the sequence to be decoded, and N is an integer;
    obtaining, by the receiving device, a decoded sequence according to the Psum of the preset nodes in each decoding layer and a decoding matrix corresponding to each decoding layer.
  2. The method according to claim 1, wherein the obtaining, by the receiving device, the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer comprises:
    obtaining, by the receiving device, a decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, wherein the decoding matrix corresponding to the M-th decoding layer is a K×K matrix, the number of preset nodes in the M-th decoding layer is N/2^M, 1 ≤ M ≤ log2N, K = N/2^M, and M is an integer;
    obtaining, by the receiving device, the decoded sequence according to the decoding result corresponding to each decoding layer, wherein the decoding length corresponding to the M-th-layer decoding result is N/2^M.
  3. The method according to claim 2, wherein node positions of the preset nodes in any decoding layer are different from node positions of the preset nodes in the other decoding layers.
  4. The method according to claim 2 or 3, wherein the obtaining, by the receiving device, the decoded sequence according to the decoding result corresponding to each decoding layer comprises:
    determining, by the receiving device according to the node positions of the preset nodes in each decoding layer, a decoding position of the decoding result corresponding to each decoding layer in the decoded sequence, wherein the preset nodes in each decoding layer are placed consecutively, and the decoding position corresponds to the positions occupied by the consecutively placed preset nodes in the decoding layer;
    obtaining, by the receiving device, the decoded sequence according to the decoding result and the decoding position corresponding to each decoding layer.
  5. The method according to any one of claims 1 to 4, wherein the consecutively placed preset nodes in the M-th decoding layer occupy the F-th to the (H+N/2^M)-th positions in the decoding layer, wherein F = H+1 and H is the position of the last preset node among the consecutively placed preset nodes in the (M-1)-th decoding layer.
  6. The method according to any one of claims 1 to 5, wherein before the receiving device obtains the decoded sequence according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, the method further comprises:
    after obtaining a decision result of each end-point node of the log2N-th decoding layer, updating, by the receiving device, a CRC operation result according to the decision result of the end-point node, until the CRC operation result is updated according to the decision result of the last end-point node to obtain a fully updated CRC operation result;
    obtaining, by the receiving device according to the fully updated CRC operation result, a check result indicating that the check passes.
  7. The method according to any one of claims 1 to 6, wherein the obtaining, by the receiving device according to the LLR sequence, the partial sum (Psum) of the preset nodes in each decoding layer comprises:
    recursively obtaining, by the receiving device according to the LLR sequence, the Psum of the preset nodes in each decoding layer from the log2N-th layer down to the first layer.
  8. A receiving device, comprising:
    a receiving module configured to receive a log-likelihood ratio (LLR) sequence corresponding to a sequence to be decoded;
    a processing module configured to obtain, according to the LLR sequence, a partial sum (Psum) of preset nodes in each decoding layer, wherein the number of decoding layers is log2N, N is the number of bits in the sequence to be decoded, and N is an integer;
    the processing module being further configured to obtain a decoded sequence according to the Psum of the preset nodes in each decoding layer and a decoding matrix corresponding to each decoding layer.
  9. The device according to claim 8, wherein the processing module is further specifically configured to:
    obtain a decoding result corresponding to each decoding layer according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, wherein the decoding matrix corresponding to the M-th decoding layer is a K×K matrix, the number of preset nodes in the M-th decoding layer is N/2^M, 1 ≤ M ≤ log2N, K = N/2^M, and M is an integer;
    obtain the decoded sequence according to the decoding result corresponding to each decoding layer, wherein the decoding length corresponding to the M-th-layer decoding result is N/2^M.
  10. The device according to claim 9, wherein node positions of the preset nodes in any decoding layer are different from node positions of the preset nodes in the other decoding layers.
  11. The device according to claim 9 or 10, wherein the processing module is further specifically configured to:
    determine, according to the node positions of the preset nodes in each decoding layer, a decoding position of the decoding result corresponding to each decoding layer in the decoded sequence, wherein the preset nodes in each decoding layer are placed consecutively, and the decoding position corresponds to the positions occupied by the consecutively placed preset nodes in the decoding layer;
    obtain the decoded sequence according to the decoding result and the decoding position corresponding to each decoding layer.
  12. The device according to any one of claims 8 to 11, wherein the consecutively placed preset nodes in the M-th decoding layer occupy the F-th to the (H+N/2^M)-th positions in the decoding layer, wherein F = H+1 and H is the position of the last preset node among the consecutively placed preset nodes in the (M-1)-th decoding layer.
  13. The device according to any one of claims 8 to 12, wherein the processing module is further configured to: before the decoded sequence is obtained according to the Psum of the preset nodes in each decoding layer and the decoding matrix corresponding to each decoding layer, after a decision result of each end-point node of the log2N-th decoding layer is obtained, update a CRC operation result according to the decision result of the end-point node, until the CRC operation result is updated according to the decision result of the last end-point node to obtain a fully updated CRC operation result;
    obtain, according to the fully updated CRC operation result, a check result indicating that the check passes.
  14. The device according to any one of claims 8 to 13, wherein the processing module is specifically configured to:
    recursively obtain, according to the LLR sequence, the Psum of the preset nodes in each decoding layer from the log2N-th layer down to the first layer.
  15. A receiving device, comprising: a memory, a processor and a computer program, wherein the computer program is stored in the memory, and the processor runs the computer program to perform the method according to any one of claims 1 to 7.
  16. A storage medium, wherein the storage medium comprises a computer program, and the computer program is used to implement the method according to any one of claims 1 to 7.
  17. A computer program product, wherein the computer program product comprises computer program code, and when the computer program code is run on a computer, the computer is caused to perform the method according to any one of claims 1 to 7.
  18. A chip, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program from the memory, so that the processor performs the method according to any one of claims 1 to 7.
PCT/CN2019/105033 2018-09-14 2019-09-10 极化码的译码方法及设备 WO2020052537A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811071501.3A CN110912567A (zh) 2018-09-14 2018-09-14 极化码的译码方法及设备
CN201811071501.3 2018-09-14

Publications (1)

Publication Number Publication Date
WO2020052537A1 true WO2020052537A1 (zh) 2020-03-19

Family

ID=69776972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105033 WO2020052537A1 (zh) 2018-09-14 2019-09-10 极化码的译码方法及设备

Country Status (2)

Country Link
CN (1) CN110912567A (zh)
WO (1) WO2020052537A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079382A (zh) * 2014-07-25 2014-10-01 北京邮电大学 一种基于概率计算的极化码译码器和极化码译码方法
CN105049061A (zh) * 2015-04-28 2015-11-11 北京邮电大学 基于超前计算的高维基极化码译码器和极化码译码方法
CN106253911A (zh) * 2016-08-03 2016-12-21 东南大学 一种软件极化码的连续消除列表译码方法
US20170222754A1 (en) * 2016-01-28 2017-08-03 Lg Electronics Inc. Error correcting coding method based on cross-layer error correction with likelihood ratio and apparatus thereof
CN107204780A (zh) * 2017-04-25 2017-09-26 东南大学 polar‑LDPC级联码的合并BP解码算法及装置
CN107947803A (zh) * 2017-12-12 2018-04-20 中国石油大学(华东) 一种极化码的快速译码方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9176927B2 (en) * 2011-11-08 2015-11-03 The Royal Institution For The Advancement Of Learning/Mcgill University Methods and systems for decoding polar codes
CN105141322B (zh) * 2015-09-16 2018-09-07 哈尔滨工业大学 一种基于极化码sc译码的部分和方法
WO2018021926A1 (en) * 2016-07-27 2018-02-01 Huawei Technologies Co., Ltd. Decoding of polar codes and polar subcodes
US10425107B2 (en) * 2016-09-09 2019-09-24 Huawei Technologies Co., Ltd. Partial sum computation for polar code decoding
CN108365914B (zh) * 2017-01-26 2023-04-18 华为技术有限公司 Polar码编译码方法及装置
CN107248866B (zh) * 2017-05-31 2020-10-27 东南大学 一种降低极化码译码时延的方法
CN107911124B (zh) * 2017-11-29 2021-04-02 哈尔滨工业大学 一种非递归的sc译码部分和确定方法及装置

Also Published As

Publication number Publication date
CN110912567A (zh) 2020-03-24

Similar Documents

Publication Publication Date Title
JP7026763B2 (ja) レートマッチング方法、符号化装置、および通信装置
CN109660264B (zh) 高性能极化码译码算法
US11432186B2 (en) Method and device for transmitting data with rate matching
WO2014173133A1 (zh) 极性码的译码方法和译码装置
CN108365848B (zh) 一种极性码的译码方法和装置
WO2013152605A1 (zh) 极性码的译码方法和译码装置
WO2018166423A1 (zh) 极化码编码的方法和装置
CN108347301B (zh) 数据的传输方法和装置
WO2019201269A1 (zh) 极化码的编译码方法和装置
KR101817168B1 (ko) 극 부호의 근사화된 신뢰전파 복호화 방법 및 장치
CN109547034B (zh) 译码方法及设备、译码器
WO2018171401A1 (zh) 一种信息处理方法、装置及设备
CN108540260B (zh) 用于确定Polar码编解码的方法、装置和可存储介质
WO2016141544A1 (zh) 传输信息的方法和通信设备
WO2019206136A1 (zh) 极化码的速率匹配、解速率匹配方法及设备
CN110476357B (zh) 极化码传输方法和装置
CN109391347B (zh) 编译码方法及装置
US11075715B2 (en) Encoding method and apparatus
WO2020052537A1 (zh) 极化码的译码方法及设备
WO2020042089A1 (zh) Scl并行译码方法、装置及设备
WO2016172937A1 (zh) 一种利用多元极化码进行数据传输的方法、装置
CN108574493B (zh) 数据处理的方法和装置
CN109802690B (zh) 译码方法、装置和计算机可读存储介质
JP7471357B2 (ja) 符号化方法、復号方法、装置、および装置
CN110971337B (zh) 信道编码方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19860964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19860964

Country of ref document: EP

Kind code of ref document: A1