WO2020088256A1 - Decoding method and device - Google Patents

Decoding method and device

Info

Publication number
WO2020088256A1
Authority
WO
WIPO (PCT)
Prior art keywords
decoding
variable node
variable
symbol
codewords
Prior art date
Application number
PCT/CN2019/111512
Other languages
English (en)
Chinese (zh)
Inventor
原进宏
解怡轩
康芃
郑晨
魏岳军
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020088256A1

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • H03M13/1111Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1148Structural properties of the code parity-check or generator matrix

Definitions

  • This application relates to the field of communication technology, and in particular, to a decoding method and device.
  • LDPC: Low-Density Parity-Check (code).
  • For the decoding of LDPC codes, Maximum-Likelihood (ML) decoding is considered the best decoding method, because it achieves the lowest transmission packet error rate when all transmitted codewords are equally probable.
  • For LDPC codes with a short information length (K ≤ 500), directly adopting ML decoding requires computing the conditional probabilities of 2^K different codewords; this ultra-high decoding complexity makes it difficult to adopt in practical applications.
  • QML: Quasi-Maximum-Likelihood decoding.
  • This type of decoding algorithm selects suitable Variable Nodes (VNs) whose information is saturated and introduces a re-decoding procedure, such as Augmented Belief Propagation (ABP) decoding.
  • VN: Variable Node.
  • ABP: Augmented Belief Propagation.
  • The ABP decoding method uses a multi-level decoding structure. The decoding process is: after a decoding failure, all VNs connected to the check nodes are determined as the candidate VN set, and then multi-level decoding is performed. Each level of decoding selects a VN with relatively low reliability from the candidate VN set according to the reliability of the VNs. Specifically, in the j-th level of decoding, the j VNs with the smallest degree are selected from the candidate VN set.
  • Among them, the VN with the smallest log-likelihood ratio (LLR) value received from the corresponding channel is selected; then the initial channel LLR value of the selected VN is saturated to the positive maximum and to the negative maximum respectively to create candidate sequences, the candidate sequences are re-input to the decoder for decoding, decoding stops when a preset termination condition is reached, and the best codeword is selected among all output legal codewords.
  • LLR: Log-Likelihood Ratio.
  • In the above method, the saturated VN is selected according to the reliability of the VN.
  • However, the reliability of a VN is determined based on the degree of the VN and the LLR value received from the corresponding channel of the VN; the accuracy of this reliability estimate is insufficient, so the decoding performance is not high.
  • the present application provides a decoding method and device, which can improve the accuracy of variable node selection, thereby improving decoding performance.
  • In a first aspect, the present application provides a decoding method, including: decoding the information to be decoded for the first time, and acquiring the number of symbol flips of each variable node in a preset set during the first decoding process, where the number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • The external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • When the first decoding fails, j variable nodes are selected in descending order of the number of symbol flips, where j is a positive integer; the LLR values corresponding to the selected j variable nodes are set to preset positive and negative values, respectively, to generate 2^j LLR sequences; the 2^j LLR sequences are decoded separately to obtain a decoding output result.
  • In the above decoding method, the number of symbol flips of each variable node in the preset set is acquired during the first decoding process; when the first decoding fails, j variable nodes are selected in descending order of the number of symbol flips, the log-likelihood ratio (LLR) values corresponding to the selected j variable nodes are then set to preset positive and negative values, respectively, to generate 2^j LLR sequences, and finally the 2^j LLR sequences are decoded separately to obtain the decoding output result.
  • When selecting the saturated variable nodes, the selection is based on the number of symbol flips of the variable nodes, which enlarges the spatial range of variable node selection; this can improve the accuracy of variable node selection and thus improve the decoding performance.
  • In a possible design, the variable nodes corresponding to the minimum number of symbol flips among the j variable nodes include one or more variable nodes that have the minimum number of symbol flips in the preset set and the largest number of erroneous check nodes among their connected check nodes, or one or more variable nodes that have the minimum number of symbol flips in the preset set and the smallest absolute LLR value.
  • In this design, the codeword puncturing in standard-code decoding is taken into account, so that good accuracy can be ensured even when codewords are punctured.
  • In a possible design, decoding the 2^j LLR sequences separately to obtain a decoding output result includes:
  • selecting, from the successfully decoded codewords, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the decoding output result.
  • In a second aspect, the present application provides a decoding method, including:
  • decoding the information to be decoded for the first time, and acquiring the number of symbol flips of each variable node in a preset set during the first decoding process, where the number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • The external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • When the first decoding fails, multi-level decoding is performed according to the following decoding process: according to the number of symbol flips of each variable node obtained in the previous level of decoding, select the one variable node with the largest number of symbol flips; set the LLR values corresponding to the selected variable node to preset positive and negative values, respectively, to generate 2 LLR sequences; decode the 2 LLR sequences separately to obtain 2 codewords, and for each of the 2 codewords update the number of symbol flips of each variable node in the preset set; when the preset decoding termination condition is not met, perform the next level of decoding on the 2 codewords according to this decoding process; when the preset decoding termination condition is met, terminate decoding and obtain the final decoding output result according to all codewords obtained by decoding.
  • That is, in the above method, multi-level decoding is performed level by level according to the decoding process described above when the first decoding fails, and decoding is terminated when the preset decoding termination condition is satisfied, the final decoding output result being obtained from all codewords obtained by decoding.
  • In a possible design, performing the next level of decoding on the 2 codewords according to the decoding process includes: performing the next level of decoding, according to the decoding process, on the codeword of the 2 codewords that failed to decode;
  • terminating the next level of decoding for the codeword of the 2 codewords that was successfully decoded, and storing the successfully decoded codeword of the 2 codewords.
  • In this design, the next level of decoding is terminated for the codewords that have been successfully decoded, and only the codewords that failed to decode are decoded at the next level, which reduces the overall number of decodings and the decoding complexity, and yields better decoding performance.
  • In a possible design, the preset decoding termination condition is: reaching a preset maximum decoding level.
  • Obtaining the final decoding output result according to all codewords obtained by decoding includes:
  • selecting, from the successfully decoded codewords among all the codewords obtained, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the final decoding output result.
  • In a possible design, the preset decoding termination condition is: obtaining the first legal codeword.
  • Obtaining the final decoding output result according to all codewords obtained by decoding includes:
  • using the first legal codeword as the final decoding output result.
  • In a possible design, among variable nodes with the same number of symbol flips, the variable node connected to the largest number of erroneous check nodes is selected, or the variable node with the smallest absolute LLR value is selected.
  • In this design, the codeword puncturing in standard-code decoding is taken into account, so that good accuracy can be ensured even when codewords are punctured.
  • In another aspect, the present application provides a decoding device, including:
  • a first decoding module, used to decode the information to be decoded for the first time and to obtain the number of symbol flips of each variable node in a preset set during the first decoding process, where the number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • The external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node;
  • a selection module, used to select j variable nodes in descending order of the number of symbol flips when the first decoding fails, where j is a positive integer;
  • a processing module, used to set the log-likelihood ratio (LLR) values corresponding to the selected j variable nodes to preset positive and negative values, respectively, to generate 2^j LLR sequences;
  • a second decoding module, used to decode the 2^j LLR sequences separately to obtain a decoding output result.
  • In a possible design, the variable nodes corresponding to the minimum number of symbol flips among the j variable nodes include one or more variable nodes that have the minimum number of symbol flips in the preset set and the largest number of erroneous check nodes among their connected check nodes, or one or more variable nodes that have the minimum number of symbol flips in the preset set and the smallest absolute LLR value.
  • In a possible design, the second decoding module is specifically used to:
  • select, from the successfully decoded codewords, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the decoding output result.
  • In another aspect, the present application provides a decoding device, including:
  • a first decoding module, used to decode the information to be decoded for the first time and to obtain the number of symbol flips of each variable node in a preset set during the first decoding process, where the number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • The external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node;
  • a second decoding module, used to perform multi-level decoding according to the following decoding process when the first decoding fails: according to the number of symbol flips of each variable node obtained in the previous level of decoding, select the one variable node with the largest number of symbol flips, set the LLR values corresponding to the selected variable node to preset positive and negative values, respectively, to generate 2 LLR sequences, decode the 2 LLR sequences separately to obtain 2 codewords, and for each of the 2 codewords update the number of symbol flips of each variable node in the preset set; when the preset decoding termination condition is not met, perform the next level of decoding on the 2 codewords according to this decoding process;
  • when the preset decoding termination condition is met, terminate decoding and obtain the final decoding output result according to all codewords obtained by decoding.
  • In a possible design, the second decoding module is used to: perform the next level of decoding, according to the decoding process, on the codeword of the 2 codewords that failed to decode;
  • terminate the next level of decoding for the codeword of the 2 codewords that was successfully decoded, and store the successfully decoded codeword of the 2 codewords.
  • In a possible design, the preset decoding termination condition is: reaching a preset maximum decoding level;
  • the second decoding module is used to:
  • select, from the successfully decoded codewords among all the codewords obtained, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the final decoding output result.
  • In a possible design, the preset decoding termination condition is: obtaining the first legal codeword;
  • the second decoding module is used to:
  • use the first legal codeword as the final decoding output result.
  • In a possible design, among variable nodes with the same number of symbol flips, the variable node connected to the largest number of erroneous check nodes is selected, or the variable node with the smallest absolute LLR value is selected.
  • In another aspect, the present application provides a network device, including: a memory and a processor;
  • the memory is used to store program instructions;
  • the processor is configured to call the program instructions in the memory to execute the decoding method in the first aspect or any possible design of the first aspect, or in the second aspect or any possible design of the second aspect.
  • In another aspect, the present application provides a terminal device, including: a memory and a processor;
  • the memory is used to store program instructions;
  • the processor is configured to call the program instructions in the memory to execute the decoding method in the first aspect or any possible design of the first aspect, or in the second aspect or any possible design of the second aspect.
  • In another aspect, the present application provides a readable storage medium in which a computer program is stored; when the computer program is executed, the decoding apparatus executes
  • the decoding method in the first aspect or any possible design of the first aspect, or in the second aspect or any possible design of the second aspect.
  • In another aspect, the present application provides a program product, the program product including a computer program, the computer program being stored in a readable storage medium.
  • At least one processor of the decoding device can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the decoding device to implement the decoding method in the first aspect or any possible design of the first aspect, or in the second aspect or any possible design of the second aspect.
  • FIG. 1 is a schematic diagram of a system architecture including a sending end and a receiving end provided by this application;
  • FIG. 3 is a schematic diagram of the update process of variable nodes and check nodes.
  • FIG. 6 is a block diagram of a decoding structure corresponding to this embodiment.
  • FIG. 16 is a schematic structural diagram of an embodiment of a decoding device provided by this application.
  • FIG. 17 is a schematic structural diagram of an embodiment of a decoding device provided by this application.
  • FIG. 19 is a schematic structural diagram of a terminal device provided by this application.
  • the embodiments of the present application can be applied to a wireless communication system.
  • The wireless communication systems mentioned in the embodiments of the present application include but are not limited to: Narrow-Band Internet of Things (NB-IoT), Global System for Mobile Communications (GSM), Enhanced Data Rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access 2000 (CDMA2000), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), and the next-generation 5G mobile communication system.
  • The three major application scenarios of 5G are enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (URLLC), and massive Machine-Type Communications (mMTC).
  • the embodiments of the present application can be applied to decoding of various LDPC codes.
  • For example, for the decoding of BG1 or BG2 LDPC codes in the eMBB application scenario of the NR system, better decoding performance is obtained for short codes (information length K ≤ 500).
  • the communication device involved in this application may include a network device or a terminal device, or may be a chip applied to a network device or a terminal device.
  • Terminal devices include but are not limited to mobile stations (MS), mobile terminals, mobile telephones, handsets, portable equipment, etc.
  • The terminal device can communicate with one or more core networks via a Radio Access Network (RAN). For example, the terminal device can be a mobile phone (also called a "cellular" phone) or a computer with wireless communication functionality.
  • RAN: Radio Access Network.
  • The terminal device may also be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile apparatus or device.
  • The network device may be a device for communicating with the terminal device; for example, it may be a Base Transceiver Station (BTS) in the GSM or CDMA system, a NodeB (NB) in the WCDMA system, or an evolved NodeB (eNB or eNodeB) in the LTE system, or the network device may be a relay station, an access point, an in-vehicle device, a wearable device, a network-side device in a future 5G network, or a network device in a future evolved Public Land Mobile Network (PLMN).
  • BTS: Base Transceiver Station.
  • NB: NodeB.
  • eNB/eNodeB: evolved NodeB.
  • PLMN: Public Land Mobile Network.
  • FIG. 1 is a schematic diagram of a system architecture including a sending end and a receiving end provided by the present application.
  • The sending end is the encoding side and can be used to encode the information to be transmitted and output the encoded information.
  • The encoded information is modulated and transmitted over the channel to the decoding side;
  • the receiving end is the decoding side, which can be used to receive the signal, demodulate the signal to obtain the LLR sequence corresponding to the encoded information, and decode the LLR sequence to recover the information sent by the sending end.
  • When the network device is used as the sending end (encoding side), the terminal device can be used as the receiving end (decoding side); conversely, when the terminal device is used as the sending end (encoding side), the network device can be used as the receiving end (decoding side).
  • This application provides a decoding method and device.
  • In the present application, a feature based on the number of sign flips of the edge (external) information is used to decide which variable nodes to saturate, which enlarges the spatial range of variable node selection, thereby improving the accuracy of variable node selection and, in turn, the decoding performance.
  • the selection of saturated variable nodes in the present application does not depend on the original channel information of the variable nodes, and can ensure good accuracy even in the case of code word puncturing.
  • In addition, the decoding method provided by the present application adopts a decoding termination condition based on a pruning algorithm, which effectively reduces the overall number of decodings and the decoding complexity compared with the full-list decoding termination condition used in the ABP decoding method.
  • the decoding method and device provided by the present application will be described in detail below with reference to the drawings.
  • the check matrix of the LDPC code is a sparse matrix
  • the code length is n
  • the information sequence length is k
  • The LDPC code can be uniquely determined by its check matrix H, or uniquely defined by the Tanner graph corresponding to the check matrix H.
  • The following shows an example of an LDPC code and its corresponding check matrix H:
  • FIG. 2 is a Tanner graph corresponding to the check matrix H shown in this embodiment.
  • each circular node in FIG. 2 is a variable node, representing a column in the H matrix
  • each square node is a check node
  • each edge connecting the check node and the variable node in FIG. 2 represents that there is a non-zero element at the position where the row and column corresponding to these two nodes meet.
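  • For illustration only, the following small Python snippet shows an assumed parity-check matrix (not the H of this embodiment) and how its non-zero entries correspond to the edges of a Tanner graph:

```python
import numpy as np

# Illustrative parity-check matrix (an assumption for illustration, not the H
# of this embodiment). Rows correspond to check nodes, columns to variable nodes.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

# Each non-zero entry H[i, j] is an edge of the Tanner graph connecting
# check node i (square node) with variable node j (circular node).
edges = [(i, j) for i in range(H.shape[0]) for j in range(H.shape[1]) if H[i, j]]
print(edges)  # e.g. check node 0 connects to variable nodes 0, 1 and 3
```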
  • In this application, the LLR sequences are generated according to the selected variable nodes, and the LLR sequences can be decoded using Min-Sum (MS) decoding or BP decoding; the MS decoding method is briefly introduced below.
  • λ_j represents the LLR information of the j-th element of the input sequence;
  • R_ij[k] represents the information passed from the i-th check node to the j-th variable node at the k-th iteration;
  • Q_ji[k] represents the information passed from the j-th variable node to the i-th check node at the k-th iteration;
  • Q_j[k] represents the a-posteriori probability information of the j-th variable node used for the hard decision at the k-th iteration;
  • C(j) represents the set of check nodes adjacent to the j-th variable node;
  • V(i) represents the set of variable nodes adjacent to the i-th check node;
  • sgn(·) is the sign operation;
  • min(·) is the minimum operation;
  • V(i)\{j} represents the set of variable nodes connected to the i-th check node, excluding the j-th variable node;
  • C(j)\{i} represents the set of check nodes connected to the j-th variable node, excluding the i-th check node.
  • Figure 3 is a schematic diagram of the update process of variable nodes and check nodes.
  • The left part of FIG. 3 shows the update process of the variable nodes, and the right part of FIG. 3 shows the update process of the check nodes.
  • In each iteration, the a-posteriori probability information of the variable node needs to be computed; the calculation formula is Q_j[k] = λ_j + Σ_{i∈C(j)} R_ij[k].
  • If the decoding result satisfies all parity-check equations, the decoding is successful, the current iteration is terminated, and the hard-decision result is output.
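  • As a non-authoritative illustration of the MS decoding just described, the following Python sketch implements the standard min-sum updates using the notation λ_j, R_ij[k], Q_ji[k] and Q_j[k] defined above; the function name, matrix layout and stopping check are assumptions of this sketch, not the patent's implementation:

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Min-sum (MS) decoding sketch. H: binary parity-check matrix (m x n),
    llr: channel LLRs (lambda_j above). Returns (hard_decision, success)."""
    H = np.asarray(H)
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    R = np.zeros((m, n))                              # check -> variable messages R_ij
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # Variable-node update: Q_ji = lambda_j + sum of R from the other checks
        Q = H * (llr + R.sum(axis=0))[None, :] - R
        # Check-node update: product of signs times minimum magnitude over the others
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            for j in idx:
                others = idx[idx != j]
                if others.size == 0:
                    continue
                sign = np.prod(np.sign(Q[i, others]))
                R[i, j] = sign * np.min(np.abs(Q[i, others]))
        # A-posteriori LLR Q_j and hard decision
        post = llr + R.sum(axis=0)
        hard = (post < 0).astype(int)
        if not np.any(H.dot(hard) % 2):               # all parity checks satisfied
            return hard, True
    return hard, False
```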
  • FIG. 4 is a flowchart of an embodiment of a decoding method provided by the present application. As shown in FIG. 4, this embodiment uses a receiving end (decoding side) as an execution subject for illustration. The method in this embodiment may include:
  • S101 Decode the information to be decoded for the first time, and obtain the number of symbol inversions of each variable node in the preset set during the first decoding process.
  • The number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during the decoding process.
  • the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • the information to be decoded is the acquired LLR sequence, which may be obtained by demodulating the received signal by the receiving end as an input.
  • the information to be decoded is decoded for the first time.
  • the first decoding can use existing decoding methods, such as MS decoding or BP decoding.
  • the number of symbol inversions of each variable node in the preset set is obtained.
  • The selection of the preset set in this embodiment may adopt the following four methods:
  • for example, the variable nodes corresponding to the punctured bits are not included; or
  • the variable nodes of the core matrix are used, not including the variable nodes corresponding to the punctured bit part.
  • The number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during the decoding process, i.e., the number of times the sign of the external information changes between two adjacent iterations.
  • the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
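  • A minimal sketch of how the symbol-flip counter of S101 could be maintained during iterative decoding is shown below; it assumes the external-information messages are stored edge-wise in matrices indexed [check node, variable node], and the helper name is illustrative:

```python
import numpy as np

def update_flip_counts(flip_count, msg_prev, msg_curr, H):
    """Add, for every variable node (column), the number of edges connected to
    it whose external-information message changed sign between two adjacent
    iterations. msg_prev/msg_curr hold the check-to-variable (or
    variable-to-check) messages of the previous and current iteration."""
    flipped = (np.sign(msg_prev) != np.sign(msg_curr)) & (np.asarray(H) == 1)
    return np.asarray(flip_count) + flipped.sum(axis=0)
```

  • This helper would be called once per iteration inside the first decoding, accumulating the per-variable-node flip counts used later for node selection.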
  • When the first decoding fails, j variable nodes are selected in descending order of the number of symbol flips. The value of j can be chosen according to the desired number of decodings; for example, if j is 3, 2^3 = 8 decodings are required, and if the desired number of decodings is 16, j may be 4. That is, if the desired number of decodings is n, then j is log2(n), rounded. Specifically, if the external information sent on all edges connected to a variable node exhibits a high number of sign jitters, the variable node is considered confused, i.e., it cannot quickly converge to a sign.
  • This indicates that the information reliability of the other variable nodes adjacent to the check nodes connected to this variable node is low, so this variable node should be preferentially selected for saturation, increasing its channel information strength to help the other adjacent check nodes and variable nodes in its subgraph.
  • In the preset set there may be multiple variable nodes with the same number of symbol flips.
  • When the j variable nodes are selected in descending order of the number of symbol flips, if there are multiple variable nodes in the preset set corresponding to the minimum number of symbol flips among the j variable nodes, the number of variable nodes whose number of symbol flips is greater than or equal to this minimum may exceed j.
  • In this case, from the variable nodes corresponding to the minimum number of symbol flips, the one or more variable nodes connected to the largest number of erroneous check nodes are selected, or the one or more variable nodes with the smallest absolute LLR value are selected.
  • That is, the variable nodes corresponding to the minimum number of symbol flips among the j variable nodes include the variable nodes that have the minimum number of symbol flips in the preset set and the largest number of erroneous check nodes among their connected check nodes, or the variable nodes that have the minimum number of symbol flips in the preset set and the smallest absolute LLR value.
  • For example, suppose that when selecting variable nodes in descending order of the number of symbol flips, 9 variable nodes in the preset set have a number of symbol flips greater than or equal to 5, and two of these variable nodes have a number of symbol flips of exactly 5.
  • Then, besides the 7 variable nodes whose number of symbol flips is greater than 5, from the two variable nodes with the same number of symbol flips, the variable node connected to the largest number of erroneous check nodes is selected, or the variable node with the smallest absolute LLR value is selected.
  • As another example, when 8 variable nodes are selected in descending order of the number of symbol flips, the number of symbol flips of 5 variable nodes in the preset set is greater than 6, and the number of symbol flips of 12 variable nodes is greater than or equal to 6; that is, 6 is the minimum number of symbol flips among the 8 pre-selected variable nodes.
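  • One possible realization of the above selection and tie-breaking is sketched below; applying a composite sort key to all variable nodes yields the same j nodes as breaking ties only at the boundary flip count. All names, and the exact ordering inside a tie, are assumptions of this sketch:

```python
import numpy as np

def select_saturation_vns(flip_count, j, llr, H, syndrome):
    """Select j variable nodes in descending order of symbol-flip count.
    Nodes tied on flip count are ranked by the number of failed check nodes
    they connect to (more preferred), then by the smallest channel |LLR|."""
    flip_count, llr = np.asarray(flip_count), np.asarray(llr)
    failed_checks = np.asarray(H)[np.asarray(syndrome) == 1]  # rows whose parity check failed
    def key(v):
        return (-int(flip_count[v]), -int(failed_checks[:, v].sum()), abs(float(llr[v])))
    return sorted(range(len(flip_count)), key=key)[:j]
```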
  • For each selected variable node, the corresponding LLR value in the input LLR sequence is set to the preset positive value or the preset negative value, respectively, generating two LLR sequences per node; since the j variable nodes give 2^j sign combinations, 2^j LLR sequences can be generated.
  • Setting the LLR values corresponding to the selected j variable nodes to preset positive and negative values, respectively, saturates the selected j variable nodes.
  • The preset positive value may be 127,
  • and the preset negative value may be -127.
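  • A sketch of generating the 2^j candidate LLR sequences from the j selected variable nodes, using ±127 as the preset saturation values mentioned above (the function name is illustrative):

```python
import itertools
import numpy as np

def saturate_candidates(llr, selected_vns, sat=127.0):
    """Yield the 2^j candidate LLR sequences: each selected variable node's
    channel LLR is forced to +sat or -sat, enumerating every sign combination."""
    for signs in itertools.product((+1.0, -1.0), repeat=len(selected_vns)):
        cand = np.array(llr, dtype=float, copy=True)
        cand[list(selected_vns)] = sat * np.asarray(signs)
        yield cand
```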
  • S104 may be: decoding the 2^j LLR sequences to obtain 2^j codewords;
  • after a codeword obtained by decoding passes verification, it can be regarded as a successfully decoded codeword. From the successfully decoded codewords, the codeword with the minimum Euclidean distance to the LLR sequence corresponding to the information to be decoded is selected as the decoding output result; that is, among the codewords successfully decoded from all 2^j LLR sequences, the final decoding output codeword is selected.
  • Optionally, a maximum number of decodings M can also be preset.
  • S104 may then be: decoding M LLR sequences among the 2^j LLR sequences according to the preset maximum number of decodings M to obtain M codewords,
  • where M is a positive integer;
  • and selecting, from the successfully decoded codewords, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the decoding output result.
  • The method for decoding the 2^j LLR sequences may be MS decoding or BP decoding.
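  • The selection of the decoding output among the successfully decoded codewords could look like the sketch below; the BPSK mapping 0 → +1, 1 → −1 used for the Euclidean distance is an assumption consistent with the LLR sign convention here, and `decode` stands for any LDPC decoder such as the min-sum sketch above:

```python
import numpy as np

def best_codeword(candidate_llrs, llr_in, decode, H):
    """Decode every candidate LLR sequence; among the codewords that satisfy
    all parity checks, return the one whose BPSK image is closest in Euclidean
    distance to the received LLR sequence."""
    best, best_dist = None, np.inf
    for cand in candidate_llrs:
        word, ok = decode(H, cand)
        if not ok:
            continue                      # keep only successfully decoded (legal) codewords
        bpsk = 1.0 - 2.0 * word           # bit 0 -> +1, bit 1 -> -1
        dist = np.linalg.norm(np.asarray(llr_in) - bpsk)
        if dist < best_dist:
            best, best_dist = word, dist
    return best
```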
  • The decoding method provided in this embodiment obtains the number of symbol flips of each variable node in the preset set during the first decoding process; when the first decoding fails, it selects j variable nodes in descending order of the number of symbol flips, then sets the log-likelihood ratio (LLR) values corresponding to the selected j variable nodes to preset positive and negative values, respectively, to generate 2^j LLR sequences, and finally decodes the 2^j LLR sequences separately to obtain the decoding output.
  • When selecting the saturated variable nodes, the selection is based on the number of symbol flips of the variable nodes, which enlarges the spatial range of variable node selection; this can improve the accuracy of variable node selection and thus improve the decoding performance.
  • In the embodiment of FIG. 4, the j variable nodes with the largest numbers of symbol flips are selected, the j variable nodes are saturated at once, and decoding is then performed. In the embodiment of FIG. 5 below,
  • the variable nodes to be saturated are selected repeatedly over multiple levels, that is, multi-level decoding is performed. The decoding process is described in detail below.
  • FIG. 5 is a flowchart of an embodiment of a decoding method provided by the present application. As shown in FIG. 5, this embodiment uses a receiving end (decoding side) as an execution subject for illustration. The method in this embodiment may include:
  • S201 Decode the information to be decoded for the first time, and obtain the number of times of symbol inversion for each variable node in the preset set during the first decoding process.
  • The number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during the decoding process.
  • the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • the information to be decoded is the input LLR sequence, and the information to be decoded is decoded for the first time.
  • The first decoding can use an existing decoding method, such as MS decoding or BP decoding; during the first decoding process, the number of symbol flips of each variable node in the preset set is obtained.
  • The selection of the preset set in this embodiment may adopt the following four methods:
  • for example, the variable nodes corresponding to the punctured bits are not included; or
  • the variable nodes of the core matrix are used, not including the variable nodes corresponding to the punctured bit part.
  • The number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during the decoding process, i.e., the number of sign jitters.
  • the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • step S201 reference may also be made to step S101 in the foregoing method embodiment.
  • S2021: According to the number of symbol flips of each variable node obtained in the previous level of decoding, select the one variable node with the largest number of symbol flips.
  • That is, the variable node with the largest number of symbol flips is selected; one variable node is selected for saturation at each level.
  • If the external information sent by a variable node on all edges connected to it exhibits a high number of sign jitters, the variable node is considered confused, that is, it cannot quickly converge to a sign.
  • This indicates that the information reliability of the other variable nodes adjacent to the check nodes connected to this variable node is low, so that this variable node cannot always accurately judge the information given by the adjacent check nodes during the iteration process. Therefore, this variable node should be preferentially selected for saturation, increasing its channel information strength to help the other adjacent check nodes and variable nodes in its subgraph.
  • If there are multiple variable nodes in the preset set with the same largest number of symbol flips, the variable node connected to the largest number of erroneous check nodes is selected, or the one with the smallest absolute LLR value is selected. That is, if there is only one variable node with the largest number of symbol flips, that variable node is selected directly; if there are multiple such variable nodes, one of them is selected according to the above rules.
  • If multiple variable nodes in the preset set have the same number of symbol flips,
  • the variable node connected to the largest number of erroneous check nodes is selected, or the one with the smallest absolute LLR value is selected. Therefore, good accuracy can be ensured even in the case of codeword puncturing.
  • S2022: The LLR value corresponding to the selected variable node in the input LLR sequence is set to the preset positive value and the preset negative value, respectively, that is, the selected variable node is saturated.
  • The preset positive value may be 127,
  • and the preset negative value may be -127, thereby generating 2 LLR sequences.
  • S2023: Decode the 2 LLR sequences separately to obtain 2 codewords, and for each of the 2 codewords update the number of symbol flips of each variable node in the preset set.
  • The 2 LLR sequences are respectively decoded to obtain 2 codewords; the decoding can be MS decoding or BP decoding.
  • For each codeword, the number of symbol flips of each variable node in the preset set is updated; that is, in the process of separately decoding each LLR sequence, the number of symbol flips of each variable node in the preset set is obtained separately.
  • each codeword corresponds to an LLR sequence obtained in step S2022, that is, the LLR sequence corresponding to the codeword is used as an input for the next stage of decoding.
  • In one manner, the first codeword of the 2 codewords is decoded at the next level according to the decoding process of S2021-S2025, and the second
  • codeword of the 2 codewords is also decoded at the next level according to the decoding process of S2021-S2025.
  • That is, performing the next level of decoding according to the decoding process may specifically be: both the codewords that failed to decode and the codewords that were successfully decoded among the 2 codewords are decoded at the next level according to the decoding process of S2021-S2025.
  • Alternatively, performing the next level of decoding according to the decoding process may specifically be: the codeword of the 2 codewords that failed to decode
  • proceeds to the next level of decoding according to the decoding process; the next level of decoding is terminated for the codeword of the 2 codewords that was successfully decoded, and the successfully decoded codeword of the 2 codewords is stored.
  • This manner is called a pruning-algorithm-based decoding termination condition: for a codeword that is successfully decoded, the next level of decoding is terminated, and only the codewords that failed to decode are decoded at the next level. This reduces the overall number of decodings and the decoding complexity, and better decoding performance can be obtained.
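  • A recursive sketch of the pruned multi-level decoding described above is given below. `decode_one_level` is a hypothetical helper that decodes one LLR sequence and returns the hard decision, a success flag and the updated per-variable-node flip counts; the saturation value ±127 and all names are assumptions of this sketch:

```python
def multilevel_decode(llr, flip_count, H, decode_one_level, max_level,
                      level=0, results=None):
    """At each level, saturate the variable node with the most symbol flips to
    +127 and -127, decode both resulting LLR sequences, store successfully
    decoded codewords (pruning their branch), and continue only the failing
    branches until the maximum decoding level is reached. llr and flip_count
    are assumed to be numpy arrays."""
    if results is None:
        results = []
    if level >= max_level:
        return results
    vn = int(flip_count.argmax())              # variable node with the most sign flips
    for sign in (+1.0, -1.0):
        cand = llr.copy()
        cand[vn] = sign * 127.0
        word, ok, new_flips = decode_one_level(H, cand)
        if ok:
            results.append(word)               # pruning: this branch stops here
        else:
            multilevel_decode(cand, new_flips, H, decode_one_level,
                              max_level, level + 1, results)
    return results
```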
  • FIG. 6 is a block diagram of a decoding structure corresponding to this embodiment.
  • In FIG. 6, S represents the set saturation threshold,
  • and j represents the current decoding level.
  • At each level of decoding, the codeword of the corresponding 2 codewords that failed to decode is decoded at the next level; for the codeword of the corresponding 2 codewords that was successfully decoded, the next level of decoding is terminated and the successfully decoded codeword is stored.
  • the preset decoding termination condition may be: reaching a preset maximum decoding level.
  • Obtaining the final decoding output result according to all the decoded codewords may be: selecting, from the successfully decoded codewords among all the decoded codewords, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the final decoding output. It is understandable that if there is only one successfully decoded codeword among all the decoded codewords, that codeword is used as the final decoding output result.
  • the preset decoding termination condition may also be: obtaining the first legal codeword.
  • obtaining the final decoding output result according to all codewords obtained by decoding may be: using the first legal codeword as the final decoding output result.
  • the decoding method provided in this embodiment obtains the number of symbol flips of each variable node in the preset set during the first decoding process.
  • When the first decoding fails, multi-level decoding is performed according to the following decoding process: according to the number of symbol flips of each variable node obtained in the previous level of decoding, the one variable node with the largest number of symbol flips is selected, the LLR values corresponding to the selected variable node are set to preset positive and negative values, respectively, to generate two LLR sequences, the two LLR sequences are decoded separately to obtain two codewords, and for each of the two codewords the number of symbol flips of each variable node in the preset set is updated. When the preset decoding termination condition is not met, the next level of decoding is performed on the two codewords according to the above decoding process; when the preset decoding termination condition is met, decoding is terminated and the final decoding output result is obtained according to all codewords obtained by decoding.
  • FIG. 7 is a flowchart of an embodiment of a decoding method provided by the present application. As shown in FIG. 7, this embodiment uses a receiving end (decoding side) as the execution subject for illustration. In this embodiment, the decoding method is described in detail
  • using the pruning-algorithm-based decoding termination condition. The method in this embodiment may include:
  • S301 Decode the information to be decoded for the first time, and obtain the number of symbol inversions of each variable node in the preset set during the first decoding process.
  • The number of symbol flips is the number of times the signs of the external information on all edges connected to the variable node are flipped during the decoding process; the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • step S301 reference may be made to step S101 in the foregoing method embodiment.
  • S3021 According to the number of symbol inversions of each variable node acquired by decoding at the previous level, select one variable node with the largest number of symbol inversions.
  • If there are multiple variable nodes in the preset set with the same largest number of symbol flips, from these variable nodes the one connected to the largest number of erroneous check nodes is selected, or the one with the smallest absolute LLR value is selected. Therefore, good accuracy can be ensured even in the case of codeword puncturing.
  • S3022: The LLR value corresponding to the selected variable node in the input LLR sequence is set to the preset positive value and the preset negative value, respectively, that is, the selected variable node is saturated.
  • The preset positive value may be 127,
  • and the preset negative value may be -127, thereby generating 2 LLR sequences.
  • each codeword corresponds to an LLR sequence obtained in step S3022, that is, the LLR sequence corresponding to the codeword is used as an input for the next stage of decoding.
  • The decoding method provided in this embodiment, when selecting the saturated variable node at each level of decoding, selects it according to the number of symbol flips of the variable nodes, which enlarges the spatial range of variable node selection, thereby improving
  • the accuracy of variable node selection and further improving the decoding performance.
  • The method of this embodiment can ensure good accuracy even in the case of codeword puncturing.
  • In addition, for the successfully decoded codewords, the next level of decoding is terminated, and only the codewords that failed to decode are decoded at the next level, which reduces the overall number of decodings and the decoding complexity, and yields better decoding performance.
  • FIGS. 8-11 are schematic diagrams of transmission packet error rate curves. In FIGS. 8-11, QML denotes the decoding method shown in FIG. 7 of the embodiment of the present application, and SPA denotes the Sum-Product Algorithm, which is a BP decoding algorithm. As can be seen from the figures,
  • SPA: Sum-Product Algorithm.
  • compared with SPA, there is a gain of about 0.5 dB at a transmission packet error rate of 10^-4.
  • Here the preset maximum decoding level is j_max = 6.
  • Figure 14 is a schematic diagram of the transmission packet error rate curve under different decoding termination conditions.
  • Mode one is the decoding termination condition of the embodiment of the present application, that is: the preset maximum decoding level is reached, the next level of decoding is terminated for successfully decoded codewords, and only the codewords that failed to decode are decoded at the next level.
  • The decoding termination condition of mode two is: the first legal codeword is obtained. The decoding termination condition of mode three is: the preset maximum decoding level is reached, and both the codewords that failed to decode and the codewords that were successfully decoded are decoded at the next level.
  • FIG. 16 is a schematic structural diagram of an embodiment of a decoding device provided by the present application. The device of this embodiment may include: a first decoding module 11, a selection module 12, a processing module 13, and a second decoding module 14. The first decoding module 11 is used to decode the information to be decoded for the first time, and to obtain the number of symbol flips of each variable node in the preset set during the first decoding process, the number of symbol flips being the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • The external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node;
  • the selection module 12 is used to select j variable nodes in descending order of the number of symbol flips when the first decoding fails, where j is a positive integer;
  • the processing module 13 is configured to set the log-likelihood ratio (LLR) values corresponding to the selected j variable nodes to preset positive and negative values, respectively, to generate 2^j LLR sequences;
  • the second decoding module 14 is used to decode the 2^j LLR sequences separately to obtain a decoding output result.
  • In a possible design, the variable nodes corresponding to the minimum number of symbol flips among the j variable nodes include one or more variable nodes that have the minimum number of symbol flips in the preset set and the largest number of erroneous check nodes among their connected check nodes, or one or more variable nodes that have the minimum number of symbol flips in the preset set and the smallest absolute LLR value.
  • the second decoding module is specifically used to:
  • select, from the successfully decoded codewords, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the decoding output result.
  • the device of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 4, and its implementation principle is similar, and will not be repeated here.
  • The decoding device provided in this embodiment, when selecting the saturated variable nodes, selects them according to the number of symbol flips of the variable nodes, which enlarges the spatial range of variable node selection, thereby improving the accuracy of variable node selection and further improving the decoding performance.
  • FIG. 17 is a schematic structural diagram of an embodiment of a decoding apparatus provided by the present application.
  • The apparatus of this embodiment may include: a first decoding module 21 and a second decoding module 22. The first decoding module 21 is used to decode the information to be decoded for the first time, and to obtain the number of symbol flips of each variable node in the preset set during the first decoding process, the number of symbol flips being the number of times the signs of the external information on all edges connected to the variable node are flipped during decoding.
  • the external information is the information transmitted from the variable node to the check node, or the external information is the information transmitted from the check node to the variable node.
  • The second decoding module 22 is used to perform multi-level decoding according to the following decoding process when the first decoding fails: according to the number of symbol flips of each variable node obtained in the previous level of decoding, select the one variable node with the largest number of symbol flips, set the LLR values corresponding to the selected variable node to preset positive and negative values, respectively, to generate 2 LLR sequences, decode the 2 LLR sequences separately to obtain 2 codewords, and for each of the 2 codewords update the number of symbol flips of each variable node in the preset set; when the preset decoding termination condition is not met, perform the next level of decoding on the 2 codewords according to this decoding process;
  • when the preset decoding termination condition is met, terminate decoding and obtain the final decoding output result according to all codewords obtained by decoding.
  • In a possible design, the second decoding module 22 is used to: perform the next level of decoding, according to the decoding process, on the codeword of the two codewords that failed to decode; terminate the next level of decoding for the codeword of the two codewords that was successfully decoded, and store the successfully decoded codeword of the two codewords.
  • In a possible design, the preset decoding termination condition is: reaching a preset maximum decoding level; the second decoding module 22 is used to:
  • select, from the successfully decoded codewords among all the codewords obtained, the codeword with the smallest Euclidean distance to the LLR sequence corresponding to the information to be decoded, as the final decoding output result.
  • In a possible design, the preset decoding termination condition is: obtaining the first legal codeword; the second decoding module 22 is used to: use the first legal codeword as the final decoding output result.
  • In a possible design, the second decoding module is used to select, from the variable nodes with the same number of symbol flips, the variable node connected to the largest number of erroneous check nodes, or the one with the smallest absolute LLR value.
  • the device of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 5, and its implementation principle is similar, and details are not described herein again.
  • The decoding device provided in this embodiment selects the saturated variable node at each level of decoding according to the number of symbol flips of the variable nodes, which enlarges the spatial range of variable node selection, thereby improving the accuracy of variable node selection and further improving the decoding performance.
  • the present application may divide the function modules of the sending device according to the above method example, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware or software function modules. It should be noted that the division of the modules in the embodiments of the present application is schematic, and is only a division of logical functions. In actual implementation, there may be another division manner.
  • FIG. 18 is a schematic structural diagram of a network device provided by the present application.
  • the network device 200 includes:
  • the memory 201 is used to store program instructions, and the memory 201 may be a flash (flash memory).
  • the processor 202 is configured to call and execute program instructions in the memory to implement each step in the decoding method of FIG. 4 or FIG. 5. For details, refer to the related description in the foregoing method embodiment.
  • the input / output interface 203 may also be included.
  • the input / output interface 203 may include an independent output interface and an input interface, or an integrated interface that integrates input and output.
  • the output interface is used to output data, and the input interface is used to obtain input data.
  • The output data is a general term for the data output in the foregoing method embodiments, and the input data is a general term for the data input in the foregoing method embodiments.
  • the network device 200 may be used to execute various steps and / or processes corresponding to the receiving end in the foregoing method embodiments.
  • FIG. 19 is a schematic structural diagram of a terminal device provided by the present application.
  • the terminal device 300 includes:
  • the memory 301 is used to store program instructions.
  • the memory 301 may be a flash (flash memory).
  • the processor 302 is configured to call and execute program instructions in the memory to implement each step in the decoding method of FIG. 4 or FIG. 5. For details, refer to the related description in the foregoing method embodiment.
  • the input / output interface 303 may also be included.
  • the input / output interface 303 may include an independent output interface and an input interface, or an integrated interface that integrates input and output.
  • the output interface is used to output data, and the input interface is used to obtain input data.
  • The output data is a general term for the data output in the foregoing method embodiments, and the input data is a general term for the data input in the foregoing method embodiments.
  • the terminal device 300 may be used to execute various steps and / or processes corresponding to the receiving end in the foregoing method embodiments.
  • The present application also provides a readable storage medium in which a computer program is stored, where the computer program is used to implement the decoding method in the foregoing method embodiments.
  • the present application also provides a program product, which includes a computer program, which is stored in a readable storage medium.
  • At least one processor of the decoding device can read the computer program from the readable storage medium, and the execution of the computer program by the at least one processor causes the decoding device to implement the decoding method in the above method embodiments.
  • the present application also provides a chip, the chip is connected to a memory, or a memory is integrated on the chip, and when the software program stored in the memory is executed, the decoding method in the above method embodiment is implemented.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or in a wireless manner (for example, over infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
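The memory/processor structure shared by the network device 200 and the terminal device 300 above can be pictured with the following minimal sketch, in which the program instructions held in memory are modeled as a callable and the input and output interfaces as a function argument and return value. The class and its names are illustrative assumptions only, not part of the embodiments.

```python
from typing import Callable, Optional, Sequence


class DecodingDevice:
    """Illustrative model of the device structure: program instructions
    held in memory, a processor entry point that calls and executes them,
    and separate input/output paths."""

    def __init__(self, program: Callable[[Sequence[float]], Optional[list]]):
        # "memory": stores the decoding program (modeled here as a callable)
        self.memory = program

    def process(self, input_data: Sequence[float]) -> Optional[list]:
        # "processor": calls and executes the program instructions in memory;
        # input_data arrives over the input interface and the return value
        # leaves over the output interface
        output_data = self.memory(input_data)
        return output_data
```

An instance would be constructed with the decoding program (for example, a routine implementing the method of FIG. 4 or FIG. 5) and driven entirely through process().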

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Error Detection And Correction (AREA)

Abstract

The present invention relates to a decoding method and device. The method comprises: performing first decoding on to-be-decoded information, and obtaining the number of symbol flips of each variable node in a predefined set during the first decoding process, where the number of symbol flips is the number of times the symbol of the extrinsic information on all edges connected to the variable node is flipped during the decoding process, and the extrinsic information is information transmitted by the variable node to a check node, or information transmitted by the check node to the variable node (S101); if the first decoding fails, selecting j variable nodes in descending order of the number of symbol flips, where j is a positive integer (S102); setting the values of the log-likelihood ratios (LLRs) corresponding to the j selected variable nodes to predefined positive and negative values respectively, so as to generate 2^j LLR sequences (S103); and decoding the 2^j LLR sequences respectively to obtain a decoding output result (S104). In this way, the accuracy of variable node selection can be improved, thereby improving decoding performance.
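As a rough illustration of the method summarized above, the sketch below shows how a failed first decoding could be retried over the 2^j LLR sequences obtained by forcing the j most frequently flipped variable nodes. It is not the patented implementation: the decoder interface ldpc_decode, its flip-count bookkeeping, and the saturated magnitude LLR_MAG are assumptions made for illustration only.

```python
from itertools import product

# Assumed saturated LLR magnitude used for the forced positive/negative values.
LLR_MAG = 20.0


def retry_decode(llr, ldpc_decode, candidate_set, j, max_iter=50):
    """Flip-count-guided retry decoding (illustrative sketch).

    ldpc_decode(llr, max_iter) is assumed to return a tuple
    (success, hard_decision, flip_counts), where flip_counts[v] counts how
    often the symbol of the extrinsic information on the edges of variable
    node v flipped during the iterations. candidate_set is the predefined
    set of variable-node indices.
    """
    success, decoded, flip_counts = ldpc_decode(llr, max_iter)  # S101
    if success:
        return decoded

    # S102: select the j variable nodes of the predefined set with the
    # largest symbol-flip counts (descending order).
    ranked = sorted(candidate_set, key=lambda v: flip_counts[v], reverse=True)
    chosen = ranked[:j]

    # S103: force the chosen LLRs to every +/- combination of the predefined
    # values, which yields 2**j candidate LLR sequences.
    for signs in product((+LLR_MAG, -LLR_MAG), repeat=j):
        trial = list(llr)
        for v, forced in zip(chosen, signs):
            trial[v] = forced
        # S104: decode each candidate sequence.
        success, decoded, _ = ldpc_decode(trial, max_iter)
        if success:
            return decoded  # first candidate that decodes successfully
    return None  # all 2**j retries failed
```

The abstract leaves open how the decoding output result is chosen from the 2^j attempts; the sketch simply returns the first candidate that decodes successfully, but a decoder could equally keep the most likely of all successful candidates.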
PCT/CN2019/111512 2018-10-30 2019-10-16 Procédé et dispositif de décodage WO2020088256A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811279838.3 2018-10-30
CN201811279838.3A CN111130564B (zh) 2018-10-30 2018-10-30 译码方法及装置

Publications (1)

Publication Number Publication Date
WO2020088256A1 true WO2020088256A1 (fr) 2020-05-07

Family

ID=70463728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111512 WO2020088256A1 (fr) 2018-10-30 2019-10-16 Procédé et dispositif de décodage

Country Status (2)

Country Link
CN (1) CN111130564B (fr)
WO (1) WO2020088256A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785353A (zh) * 2022-03-24 2022-07-22 山东岱微电子有限公司 低密度奇偶校验码译码方法、系统、设备、装置及介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987298B (zh) * 2023-03-20 2023-05-23 北京理工大学 基于BPL稀疏因子图选择的Polar码剪枝译码方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595424A (zh) * 2012-08-15 2014-02-19 重庆重邮信科通信技术有限公司 分量译码方法、译码器及Turbo译码方法、装置
CN104796159A (zh) * 2015-05-06 2015-07-22 电子科技大学 一种ldpc码加权比特翻转译码算法的混合提前停止迭代方法
CN105634506A (zh) * 2015-12-25 2016-06-01 重庆邮电大学 基于移位搜索算法的平方剩余码的软判决译码方法
CN106849954A (zh) * 2016-12-09 2017-06-13 西安电子科技大学 一种针对片上网络的低功耗、抗串扰的编解码方法及编解码装置
US9973212B2 (en) * 2015-09-08 2018-05-15 Storart Technology Co. Ltd. Decoding algorithm with enhanced parity check matrix and re-encoding scheme for LDPC code

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150550B (zh) * 2006-09-18 2012-02-01 国家广播电影电视总局广播科学研究院 交织低密度奇偶校验编码比特的方法、发射器和接收器
CN101132252B (zh) * 2007-09-26 2011-05-25 东南大学 低密度奇偶校验码的量化最小和译码方法
CN101436864B (zh) * 2007-11-12 2012-04-04 华为技术有限公司 一种低密度奇偶校验码的译码方法及装置
CN101355366B (zh) * 2008-06-13 2011-04-13 华为技术有限公司 低密度奇偶校验码的译码方法及装置
JP5591876B2 (ja) * 2012-06-22 2014-09-17 株式会社東芝 誤り訂正装置、誤り訂正方法およびプログラム
KR20150137430A (ko) * 2014-05-29 2015-12-09 삼성전자주식회사 통신 시스템에서 비-이진 ldpc 부호를 복호화하는 방법 및 장치
CN104218955B (zh) * 2014-09-28 2017-07-07 河南科技大学 基于比特翻转的ldpc码局部搜索译码方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595424A (zh) * 2012-08-15 2014-02-19 重庆重邮信科通信技术有限公司 分量译码方法、译码器及Turbo译码方法、装置
CN104796159A (zh) * 2015-05-06 2015-07-22 电子科技大学 一种ldpc码加权比特翻转译码算法的混合提前停止迭代方法
US9973212B2 (en) * 2015-09-08 2018-05-15 Storart Technology Co. Ltd. Decoding algorithm with enhanced parity check matrix and re-encoding scheme for LDPC code
CN105634506A (zh) * 2015-12-25 2016-06-01 重庆邮电大学 基于移位搜索算法的平方剩余码的软判决译码方法
CN106849954A (zh) * 2016-12-09 2017-06-13 西安电子科技大学 一种针对片上网络的低功耗、抗串扰的编解码方法及编解码装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785353A (zh) * 2022-03-24 2022-07-22 山东岱微电子有限公司 低密度奇偶校验码译码方法、系统、设备、装置及介质

Also Published As

Publication number Publication date
CN111130564B (zh) 2021-10-26
CN111130564A (zh) 2020-05-08

Similar Documents

Publication Publication Date Title
US11251903B2 (en) Method and coding apparatus for processing information using a polar code
JP6817452B2 (ja) レートマッチング方法、符号化装置、および通信装置
US10567994B2 (en) Method and device for transmitting data
WO2013152605A1 (fr) Procédé de décodage et dispositif de décodage de code polaire
WO2016119105A1 (fr) Procédé et dispositif de génération de code polaire
CN108282259B (zh) 一种编码方法及装置
US11728829B2 (en) Error detection in communication systems using polar coded data transmission
US11239945B2 (en) Encoding method, decoding method, apparatus, and device
WO2020077596A1 (fr) Procédé et appareil de décodage pour codes ldpc
WO2018137568A1 (fr) Procédé de codage, dispositif de codage et dispositif de communication
US11323727B2 (en) Alteration of successive cancellation order in decoding of polar codes
WO2018196786A1 (fr) Appareil et procédé d'adaptation de débit pour codes polaires
WO2020048537A1 (fr) Procédé et dispositif de codage en cascade
WO2018027669A1 (fr) Adaptation de débit pour codeur de blocs
WO2019056941A1 (fr) Procédé et dispositif de décodage et décodeur
WO2020088256A1 (fr) Procédé et dispositif de décodage
WO2019206136A1 (fr) Procédé et dispositif d'adaptation de débit et de désadaptation de débit de code polaire
CN112953569B (zh) 译码方法及装置、存储介质、电子设备、译码器
WO2018127069A1 (fr) Procédé et dispositif de codage
CN111771336B (zh) 生成极化码的设备和方法
WO2016172937A1 (fr) Procédé et dispositif pour transmettre des données par utilisation de multiples codes polaires
WO2021073338A1 (fr) Procédé de décodage et décodeur
TWI783727B (zh) 使用極化碼之通訊系統及其解碼方法
CN113162633B (zh) 极化码的译码方法及装置、译码器、设备、存储介质
US11894859B1 (en) Methods and apparatus for decoding of polar codes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19879038
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19879038
    Country of ref document: EP
    Kind code of ref document: A1