CN109547032B - Confidence propagation LDPC decoding method based on deep learning - Google Patents


Info

Publication number
CN109547032B
CN109547032B (application No. CN201811189094.6A)
Authority
CN
China
Prior art keywords
deep learning
decoding
ldpc
model
decoding model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811189094.6A
Other languages
Chinese (zh)
Other versions
CN109547032A (en)
Inventor
姜小波
汪智开
梁冠强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deyi Microelectronics Co.,Ltd.
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811189094.6A priority Critical patent/CN109547032B/en
Publication of CN109547032A publication Critical patent/CN109547032A/en
Application granted granted Critical
Publication of CN109547032B publication Critical patent/CN109547032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • H03M13/1111Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Abstract

The invention provides a belief propagation LDPC decoding method based on deep learning, which comprises the following steps: first, establishing a training sample set for LDPC decoding; second, establishing a deep learning decoding model; third, determining the input training set of the deep learning decoding model; fourth, determining the activation function of the hidden layers in the deep learning decoding model; fifth, training the deep learning decoding model on the input training set from the third step by batch gradient descent; sixth, verifying the trained deep learning decoding model, making a hard decision on the verification output, adjusting the weights w accordingly, and thereby determining the parameters of the deep learning decoding model; and seventh, inputting the LDPC code to be decoded into the deep learning decoding model with the parameters determined in the sixth step, completing LDPC decoding. The invention can decode in parallel, reduces the number of decoding iterations and the decoding complexity, and recovers the transmitted data from a sequence corrupted by noise and interference.

Description

Confidence propagation LDPC decoding method based on deep learning
Technical Field
The invention relates to the technical field of electronic communication, and in particular to a deep learning based belief propagation LDPC decoding method.
Background
Low Density Parity Check (LDPC) codes are linear block error-correcting codes with low decoding complexity and excellent performance. Early research showed that with a sufficiently long code the error rate of LDPC codes can approach the Shannon limit very closely, and beyond a certain code length their error-correction capability comes closer to the Shannon limit than that of Turbo codes, which had dominated the channel coding schemes of third-generation mobile communications. LDPC codes have therefore been widely applied in deep-space communication, optical fiber communication, satellite digital video and audio broadcasting, and other fields, and are adopted by many modern communication standards.
From another perspective, decoding an LDPC code can be regarded as a large-scale classification problem. Considering the structure of the LDPC code and the advantages of its Tanner graph, the message-passing scheme of the Belief Propagation decoding algorithm, and the connection between the iterative check-node/variable-node updates and the back propagation algorithm of deep learning, a new approach to LDPC decoding can be found in this message-passing view. Applying deep learning to belief propagation decoding greatly reduces the amount of decoding computation, reduces the number of decoding iterations and the complexity, enables parallelism, and greatly improves decoding throughput.
Traditional maximum-likelihood LDPC decoding is very difficult to implement because its computational complexity is too high, especially for long codes: for an (n, k) LDPC code (n is the codeword length, k is the information bit length), the number of codeword types 2^k grows exponentially with k, which makes classifying the data set extremely difficult.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings in the prior art and provide a confidence propagation LDPC decoding method based on deep learning.
In order to achieve the above purpose, the invention is realized by the following technical scheme: a deep learning based belief propagation LDPC decoding method, comprising the following steps:
firstly, establishing a training sample set for LDPC decoding and obtaining the check matrix H;
secondly, establishing a deep learning decoding model from the check matrix H by means of the belief propagation algorithm;
thirdly, determining the input training set of the deep learning decoding model;
fourthly, determining the activation function of the hidden layers in the deep learning decoding model, and initializing the weights w and biases b;
fifthly, training the deep learning decoding model on the input training set from the third step by batch gradient descent, obtaining the optimal weights w and biases b;
sixthly, verifying the trained deep learning decoding model, making a hard decision on the verification output and adjusting the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters;
and seventhly, inputting the LDPC code to be decoded into the deep learning decoding model with the parameters determined in the sixth step; the information bits of the LDPC code are returned at the output, completing LDPC decoding.
In the first step, establishing the training sample set for LDPC decoding means:
firstly, a coded information bit sequence X is produced by multiplying the information sequence Y to be transmitted, before it enters the channel, by the LDPC generator matrix G:
X=Y*G
wherein, for any (n, k) LDPC code, n is the codeword length and k is the information bit length; (X, Y) serves as labeled data: Y denotes a group of information sequences before entering the channel, with length k, and X denotes the information bit sequence obtained by multiplying Y by the LDPC generator matrix G, with length n;
secondly, the information bit sequence X is BPSK-modulated and white Gaussian noise is added; after initialization this yields the noisy information bit sequence X', which serves as the training sample set for LDPC decoding.
Here, BPSK-modulating the information bit sequence X, adding white Gaussian noise, and initializing to obtain the noisy information bit sequence X' as the training sample set for LDPC decoding means: set the amplitude range and step size of the white Gaussian noise and compute from them the number c of noise levels; the noisy information bit sequence X' is then an (n, c × a × num) matrix used as the LDPC decoding training sample set, where a is the number of codeword types and num is the number of random draws of each noise level.
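As an illustration of the first step, the labeled-data construction (encode with G, BPSK-modulate, add white Gaussian noise at c noise levels with num draws each) can be sketched as follows. The generator matrix, the SNR grid, and the SNR-to-sigma mapping are assumptions of this sketch, not taken from the patent:

```python
import numpy as np

def make_training_set(G, snr_dbs, num_per_noise, rng=None):
    """Build a labeled LDPC training set as described in step one.

    G             : (k, n) binary generator matrix (hypothetical example).
    snr_dbs       : noise grid, e.g. 0.5 dB to 6.0 dB in 0.5 dB steps (c levels).
    num_per_noise : random draws per codeword per noise level ("num").
    Returns (X_noisy, Y_labels): X_noisy is (n, c*a*num), Y_labels (k, c*a*num).
    """
    rng = np.random.default_rng() if rng is None else rng
    k, n = G.shape
    # All 2^k information sequences (a = 2^k codeword types; for long codes
    # the patent uses only a subset instead of every codeword).
    Y = np.array([[(i >> j) & 1 for j in range(k)] for i in range(2 ** k)])
    X = Y @ G % 2                  # encode: X = Y * G (mod 2)
    s = 1.0 - 2.0 * X              # BPSK: bit 0 -> +1, bit 1 -> -1
    cols, labels = [], []
    for snr_db in snr_dbs:
        # Assumed SNR-to-noise-std mapping for the sketch.
        sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
        for _ in range(num_per_noise):
            cols.append(s + sigma * rng.standard_normal(s.shape))
            labels.append(Y)
    return np.concatenate(cols).T, np.concatenate(labels).T
```

For the (8, 4) example with 12 noise levels and num = 100, this layout yields the 8 × 19200 training matrix described in the embodiment.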
In the second step, establishing the deep learning decoding model from the check matrix H and the belief propagation algorithm means: set the number of hidden layers and the number of belief propagation iterations; then, working from the last hidden layer back to the first, update each neuron of each hidden layer that corresponds to a '1' in the check matrix H according to the belief propagation algorithm, which yields the deep learning decoding model; the remaining, non-updated neurons serve as the biases of the deep learning decoding model.
That is, the method single-step unrolls the horizontal (variable-node-to-check-node) and vertical (check-node-to-variable-node) update passes of the belief propagation algorithm. By the structure of the LDPC code, a check node is related only to the variable nodes connected to it, so the network is not fully connected; the deep learning decoding model is built on this sparse connection pattern.
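A minimal sketch of the sparse structure described above: an unrolled BP network carries one neuron per edge (per '1') of the check matrix, so listing those edges fixes the hidden-layer width. The check matrix below is a small hypothetical example, not the one from the patent:

```python
import numpy as np

def tanner_edges(H):
    """One neuron per '1' of H: unrolling T BP iterations gives 2*T hidden
    layers (variable-to-check, then check-to-variable, per iteration), each
    of this width. A neuron connects only to neurons sharing its row (check)
    or column (variable), so the network is sparse, not fully connected."""
    checks, vars_ = np.nonzero(H)
    return list(zip(checks.tolist(), vars_.tolist()))

# Hypothetical small check matrix, for illustration only.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
print(len(tanner_edges(H)))   # width of each hidden layer = number of 1s = 8
```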
In the third step, determining the input training set of the deep learning decoding model means: selecting from the noisy information bit sequence X' (an (n, c × a × num) matrix) the rows of the related codeword bits as the input training set, an (n', c × a × num) matrix. Here the related codeword bits are the codeword positions that, in the deep learning decoding model obtained in the second step, correspond to the '1' entries of the check matrix H touched by the updated hidden-layer neurons. The deep network is built backwards and the related codeword bits are selected at the end, so n' ≤ n.
In the fourth step, the activation function of the hidden layers in the deep learning decoding model is
[equation image: hidden-layer activation function]
The activation function of the deep learning decoding model follows the decoding process of the belief propagation algorithm, whose node-update steps involve the formula
[equation image: belief propagation node-update formula]
This function is monotonic and continuous and can therefore serve as the activation function of the deep learning decoding model.
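The patent's formula images are not recoverable here. As a hedged stand-in only, tanh(x/2) is a monotonic, continuous function that appears in the variable-to-check update of belief propagation and is commonly reused as the activation of BP-unrolled decoders; whether it matches the patent's exact formula is an assumption:

```python
import numpy as np

def bp_activation(x):
    # Hypothetical stand-in for the patent's (unrecoverable) activation:
    # tanh(x/2), monotonic and continuous as the text requires.
    return np.tanh(np.asarray(x) / 2.0)

xs = np.linspace(-4.0, 4.0, 9)
assert np.all(np.diff(bp_activation(xs)) > 0)   # strictly increasing
```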
In the sixth step, verifying the trained deep learning decoding model, making a hard decision on the verification output and adjusting the weights w accordingly to reduce the decoding error rate of the deep learning decoding model and determine its parameters means: set the output range of the sigmoid function, verify the trained deep learning decoding model, make a hard decision on the verification output with the sigmoid function, and adjust the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The deep learning based belief propagation LDPC decoding method can decode in parallel, reduces the number of decoding iterations and the decoding complexity, and recovers the transmitted data from a sequence corrupted by noise and interference.
2. The method combines the characteristics of LDPC decoding well: it achieves good decoding performance without training on all codeword types, and it builds one deep learning network per decoded bit, which greatly reduces network complexity, makes full use of the structure of LDPC decoding, and minimizes the interaction between codeword bits.
Drawings
FIG. 1 is a flow chart of a deep learning-based belief propagation LDPC decoding method of the present invention;
FIG. 2 is a diagram illustrating a check matrix H of the LDPC code in the embodiment;
FIG. 3 is a network structure diagram of a decoding method in the embodiment;
FIG. 4 is a graph comparing the performance of the decoding method of the embodiment with that of the conventional decoding method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Example one
The invention takes a rate-1/2 LDPC code as an example; to show the structure of the deep learning network more clearly, the description below uses an (8, 4) LDPC code.
As shown in figs. 1 to 4, the deep learning based belief propagation LDPC decoding method first randomly generates some information sequences Y and the encoded information bit sequences X corresponding to them. The X sequences are BPSK-modulated, white Gaussian noise is added, and initialization yields X'. According to the network structure shown in fig. 3, part or all of the related codeword bits are then selected from X', together with the corresponding bit of Y, to build the deep learning decoding model.
The method comprises the following steps:
firstly, establishing a training sample set for LDPC decoding and obtaining the check matrix H;
secondly, establishing a deep learning decoding model from the check matrix H by means of the belief propagation algorithm;
thirdly, determining the input training set of the deep learning decoding model;
fourthly, determining the activation function of the hidden layers in the deep learning decoding model;
fifthly, training the deep learning decoding model on the input training set from the third step by batch gradient descent, obtaining the optimal weights w and biases b;
sixthly, verifying the trained deep learning decoding model, making a hard decision on the verification output and adjusting the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters;
and seventhly, inputting the LDPC code to be decoded into the deep learning decoding model with the parameters determined in the sixth step; the information bits of the LDPC code are returned at the output, completing LDPC decoding.
The method comprises the following specific steps:
(1) Establish the LDPC decoding training sample set: generate multiple groups of labeled data according to the selected LDPC codeword length, here (8, 4). For longer codes the size of the selected data set depends on the code length, and the invention does not require all codeword cases in the training set. White Gaussian noise is added after BPSK modulation, stepped by 0.5 dB from 0.5 dB to 6.0 dB, with many samples per noise level. In this example the selected data set has 12 × 16 × 100 = 19200 samples: 12 noise levels, 16 codeword types (for other code lengths the number of selected codeword types and the noise range differ), and 100 random draws of each noise level, so the LDPC decoding training sample set is an 8 × 19200 matrix.
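The sizing arithmetic of this embodiment can be checked directly (values taken from the text):

```python
# Values from the embodiment: noise stepped by 0.5 dB from 0.5 dB to 6.0 dB,
# all 16 codewords of the (8, 4) code, 100 random draws per noise level.
noise_levels = [0.5 + 0.5 * i for i in range(12)]   # 0.5, 1.0, ..., 6.0 dB
c, a, num = len(noise_levels), 2 ** 4, 100
samples = c * a * num
print(noise_levels[-1], samples)   # 6.0 19200 -> an 8 x 19200 training matrix
```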
(2) Establish the deep learning decoding model: as shown in figs. 2 and 3, taking the (8, 4) LDPC code as an example, the structure of the deep learning decoding model is closely tied to the check matrix. When decoding one codeword bit at a time, one must therefore determine which inputs of the data set are related to that bit. The characteristics of LDPC decoding are fully exploited: the connection pattern follows the number and positions of the '1' entries in the check matrix, so the advantages of the belief propagation algorithm are preserved, while the strong pattern recognition, data classification, and fitting capability of deep learning allows the parameters in front of the transfer function to be tuned. Compared with a BP belief propagation algorithm unrolled for the same number of iterations, this achieves better decoding, fewer iterations, and optimized transfer-function weights. The specific operation is as follows: this embodiment uses 5 iterations of the BP belief propagation algorithm, i.e. 10 hidden layers. Taking the first codeword bit as an example, neurons are connected layer by layer from the last hidden layer back to the first, and every neuron of each hidden layer corresponding to a '1' in the check matrix H is updated by the belief propagation algorithm. Each hidden layer is constructed the same way, the remaining neurons serve as biases, and the result is the deep learning decoding model. The connection patterns for the other codeword bits are analogous and can be processed in parallel.
(3) Determine the input training set of the deep learning decoding model: fig. 2 shows the check matrix H, and the first codeword bit illustrates the selection of the training set. From fig. 2, the first column (representing the first codeword bit) is related to a11 and a31; the first row, containing a11, is related to a12 and a13, and the third row, containing a31, is related to a34 and a37. The network is built backwards, and the process so far covers one vertical (column) and one horizontal (row) direction, i.e. two hidden layers. Next, vertically, a12 is related to a42; a13 has no related entry vertically and is filled with a bias; a34 is related to a24; a37 has no related entry vertically and is filled with a bias. Stepping back to the first layer in this way, the 2nd, 3rd, 4th and 7th columns turn out to be related to the first codeword bit in this embodiment, so the 2nd, 3rd, 4th and 7th rows of the 8 × 19200 matrix are selected to form a 4 × 19200 training set, which is fed into the deep learning network model for training. The selection for the other bits is similar, and the number of hidden layers is fixed accordingly; this example uses 10 hidden layers, corresponding to 5 iterations of the belief propagation algorithm.
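The backward walk above, column to rows to columns over the '1' entries of H, can be sketched generically as a bounded breadth-first search on the Tanner graph. The check matrix used below is a small hypothetical one, not the matrix of fig. 2:

```python
import numpy as np

def related_bits(H, bit, depth):
    """Variable nodes (columns of H) that reach `bit` within `depth`
    alternating row/column expansions -- a generic version of the backward
    network-building walk described in the text."""
    related, frontier = {bit}, {bit}
    for _ in range(depth):
        # Rows (checks) touching the current frontier of variable nodes.
        rows = {r for v in frontier for r in np.nonzero(H[:, v])[0]}
        # New variable nodes found in those rows.
        frontier = {v for r in rows for v in np.nonzero(H[r])[0]} - related
        related |= frontier
        if not frontier:
            break
    return sorted(int(v) for v in related)

# Hypothetical 3 x 4 check matrix (not the Fig. 2 matrix).
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(related_bits(H, 0, 2))   # [0, 1, 2]
```

The rows of the training matrix picked out by `related_bits` form the reduced (n', c × a × num) input training set for the chosen bit.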
(4) The activation function of the hidden layers in the deep learning decoding model is
[equation image: hidden-layer activation function]
(5) Train the deep learning decoding model: train it on the input training set from the third step by batch gradient descent, obtaining the optimal weights w and biases b.
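A minimal batch-gradient-descent sketch, with a single sigmoid output and mean-squared-error loss standing in for the full unrolled BP network; the loss choice, learning rate, and epoch count are assumptions of the sketch:

```python
import numpy as np

def batch_gd(X, Y, lr=0.5, epochs=500, rng=None):
    """Batch gradient descent on one sigmoid unit with MSE loss.
    X is (n_in, samples), Y is (1, samples), following the
    column-per-sample layout used in the text; every epoch takes one
    gradient step computed over the whole batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = 0.01 * rng.standard_normal((1, X.shape[0]))
    b = np.zeros((1, 1))
    m = X.shape[1]
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w @ X + b)))    # sigmoid output
        grad_z = (p - Y) * p * (1.0 - p)          # dMSE/dz per sample
        w -= lr * grad_z @ X.T / m                # average over the batch
        b -= lr * grad_z.mean(axis=1, keepdims=True)
    return w, b
```

On a toy linearly separable batch this drives the sigmoid outputs toward the correct bits, which is all the full model's training loop does at larger scale.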
(6) Set the output range (0, 1) of the sigmoid function, verify the trained deep learning decoding model, make a hard decision on the verification output with the sigmoid function, and adjust the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters.
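The hard decision in step (6) on sigmoid outputs in (0, 1) amounts to simple thresholding:

```python
import numpy as np

def hard_decision(outputs, threshold=0.5):
    """Map sigmoid outputs in (0, 1) to bits: values >= threshold become 1."""
    return (np.asarray(outputs) >= threshold).astype(int)

print(hard_decision([0.91, 0.12, 0.55]))   # [1 0 1]
```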
(7) Input the LDPC code to be decoded into the deep learning decoding model with the parameters determined in step six; the information bits of the LDPC code are returned at the output, completing LDPC decoding.
The above embodiment is a preferred embodiment of the present invention, but the present invention is not limited to it; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included in the scope of protection of the present invention.

Claims (6)

1. A deep learning based belief propagation LDPC decoding method, characterized by comprising the following steps:
firstly, establishing a training sample set for LDPC decoding and obtaining the check matrix H;
secondly, establishing a deep learning decoding model from the check matrix H by means of the belief propagation algorithm;
thirdly, determining the input training set of the deep learning decoding model;
fourthly, determining the activation function of the hidden layers in the deep learning decoding model, and initializing the weights w and biases b;
fifthly, training the deep learning decoding model on the input training set from the third step by batch gradient descent, obtaining the optimal weights w and biases b;
sixthly, verifying the trained deep learning decoding model, making a hard decision on the verification output and adjusting the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters;
seventhly, inputting the LDPC code to be decoded into the deep learning decoding model with the parameters determined in the sixth step; the information bits of the LDPC code are returned at the output, completing LDPC decoding;
in the first step, establishing the training sample set for LDPC decoding means:
firstly, a coded information bit sequence X is produced by multiplying the information sequence Y to be transmitted, before it enters the channel, by the LDPC generator matrix G:
X=Y*G
wherein, for any (n, k) LDPC code, n is the codeword length and k is the information bit length; (X, Y) serves as labeled data: Y denotes a group of information sequences before entering the channel, with length k, and X denotes the information bit sequence obtained by multiplying Y by the LDPC generator matrix G, with length n;
secondly, the information bit sequence X is BPSK-modulated and white Gaussian noise is added; after initialization this yields the noisy information bit sequence X', which serves as the training sample set for LDPC decoding.
2. The deep learning based belief propagation LDPC decoding method of claim 1, characterized in that: BPSK-modulating the information bit sequence X, adding white Gaussian noise, and initializing to obtain the noisy information bit sequence X' as the training sample set for LDPC decoding means: setting the amplitude range and step size of the white Gaussian noise and computing from them the number c of noise levels; the noisy information bit sequence X' is an (n, c × a × num) matrix used as the LDPC decoding training sample set, where a is the number of codeword types and num is the number of random draws of each noise level.
3. The deep learning based belief propagation LDPC decoding method of claim 2, characterized in that: in the second step, establishing the deep learning decoding model from the check matrix H and the belief propagation algorithm means: setting the number of hidden layers and the number of belief propagation iterations; then, working from the last hidden layer back to the first, updating each neuron of each hidden layer that corresponds to a '1' in the check matrix H according to the belief propagation algorithm to obtain the deep learning decoding model; the remaining, non-updated neurons serve as the biases of the deep learning decoding model.
4. The deep learning based belief propagation LDPC decoding method of claim 3, characterized in that: in the third step, determining the input training set of the deep learning decoding model means: selecting from the noisy information bit sequence X' (an (n, c × a × num) matrix) the rows of the related codeword bits as the input training set, an (n', c × a × num) matrix; wherein the related codeword bits are the codeword positions that, in the deep learning decoding model obtained in the second step, correspond to the '1' entries of the check matrix H touched by the updated hidden-layer neurons; n' ≤ n.
5. The deep learning based belief propagation LDPC decoding method of claim 1, characterized in that: in the fourth step, the activation function of the hidden layers in the deep learning decoding model is
[equation image: hidden-layer activation function]
6. The deep learning based belief propagation LDPC decoding method of claim 1, characterized in that: in the sixth step, verifying the trained deep learning decoding model, making a hard decision on the verification output and adjusting the weights w accordingly to reduce the decoding error rate of the deep learning decoding model and determine its parameters means: setting the output range of the sigmoid function, verifying the trained deep learning decoding model, making a hard decision on the verification output with the sigmoid function, and adjusting the weights w accordingly so as to reduce the decoding error rate of the deep learning decoding model and determine its parameters.
CN201811189094.6A 2018-10-12 2018-10-12 Confidence propagation LDPC decoding method based on deep learning Active CN109547032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811189094.6A CN109547032B (en) 2018-10-12 2018-10-12 Confidence propagation LDPC decoding method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811189094.6A CN109547032B (en) 2018-10-12 2018-10-12 Confidence propagation LDPC decoding method based on deep learning

Publications (2)

Publication Number Publication Date
CN109547032A CN109547032A (en) 2019-03-29
CN109547032B true CN109547032B (en) 2020-06-19

Family

ID=65843971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811189094.6A Active CN109547032B (en) 2018-10-12 2018-10-12 Confidence propagation LDPC decoding method based on deep learning

Country Status (1)

Country Link
CN (1) CN109547032B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110109995B (en) * 2019-05-14 2021-12-17 中国矿业大学 Fully mechanized mining face multi-source heterogeneous data fusion method based on deep learning
CN110430013B (en) * 2019-07-15 2020-10-02 华中科技大学 RCM method based on deep learning
CN110730006B (en) * 2019-10-25 2023-06-16 华南理工大学 LDPC code error correction method and error correction module for MCU
CN110739977B (en) * 2019-10-30 2023-03-21 华南理工大学 BCH code decoding method based on deep learning
CN114448570B (en) * 2022-01-28 2024-02-13 厦门大学 Deep learning decoding method of distributed joint information source channel coding system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8286048B1 (en) * 2008-12-30 2012-10-09 Qualcomm Atheros, Inc. Dynamically scaled LLR for an LDPC decoder
CN106571831A (en) * 2016-10-28 2017-04-19 华南理工大学 LDPC hard decision decoding method based on depth learning and decoder
CN106877883A (en) * 2017-02-16 2017-06-20 南京大学 A kind of LDPC interpretation methods and device based on limited Boltzmann machine

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8286048B1 (en) * 2008-12-30 2012-10-09 Qualcomm Atheros, Inc. Dynamically scaled LLR for an LDPC decoder
CN106571831A (en) * 2016-10-28 2017-04-19 华南理工大学 LDPC hard decision decoding method based on depth learning and decoder
CN106877883A (en) * 2017-02-16 2017-06-20 南京大学 A kind of LDPC interpretation methods and device based on limited Boltzmann machine

Also Published As

Publication number Publication date
CN109547032A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109547032B (en) Confidence propagation LDPC decoding method based on deep learning
CN1132320C (en) Optimal soft-output decoder for tail-biting trellis codes
CN105763203B (en) Multi-element LDPC code decoding method based on hard reliability information
CN111565051B (en) Self-learning normalized bias minimum sum decoding method for LDPC code
CN109286405B (en) Low-complexity polarization code progressive bit flipping SC decoding method
CN109586730B (en) Polarization code BP decoding algorithm based on intelligent post-processing
CN109586732B (en) System and method for encoding and decoding LDPC codes with medium and short codes
CN110233628B (en) Self-adaptive belief propagation list decoding method for polarization code
CN107565978B (en) BP decoding method based on Tanner graph edge scheduling strategy
US8468438B2 (en) Method and apparatus for elementary updating a check node during decoding of a block encoded with a non-binary LDPC code
CN111106839A (en) Polarization code decoding method and device based on neural network
CN110995279A (en) Polarization code combined SCF spherical list overturning decoding method
CN107947802B (en) Method for coding and decoding rate compatible low density parity check code and coder
CN110730008B (en) RS code belief propagation decoding method based on deep learning
CN110739977B (en) BCH code decoding method based on deep learning
CN112165338A (en) Estimation method for interleaving relation of convolutional code random interleaving sequence
CN106130565B (en) Method for obtaining RC-LDPC convolutional code by RC-LDPC block code
CN113556135B (en) Polarization code belief propagation bit overturn decoding method based on frozen overturn list
CN114374397A (en) Method for three-dimensional Turbo product code decoding structure and optimizing iteration weight factor
CN111181570A (en) FPGA (field programmable Gate array) -based coding and decoding method and device
CN113872609B (en) Partial cyclic redundancy check-assisted adaptive belief propagation decoding method
CN109495114B (en) LDPC decoder construction method based on Markov Monte Carlo method
CN112968707B (en) Two-stage weighted bit-flipping decoding method of LDPC code
KR101267756B1 (en) Method for encoding and decoding rate-compatible irregular repeat multiple-state accumulate codes and apparatuses using the same
CN115642924B (en) Efficient QR-TPC decoding method and decoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211229

Address after: 518000 area a, 7th floor, building A1, Shenzhen digital technology park, 17 Gaoxin South 7th Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Deyi Microelectronics Co.,Ltd.

Address before: 510640 No. five, 381 mountain road, Guangzhou, Guangdong, Tianhe District

Patentee before: SOUTH CHINA University OF TECHNOLOGY