CN106877883A - LDPC decoding method and device based on a restricted Boltzmann machine - Google Patents


Info

Publication number
CN106877883A
Authority
CN
China
Prior art keywords
value
computing module
neuron
visible-layer neuron
energy function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710083027.5A
Other languages
Chinese (zh)
Inventor
沙金
昌晶
陈中杰
葛航旗
刘镜伯
陈帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201710083027.5A
Publication of CN106877883A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102 Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105 Decoding
    • H03M13/1111 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Error Detection And Correction (AREA)

Abstract

The present invention discloses an LDPC decoding method and device based on a restricted Boltzmann machine. The method combines traditional LDPC decoding with the restricted Boltzmann machine from neural networks: the LDPC decoding process is modeled with the mature mathematical description of the restricted Boltzmann machine, an energy function is determined, and the decoded codeword is obtained by training to minimize that energy function. By introducing the restricted Boltzmann machine, which can describe high-dimensional nonlinear mappings, the method and device carry out iterative decoding at a finer granularity and achieve better results than the standard BP decoding algorithm, the best previously available.

Description

LDPC decoding method and device based on a restricted Boltzmann machine
Technical field
The present invention relates to the technical field of electronic communication, and in particular to an LDPC decoding method and device based on a restricted Boltzmann machine.
Background art
Low-density parity-check (LDPC) codes are a modern coding technique that approaches the Shannon limit. Owing to their superior performance and their suitability for parallel implementation, LDPC codes have been adopted by many modern communication standards. In current practical applications such as mobile communication and SSD error correction, however, the bit error rate performance of LDPC under traditional BP decoding is not good enough, and new methods and devices are needed to further reduce the bit error rate and meet the demands of these applications.
The LDPC decoding method and device based on a restricted Boltzmann machine are built on the theory of the restricted Boltzmann machine, a model from the field of neural networks. A restricted Boltzmann machine can drive the values of its visible-layer neurons to a stable maximum-likelihood Boltzmann distribution. Its advantage is that it is a structure that can accurately describe high-dimensional nonlinear mappings, and that training by gradient descent can adjust the values of the visible-layer neurons at a fine granularity; it can therefore reach a lower bit error rate than the traditional BP decoding method.
Summary of the invention
To find a method that performs better than the previously best BP decoding algorithm, the present invention proposes a new LDPC decoding method and device based on a restricted Boltzmann machine, further reducing the bit error rate. Using the idea of the restricted Boltzmann machine, the present invention recovers the transmitted data as faithfully as possible from a received sequence containing noise and interference, and can be used for data error correction in communication receivers and in SSDs.
Technical scheme: an LDPC decoding method based on a restricted Boltzmann machine comprises the following steps:
(1) Determine the check matrix H according to the application requirements; its size is m × n, its column weight is L, and its row weight is K.
(2) Build the Tanner graph from the check matrix and determine the connections between the variable nodes and the check nodes.
(3) Build the restricted Boltzmann machine model from the Tanner graph: the variable nodes of the Tanner graph serve as the visible-layer neurons and the check nodes serve as the hidden-layer neurons.
(4) Construct the output function of each hidden-layer neuron from the Boolean expression of the K-input XOR, e = X1 ⊕ X2 ⊕ … ⊕ XK. Rewrite this Boolean expression as a sum of minterms; in it, replace each Boolean variable Xi by the real-valued expression 1 + xi and each complemented variable X̄i by the real-valued expression 1 − xi, where xi is the value of the information bit.
(5) Construct the energy function of the restricted Boltzmann machine:
E = ½ Σ_{j=1}^{m} e_j²
where e_j is the output of the j-th hidden-layer neuron from (4) and E is the energy of the whole model.
(6) Assign the received values of the information bits after BPSK modulation to the visible-layer neurons.
(7) Feed-forward computation: the values of the visible-layer neurons are passed to the hidden-layer neurons, and the output functions determined in (4) are used to compute the hidden-layer neuron values e_j and the value of the energy function E.
(8) Feedback computation: according to the energy function value obtained in the current iteration, modify the values of the visible-layer neurons by gradient descent as follows:
Δx_i = α Σ_{j=1}^{L} e_j · ∂e_j/∂x_i
where x_i is the value of the i-th information bit and α is the learning rate, which regulates the step size of each iteration; the sum runs over the gradients contributed by the L hidden-layer neurons in which the i-th information bit participates.
(9) Update the values of the visible-layer neurons and make a hard decision: a visible-layer neuron whose value is greater than or equal to 0 is set to 1, and one whose value is less than 0 is set to −1; substitute the result into the energy function of (5). If E = 0, decoding has succeeded; otherwise repeat steps (7) and (8).
(10) When the number of iterations exceeds the set maximum, stop decoding and output the current result directly.
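To make steps (4)-(10) concrete, consider the two-input case: e = X1 ⊕ X2 = X1·X̄2 + X̄1·X2, which the substitution of step (4) turns into the real-valued function e = (1 + x1)(1 − x2) + (1 − x1)(1 + x2) = 2 − 2·x1·x2; it vanishes exactly when the check is satisfied (x1·x2 = 1) and is differentiable in every x_i. The sketch below implements this construction and the iteration of steps (6)-(10) in NumPy. It is a minimal illustration under stated assumptions, not the patent's implementation: the function names are invented here, the energy is taken as E = ½ Σ_j e_j² (the form consistent with the update formula of step (8)), and the update is written with the sign that decreases E.

import itertools
import numpy as np

def build_checks(H):
    """Step (4): for each check (hidden-layer neuron), record its variable
    nodes and the sign patterns of the odd-parity minterms of the K-input
    XOR (X_i -> factor (1 + x_i), complemented X_i -> factor (1 - x_i))."""
    checks = []
    for row in np.asarray(H):
        idx = np.flatnonzero(row)
        K = len(idx)
        signs = np.array([p for p in itertools.product((1.0, -1.0), repeat=K)
                          if p.count(1.0) % 2 == 1])    # 2^(K-1) minterms
        checks.append((idx, signs))
    return checks

def forward(x, checks):
    """Step (7): hidden-layer outputs e_j and gradients de_j/dx_i; the
    per-minterm factors are computed once and reused for the gradients."""
    e = np.zeros(len(checks))
    grads = []
    for j, (idx, signs) in enumerate(checks):
        factors = 1.0 + signs * x[idx]                  # (minterms, K)
        e[j] = factors.prod(axis=1).sum()
        g = np.empty(len(idx))
        for k in range(len(idx)):
            others = np.delete(factors, k, axis=1).prod(axis=1)
            g[k] = (signs[:, k] * others).sum()         # d(prod)/dx_k
        grads.append((idx, g))
    return e, grads

def rbm_decode(y, H, alpha=0.01, max_iter=400):
    """Steps (6)-(10): gradient-based iteration on the visible layer."""
    checks = build_checks(H)
    x = np.asarray(y, dtype=float).copy()               # step (6)
    for _ in range(max_iter):
        e, grads = forward(x, checks)                   # step (7)
        hard = np.where(x >= 0, 1.0, -1.0)              # step (9): hard decision
        e_hard, _ = forward(hard, checks)
        if 0.5 * np.sum(e_hard ** 2) == 0.0:            # E = 0: success
            return hard
        dx = np.zeros_like(x)                           # step (8): feedback
        for e_j, (idx, g) in zip(e, grads):
            dx[idx] -= alpha * e_j * g                  # descent direction on E
        x += dx
    return np.where(x >= 0, 1.0, -1.0)                  # step (10): give up

For hard-decision values x_i ∈ {−1, +1}, every minterm product is either 0 or 2^K, so the E = 0 test detects a valid codeword exactly.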
A decoding device based on the restricted Boltzmann machine comprises the following parts:
(1) A control module, for controlling the execution sequence of the decoding device.
(2) A feed-forward computation module, for computing step (7) of the above method, comprising output-function computation modules in parallel, equal in number to the hidden-layer neurons, and an energy-function computation module.
(3) A feedback computation module, for computing step (8) of the above method, comprising parallel gradient computation modules and corresponding correction-value computation modules; the gradient computation modules share the intermediate results of the output-function computation modules.
(4) A visible-layer neuron assignment module, which adds the correction values output by the feedback computation module to the visible-layer neuron values of the previous iteration and feeds the result into the feed-forward computation module.
Advantages of the invention:
The present invention draws on neural network algorithms, which are widely acknowledged to be powerful: it models the LDPC decoding process with the idea of the restricted Boltzmann machine and solves for the information bits indirectly by minimizing an energy function. Owing to the strong expressive power of the restricted Boltzmann machine for high-dimensional nonlinear mappings, and to the construction of the hidden-layer neuron output functions from the Boolean expression of the XOR used here, the proposed decoding method outperforms the BP algorithm under almost all signal-to-noise conditions. The present invention can be applied to receivers of various communication systems, to data error correction in solid-state drives, and to similar scenarios.
Brief description of the drawings
Fig. 1 is the structure diagram of the restricted Boltzmann machine decoding device built by the present invention;
Fig. 2 is the flow chart of the decoding method proposed by the present invention;
Fig. 3 compares the bit error rate of the proposed method and of the BP algorithm in the embodiment.
Specific embodiment
The present invention is further described below with reference to an embodiment and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
The proposed LDPC decoding method and device based on a restricted Boltzmann machine are described in detail below, taking a (100, 200, 3, 6) LDPC code of rate 1/2 as the embodiment.
The check matrix H of the (100, 200, 3, 6) LDPC code in the embodiment is a matrix of dimension 100 × 200, with column weight 3 and row weight 6. The received information bits are y_i. Fig. 1 shows the structure diagram of the restricted Boltzmann machine decoding device.
The flow of the decoding algorithm is shown in Fig. 2; decoding proceeds step by step according to the flow chart.
(1) Determine the check matrix H, of dimension 100 × 200, with column weight 3 and row weight 6.
(2) Draw the Tanner graph from the check matrix and determine the connections between the variable nodes and the check nodes.
(3) Build the restricted Boltzmann machine model from the Tanner graph: the variable nodes of the Tanner graph serve as the visible-layer neurons and the check nodes serve as the hidden-layer neurons.
(4) Construct the output functions of the hidden-layer neurons from the Boolean expression of the 6-input XOR, e = X1 ⊕ X2 ⊕ … ⊕ X6. Rewrite this Boolean expression as a sum of minterms; there are 32 minterms in total. In each minterm, replace the Boolean variable Xi by the real-valued expression 1 + xi and the complemented variable X̄i by the real-valued expression 1 − xi, where xi is the value of the information bit.
(5) Construct the energy function of the restricted Boltzmann machine:
E = ½ Σ_{j=1}^{100} e_j²
where e_j is the output of the j-th hidden-layer neuron in (4) and E is the energy of the whole model.
(6) Assign the received values of the information bits after BPSK modulation to the visible-layer neurons: x_i = y_i.
(7) Feed-forward computation: the values of the visible-layer neurons are passed to the hidden-layer neurons, and the output functions determined in (4) are used to compute the hidden-layer neuron values e_j and the value of the energy function E.
(8) Feedback computation: according to the energy function value obtained in the current iteration, modify the values of the visible-layer neurons by gradient descent as follows:
Δx_i = α Σ_{j=1}^{3} e_j · ∂e_j/∂x_i
where x_i is the value of the i-th information bit and α is the learning rate, taken here as 0.01, which regulates the step size of each iteration; the sum runs over the gradients contributed by the 3 hidden-layer neurons in which each information bit participates.
(9) Update the values of the visible-layer neurons, x_i = x_i + Δx_i, and make a hard decision: a visible-layer neuron whose value is greater than or equal to 0 is set to 1, and one whose value is less than 0 is set to −1; substitute the result into (5). If E = 0, decoding has succeeded; otherwise repeat steps (7) and (8).
(10) When the number of iterations exceeds the set maximum, stop decoding and output the current result directly; the maximum number of iterations is taken here as 400.
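A toy run of the sketch given earlier under the embodiment's settings (α = 0.01, at most 400 iterations) can be set up as follows. A small hand-made parity-check matrix stands in for the 100 × 200, column-weight-3, row-weight-6 matrix of the embodiment, which is not reproduced in the text, and the BPSK mapping bit 1 ↔ +1, bit 0 ↔ −1 implied by the minterm substitution is an assumption, not stated in the patent.

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])     # toy stand-in: 3 checks on 6 bits

sent = -np.ones(6)                     # all-zero codeword; bit 0 -> -1 (assumed mapping)
rng = np.random.default_rng(0)
y = sent + 0.5 * rng.normal(size=6)    # received soft values, x_i = y_i

decoded = rbm_decode(y, H, alpha=0.01, max_iter=400)
print(decoded)                          # ideally equals `sent`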
The LDPC decoding device based on a restricted Boltzmann machine comprises a control module, a feed-forward computation module, a feedback computation module, and a visible-layer neuron assignment module. Under the regulation of the control module, at the start of each iteration the device feeds the information bits held in the visible-layer neuron assignment module into the feed-forward computation module; the hidden-layer neuron values and the energy function value produced by the feed-forward computation are fed into the feedback computation module, which computes the gradient values and the correction values of the visible-layer neurons; finally, the visible-layer neurons add the initial information-bit values of the current iteration to the correction values to obtain the information bits of the next iteration;
The control module controls the execution sequence of the decoding device;
The feed-forward computation module passes the values of the visible-layer neurons to the hidden-layer neurons and computes the hidden-layer neuron values e_j and the energy function value E using the constructed hidden-layer output functions; it comprises output-function computation modules in parallel, equal in number to the hidden-layer neurons, and an energy-function computation module. Each output-function computation module computes the hidden-layer neuron value e_j from the input visible-layer neuron values x_i and outputs it to the energy-function computation module. The energy-function computation module computes the energy function E from the outputs of the output-function computation modules; if E = 0 it outputs a stop-iteration signal to the control module, otherwise it outputs the value of the energy function E to the feedback computation module;
The feedback computation module modifies the values of the visible-layer neurons by gradient descent according to the energy function value obtained in the current iteration, as follows:
Δx_i = α Σ_{j=1}^{L} e_j · ∂e_j/∂x_i
where x_i is the value of the i-th information bit and α is the learning rate, which regulates the step size of each iteration; the sum runs over the gradients contributed by the L hidden-layer neurons in which each information bit participates. The feedback computation module comprises parallel gradient computation modules and corresponding correction-value computation modules. Each gradient computation module computes, from the hidden-layer neuron values output by the feed-forward computation module, the gradient with respect to each visible-layer neuron, i.e. ∂e_j/∂x_i; because these gradients share intermediate results with the output-function computation modules, the gradient computation modules reuse the intermediate results of the output-function computation modules. Each correction-value computation module receives the gradient values output by the gradient computation modules, performs the weighted summation and multiplication by the learning rate according to the above formula, and outputs Δx_i to the visible-layer neuron assignment module;
The visible-layer neuron assignment module adds the correction values output by the feedback computation module to the visible-layer neuron values of the previous iteration and feeds the result into the feed-forward computation module.
When decoding with this decoding device, proceed as follows.
(1) Initialization: input the initial values of the received information bits into the visible-layer neuron assignment module.
(2) Input the visible-layer neuron values x_i into the feed-forward computation module; the output-function computation modules compute the hidden-layer neuron values e_j in parallel, and the energy-function computation module computes the energy function E from the e_j output by the output-function computation modules. If E = 0, a stop-iteration signal is sent to the control module, and the current visible-layer neuron values are the successfully decoded values; otherwise, E and e_j are output to the feedback computation module.
(3) The gradient computation modules in the feedback computation module compute the corresponding gradient values ∂e_j/∂x_i from e_j and the intermediate results of the feed-forward computation module and pass them to the correction-value computation modules; the correction-value computation modules compute the correction values Δx_i of the visible-layer neurons from the gradient values and the hidden-layer neuron values and input them to the visible-layer neuron assignment module.
(4) The visible-layer neuron assignment module adds the initial values x_i of the current iteration to the correction values Δx_i to obtain the initial values of the next iteration.
(5) When the iteration counter in the control module reaches 400, decoding is stopped and the current values of the visible-layer neurons are output.
The decoding method proposed by the present invention combines neural network algorithms, widely acknowledged to be powerful, with LDPC decoding: the LDPC decoding process is modeled with the idea of the restricted Boltzmann machine, the hidden-layer neuron output functions are constructed from the Boolean expression of the XOR, and the information bits are solved for indirectly by minimizing the energy function. In addition, the present invention selects an appropriate learning rate and stopping condition to improve the efficiency of the decoding algorithm. Fig. 3 compares the bit error rate of the proposed method and of the BP algorithm on the (100, 200, 3, 6) rate-1/2 LDPC code; it can be seen that the proposed decoding method outperforms the BP algorithm under almost all signal-to-noise conditions.
The decoding device proposed by the present invention is built on the foregoing decoding method and comprises a control module, a feed-forward computation module, a feedback computation module, and a visible-layer neuron assignment module. Because the gradient computation in the feedback process needs a large number of intermediate results from the feed-forward computation, the device effectively shares these intermediate results, which greatly reduces redundant computation and lowers computational complexity. In addition, the device pipelines and parallelizes the processing flow, which greatly increases computation speed and reduces decoding latency.
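The intermediate-result sharing described here can be made concrete. The gradient of a minterm product with respect to x_k is sign_k times the product of the remaining feed-forward factors, and all K such leave-one-out products can be formed from the factor array already computed in the feed-forward pass using prefix and suffix products, in O(K) per minterm instead of K separate (K − 1)-factor multiplications. The snippet below is one plausible realization of this reuse, not taken from the patent:

import numpy as np

def leave_one_out(factors):
    """factors: the K feed-forward factors (1 + sign_i * x_i) of one minterm.
    Returns p with p[k] = product of all factors except factors[k]."""
    K = len(factors)
    prefix = np.ones(K + 1)             # prefix[k] = prod(factors[:k])
    suffix = np.ones(K + 1)             # suffix[k] = prod(factors[k:])
    for k in range(K):
        prefix[k + 1] = prefix[k] * factors[k]
        suffix[K - 1 - k] = suffix[K - k] * factors[K - 1 - k]
    return prefix[:K] * suffix[1:]

print(leave_one_out(np.array([2.0, 0.5, 1.5, 0.0])))    # -> [0. 0. 0. 1.5]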
The above embodiment is one implementation of the present invention, but the method proposed by the present invention is not limited to this embodiment; any modification, substitution, combination, or simplification made without departing from the essence and principles of the present invention is an equivalent replacement and falls within the protection scope of the present invention.

Claims (3)

1. An LDPC decoding method based on a restricted Boltzmann machine, characterized by comprising the following steps:
(1) determining the check matrix H according to the application requirements, its size being m × n, its column weight being L, and its row weight being K;
(2) building the Tanner graph from the positions of the "1"s in the check matrix and determining the connections between the variable nodes and the check nodes;
(3) building the restricted Boltzmann machine model from the Tanner graph, the variable nodes of the Tanner graph serving as the visible-layer neurons and the check nodes serving as the hidden-layer neurons;
(4) constructing the output functions of the hidden-layer neurons from the Boolean expression of the K-input XOR, e = X1 ⊕ X2 ⊕ … ⊕ XK; rewriting this Boolean expression as a sum of minterms, replacing each Boolean variable Xi in it by the real-valued expression 1 + xi and each complemented variable X̄i by the real-valued expression 1 − xi, where xi is the value of the information bit;
(5) constructing the energy function of the restricted Boltzmann machine: E = ½ Σ_{j=1}^{m} e_j², where e_j is the output of the j-th hidden-layer neuron in (4) and E is the energy of the whole model;
(6) assigning the received values of the information bits after BPSK modulation to the visible-layer neurons;
(7) feed-forward computation: the values of the visible-layer neurons are passed to the hidden-layer neurons, and the output functions determined in (4) are used to compute the hidden-layer neuron values e_j and the energy function value E;
(8) feedback computation: according to the energy function value obtained in the current iteration, modifying the values of the visible-layer neurons by gradient descent as follows:
Δx_i = α Σ_{j=1}^{L} e_j · ∂e_j/∂x_i
where x_i is the value of the i-th information bit and α is the learning rate, which regulates the step size of each iteration; the sum runs over the gradients contributed by the L hidden-layer neurons in which the i-th information bit participates;
(9) updating the values of the visible-layer neurons and making a hard decision, a visible-layer neuron whose value is greater than or equal to 0 being set to 1 and one whose value is less than 0 being set to −1, and substituting the result into the energy function of (5); if E = 0, decoding has succeeded; otherwise repeating steps (7) and (8);
(10) when the number of iterations exceeds the set maximum, stopping decoding and outputting the current result directly.
2. The LDPC decoding method based on a restricted Boltzmann machine according to claim 1, characterized in that: the LDPC decoding process is modeled with a restricted Boltzmann machine; continuously differentiable hidden-layer neuron output functions are constructed from the Boolean expression of the XOR; the optimal assignment of the information bits is obtained by minimizing the energy function; and gradient descent is used in minimizing the energy function, the step size being adjusted adaptively according to the distance from the optimization target.
3. An LDPC decoding device based on a restricted Boltzmann machine, characterized by comprising a control module, a feed-forward computation module, a feedback computation module, and a visible-layer neuron assignment module; under the regulation of the control module, at the start of each iteration the device feeds the information bits held in the visible-layer neuron assignment module into the feed-forward computation module; the hidden-layer neuron values and the energy function value produced by the feed-forward computation are fed into the feedback computation module, which computes the gradient values and the correction values of the visible-layer neurons; finally, the visible-layer neurons add the initial information-bit values of the current iteration to the correction values to obtain the information bits of the next iteration;
the control module controls the execution sequence of the decoding device;
the feed-forward computation module passes the values of the visible-layer neurons to the hidden-layer neurons and computes the hidden-layer neuron values e_j and the energy function value E using the constructed hidden-layer output functions; it comprises output-function computation modules in parallel, equal in number to the hidden-layer neurons, and an energy-function computation module; each output-function computation module computes the hidden-layer neuron value e_j from the input visible-layer neuron values x_i and outputs it to the energy-function computation module; the energy-function computation module computes the energy function E from the outputs of the output-function computation modules; if E = 0 it outputs a stop-iteration signal to the control module, otherwise it outputs the value of the energy function E to the feedback computation module;
the feedback computation module modifies the values of the visible-layer neurons by gradient descent according to the energy function value obtained in the current iteration, as follows:
Δx_i = α Σ_{j=1}^{L} e_j · ∂e_j/∂x_i
where x_i is the value of the i-th information bit and α is the learning rate, which regulates the step size of each iteration; the sum runs over the gradients contributed by the L hidden-layer neurons in which each information bit participates; the feedback computation module comprises parallel gradient computation modules and corresponding correction-value computation modules; each gradient computation module computes, from the hidden-layer neuron values output by the feed-forward computation module, the gradient with respect to each visible-layer neuron, i.e. ∂e_j/∂x_i; because these gradients share intermediate results with the output-function computation modules, the gradient computation modules reuse the intermediate results of the output-function computation modules; each correction-value computation module receives the gradient values output by the gradient computation modules, performs the weighted summation and multiplication by the learning rate according to the above formula, and outputs Δx_i to the visible-layer neuron assignment module;
the visible-layer neuron assignment module adds the correction values output by the feedback computation module to the visible-layer neuron values of the previous iteration and feeds the result into the feed-forward computation module.
CN201710083027.5A 2017-02-16 2017-02-16 LDPC decoding method and device based on a restricted Boltzmann machine Pending CN106877883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710083027.5A CN106877883A (en) LDPC decoding method and device based on a restricted Boltzmann machine


Publications (1)

Publication Number Publication Date
CN106877883A 2017-06-20

Family

ID=59166266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710083027.5A Pending 2017-02-16 2017-02-16 LDPC decoding method and device based on a restricted Boltzmann machine

Country Status (1)

Country Link
CN (1) CN106877883A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101355406A * 2008-09-18 2009-01-28 上海交通大学 Decoder for layered irregular low-density parity-check codes and decoding method
CN105049060A * 2015-08-14 2015-11-11 航天恒星科技有限公司 Decoding method and device for low-density parity-check (LDPC) codes
CN105337700A * 2015-11-19 2016-02-17 济南澳普通信技术有限公司 Visible light communication system over power-line carrier with code-rate-adaptive QC-LDPC coding, and its operating method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pablo M. Olmos et al., "Joint Nonlinear Channel Equalization and Soft LDPC Decoding With Gaussian Processes," IEEE Transactions on Signal Processing *
岳殿武 (Yue Dianwu), Block Coding Theory (分组编码学), Xidian University Press, 31 July 2007 *
薛飞 (Xue Fei), "Research on LDPC Decoding Algorithms Based on Neural Networks" (基于神经网络的LDPC译码算法研究), China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322288A * 2017-11-08 2018-07-24 南京大学 Joint detection and decoding scheme based on NB-LDPC codes, and tree search method
CN109995380A * 2018-01-02 2019-07-09 华为技术有限公司 Decoding method and device
WO2019134553A1 * 2018-01-02 2019-07-11 华为技术有限公司 Method and device for decoding
CN109547032A * 2018-10-12 2019-03-29 华南理工大学 Belief propagation LDPC decoding method based on deep learning
CN109547032B * 2018-10-12 2020-06-19 华南理工大学 Belief propagation LDPC decoding method based on deep learning

Similar Documents

Publication Publication Date Title
US7373581B2 (en) Device, program, and method for decoding LDPC codes
CN101453297B (en) Encoding method and apparatus for low density generation matrix code, and decoding method and apparatus
CN105024704B Low-complexity implementation method for a row-layered LDPC decoder
CN109075803B (en) Polar code encoding with puncturing, shortening and extension
CN106877883A LDPC decoding method and device based on a restricted Boltzmann machine
KR20080033381A (en) Test matrix generating method, encoding method, decoding method, communication apparatus, communication system, encoder and decoder
CN106936444B (en) Set decoding method and set decoder
CN110932734B (en) Deep learning channel decoding method based on alternative direction multiplier method
WO2021204163A1 (en) Self-learning decoding method for protograph low density parity check code and related device thereof
CN114244375B (en) LDPC normalization minimum sum decoding method and device based on neural network
CN105763203A (en) Multi-element LDPC code decoding method based on hard reliability information
CN107124251A Polar code encoding method based on arbitrary kernels
US20220068401A1 (en) Efficient read-threshold calculation method for parametric pv-level modeling
CN104393877A (en) Irregular LDPC code linear programming decoding method based on weighting
CN108092673A BP iterative decoding method and system based on dynamic scheduling
CN110545162B (en) Multivariate LDPC decoding method and device based on code element reliability dominance degree node subset partition criterion
Liu et al. A deep learning assisted node-classified redundant decoding algorithm for BCH codes
CN102594366B (en) A kind of self adaptation for LDPC code can walk abreast dynamic asynchronous BP decoding method
CN110739977B (en) BCH code decoding method based on deep learning
CN106856406A Check-node update method in a decoding method, and decoder
Yuan et al. A novel hard decision decoding scheme based on genetic algorithm and neural network
Payani et al. Decoding LDPC codes on binary erasure channels using deep recurrent neural-logic layers
EP3912094A1 (en) Training in communication systems
Berezkin et al. Models and methods for decoding of error-correcting codes based on a neural network
Drummond et al. Mapping back and forth between model predictive control and neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170620