CN100589357C - LDPC code vector decode translator and method based on unit array and its circulation shift array - Google Patents
- Publication number: CN100589357C
- Application number: CN200510114589A
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
This invention relates to a vector decoding method for LDPC codes, and a device therefor, based on the unit matrix and its cyclic shift matrices. The method first divides the received data into an array R containing n_b vectors and computes the initial values of the reliability vector array and the transfer-information vector matrix. It then uses the transfer-information vector matrix obtained from the (k-1)th iteration, the reliability vector array, and the non "-1" element values h_ij of the base matrix to compute the kth-iteration transfer-information vector matrix and reliability vector array, and makes a hard decision to test whether decoding is successful; if not, iteration continues. All vectors are 1×z soft-bit vectors. The device includes a base matrix processing module, an initial-value operation module, an iteration operation module, a hard-decision check module and a control module; all storage modules store z soft bits, the minimum operation units are z-soft-bit vector operations, and the computing unit of each module reads and writes data directly from the related storage modules.
Description
Technical Field
The present invention relates to a decoder for data transmission error correction in a digital communication system and a decoding method thereof, and particularly to a decoder for a structured low density parity check code (LDPC code) in an error correction technology in the field of digital communication and a decoding method thereof.
Background
All digital communication systems, such as communications, radar, telemetry and storage systems, as well as the internal operations of digital computers and data transmission between computers, can be summarized by the model shown in fig. 1. The source coder improves transmission efficiency, while the channel coder resists the various noises and interferences of the transmission process: by artificially adding redundant information, the system gains the capability of automatically correcting errors, ensuring the reliability of digital transmission. Low density parity check codes are a class of linear block codes that can be defined by a very sparse parity check matrix or a bipartite graph; they were originally discovered by Gallager and are therefore also called Gallager codes. After decades of silence, with the development of computer hardware and related theory, MacKay and Neal rediscovered them and demonstrated that their performance approaches the Shannon limit. Recent studies show that low density parity check codes have the following characteristics: low decoding complexity, linear-time encoding, performance approaching the Shannon limit, support for parallel decoding, and performance superior to Turbo codes at long code lengths.
The LDPC code is a linear block code based on a sparse check matrix, and the sparsity of the check matrix enables low-complexity encoding and decoding, which makes LDPC codes practical. The aforementioned Gallager code is a regular LDPC code, and Luby, Mitzenmacher et al. generalized the Gallager code to propose irregular LDPC codes. LDPC codes have many decoding algorithms, among which the information transfer algorithm (message passing algorithm), or belief propagation algorithm (BP algorithm), is the mainstream and fundamental algorithm; many other algorithms are refinements of it.
Information transfer decoding algorithm and probability domain BP algorithm:
the information transfer algorithm is a decoding algorithm based on graph theory; it is called a message passing algorithm because reliability information is passed back and forth between the variable nodes and check nodes of a bipartite graph while the algorithm runs. When the channel output symbol set in the message passing algorithm is the same as the symbol set of the messages sent during decoding, and both are the set of real numbers R (i.e. the continuous form of message passing is used), then with a properly chosen message mapping function the algorithm is equivalent to the well-known BP algorithm, i.e. the sum-product algorithm. Three commonly used specific algorithms within the BP family are described as follows:
BP algorithm of probability domain
If the check matrix of the code is H, the set of variable nodes participating in the mth check is denoted N(m) = {n : H_mn = 1}. Similarly, the set of check nodes in which the nth variable node participates is denoted M(n) = {m : H_mn = 1}. The algorithm has two alternately executed parts: the values q_mn and r_mn associated with the non-zero elements of the check matrix are updated one after the other in the iterations of the algorithm. The quantity q_mn^x is the probability that the nth variable node of the transmitted codeword takes the value x ("0" or "1"), given the messages of all check nodes other than the mth. The quantity r_mn^x is the probability that the mth check node is satisfied when the nth variable node of the transmitted codeword is known to take the value x and the other variable nodes obey the probability distributions {q_mn' : n' ∈ N(m) \ n}. If the bipartite graph corresponding to the matrix H has no loops, then after a sufficient number of iterations the algorithm yields the exact posterior probability of each variable node's value.
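For concreteness, the index sets N(m) and M(n) can be built from a toy check matrix as follows (an illustrative sketch; the matrix H below is invented for the example):

```python
# Hypothetical sketch: building the index sets N(m) and M(n) used by BP.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]
M_rows, N_cols = len(H), len(H[0])

# N(m): variable nodes participating in check m
N = {m: [n for n in range(N_cols) if H[m][n] == 1] for m in range(M_rows)}
# M(n): checks in which variable node n participates
M = {n: [m for m in range(M_rows) if H[m][n] == 1] for n in range(N_cols)}
```

Each message q_mn or r_mn exists only for the (m, n) pairs enumerated by these sets, which is why sparse storage of the non-zero positions suffices.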
The probability domain BP algorithm comprises the following steps:
a) Initialization of the variables q_mn^0 and q_mn^1: for each element H_mn = 1 of the matrix H, the quantities q_mn^0 and q_mn^1 of the variable node are initialized to f_n^0 and f_n^1, respectively. y_n is the output of the channel at time n, and σ^2 is the noise variance.
for n=0,...,N-1
for m∈M(n)
The code represents a flow of double loops, with an outer loop variable of n and an inner loop variable of m.
b) Check node update (parity node update): this step computes two probability measures for each check node m and, correspondingly, each variable node n ∈ N(m): first, the probability r_mn^0 that check node m is satisfied when x_n = 0 and the other variable nodes {x_n' : n' ≠ n} obey the mutually independent probability distributions q_mn'^0, q_mn'^1; second, correspondingly, the probability r_mn^1 that check node m is satisfied when x_n = 1.
For any m and n with H(m, n) = 1, let:
for m=0,..,M-1
for n∈N(m)
In the formulas, the backslash "\" indicates that an element is excluded: N(m) \ n denotes the set N(m) with the column index n removed, i.e. a set difference.
c) Variable node update (variable node information update): this step uses the calculated values r_mn^0 and r_mn^1 to update the probability values q_mn^0 and q_mn^1.
for n=0,...,N-1
for m∈M(n)
where α_mn and β_mn are normalization coefficients chosen so that the two probabilities sum to one; q_mn^0 is the probability that all child check nodes are satisfied when the variable node takes the value 0, and q_mn^1 the probability that all child check nodes are satisfied when it takes the value 1.
After any iteration, the pseudo posterior probabilities q_n^0 and q_n^1 that variable node n takes the values 0 and 1 can be calculated according to the following formula:
for n=0,...,N-1
d) Decoding iteration termination detection:
A hard decision is made on the pseudo posterior probabilities q_n^0 and q_n^1 to generate a trial decoding result; a parity check of this result is used to judge whether decoding is successful. If so, decoding ends and the codeword is output; otherwise, it is judged whether the number of iterations is less than a preset maximum. If so, steps b) and c) are repeated to continue the iteration; if the maximum number of iterations is reached and decoding is still unsuccessful, a decoding failure is declared.
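The check node update of step b) can be sketched as follows, using the standard closed form r_mn^0 = (1 + prod over n' in N(m)\n of δq_mn')/2 with δq = q^0 − q^1 (an assumption consistent with textbook probability-domain BP, since the patent's own formula is rendered as an image; the toy matrix is invented for the example):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
M, N = H.shape

def check_node_update(q0, q1):
    """One probability-domain check node update: for each edge (m, n),
    r_mn^0 = (1 + prod_{n' in N(m)\\n} (q0 - q1)_mn') / 2 and r_mn^1 = 1 - r_mn^0."""
    dq = q0 - q1
    r0 = np.zeros_like(q0)
    for m in range(M):
        idx = np.flatnonzero(H[m])          # N(m)
        for n in idx:
            others = [n2 for n2 in idx if n2 != n]   # N(m) \ n
            r0[m, n] = (1.0 + np.prod(dq[m, others])) / 2.0
    return r0, 1.0 - r0
```

The variable node update then multiplies the incoming r values together per column and renormalizes with α_mn, β_mn, exactly mirroring step c).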
BP algorithm of logarithmic domain
If the BP algorithm is converted to be carried out on a logarithm domain, the times of multiplication operation can be greatly reduced, and the method is suitable for practical application. At this time, the decoded message is considered to be an estimate of the information bits in the codeword, and includes two parts, symbol (sign) and reliability (reliability):
- The sign of the message indicates whether the estimate of the information bit transmitted in the channel is (-) or (+);
- the absolute value of the message, i.e. the reliability, represents the degree of confidence in the estimate of the information bit;
- a 0 in the message set represents the erasure symbol, meaning that the information bit is (+1) or (-1) with equal probability, where (+1) and (-1) correspond to "1" and "0", respectively.
The following definitions are made in the BP algorithm for the log domain:
where L_mn denotes the check-node-to-variable-node information (extrinsic information) sent from check node m to variable node n, Z_mn denotes the variable-node-to-check-node information sent from variable node n to check node m, and LLR_n denotes the log-likelihood ratio of the nth codeword bit.
The BP algorithm of the logarithm domain comprises the following steps:
a) initialization:
for n=0,...,N-1
for m∈M(n)
b) check node update
for m=0,...,M-1
for n∈N(m)
c) Variable node update
for n=0,..,N-1
for m∈M(n)
The log-likelihood ratio of the codeword bits is:
for n=0,...,N-1
d) A hard decision is then made on the codeword log-likelihood ratios LLR(q_n) to generate a trial decoding result; a parity check of this result is used to judge whether decoding is successful. If so, decoding ends and the codeword is output; otherwise, it is judged whether the number of iterations is less than the preset maximum. If so, steps b) and c) are repeated to continue the iteration; if the maximum number of iterations is reached and decoding is still unsuccessful, a decoding failure is declared.
The superscript (k) in the above formulas denotes the decoding iteration number; for a BIAWGN (Binary-Input Additive White Gaussian Noise) channel, y_n is the channel output and σ^2 is the noise variance.
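A minimal sketch of the log-domain quantities, assuming the standard BIAWGN initialization LLR_n = 2·y_n/σ² and the tanh-product form of the check node update (the patent's own formulas are not reproduced in this text, so these closed forms are assumptions taken from the standard literature):

```python
import math

def llr_init(y, sigma2):
    """Channel LLRs for a BIAWGN channel: LLR_n = 2 * y_n / sigma^2."""
    return [2.0 * yn / sigma2 for yn in y]

def check_update(Z_others):
    """Log-domain check node update for one edge:
    L_mn = 2 * atanh( prod_{n' != n} tanh(Z_mn' / 2) )."""
    prod = 1.0
    for z in Z_others:
        prod *= math.tanh(z / 2.0)
    # clamp to avoid atanh(+/-1) overflow at very high confidence
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(prod)
```

The sign of the returned L_mn is the product of the input signs, and its magnitude is dominated by the least reliable input, which is what the min-sum approximation later exploits.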
Simplified form of the log domain BP algorithm:
Combining b) and c) to eliminate Z_mn yields an equivalent simplified form of the log-domain BP algorithm, as follows.
a) Initialization:
for n=0,…,N-1
for n=0,...,N-1
for m∈M(n)
b) node update
for m=0,...,M-1
for n∈N(m)
c) Calculating the log-likelihood ratio of the code word:
for n=0,...,N-1.
d) Decision and detection: the content is the same as above.
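The control flow shared by all three variants (initialize, update, hard-decide, test parity, repeat up to a maximum iteration count) can be sketched generically, with the specific update and decision rules passed in as functions:

```python
def decode_loop(init, update, decide, parity_ok, max_iter):
    """Generic BP iteration skeleton mirroring steps a)-d) above."""
    state = init()                     # step a): initialization
    codeword = decide(state)
    for _ in range(max_iter):
        if parity_ok(codeword):
            return codeword, True      # all parity checks satisfied
        state = update(state)          # steps b)-c): node updates
        codeword = decide(state)       # step d): hard decision
    return codeword, parity_ok(codeword)
```

Only the bodies of `update` and `decide` differ between the probability-domain, log-domain, and simplified forms; the termination logic is identical.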
When the information transfer algorithm (Message Passing algorithm) or the BP algorithm is adopted, the design difficulty of the decoder mainly occurs in the storage and access of the sparse check matrix. For any non-zero element of the sparse matrix, the index of the sparse matrix needs to be stored or the pointer of the sparse matrix needs to be stored, so that the storage capacity is large, and the application of the LDPC code is further hindered. This problem becomes more pronounced when different code lengths employ different sparse check matrices. In addition, for conventional decoding algorithms, the decoder needs to expand the base matrix into a large parity check matrix, and storing such a large matrix is a problem. If the hardware structure is designed by the parity check matrix, the number of connections is large, the topology is complex, and different hardware topologies need to be designed for different code lengths. These disadvantages seriously hinder the structured LDPC codes from being practically applied, and become the development bottleneck of such low density parity check codes.
In the IEEE 802.16e standard, LDPC codes are based on unit matrices and their cyclic shift matrices. Each LDPC code has a base matrix, hereinafter referred to as the original base matrix, and the base matrices of LDPC codes with different code lengths are merely the result of correcting the values of this base matrix, hereinafter referred to as corrected base matrices. In this case, only one small base matrix needs to be stored for an LDPC code of a given code rate. LDPC codes with such structured parity check matrices will become the mainstream design. However, the prior art still lacks an effective decoding algorithm and decoder that fully exploit the characteristics of LDPC codes with this structure.
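For illustration, the expansion of such a base matrix can be sketched as follows (a sketch, not the standard's exact tables): each entry of the base matrix expands to a z×z block, with -1 expanding to the zero matrix and a value s ≥ 0 to the identity matrix cyclically shifted by s:

```python
import numpy as np

def expand_base_matrix(Hb, z):
    """Expand a base matrix into the full parity-check matrix:
    -1 -> z x z zero block, s >= 0 -> identity cyclically shifted by s."""
    mb, nb = len(Hb), len(Hb[0])
    H = np.zeros((mb * z, nb * z), dtype=int)
    I = np.eye(z, dtype=int)
    for i in range(mb):
        for j in range(nb):
            s = Hb[i][j]
            if s >= 0:
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(I, s, axis=1)
    return H
```

The decoder described below never materializes H; it keeps only Hb and performs all work on z-length vectors, which is the point of the invention.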
Disclosure of Invention
The technical problem to be solved by the invention is to provide an LDPC code vector decoding method based on the unit matrix and its cyclic shift matrices, which can realize decoding without storing the parity check matrix of the LDPC code or expanding the base matrix. The invention also provides a device for realizing the method.
In order to solve the above technical problem, the decoding method of the present invention uses the same flow and principles as conventional low density parity check decoding, but differs in the specific implementation and data structures: the minimum operation unit in all decoding operations is a vector of length z. Operations on the m×n matrix are thereby reduced to operations on the m_b×n_b base matrix, so decoding can be completed with only the base matrix rather than the full parity check matrix. The hardware topology of the decoder is likewise reduced from an m×n matrix to an m_b×n_b one, greatly reducing the number of hardware connections. More importantly, when the code length varies at a given code rate, the different LDPC codes can use decoders with the same topology.
Based on this conception, the invention provides an LDPC code vector decoding method based on the unit matrix and its cyclic shift matrices, in which the parity check matrix H corresponds uniquely to the base matrix H_b. Let the iteration number be k and the spreading factor be z; Iset(j) is the set of row indices of the non "-1" elements in column j of H_b, and Jset(i) is the set of column indices of the non "-1" elements in row i of H_b. The method comprises the following steps:
(a) dividing the received data Y = [y_0, y_1, …, y_{N-1}] input to the decoder into n_b groups, forming the received-sequence vector array R with elements R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z-1}];
(b) setting k = 0, obtaining the initial value of the reliability vector array (e.g. the vector array of codeword log-likelihood ratios or posterior probabilities) from the received-sequence vector array R, and obtaining the initial value of the transfer-information vector matrix (information from variable nodes to check nodes, or from check nodes to variable nodes), where all vectors are 1×z soft-bit vectors;
(c) using the transfer-information vector matrix and reliability vector array obtained in the (k-1)th iteration, together with the non "-1" element values h_ij^b of the base matrix, performing the update operation to obtain the transfer-information vector matrix and reliability vector array after the kth iteration, where the minimum operation unit in all operations is a vector of 1×z soft bits;
(d) making a hard decision on the reliability vector array to obtain the hard-decision vector array S, where each S_j is a 1×z row vector, and then computing the parity check vector array T from S;
(e) judging whether the vector array T is all zeros; if so, decoding succeeds, the hard-decision codeword is output, and decoding ends; otherwise, setting k = k+1 and judging again whether k is less than the maximum number of iterations; if so, returning to step (c); otherwise decoding fails and the process ends.
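Steps (d) and (e) can be sketched in vector form: under the base-matrix expansion, the parity check for base row i reduces to XOR-ing cyclically shifted 1×z hard-decision vectors (the shift direction below is an assumption tied to the chosen expansion convention):

```python
import numpy as np

def parity_check(Hb, S):
    """Vectorized parity check: T_i = XOR_{j in Jset(i)} shift(S_j, h_ij).
    S is a list of 1 x z hard-decision vectors; returns True if all T_i are 0."""
    z = len(S[0])
    for row in Hb:
        T_i = np.zeros(z, dtype=int)
        for j, s in enumerate(row):
            if s >= 0:                       # non "-1" element of the base row
                T_i ^= np.roll(S[j], -s)     # cyclic shift by h_ij, then XOR
        if T_i.any():
            return False
    return True
```

No bit of the full m×n matrix H is touched: the check works entirely on n_b vectors of z soft bits, as the method requires.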
Furthermore, the vector decoding method has the following characteristics: the method is a simplified form of log-domain vector decoding method, wherein:
in step (b), the initial values of all non-zero vectors of the check-node-to-variable-node information vector matrix U and the codeword log-likelihood ratio vector array Q are computed from the received-data vector array R by the following loop: outer loop j = 0, …, n_b - 1, inner loop i ∈ Iset(j), given by the formula, where σ^2 is the noise variance;
the step (c) is further divided into the following steps:
(c1) updating this iteration's check-node-to-variable-node information vector matrix U^(k) from the previous iteration's check-node-to-variable-node information vector matrix U^(k-1) and codeword log-likelihood ratio vector array Q^(k-1), performing the node update for all non-zero vectors; this step is completed by the following loop: outer loop i = 0, …, m_b - 1, inner loop j ∈ Jset(i), with the formula:
where Jset(i) \ j denotes the set Jset(i) excluding the column index j;
(c2) computing all non-zero vectors of this iteration's codeword log-likelihood ratio vector array Q^(k) from the initial log-likelihood ratio vector array Q^(0) and this iteration's check-node-to-variable-node information vector matrix U^(k), i.e. for any j = 0, …, n_b - 1, calculating:
and in step (d), a hard decision is made on the codeword log-likelihood ratio vector array Q^(k).
Furthermore, the vector decoding method has the following characteristics: the method is a logarithm domain vector decoding method in a general form, wherein:
in step (b), the initial values of all non-zero vectors of the variable-node-to-check-node information vector matrix V and the codeword log-likelihood ratio vector array Q are computed from the received-data vector array R by the following loop: outer loop j = 0, …, n_b - 1, inner loop i ∈ Iset(j), given by the formula, where σ^2 is the noise variance;
the step (c) is further divided into the following steps:
(c1) updating this iteration's check-node-to-variable-node information vector matrix U^(k) according to the previous iteration's V^(k-1) and R^(k-1), performing the check node update for all non-zero vectors; this step is completed by the following loop: outer loop i = 0, …, m_b - 1, inner loop j ∈ Jset(i), with the formula:
(c2) computing this iteration's variable-node-to-check-node information vector matrix V^(k) from the initial log-likelihood ratio vector array Q^(0) and this iteration's check-node-to-variable-node information vector matrix U^(k), performing the variable node update for all non-zero vectors, completed by the following loop: outer loop j = 0, …, n_b - 1, inner loop i = 0, …, m_b - 1, with the formula:
simultaneously calculating all non-zero vectors of this iteration's codeword log-likelihood ratio vector array Q^(k), i.e. for any j = 0, …, n_b - 1, calculating:
and in step (d), a hard decision is made on the codeword log-likelihood ratio vector array Q^(k).
Furthermore, the vector decoding method has the following characteristics: the method is a probability domain vector decoding method, wherein:
in step (b), the received-data vector array R is used to compute the initial values of all non-zero vectors of the variable-node-to-check-node information vector matrices Q^0 and Q^1 and the codeword probability vector arrays F^0 and F^1; this is completed by the following loop: outer loop j = 0, …, n_b - 1, inner loop i ∈ Iset(j), given by the formula:
the step (c) is further divided into the following steps:
(c1) updating this iteration's check-node-to-variable-node information vector matrices R^0(k), R^1(k) from the previous iteration's ΔQ^(k-1), performing the check node update for all non-zero vectors, completed by the following loop: outer loop i = 0, …, m_b - 1, inner loop j ∈ Jset(i), with the formula:
(c2) computing this iteration's variable-node-to-check-node information vector matrices Q^0(k), Q^1(k) from the initial codeword probability vector arrays F^0, F^1 and this iteration's check-node-to-variable-node information vector matrices R^0(k), R^1(k), performing the variable node update for all non-zero vectors, completed by the following loop: outer loop j = 0, …, n_b - 1, inner loop i ∈ Iset(j), with the formula:
meanwhile, computing all non-zero vectors of the pseudo posterior probability vector arrays F^0(k), F^1(k) for variable node n taking the values 0 and 1, from the initial codeword probability vector arrays F^0, F^1 and this iteration's check-node-to-variable-node information vector matrices R^0(k), R^1(k), i.e. for any j = 0, …, n_b - 1, calculating:
and in step (d), the hard-decision vector array S is obtained by comparing the magnitudes of F^0(k) and F^1(k).
Furthermore, the vector decoding method has the following characteristics: the operations on vectors comprise element-wise vector arithmetic, vector cyclic shifts, and vector function evaluation. Vector arithmetic (the four arithmetic operations) is performed on the corresponding elements of two vectors; multiplication of a vector by P_ij' is completed by cyclically right-shifting the vector elements by h_ij^b bits, and multiplication by P_ij'^(-1) by cyclically left-shifting them by h_ij^b bits; function evaluation on a vector is performed by applying the function to each element of the vector.
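These primitive vector operations can be sketched directly (a numpy sketch; the function names are illustrative, not the patent's):

```python
import numpy as np

def vec_add(a, b):
    """Element-wise vector arithmetic on two 1 x z vectors."""
    return a + b

def shift_mul(v, h):
    """Multiplication by P_ij': cyclic right shift of the elements by h bits."""
    return np.roll(v, h)

def shift_mul_inv(v, h):
    """Multiplication by P_ij'^{-1}: cyclic left shift by h bits."""
    return np.roll(v, -h)

def vec_fn(v, f):
    """Vector function evaluation: apply f to each element of the vector."""
    return np.array([f(x) for x in v])
```

Because a shift and its inverse cancel, shift_mul_inv(shift_mul(v, h), h) returns v, mirroring P·P^(-1) = I.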
Furthermore, the vector decoding method has the following characteristics: the check-node-to-variable-node information vectors and the variable-node-to-check-node information vectors are represented in fixed point; each vector comprises z soft bits, and each soft bit is a 6-binary-bit fixed-point value.
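A 6-bit fixed-point soft bit can be illustrated as a saturating quantizer (a sketch; the patent does not specify the scaling, so the step size below is an assumed parameter):

```python
def quantize6(x, step=0.25):
    """Quantize a soft value to a signed 6-bit fixed-point integer,
    i.e. the range [-32, 31], saturating at the extremes."""
    q = int(round(x / step))
    return max(-32, min(31, q))
```

Saturation rather than wraparound matters here: very confident LLRs must stay at the extreme code rather than flip sign.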
Furthermore, the vector decoding method has the following characteristics: the check node update processing of iterative decoding is realized using the normalized belief propagation algorithm or one of its approximations: the BP-Based algorithm, the APP-Based algorithm, the uniformly most powerful belief propagation (UMP-BP) algorithm, the min-sum algorithm, or the min-sum look-up table algorithm.
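As an example of one such approximation, the normalized min-sum algorithm replaces the product form of the check node update with a sign product and a minimum of magnitudes, scaled by a normalization factor α (the value 0.8 below is an assumption for illustration only):

```python
def min_sum_check_update(Z_others, alpha=0.8):
    """Normalized min-sum check node update for one edge:
    L = alpha * prod(sign(Z)) * min(|Z|) over the other edges."""
    sign = 1.0
    min_mag = float("inf")
    for z in Z_others:
        sign = -sign if z < 0 else sign
        min_mag = min(min_mag, abs(z))
    return alpha * sign * min_mag
```

This avoids tanh/atanh entirely, which is why it suits the fixed-point vector arithmetic described above.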
The LDPC code vector decoding device based on the unit array and the cyclic shift array thereof comprises a basic matrix processing module, an initial value operation module, an iteration operation module, a hard decision detection module and a control module, wherein:
the base matrix processing module comprises a base matrix storage unit with L storage blocks, each storage block storing one non "-1" element value h_ij^b of the base matrix H_b, where L is the number of non "-1" elements in the base matrix;
the initial-value operation module receives the input data Y = [y_0, y_1, …, y_{N-1}] and buffers it in n_b storage blocks, computes the initial value of the reliability vector array and stores it in n_b storage blocks, and obtains the initial value of the transfer-information vector matrix;
the iterative operation module performs the update operation using the transfer-information vector matrix and reliability vector array obtained in the previous iteration and the non "-1" element values h_ij^b of the base matrix, obtaining the transfer-information vector matrix and reliability vector array after this iteration;
the hard-decision detection module makes a hard decision on the reliability vector array obtained by iteration to obtain the hard-decision vector array S, stores it in n_b storage blocks, then computes the parity check vector array T and judges whether it is all zeros;
the control module is used for controlling other modules to complete initial value operation, iterative operation and hard decision detection, when the array T is all 0, a hard decision code word is output, decoding is successful, and the decoding is finished; when T is not all 0, judging whether the iteration times is less than the maximum iteration times, if so, continuing the next iteration, if the maximum iteration times is reached, failing to decode, and ending;
All the storage blocks store z soft bits; the operations between array and matrix elements are vector operations of size z soft bits; the computing unit of each module reads and writes data directly from the corresponding storage blocks; and the information data transmitted each time is always an integral multiple of z soft bits, where z is the expansion factor.
Further, the vector decoding device may further have the following characteristics: the size of the memory block is zmaxA soft bit, wherein zmaxIs the expansion factor corresponding to the low density parity check code with the maximum code length of the specific code rate.
Further, the vector decoding device may further have the following characteristics: and each computing unit and the corresponding storage block are fixedly connected by hardware to realize the addressing of data.
Further, the vector decoding device may further have the following characteristics: the basic matrix storage unit in the basic matrix processing module stores the element values of the original basic matrix; or, the basic matrix storage unit in the basic matrix processing module is a modified basic matrix storage unit, the processing module further includes an original basic matrix storage unit and a basic matrix modification unit, and the calculation unit of the iterative operation module is further connected to a corresponding storage block of the modified basic matrix storage unit to read data.
Further, the vector decoding device may further have the following characteristics: the initial value operation module includes:
a received-codeword vector storage unit for buffering the received codeword sequence Y = [y_0, y_1, …, y_{N-1}], stored as the received-sequence vector array in n_b storage blocks, each storage block storing one vector R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z-1}];
a vector initial-value calculation unit for reading out the received-sequence vectors R_j and calculating the initial log-likelihood ratio vector array, where σ^2 is the noise variance;
an initial log-likelihood ratio vector storage unit comprising n_b storage blocks for storing the n_b vectors Q_j of the initial log-likelihood ratio vector array.
Further, the vector decoding device may further have the following characteristics: the iterative operation module comprises a check node to variable node information vector storage unit, a node updating processing array, a bidirectional buffer network consisting of a read network and a write network and a code log-likelihood ratio calculation unit, wherein:
the check-node-to-variable-node information vector storage unit comprises L storage blocks, each used to store one of the L check-node-to-variable-node information vectors output by the node update processing array, each such vector corresponding to a non "-1" element of the base matrix;
the node update processing array is formed by M_b row calculation units, each corresponding to one of the M_b rows of the base matrix and each comprising calculation subunits corresponding to all the non "-1" elements in that row (L subunits in total); each calculation subunit reads data from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit and the codeword log-likelihood ratio vector storage unit, completes one node update operation, and writes the updated check-node-to-variable-node information vector through the write network into the corresponding storage block of the check-node-to-variable-node information vector storage unit;
in the bidirectional buffer network, for a calculation subunit corresponding to a certain non "-1" element of the base matrix in the node update processing array, the read network connects a storage block corresponding to all other non "-1" elements except the element in a row of the base matrix where the element is located in a check node to variable node information vector storage unit, and connects a storage block corresponding to all other non "-1" elements except the element in a row of the base matrix where the element is located in a codeword log-likelihood ratio vector storage unit;
the code word log likelihood ratio calculating unit is composed of nbEach calculating subunit obtains a log-likelihood ratio vector initial value and a check node to variable node information vector after the iteration from an initial log-likelihood ratio vector storage unit and the corresponding storage block in a check node to variable node information vector storage unit, and calculates a code word log-likelihood ratio vector of the iteration.
Further, the vector decoding device may further have the following characteristics: the iterative operation module comprises a check node to variable node information vector storage unit, a variable node to check node information vector storage unit, a variable node processing array, a check node processing array, a bidirectional buffer network comprising a read network A, a write network A, a read network B and a write network B, and a codeword log-likelihood ratio calculation unit, wherein:
the check node to variable node information vector storage unit comprises L storage blocks, each storage block is used for storing a check node to variable node information vector, and each check node to variable node information vector corresponds to a non-1 element in the basic matrix;
the variable node to check node information vector storage unit comprises L storage blocks, each storage block is used for storing a variable node to check node information vector, and each variable node to check node information vector corresponds to a non-1 element in the basic matrix;
the variable node processing array is composed of n_b column calculation units, each corresponding to a column of the base matrix; each calculation subunit reads data through read network B from the corresponding storage blocks of the check node to variable node information vector storage unit and the initial log-likelihood ratio vector storage unit, completes the variable node update operation, and then writes the updated variable node to check node information vectors into the corresponding storage blocks of the variable node to check node information vector storage unit through write network B;
the check node processing array is composed of m_b row calculation units, each corresponding to a row of the base matrix; each calculation subunit reads data through read network A from the corresponding storage blocks of the variable node to check node information vector storage unit, completes the check node update operation using the values of the base matrix elements corresponding to the calculation subunit, and writes the updated check node to variable node information vectors into the corresponding storage blocks of the check node to variable node information vector storage unit through write network A;
for a calculation subunit in the check node processing array corresponding to a certain non-"-1" element of the base matrix, read network A connects it to the storage blocks in the variable node to check node information vector storage unit corresponding to all other non-"-1" elements in the row of the base matrix where the element is located, and write network A connects it to the storage block corresponding to the element in the check node to variable node information vector storage unit;
for a calculation subunit in the variable node processing array corresponding to a certain non-"-1" element of the base matrix, read network B connects it to the storage blocks in the check node to variable node information vector storage unit corresponding to all other non-"-1" elements in the column of the base matrix where the element is located, and to the storage block in the initial log-likelihood ratio vector storage unit corresponding to that column; write network B connects it to the storage block corresponding to the element in the variable node to check node information vector storage unit;
the codeword log-likelihood ratio calculation unit is composed of n_b calculation subunits; each calculation subunit obtains the initial log-likelihood ratio vector and this iteration's check node to variable node information vectors from the initial log-likelihood ratio vector storage unit and the corresponding storage blocks in the check node to variable node information vector storage unit, and calculates this iteration's codeword log-likelihood ratio vector.
Further, the vector decoding device may further have the following characteristics: the hard decision detection module comprises:
a codeword log-likelihood ratio vector storage unit comprising n_b storage blocks for storing the n_b codeword log-likelihood ratio vectors Q_j^(k) obtained in each iteration;
a hard decision detection unit for performing a hard decision on the codeword log-likelihood ratio vector array Q generated by decoding to obtain n_b hard decision codeword vectors, calculating the parity check vector array T, and judging whether T is all zero;
a hard decision codeword vector storage unit comprising n_b storage blocks for storing the n_b hard decision codeword vectors obtained by the hard decision.
From the above, the present invention provides a vector BP decoding method and device for the specific code structure of the variable code length LDPC code, and compared with the conventional BP decoding method and device, the present invention has the following characteristics:
1) The parity check matrix H of the LDPC code does not need to be stored or accessed, and storing the address information of the nodes of the parity check matrix is avoided, so the required storage capacity of the decoder is significantly reduced.
2) Bit-based M×N matrix operations are converted into z-bit-vector-based m_b×n_b matrix operations, which reduces the number of node connections in the decoding array by a factor of z and gives a simple topological structure.
3) For an LDPC code of a given code rate, the decoder has a unified topological structure and decoding flow for all code lengths, making it well suited to parallel implementation.
4) The decoding topology depends only on the base matrix H_b; it is independent of the expanded matrix H, and no expansion is required.
Therefore, the vector BP decoding method provided by the invention is of great significance for LDPC codes based on the unit matrix and its cyclic shift matrices, and is well positioned to become the mainstream decoding method for such codes.
Drawings
Fig. 1 is a block diagram of a digital communication system.
FIG. 2 is a flowchart of a first embodiment of the vector BP method of the present invention.
Fig. 3 is a hardware configuration diagram of the decoding apparatus according to the first embodiment of the present invention.
FIG. 4 is a diagram of the connection relationship between check node to variable node information vector storage blocks and two node processing arrays in the application example of the present invention.
FIG. 5 is a connection diagram of a variable node-check node information vector storage block and two node processing arrays in an application example of the present invention.
Fig. 6A and 6B are structural diagrams of a check node processing array corresponding to the first row of the basis matrix in an application example.
Fig. 7A and 7B are structural diagrams of variable node processing arrays of application examples corresponding to a first column of a base matrix.
Fig. 8 is a structure diagram of a sparse matrix storage for storing and accessing a parity check matrix of a conventional decoder.
Fig. 9 is a hardware configuration diagram of a second embodiment of the decoding apparatus of the present invention.
Detailed Description
The subject of the present invention is low-density parity-check codes based on the unit matrix and its cyclic shift matrices; these basic concepts are described first.
Definition of LDPC code and basic matrix based on unit matrix and its cyclic shift matrix
Any LDPC code of a specific code rate and code length has an m × n parity check matrix H, by which the encoder and decoder of the LDPC code can be determined, where n is the codeword length, m is the number of check bits, and k = n − m is the number of systematic bits.
The check matrix H of an LDPC code based on the unit matrix and its cyclic shift matrices is composed of many z×z block square matrices P_{i,j} of the same size. H is defined as follows:
Each square matrix P_{i,j} is either the unit matrix, a cyclic shift matrix of the unit matrix, or the zero matrix. H is formed by expanding a base matrix H_b of size m_b×n_b, where n = z·n_b and m = z·m_b; z, called the expansion factor, is a positive integer greater than 1, computed as the code length n divided by the number of columns n_b of the base matrix. H_b can be divided into two parts, H_b1 corresponding to the information bits and H_b2 to the check bits, so that:
In H, the basic permutation matrix is defined as the unit matrix cyclically shifted right by one position. Each non-zero block matrix is a distinct power of this z×z basic permutation matrix, i.e., a cyclic shift matrix of the unit matrix (right shifts are assumed throughout). Each block matrix can therefore be uniquely identified by its power j: the unit matrix is denoted by power 0, the matrix cyclically shifted right by 1 by power 1, and so on; the zero matrix is denoted by "-1". Replacing each block matrix of H by its power yields an m_b×n_b power matrix H_b, which is defined as the base matrix of H.
For example, a matrix
Corresponding uniquely to the following parameters z and a 2 x 4 basis matrix Hb:
z is 3 and
the base matrix H can be formed by replacing the non-zero elements of the base matrix with a zxz unitary matrix and its cyclic shift matrix or zero matrixbIs expanded into a parity check matrix H.
The first embodiment of the decoding algorithm of the present invention:
The following constructs and derives a simplified log-domain vector BP method from the existing simplified log-domain BP decoding method. The derivation is as follows:
Let a structured LDPC code have an M×N parity check matrix H, where M is the number of check bits, N is the number of codeword bits, and K = N − M is the number of information bits. H has the structure of formula (1), i.e., H is formed from m_b×n_b blocks, each of which is a z×z zero matrix, unit matrix, or cyclic shift matrix of the unit matrix. H corresponds to exactly one base matrix, whose elements are denoted h_{ij}^b.
Define the base matrix row index set Iset(j) (the row indices of the non-"-1" elements in column j) and the column index set Jset(i) (the column indices of the non-"-1" elements in row i).
Defining a receiving sequence vector array R:
The 1×N soft received sequence Y = [y_0, y_1, …, y_{N−1}] input to the decoder is divided into n_b 1×z row vectors, z soft bits per group, represented by the vector array R; any element R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}].
Defining a codeword log-likelihood ratio vector array Q:
The 1×N codeword log-likelihood ratio sequence LLR = [LLR_0, LLR_1, …, LLR_{N−1}] of the decoder is divided into n_b 1×z row vectors, represented by the vector array Q; each element Q_j = [LLR_{jz}, LLR_{jz+1}, …, LLR_{(j+1)z−1}], so Q_j(l) = LLR_{jz+l}.
Wherein a and x are both arbitrary integers, and z is an expansion factor.
Define the non-zero element column index set N (m) of a certain row in H:
in parity-check matrix H, i and l are fixed, and the set of all non-zero elements in the iz + l th row can be derived as:
in H, the set of all non-zero column indices for the iz + l th row is:
accordingly, it can be derived that in H, the set of all non-zero element row indices for the zj + l th column is:
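As a concrete illustration of these index sets (ours, not the patent's notation): a block with shift value h has its single 1 of local row l at local column (l + h) mod z, so both N(iz+l) and M(zj+l) can be read directly off the base matrix without ever forming H. A small sketch, with `Hb` given as a list of rows and -1 marking zero blocks:

```python
def row_support(Hb, z, i, l):
    """Column indices N(iz+l) of the non-zero entries in row i*z + l of
    the expanded matrix H, read directly off base-matrix row i.  A block
    with shift h has its single 1, in local row l, at column (l + h) % z."""
    return sorted(j * z + (l + h) % z
                  for j, h in enumerate(Hb[i]) if h != -1)

def col_support(Hb, z, j, l):
    """Row indices M(zj+l) of the non-zero entries in column j*z + l of H:
    local row r of a block with shift h hits local column l exactly when
    (r + h) % z == l, i.e. r = (l - h) % z."""
    return sorted(i * z + (l - row[j]) % z
                  for i, row in enumerate(Hb) if row[j] != -1)
```

The two views are consistent by construction: column c appears in `row_support` of row r exactly when row r appears in `col_support` of column c.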
defining a check node to variable node information vector matrix U:
Define the check node to variable node information vector matrix U; each element u_ij of U is a 1×z row vector. u_ij records the z check node to variable node messages L_mn corresponding to the position of block matrix P_ij in H, and this recording is done in column order. Using the expression derived above for M(zj+l), any element u_ij(l) satisfies:
In fact, the check node to variable node information matrix U is identical to the base matrix in size, shape, and non-zero element positions.
Defining a check node to variable node information vector matrix W:
Define the matrix W; any element w_ij of W is a 1×z row vector. The shift matrix P_ij in H corresponds to the non-zero element h_ij^b in H_b; P_ij u_ij denotes the 1×z vector u_ij cyclically shifted right by h_ij^b bits. Define the z×z square matrix P_ij^{-1} satisfying P_ij P_ij^{-1} = I_{z×z}, the unit matrix of size z×z, so that P_ij^{-1} u_ij denotes u_ij cyclically shifted left by h_ij^b bits. From this definition and the expression derived above for N(iz+l), any element w_ij(l) satisfies:
Analyzing the physical meaning shows that w_ij also records the z check node to variable node messages L_mn corresponding to the position of block matrix P_ij in H, but in row order.
Defining vector array Λ:
Let Λ be a vector array in which each element is a 1×z row vector; the vector Λ_j is the result of cyclically shifting Q_j left by h_ij^b bits. Λ records the codeword log-likelihood ratio sequence, but in row order within each block matrix.
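Since every P_ij is a power of the cyclic permutation matrix, multiplying a 1×z row vector by P_ij or P_ij^{-1} never requires a matrix product: it is simply a cyclic right or left shift by h_ij^b positions. A minimal sketch of these two primitives (illustrative; the function names are ours):

```python
def p_mul(u, h):
    """P_ij * u: cyclic right shift of the 1 x z row vector u by h bits."""
    h %= len(u)
    return u[-h:] + u[:-h] if h else list(u)

def p_inv_mul(u, h):
    """P_ij^{-1} * u: cyclic left shift of u by h bits (the inverse shift)."""
    h %= len(u)
    return u[h:] + u[:h]
```

Because the shift is taken modulo z, a shift by h and by h + z are identical, matching the fact that the basic permutation matrix has order z.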
Based on the above definition, the check node update formula of the logarithm domain BP decoding algorithm is converted into a vector form.
According to
$$L_{mn}^{(k)} = 2\tanh^{-1}\!\!\prod_{n' \in N(m)\setminus n}\tanh\!\left(\frac{LLR_{n'}^{(k-1)} - L_{mn'}^{(k-1)}}{2}\right)$$
From the definitions of w_ij and Λ_j, we have
Therefore:
$$u_{ij}^{(k)} = P_{ij}\,2\tanh^{-1}\!\!\prod_{j' \in Jset(i)\setminus j}\tanh\!\left(\frac{P_{ij'}^{-1}Q_{j'}^{(k-1)} - P_{ij'}^{-1}u_{ij'}^{(k-1)}}{2}\right) \qquad (2)$$
According to the codeword log-likelihood ratio calculation, we therefore have:
$$Q_j^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j)} u_{i'j}^{(k)} \qquad (3)$$
where u_ij records L_mn in 1×z row-vector form, Q_j records LLR_n in 1×z row-vector form, and P_ij is a z×z block matrix of H. In expressions (2) and (3) the algorithm takes the 1×z row vector as its minimum basic operation element.
Defining a hard decision vector array S:
The 1×N sequence obtained by hard decision on the log-likelihood ratios is divided into n_b 1×z row vectors of z bits each, represented by the vector array S; each element S_j of S is a 1×z row vector with:
defining a parity check vector array T:
Define the vector array T, where each T_i is a z×1 column vector. Let T = H·S^T. If T is all zero, then H·S^T = 0 and S is a valid codeword.
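The parity check T = H·S^T can likewise be evaluated without ever forming H: row iz+l of H touches bit (l + h_ij^b) mod z of each hard-decision vector S_j whose base-matrix entry is not -1. A hedged sketch (function names are illustrative):

```python
def syndrome(Hb, z, S):
    """Parity-check vector array T = H * S^T (mod 2), computed from the
    base matrix alone.  S is a list of n_b hard-decision 1 x z row
    vectors; returns the m_b syndrome vectors T_i as length-z lists."""
    T = []
    for row in Hb:
        t = [0] * z
        for l in range(z):
            for j, h in enumerate(row):
                if h != -1:
                    t[l] ^= S[j][(l + h) % z]  # XOR = addition over GF(2)
        T.append(t)
    return T

def is_codeword(Hb, z, S):
    """Decoding succeeds when T is all zero."""
    return all(v == 0 for t in syndrome(Hb, z, S) for v in t)
```

With the toy base matrix `[[0, 1]]` and z = 2 (so H = [I | P]), the word [1, 0, 0, 1] satisfies both checks while [1, 0, 1, 1] does not.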
The following describes the flow of the simplified log-domain vector BP algorithm of this embodiment; the relevant definitions are restated first:
define the check node to variable node information vector matrix U, whose elements u_ij are all 1×z row vectors;
The flow is shown in fig. 2, and comprises the following steps:
Step 110: divide the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into the n_b elements R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}] of the received sequence vector array R;
Step 120: let k = 0 and, using the received sequence vector array R, initialize all non-zero vectors of the check node to variable node information vector matrix U and the codeword log-likelihood ratio vector array Q:
for j=0,...,nb-1
for i∈Iset(j)
where Iset(j) is the row index set of the non-"-1" elements in column j of H_b and σ² is the noise variance.
Step 130: from the previous iteration's check node to variable node information vector matrix U^(k−1) and codeword log-likelihood ratio vector array Q^(k−1), update all non-zero vectors of this iteration's check node to variable node information vector matrix U^(k) (node update);
for i=0,...,mb-1
for j∈Jset(i)
where Jset(i) is the column index set of the non-"-1" elements in row i of H_b. The update formula (4) can be completed in two steps, a magnitude (absolute value) calculation and a sign calculation:
where Φ(x) = −log(tanh(x/2)) = log(coth(x/2)), and x is a real number greater than zero.
Step 140: from the initial log-likelihood ratio vector array Q^(0) and this iteration's check node to variable node information vector matrix U^(k), calculate all non-zero vectors of this iteration's codeword log-likelihood ratio vector array Q^(k);
for j=0,...,nb-1
Step 150: perform a hard decision on Q^(k) to obtain the hard decision vector array S, and compute the parity check vector array T = H·S^T;
Step 160: judge whether T is all zero; if so, go to step 190, otherwise execute the next step;
Step 170: let k = k + 1 and determine whether the iteration count k is less than a preset maximum K_max; if so, return to step 130, otherwise execute the next step;
step 180, declaring decoding failure and ending;
and 190, judging that the decoding is successful, outputting a hard decision codeword sequence, and ending.
It can be seen that the minimum operation unit of this embodiment's algorithm is a 1×z row vector: addition, subtraction, multiplication, and division are element-wise vector operations; multiplying P_ij by a vector means cyclically shifting that vector right by h_ij^b bits; multiplying P_ij^{-1} by a vector means cyclically shifting it left by h_ij^b bits; and applying a function f to a vector means applying f to each element of the vector. Since all operations are vector operations, we call the algorithm of the present invention the vector BP algorithm.
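To make the vector-level data flow of steps 110-190 concrete, the following sketch runs the whole loop with NumPy, replacing the Φ-based check update of formula (4) with the simpler min-sum approximation (so it illustrates the vector structure, not a bit-exact rendering of the patented update). Base-matrix rows are assumed to contain at least two non-"-1" entries; all names are ours.

```python
import numpy as np

def vector_min_sum_decode(Hb, z, llr, max_iter=20):
    """Log-domain vector BP loop over the base matrix Hb (-1 = zero block).
    llr: length n_b*z array of channel LLRs.  Every message is a 1 x z row
    vector; P_ij and P_ij^{-1} are realized as np.roll right/left shifts."""
    mb, nb = Hb.shape
    Q = llr.reshape(nb, z).astype(float)      # codeword LLR vectors Q_j
    Q0 = Q.copy()                             # initial vectors Q_j^(0)
    U = {(i, j): np.zeros(z)                  # check->variable vectors u_ij
         for i in range(mb) for j in range(nb) if Hb[i, j] != -1}
    for _ in range(max_iter):
        for i in range(mb):                   # check node update (min-sum)
            cols = [j for j in range(nb) if Hb[i, j] != -1]
            # w_ij = P_ij^{-1}(Q_j - u_ij): cyclic left shift by h_ij^b
            W = {j: np.roll(Q[j] - U[i, j], -Hb[i, j]) for j in cols}
            for j in cols:
                others = np.array([W[j2] for j2 in cols if j2 != j])
                sign = np.prod(np.where(others >= 0, 1.0, -1.0), axis=0)
                mag = np.min(np.abs(others), axis=0)
                U[i, j] = np.roll(sign * mag, Hb[i, j])  # shift back right
        for j in range(nb):                   # LLR update, formula (3)
            Q[j] = Q0[j] + sum(U[i, j] for i in range(mb) if Hb[i, j] != -1)
        S = (Q < 0).astype(int)               # hard decision array S
        ok = True                             # T = H * S^T via shifts only
        for i in range(mb):
            t = np.zeros(z, dtype=int)
            for j in range(nb):
                if Hb[i, j] != -1:
                    t ^= np.roll(S[j], -Hb[i, j])
            if t.any():
                ok = False
                break
        if ok:                                # T all zero: success
            return S.reshape(-1).tolist(), True
    return S.reshape(-1).tolist(), False
```

With the toy base matrix [[0, 1]] and z = 2, the word [1, 0, 0, 1] satisfies both checks, and a weakly corrupted LLR for one bit is corrected within the first iteration.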
In summary, the entire decoding method operates on the m_b×n_b base matrix rather than on the M×N matrix. Because the base matrix is very small, a complicated sparse-matrix data structure is avoided; and because a unit-matrix-based low-density parity-check code of a given code rate usually has only one base matrix, whose non-"-1" element positions are unchanged after modification for different code lengths, the same topology can be used for all code lengths. This greatly reduces the complexity of the hardware implementation and the number of connections inside the decoder.
Second embodiment of the decoding algorithm of the present invention
The present embodiment provides a corresponding vector decoding algorithm for a general form of log domain decoding method.
In this embodiment, the check node update and the variable node update are completed in two separate steps; on the basis of the first embodiment, only one additional matrix needs to be defined, the variable node to check node information vector matrix V, which stores the Z_mn values.
Define the variable node to check node information vector matrix V; each element v_ij of V is a 1×z row vector. v_ij records the z Z_mn values corresponding to the position of block matrix P_ij in H, and such records are in column order.
For any element v_ij(l) of v_ij:
The following definitions are restated: the check matrix H and base matrix H_b adopted by the decoding method, and the iteration number k; the received sequence vector array R, check node to variable node information vector matrix U, codeword log-likelihood ratio vector array Q, hard decision vector array S, and parity check vector array T are defined as in the first embodiment and are not repeated here.
The flow of the general type log-domain vector decoding method comprises the following steps:
Step A: divide the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into the n_b elements R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}] of the received sequence vector array R;
Step B: set k = 0 and, using the received data vector array R, compute the initial values of all non-zero vectors in the variable node to check node information vector matrix V and the codeword log-likelihood ratio vector array Q;
for j=0,...,nb-1
for i∈Iset(j)
where Iset(j) is the row index set of the non-"-1" elements in column j of H_b and σ² is the noise variance.
Step C: from the variable node to check node information vector matrix V^(k−1) of the last iteration, update all non-zero vectors of this iteration's check node to variable node information vector matrix U^(k) (check node update);
for i=0,...,mb-1
for j∈Jset(i)
where Jset(i) is the column index set of the non-"-1" elements in row i of H_b. Likewise, this step can be completed in two steps, magnitude and sign.
Step D: from the initial log-likelihood ratio vector array Q^(0) and this iteration's check node to variable node information vector matrix U^(k), calculate all non-zero vectors of this iteration's variable node to check node information vector matrix V^(k) (variable node update);
for j=0,...,nb-1
for i=0,...,mb-1
and simultaneously calculate all non-zero vectors of this iteration's codeword log-likelihood ratio vector array Q^(k):
for j=0,...,nb-1
The subsequent steps E to I (hard decision, judgment, and processing of the decoding result) are identical to steps 150 to 190 of the first embodiment and are not repeated here.
The log function and the hyperbolic tangent (tanh) function are used in the iterative decoding process. In practice, the approximate versions of the BP algorithm proposed by M. Fossorier et al., namely the BP-based algorithm and the APP-based algorithm, may be used in the log-domain vector decoding algorithms of both embodiments. In "Reduced Complexity Iterative Decoding of Low Density Parity Check Codes Based on Belief Propagation" (IEEE Trans. Commun., vol. 47, pp. 673-680, May 1999), Chen and Fossorier et al. propose the UMP-BP (uniformly most powerful belief propagation) algorithm and the normalized-BP algorithm to approximate the log and tanh operations, which reduces the complexity of iterative decoding with little performance degradation. In addition, approximate algorithms such as the modified normalized-BP algorithm, the min-sum algorithm, and the min-sum look-up table algorithm proposed by Samsung in "apparatus and method for decoding low density parity check codes in a communication system" may also be used. The core difference between the decoding method of the present invention and traditional decoding methods is that z bits are packed into a vector, and the vector is used as the basic operation unit of decoding.
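For illustration, the exact check-magnitude computation via Φ(x) = −log(tanh(x/2)) (which is its own inverse on x > 0) can be set against the normalized min-sum shortcut mentioned above; the scaling factor `alpha` here is an assumed typical value, not one specified by the patent:

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)) = log(coth(x/2)), x > 0; phi(phi(x)) == x."""
    return -math.log(math.tanh(x / 2.0))

def check_mag_exact(mags):
    """Exact BP magnitude of a check->variable message from the other
    incoming magnitudes: phi(sum of phi(m))."""
    return phi(sum(phi(m) for m in mags))

def check_mag_norm_min_sum(mags, alpha=0.8):
    """Normalized min-sum approximation: scale the minimum magnitude."""
    return alpha * min(mags)
```

The exact magnitude is always no larger than the smallest input, which is why scaling the minimum is a reasonable low-complexity substitute.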
Third embodiment of the decoding algorithm of the present invention
This embodiment starts from the quantities q_n^0, q_n^1, q_mn^0, q_mn^1, r_mn^0 and r_mn^1 of the ordinary probability-domain BP algorithm and constructs and derives the corresponding probability-domain vector BP algorithm. The derivation is as follows:
The elements R_j of the received sequence vector array R are all 1×z row vectors, R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}];
parity check vector array elements
In addition, the following definitions of vector arrays and vector matrices are supplemented:
Define the codeword probability vector arrays F^0 and F^1:
The 1×N sequence {q_n^0}_{1×N} of probabilities that each codeword bit of the decoder equals 0 is divided into n_b 1×z row vectors, represented by F^0; each element of F^0 is a 1×z row vector, as follows:
$$F_j^0 = [\,q_{jz}^0,\; q_{jz+1}^0,\; \ldots,\; q_{(j+1)z-1}^0\,], \qquad \forall j \in [0, 1, \ldots, n_b-1],$$

wherein $F_j^0(l) = q_{jz+l}^0,\ \forall l \in [0, 1, \ldots, z-1]$.
Similarly, {q_n^1}_{1×N} is divided into n_b 1×z row vectors, represented by F^1.
Therefore,

$$F_j^1 = [\,q_{jz}^1,\; q_{jz+1}^1,\; \ldots,\; q_{(j+1)z-1}^1\,], \qquad \forall j \in [0, 1, \ldots, n_b-1].$$
defining a check node to variable node information vector matrix R0And R1
Define the matrix R^0; each element R_ij^0 of R^0 is a 1×z row vector. R_ij^0 records the z r_mn^0 values corresponding to the position of block matrix P_ij in H, and such records are in column order.
For any element R_ij^0(l) of R_ij^0:
wherein $\forall l \in [0, 1, \ldots, z-1]$.
Define the matrix R^1; each element R_ij^1 of R^1 is a 1×z row vector. R_ij^1 records the z r_mn^1 values corresponding to the position of block matrix P_ij in H, again in column order.
For any element R_ij^1(l) of R_ij^1:
wherein $\forall l \in [0, 1, \ldots, z-1]$.
Define the vector matrices Q^0, Q^1, ΔQ, and ΔR:
Define the variable node to check node information vector matrix Q^0; each element Q_ij^0 of Q^0 is a 1×z row vector. Q_ij^0 records the z q_mn^0 values corresponding to the position of block matrix P_ij in H, in column order.
For any element Q_ij^0(l) of Q_ij^0:
wherein $\forall l \in [0, 1, \ldots, z-1]$.
Define the variable node to check node information vector matrix Q^1; each element Q_ij^1 of Q^1 is a 1×z row vector. Q_ij^1 records the z q_mn^1 values corresponding to the position of block matrix P_ij in H, in column order.
For any element Q_ij^1(l) of Q_ij^1:
wherein $\forall l \in [0, 1, \ldots, z-1]$.
By a derivation similar to that of the log-domain algorithm, the probability-domain vector BP algorithm flow is obtained as follows:
Step 1: divide the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into an array R of n_b z-bit 1×z row vectors; any element R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}];
Step 2: using the received data array R, calculate the initial values of all non-zero vectors in the variable node to check node information vector matrices Q^0 and Q^1 and in the codeword probability vector arrays F^0 and F^1;
for j=0,...,nb-1
for i∈Iset(j)
Step 3: from the ΔQ^(k−1) of the last iteration, update all non-zero vectors of this iteration's check node to variable node information vector matrices R^0(k) and R^1(k) (check node update);
for i=0,...,mb-1
for j∈Jset(i)
Step 4: from the initial codeword probability vector arrays F^0, F^1 and this iteration's check node to variable node information vector matrices R^0(k), R^1(k), calculate all non-zero vectors of this iteration's variable node to check node information vector matrices Q^0(k), Q^1(k) (variable node update);
for j=0,...,nb-1
for i∈Iset(j)
Step 5: from the initial codeword probability vector arrays F^0, F^1 and this iteration's check node to variable node information vector matrices R^0(k), R^1(k), calculate all non-zero vectors of the pseudo-posterior probability vector arrays F^0(k), F^1(k) for variable node n taking the values 0 and 1;
for j=0,...,nb-1
Step 6: obtain the hard decision vector array S from F^0(k), F^1(k); judge from T = H·S^T whether T is all zero; if so, decoding succeeds, output the hard decision codeword, and end; otherwise, continue to judge whether the iteration count is less than the preset maximum: if so, return to step 3, otherwise declare decoding failure and end.
The minimum operation unit of the decoding method of this embodiment is also a vector of 1 × z, the rule of the vector algorithm is the same as that of the previous two embodiments, and since all operations are based on vector operations, the algorithm adopted by the decoding method of this embodiment is called probability domain vector BP algorithm.
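The probability-domain and log-domain formulations carry the same information: for each bit, LLR = log(q^0/q^1) and the difference δq = q^0 − q^1 equals tanh(LLR/2), the quantity multiplied across a row in the check update. A small sketch of this correspondence (illustrative only; the function names are ours):

```python
import math

def llr_to_prob(llr):
    """Map LLR = log(q0/q1), with q0 + q1 = 1, to the pair (q0, q1)."""
    q1 = 1.0 / (1.0 + math.exp(llr))
    return 1.0 - q1, q1

def delta_q(llr):
    """q0 - q1 = tanh(llr/2): the probability-domain counterpart of the
    tanh term that appears in the log-domain check update (2)."""
    return math.tanh(llr / 2.0)
```

An LLR of zero corresponds to complete uncertainty, q^0 = q^1 = 0.5 and δq = 0.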
In summary, the calculation principle and processing flow of the vector decoding method of the present invention are the same as those of the conventional algorithm, except that z-bit data is always packed into a vector during implementation, and decoding is always based on the m_b×n_b base matrix H_b rather than the parity check matrix H. The new matrix operations always take z-bit vectors (e.g., codeword vectors, variable node to check node information vectors, and check node to variable node information vectors) as their basic operation units; the possible operations comprise vector addition, subtraction, multiplication, and division, vector shift operations, and function operations on vectors. The topology of a decoder designed according to the vector decoding method depends only on the base matrix and has no direct relation to the parity check matrix.
First embodiment of the decoding device of the present invention
Based on the LDPC code decoding method provided by the invention, an excellent LDPC decoder can be designed whose topological structure is related only to the base matrix and not to the parity check matrix, which makes it particularly suitable for structured LDPC codes with variable code length.
The parallel vector decoder implementing the vector BP algorithm in this embodiment is designed for the general-form logarithm-domain vector BP algorithm of the second embodiment; its hardware structure is shown in fig. 3. The parallel vector decoder is mainly composed of a control unit, an arithmetic processing unit, a storage unit, and a bidirectional buffer network unit. Its most important feature is that the minimum unit for transmission, storage, and calculation of all data is a vector of size z. That is, all memory cells are made up of memory blocks that can store z soft bits (each soft bit usually needs 6 bits for fixed-point representation), the minimum operation unit is also a vector of z soft bits, and the data transmitted each time through a read/write network is always an integral multiple of z soft bits.
The memory module comprises an original base matrix storage unit (Hb_MEM), a modified base matrix storage unit (Hbz_MEM), a received codeword vector storage unit (IN_MEM), an initial log-likelihood ratio vector storage unit, a hard-decision codeword vector storage unit (OUT_MEM), a codeword log-likelihood ratio vector storage unit, a variable node to check node information vector storage unit (VNOD_MEM), and a check node to variable node information vector storage unit (CNOD_MEM). Wherein:
the original base matrix storage unit comprises a number of storage blocks, each used to store one non "-1" element of the original base matrix; a storage block corresponding to a non "-1" element of the original (or modified) base matrix is called a node, and each storage block occupies 8 bits.
The modified base matrix storage unit likewise comprises a number of storage blocks, each used to store one non "-1" element of the base matrix as modified by the base matrix correction unit; these elements participate in the check node update operation. In the formulas, Pij or Pij^(-1) acts on a vector of length z as a right or left cyclic shift of the vector by hij^b positions, so the cyclic shift operations in the check node processing array are determined by the values of the base matrix elements. The correction algorithm may use modulo (mod), scale+floor, or scale+round, etc.
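The three correction methods named above can be sketched as follows (a hypothetical helper; the patent does not fix function names, and the scale factor z/zmax follows the usual reading of scale+floor and scale+round):

```python
import math

def correct_shift(h, z, z_max, method="mod"):
    """Correct a base-matrix shift value h for expansion factor z.
    '-1' elements (all-zero blocks) are left unchanged. Methods follow
    the text: modulo, scale+floor, or scale+round."""
    if h < 0:                      # "-1" marks an all-zero z×z block
        return h
    if method == "mod":
        return h % z
    if method == "scale+floor":
        return math.floor(h * z / z_max)
    if method == "scale+round":
        return int(round(h * z / z_max))
    raise ValueError(method)

print(correct_shift(11, 4, 8, "mod"))          # 3
print(correct_shift(11, 4, 8, "scale+floor"))  # 5
```

Running the corrected values once per code length and storing them in Hbz_MEM is what lets the rest of the decoder stay oblivious to the actual code length.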
A received codeword vector storage unit, used to buffer the received codeword sequence and output it to the vector initial value calculation unit; it has nb storage blocks, each storing a row vector of size z.
An initial log-likelihood ratio vector storage unit, used to store the nb initial log-likelihood ratio vectors calculated by the vector initial value calculation unit, for use by the variable node processing unit and the codeword log-likelihood ratio calculation unit.
A codeword log-likelihood ratio vector storage unit, used to store the nb codeword log-likelihood ratio vectors output by the codeword log-likelihood ratio calculation unit after each iteration.
A hard-decision codeword vector storage unit, used to store the nb hard-decision codeword vectors obtained by the hard decision detection unit after each iteration.
The check node to variable node information vector storage unit comprises L storage blocks, where L is the number of non "-1" elements in the base matrix; the blocks store the L check node to variable node information vectors output by the check node processing array, one vector per block, each corresponding to one non "-1" element of the base matrix. A check node to variable node information vector is generally fixed-point represented: it comprises z soft bits, each soft bit fixed-point encoded in 6 binary bits, with 1 bit for the sign and 5 bits for the absolute value.
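The 6-bit sign-magnitude fixed point just described (1 sign bit + 5 absolute-value bits) can be sketched as below; the scaling factor `frac_bits` is an illustrative assumption, since the text does not fix the quantization step:

```python
def to_sign_magnitude(x, frac_bits=2):
    """Quantize one soft bit to 6-bit sign-magnitude: 1 sign bit plus
    5 magnitude bits, saturating at the largest magnitude 31."""
    sign = 0 if x >= 0 else 1
    mag = min(int(abs(x) * (1 << frac_bits)), 31)   # 5-bit magnitude
    return (sign << 5) | mag

def from_sign_magnitude(w, frac_bits=2):
    """Recover the soft value from the 6-bit sign-magnitude word."""
    mag = (w & 31) / (1 << frac_bits)
    return -mag if (w >> 5) & 1 else mag
```

Sign-magnitude (rather than two's complement) matches the hardware described later, where the sign and absolute-value paths of the check node update are processed separately.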
The variable node to check node information vector storage unit likewise comprises L storage blocks, which store the L variable node to check node information vectors output by the variable node processing array, one vector per block; each variable node to check node information vector corresponds to one non "-1" element of the base matrix and is also fixed-point represented.
The operation processing module comprises the variable node processing array (VNUs), the check node processing array (CNUs), a vector initial value calculation unit, a codeword log-likelihood ratio calculation unit, a base matrix correction unit (Hb_Fix), and a hard decision detection unit (HDC). Wherein:
the variable node processing array consists of nb variable node calculation units VNU_j. Each calculation unit consists of several calculation subunits corresponding to all non "-1" elements in that variable node's column of the base matrix. Each subunit reads data through read network B from the corresponding storage blocks of the check node to variable node information vector storage unit and of the initial log-likelihood ratio vector storage unit, completes the variable node update operation (see formula (7)), and then writes the updated variable node to check node information vector through write network B into the corresponding storage block of the variable node to check node information vector storage unit.
The check node processing array consists of mb check node calculation units CNU_i. Each calculation unit consists of several calculation subunits corresponding to all non "-1" elements in that check node's row of the base matrix. Each subunit reads the variable node to check node information vectors through read network A from the corresponding storage blocks of the variable node to check node information vector storage unit, combines them with the value of the base matrix element corresponding to the subunit to complete the check node update operation (see formula (6)), and then writes the updated check node to variable node information vector through write network A into the corresponding storage block of the check node to variable node information vector storage unit.
A vector initial value calculation unit, used to calculate the nb initial log-likelihood ratio vectors from the received codeword vectors and the noise variance and write them into the initial log-likelihood ratio vector storage unit, while also calculating the initial values of the L variable node to check node information vectors and writing them into the variable node to check node information vector storage unit.
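For BPSK transmission over an AWGN channel, the initialization this unit performs can be sketched as follows. The channel model and the LLR = 2r/σ² formula are standard assumptions; the text only states that the received vectors and the noise variance are used:

```python
import numpy as np

def initial_llr_vectors(received, sigma2):
    """received: array of shape (nb, z) of received soft values.
    Returns the nb initial log-likelihood ratio vectors under the
    BPSK/AWGN assumption LLR = 2*r / sigma^2."""
    return 2.0 * np.asarray(received) / sigma2

q = initial_llr_vectors([[0.9, -1.1], [1.0, 0.5]], sigma2=0.5)
```

The initial variable node to check node vectors are then simply copies of the column's initial LLR vector, one per non "-1" element of that column.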
And the basic matrix correction unit (Hb _ Fix) is used for correcting the basic matrix according to different code lengths and storing the corrected basic matrix into the corrected basic matrix storage unit.
The codeword log-likelihood ratio calculation unit consists of nb calculation subunits; each subunit obtains the initial log-likelihood ratio vector and this iteration's check node to variable node information vectors from the initial log-likelihood ratio vector storage unit and the corresponding storage blocks of the check node to variable node information vector storage unit, and calculates the codeword log-likelihood ratio vector of this iteration.
The hard decision detection unit (HDC) is used to perform a hard decision on the codeword log-likelihood ratio vectors produced by decoding, store the resulting hard-decision codeword vectors in the hard-decision codeword vector storage unit, and judge whether the parity check vector array T is all zero; if so, decoding succeeds.
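The hard decision and the check T = H·S^T can be sketched at vector level, working directly on the base matrix instead of the expanded H. The LLR sign convention (non-negative → bit 0) and the shift direction are assumptions for illustration:

```python
import numpy as np

def hard_decision(llr_vectors):
    # LLR >= 0 -> bit 0, LLR < 0 -> bit 1 (sign convention assumed)
    return (np.asarray(llr_vectors) < 0).astype(int)

def syndrome_is_zero(Hb, s_vectors):
    """Check H*s^T = 0 using only the base matrix: each non '-1' entry
    h contributes its column's bit vector cyclically left-shifted by h
    (shift direction is an assumption), XOR-accumulated per row."""
    s_vectors = np.asarray(s_vectors)
    for row in Hb:
        t = np.zeros(s_vectors.shape[1], dtype=int)
        for j, h in enumerate(row):
            if h >= 0:
                t ^= np.roll(s_vectors[j], -h)  # left shift by h
        if t.any():
            return False
    return True
```

Because the syndrome is accumulated per base-matrix row, the check costs one z-bit XOR per node rather than a full sparse matrix-vector product.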
The operations performed in the various calculation units can be divided into inter-vector operations and intra-vector operations. The basic processing unit of an inter-vector operation is a vector; an intra-vector operation refers to an operation inside each step of a calculation unit or subunit, whose basic processing unit is a bit, generally processing z soft bits. Vector operations include the four arithmetic operations on vectors, vector cyclic shift, and function operations on vectors. The four arithmetic operations on vectors can be performed elementwise between two 1×zmax registers; the cyclic shift of a vector can be realized by a cyclic shift of a 1×zmax register; and a function operation on a vector can be completed by applying the function to each element of a 1×zmax register. Here zmax is the expansion factor corresponding to the LDPC code of maximum code length at the given code rate. Designing the vector size as zmax makes the decoder suitable for decoding at any code length without changing its topology. Vector operations are easily implemented in hardware; the specific operation logic should be determined by the chosen implementation method, e.g., various logics can be adopted to implement the log function and the hyperbolic tangent tanh function.
The bidirectional buffer network part comprises buffer networks A and B: the network A is divided into a read network A and a write network A, and the network B is divided into a read network B and a write network B. The read network A provides a read address when the check node processing array reads the variable node to check node information vector from the variable node to the check node information vector storage unit, the write network A provides a write address when the check node to variable node information vector is written into the check node to the variable node information vector storage unit by the check node processing array, the read network B provides a read address when the variable node processing array reads the check node to variable node information vector from the check node to the variable node information vector storage unit, and the write network B provides a write address when the variable node to check node information vector is written into the variable node to the check node information vector storage unit by the variable node processing array.
As described above, the L storage blocks of the variable node to check node information vector storage unit, the L storage blocks of the check node to variable node information vector storage unit, the L calculation subunits in the nb variable node calculation units, and the L calculation subunits in the mb check node calculation units each correspond one-to-one to the non "-1" elements of the base matrix. From the variable node and check node update formulas in the second embodiment of the decoding method of the invention, the following conclusions can be drawn:
for a calculation subunit in the check node processing array corresponding to a certain non "-1" element of the base matrix, the read network a connects a storage block, corresponding to all other non-1 elements except the element in the row of the base matrix where the element is located, in the variable node to check node information vector storage unit. The writing network A connects the corresponding storage block of the element in the storage unit from the check node to the variable node information vector. In addition, the read network a connects the corresponding memory block of the element in the modified base matrix memory unit to the computing subunit.
For a calculation subunit in the variable node processing array corresponding to a certain non "-1" element of the base matrix, the read network B connects the corresponding storage block of all other non-1 elements except the element in the column of the base matrix where the element is located in the storage unit from the check node to the variable node information vector, and also connects the corresponding storage block of the column of the base matrix where the element is located in the storage unit from the initial log-likelihood ratio vector. And the writing network B connects the corresponding storage block of the element in the variable node to check node information vector storage unit with the element.
In addition, the following addressing relationship also exists between the L storage blocks of the check node to variable node information vector storage unit and the nb calculation subunits of the codeword log-likelihood ratio calculation unit: the codeword log-likelihood ratio calculation subunit for a given column of the base matrix is connected to the storage blocks of the check node to variable node information vector storage unit corresponding to all non "-1" elements in that column.
It can be seen that the above correspondence is related only to the positions of the non "-1" elements in the base matrix and is very simple. The buffer network establishes the connection relationship between the operation units and the storage units; fixed connections can be built in hardware, or variable addressing can be established. For the addressing relationships in the figure, in this embodiment each calculation subunit and its corresponding storage blocks are directly connected, according to the correspondence, through a programmable array such as an FPGA to generate the read/write networks. The addressing can of course also be realized by programming in a DSP, i.e., the system establishes the addressing relationship between storage blocks and calculation subunits at run time according to the positions of the non "-1" elements in the base matrix. In that case, because the storage blocks and corresponding vectors involved are few, the system can access them directly, so there is no need to store indexes of the storage blocks or pointers to them. The addressing relationships are explained more intuitively in the application example below.
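In the spirit of the programmable realization just mentioned, the read-network connections can be generated purely from the positions of the non "-1" elements. A sketch (the shift values in the example matrix are illustrative; only the -1 / non "-1" pattern matters here):

```python
def build_read_network_A(Hb):
    """For each non '-1' element (i, j) of the base matrix, list the
    VNOD_MEM blocks its check-node subunit reads: all other non '-1'
    positions in row i."""
    conn = {}
    for i, row in enumerate(Hb):
        cols = [j for j, h in enumerate(row) if h >= 0]
        for j in cols:
            conn[(i, j)] = [(i, k) for k in cols if k != j]
    return conn

# 2x4 base matrix with the non '-1' pattern of the application example
Hb = [[2, 0, 1, -1],
      [3, 1, 2, 0]]
net = build_read_network_A(Hb)
# e.g. the subunit at (0, 0) reads the blocks for (0, 1) and (0, 2)
```

The write network and read network B are built the same way, per element and per column respectively.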
The conventional decoding algorithm is based on the parity check matrix, which requires a sparse matrix storage structure as shown in fig. 8, a two-dimensional doubly linked list. In addition to the soft bit information required for decoding, the address information for accessing each node must be stored, i.e., address pointers to the upper, lower, left, and right nodes. These address pointers are typically 32 bits each. Therefore, for each node, not only 2 soft bits of decoded data but also 4 address pointers need to be stored. The present invention avoids storing these 4 address pointers, so the memory space is saved by at least 2/3.
The control module (control unit) is mainly used for controlling and coordinating each unit to complete the following decoding process:
first, initialization
When the data is ready at the input, the decoder reads in the soft bits of the codeword (i.e., the received codeword sequence) from the I/O port and stores it in the received codeword vector storage unit every clock cycle. After the whole block of data is stored, the vector initial value calculation unit calculates the initial value of the log-likelihood ratio vector according to the read-in received code word vector and writes the initial value into the initial log-likelihood ratio vector storage unit, and also calculates the initial value from the variable node to the check node information vector and writes the initial value into the variable node to the check node information vector storage unit.
The second step, iterative decoding, is realized by the following two substeps:
in the first sub-step, the check node processing array of the decoder performs calculation of check node update to complete iterative decoding in the horizontal direction. In each clock cycle, reading out a variable node to check node information vector from each storage block of the variable node to check node information vector storage unit, sending the variable node to check node information vector to a corresponding check node sub-calculation unit to complete check node updating operation, and writing the obtained check node to variable node information vector into the corresponding storage block of the check node to variable node information vector storage unit.
In the second sub-step, the variable node processing array of the decoder performs calculation of variable node update to complete iterative decoding in the vertical direction. In each clock cycle, reading out an initial log-likelihood ratio vector or a variable node to check node information vector from each storage block of the initial log-likelihood ratio vector storage unit and the check node to variable node information vector storage unit, sending the initial log-likelihood ratio vector or the variable node to check node information vector to a corresponding variable node calculation subunit to complete variable node updating operation, and writing the obtained variable node to check node information vector into a corresponding storage block of the variable node to check node information vector storage unit;
and simultaneously, in each clock cycle, reading an initial log-likelihood ratio vector and a check node to variable node information vector from each storage block of the initial log-likelihood ratio vector storage unit and the check node to variable node information vector storage unit, sending the initial log-likelihood ratio vector and the check node to variable node information vector to a corresponding code word log-likelihood ratio calculation subunit, calculating the code word log-likelihood ratio vector of the iteration, and writing the code word log-likelihood ratio vector into a corresponding storage block of the code word log-likelihood ratio vector storage unit.
Third, decoding detection and output
A hard decision detection unit (HDC) performs a hard decision on the stored codeword log-likelihood ratio vectors, stores the resulting hard-decision codeword sequence, and checks the hard decision result. If the result is correct, decoding ends and the hard-decision codeword sequence is output; if it is wrong, the unit judges whether the maximum number of iterations has been reached: if so, decoding fails and the process ends, otherwise the process returns to the second step to continue iterative decoding.
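The three-step control flow above can be sketched as a loop; the callables stand in for the hardware units described in the text (their names and signatures are illustrative, not the patent's):

```python
def decode(q0, v0, check_update, var_update, hard_dec, syndrome_ok,
           max_iter=50):
    """Top-level control flow of the decoder: per iteration, run the
    check-node sub-step (horizontal), the variable-node sub-step
    (vertical, which also yields the codeword LLRs), then hard-decide
    and test the syndrome. Returns (codeword, success, iterations)."""
    v = v0
    s = None
    for it in range(1, max_iter + 1):
        u = check_update(v)            # sub-step 1: check node updates
        v, llr = var_update(q0, u)     # sub-step 2: variable node updates
        s = hard_dec(llr)              # hard decision detection (HDC)
        if syndrome_ok(s):
            return s, True, it         # success: output hard decision
    return s, False, max_iter          # declare decoding failure
```

Note that in the hardware both sub-steps proceed one vector per clock cycle; the loop body here abstracts a whole sub-step into one call.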
The decoder of the present embodiment using the log-domain vector BP algorithm will be described with a relatively simple application example.
The overall structure of the decoder is shown in FIG. 3. The number of variable nodes is nb = 4 and the number of check nodes is mb = 2; assuming the expansion factor z = 2, there are 7 non "-1" elements in the base matrix, and the basic unit of all information storage and operation in this application example is a vector of 1×2 soft bits.
Correspondingly, the number of variable node calculation units is also 4: each of the variable node calculation units corresponding to the 1st, 2nd, and 3rd columns comprises 2 calculation subunits, and the variable node calculation unit corresponding to the 4th column comprises 1 calculation subunit. The number of check node calculation units is 2, comprising 3 and 4 calculation subunits for the 1st and 2nd rows respectively.
In the application example, the check node to variable node information vector storage unit and the variable node to check node information vector storage unit each have 7 storage blocks, CNOD_MEM_ij and VNOD_MEM_ij, used respectively to store the check node to variable node and variable node to check node information vectors corresponding to the 7 base matrix nodes. The received sequence vector storage unit, the hard-decision codeword vector storage unit, the initial log-likelihood ratio vector storage unit, and the codeword log-likelihood ratio vector storage unit each have 4 storage blocks, storing the vector information corresponding to the nb columns of the base matrix.
The implementation of the decoding method is described below.
After the soft-decision bits are received, the initial values of the log-likelihood ratio vector array Q and of the variable node to check node information vector matrix V (whose size and non-zero positions match the base matrix) are calculated, giving Q0(0), Q1(0), Q2(0), Q3(0) and the initial values of the 7 elements of the information matrix V: v00(0), v01(0), v02(0), v10(0), v11(0), v12(0), v13(0) (there is no v03(0));
Then, the check node processing array updates the check node to variable node information matrix U(k) of this iteration. The formula is:
for i = 0, ..., mb-1
For j ∈ Jset(i)
Wherein the first check node (corresponding to the first row) comprises the operations of 3 nodes, for example, in the first iteration:
the check node update formula is completed by two processes, computing an absolute value and a sign, as follows:
for absolute value operations, there are:
where Φ(x) = -log(tanh(x/2)) = log(coth(x/2)), and x is a real number greater than 0.
The sign operation may be implemented using an AND gate. Each quantity here is a vector of z soft bits; when absolute value and sign are separated, the sign vector is represented by z×1 binary bits and the absolute value vector by z×5 binary bits. After fixed-point conversion of each soft bit, the sign is represented by 1 binary bit and the absolute value by 5 binary bits.
The operation is functionally expressed as: the first check node calculation unit CNU_0 comprises 3 parallel calculation subunits, used respectively to calculate the check node to variable node information vectors u00, u01, u02; correspondingly, the second check node calculation unit CNU_1 should include 4 calculation subunits, used respectively to calculate the check node to variable node information vectors u10, u11, u12, u13. The whole check node processing array thus has 7 calculation subunits CNU_ij, corresponding respectively to the 7 nodes of the base matrix.
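A sketch of the check node update for one base-matrix row, separating the sign and absolute-value paths as the text describes. The cyclic shifts by the base-matrix elements are omitted here for clarity, and the clipping bounds in Φ are implementation assumptions to keep the math finite:

```python
import numpy as np

def phi(x):
    # Φ(x) = -log(tanh(x/2)) = log(coth(x/2)); clip to avoid overflow
    x = np.clip(x, 1e-6, 30.0)
    return -np.log(np.tanh(x / 2.0))

def check_node_update_row(v_row):
    """From the d variable node to check node vectors of one base-matrix
    row (each of z soft bits), produce the d check node to variable node
    vectors u, each excluding its own edge's input."""
    v = np.asarray(v_row, dtype=float)           # shape (d, z)
    signs = np.where(v >= 0, 1.0, -1.0)          # sign path
    mags = phi(np.abs(v))                        # absolute-value path
    u = np.empty_like(v)
    for j in range(v.shape[0]):
        others = [k for k in range(v.shape[0]) if k != j]
        u[j] = np.prod(signs[others], axis=0) * phi(np.sum(mags[others], axis=0))
    return u
```

Each subunit of CNU_i computes one row of `u`; the exclusion of the subunit's own edge is exactly what the read-network wiring described above realizes.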
Referring to fig. 4 and fig. 5, the connection (addressing) relationships between the check node processing array and the check node to variable node and variable node to check node information vector storage units are shown. As can be seen from fig. 4, each check node calculation unit reads data from the storage blocks of the variable node to check node information vector storage unit corresponding to the non "-1" elements in its base matrix row. Although the calculation subunits within a check node calculation unit are not shown in the figure, the check node update formula shows that each calculation subunit reads data from the storage blocks corresponding to the other non "-1" elements in the row, excluding the subunit's own element; for example, the calculation subunit CNU_00 takes data from the storage blocks VNOD_MEM_01 and VNOD_MEM_02.
As can be seen from fig. 5, each check node calculation unit outputs to the storage blocks of the check node to variable node information vector storage unit corresponding to the non "-1" elements in its base matrix row. Although the calculation subunits are not shown in the figure, the check node update formula shows that subunits and storage blocks are associated and connected one-to-one through their corresponding non "-1" base matrix elements, i.e., the calculation subunit CNU_ij outputs to the storage block CNOD_MEM_ij.
Fig. 6A and 6B show the structure of the check node processing unit corresponding to the first row of the base matrix in the application example. In FIG. 6A, CLS is an abbreviation for cyclic left shift: CLS hb(ij) represents a cyclic left shift of a z-soft-bit vector by hb(ij) positions, where hb(ij) is the element in row i, column j of the base matrix Hb, which can be read from the corresponding storage block of the base matrix storage unit. Similarly, CRS hb(ij) represents a cyclic right shift of a z-soft-bit vector by hb(ij) positions. The LUT is a look-up table, mainly used to implement the function Φ(x); it may use a 3-bit piecewise linear approximation (8-level quantization), and non-uniform quantization may be used to reduce quantization error, as determined by the specific implementation algorithm adopted. All the above operations are based on vectors of z soft bits, and the LUT performs a table look-up for each element of the vector. The check node calculation unit corresponding to the second row of the base matrix has a similar structure; the two together form the check node processing array (CNUs) of the decoder. Fig. 6B shows the structure in which the check node processing unit corresponding to the first row of the base matrix implements the sign operation through an AND gate.
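A hypothetical version of the 8-entry (3-bit) LUT for Φ(x): uniform input quantization and the input range `x_max` are assumptions, and as the text notes, a non-uniform quantization could reduce the error:

```python
import numpy as np

def phi_exact(x):
    # Φ(x) = -log(tanh(x/2)), clipped for numerical safety
    return -np.log(np.tanh(np.clip(x, 1e-6, 30.0) / 2.0))

def build_phi_lut(x_max=8.0, n=8):
    """Build an n-entry table of Φ sampled at uniform bin centers."""
    step = x_max / n
    centers = (np.arange(n) + 0.5) * step
    return phi_exact(centers), step

def phi_lut(x, table, step):
    """Quantize |x| to a 3-bit index and look up Φ, elementwise."""
    a = np.abs(np.asarray(x, dtype=float))
    idx = np.minimum((a / step).astype(int), len(table) - 1)
    return table[idx]
```

Applied to a z-soft-bit vector, `phi_lut` performs exactly one table look-up per element, matching the per-element LUT behavior described above.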
Continuing with the discussion of the updating of the variable nodes in the application instance, the following equations represent:
for j=0,...,nb-1
for i=0,...,mb-1
for j=0,...,nb-1
the variable node corresponding to column 1 of the base matrix comprises two calculation subunits, whose operations are as follows:
The received codeword log-likelihood ratio corresponding to the first column of the base matrix in this iteration is:
fig. 7 shows the structure of the variable node calculation unit corresponding to the first column of the base matrix, which consists of adders; all additions are vector additions of size z soft bits. The variable node calculation units corresponding to columns 2 to 4 of the base matrix are similar in structure and operation, and together they form the variable node processing array of the decoder of this application example.
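The adder structure of one variable node calculation unit, together with the codeword LLR computation, can be sketched as follows (an illustrative sketch; the per-edge exclusion is written as "total minus own edge", which is arithmetically equivalent to summing the other edges):

```python
import numpy as np

def variable_node_update_col(q0_j, u_col):
    """One variable node calculation unit: from the initial LLR vector
    q0_j of base-matrix column j and the d check node to variable node
    vectors u of that column, compute the updated variable node to
    check node vectors v and the codeword LLR vector q_j.
    All additions are z-element vector adds."""
    u = np.asarray(u_col, dtype=float)      # shape (d, z)
    q_j = q0_j + u.sum(axis=0)              # codeword log-likelihood ratio
    v = q_j - u                             # v_ij = q_j - u_ij (exclude own edge)
    return v, q_j
```

Computing `q_j` once and subtracting per edge is the adder/subtractor arrangement suggested by the figures, and it also gives the codeword LLR calculation subunit its output for free.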
In terms of addressing, as can be seen from fig. 4, each variable node calculation unit reads data from the storage blocks of the check node to variable node information vector storage unit corresponding to the non "-1" elements in its base matrix column. Although the calculation subunits within a variable node calculation unit are not shown in the figure, the variable node update formula shows that each calculation subunit reads data from the storage blocks corresponding to the other non "-1" elements in the column, excluding the subunit's own element; for example, the calculation subunit VNU_00 takes data from the storage block CNOD_MEM_10.
As can be seen from fig. 5, each variable node calculation unit outputs to the storage blocks of the variable node to check node information vector storage unit corresponding to the non "-1" elements in its base matrix column. Although the calculation subunits are not shown in the figure, the variable node update formula shows that subunits and storage blocks are associated and connected one-to-one through their corresponding non "-1" base matrix elements, i.e., the calculation subunit VNU_ij outputs to the storage block VNOD_MEM_ij.
Fig. 7A and 7B show structures for implementing the variable node update formula described above using an adder and a subtractor, one for performing absolute value operation and the other for performing sign operation.
The codeword log-likelihood ratio calculation unit is likewise divided into nb calculation subunits, each corresponding to one column of the base matrix. Besides receiving the initial log-likelihood ratio vector from the storage block corresponding to that column in the initial log-likelihood ratio vector storage unit, each subunit must also obtain the check node to variable node information vectors from the storage blocks of the check node to variable node information vector storage unit corresponding to all nodes in that column.
Second embodiment of the decoding device of the present invention
The hardware structure of the decoding apparatus of this embodiment corresponds to the simplified form of the log-domain vector decoding method, as shown in fig. 9. The functions realized by this embodiment are consistent with those of the first embodiment, but the structures differ because they correspond to different algorithms.
In order to illustrate the difference between the two, the structural unit in fig. 3 may be divided in another way, that is, divided into an initial value operation module composed of a received codeword vector storage unit, a vector initial value calculation unit, and an initial log-likelihood ratio vector storage unit; the iterative operation module consists of bidirectional networks A and B, a variable node to check node information vector storage unit, a check node to variable node information vector storage unit, a check node processing array, a variable node processing array and a code log-likelihood ratio calculation unit; the basic matrix processing module consists of an original basic matrix storage unit, a basic matrix correction unit and a corrected basic matrix storage unit; a hard decision detection module consisting of a codeword log-likelihood ratio vector storage unit, a hard decision detection unit and a hard decision codeword vector storage unit; and a control module.
Comparing fig. 3 and fig. 9, it can be seen that the units, the functions of the units, and the connection relationships between the units included in the initial value operation module, the basic matrix processing module, and the hard decision detection module in this embodiment are the same as those in the first embodiment, and the only difference is that the vector initial value calculation unit does not need to calculate the initial value of the variable node to check node information vector. The respective units of the 3 modules are not described in detail herein.
As shown in fig. 9, the iterative operation module of the decoding apparatus of this embodiment includes a node update processing array (MPUs), a bidirectional network composed of a read network and a write network, a check node to variable node information vector storage unit, and a codeword log-likelihood ratio calculation unit. Wherein:
the check node to variable node information vector storage unit comprises L storage blocks, where L is the number of non "-1" elements in the base matrix; the blocks store the L check node to variable node information vectors output by the node update processing array for transmission, one vector per block, each corresponding to one non "-1" element of the base matrix.
The node update processing array consists of mb calculation units, each corresponding to one row of the base matrix. Each calculation unit comprises several calculation subunits corresponding to all non "-1" elements of that row, L subunits in total. Each subunit reads a check node to variable node information vector through the read network from the corresponding storage block of the check node to variable node information vector storage unit, reads a codeword log-likelihood ratio vector from the corresponding storage block of the codeword log-likelihood ratio vector storage unit, reads the corresponding element value from the modified base matrix storage unit, completes the node update operation (see formula (4)), and writes the updated check node to variable node information vector through the write network into the corresponding storage block of the check node to variable node information vector storage unit.
In the bidirectional buffer network, the read network provides the read addresses with which the node update processing array reads the corresponding vectors from the check node to variable node information vector storage unit, the codeword log-likelihood ratio vector storage unit and the modified base matrix storage unit, and the write network provides the write addresses with which the node update processing array writes check-node-to-variable-node information vectors into the check node to variable node information vector storage unit. More specifically, for the computing subunit in the node update processing array corresponding to a certain non "-1" element of the base matrix, the read network connects the storage blocks in the check node to variable node information vector storage unit corresponding to all the other non "-1" elements in the row of the base matrix in which that element is located, connects the storage blocks in the codeword log-likelihood ratio vector storage unit corresponding to the columns of those other non "-1" elements, and also connects the storage block in the modified base matrix storage unit corresponding to the element itself.
The function of the codeword log-likelihood ratio calculating unit is the same as that of the first embodiment and is not repeated.
The control module of this embodiment is configured to control and coordinate the decoding process performed by each unit, and also includes an initialization step, an iterative decoding step, and a decoding detection and output step, where the initialization step and the decoding detection and output step are basically the same as those of the first embodiment, and the difference is only that the initial value of the information vector from the variable node to the check node does not need to be calculated, and the two steps are not repeated.
The iterative decoding step is realized as follows: in each clock cycle, the node update processing array of the decoder reads the check-node-to-variable-node information vectors from the check node to variable node information vector storage unit, reads the codeword log-likelihood ratio vectors from the codeword log-likelihood ratio vector storage unit, and reads the base matrix element values from the modified base matrix storage unit; these are sent to the corresponding computing subunits to complete the node update operation, and the resulting check-node-to-variable-node information vectors are written into the corresponding storage blocks of the check node to variable node information vector storage unit.
Simultaneously, in each clock cycle, an initial log-likelihood ratio vector and the check-node-to-variable-node information vectors are read from the corresponding storage blocks of the initial log-likelihood ratio vector storage unit and the check node to variable node information vector storage unit and sent to the corresponding codeword log-likelihood ratio calculation subunit, which calculates the codeword log-likelihood ratio vector of this iteration and writes it into the corresponding storage block of the codeword log-likelihood ratio vector storage unit.
In summary, the present invention can adopt the same decoder structure for different code lengths. The differences are only that the contents of the registers storing the corrected h_ij^b values differ, and that the effective vector length in the cyclic shift register of each node differs because the spreading factors differ. In terms of operation, the inter-vector operations and the decoding flow are identical; only the vector length of the intra-node vector operations and the number of bits of the cyclic shifts differ. Therefore, based on the vector decoding algorithm, the decoder of the invention has the same hardware topology for LDPC codes of the same code rate and different code lengths; compared with the common BP algorithm it requires the minimum storage space and has low hardware implementation complexity, and is thus well suited to parallel implementation. The decoder of the invention is suitable for implementation in a large-scale integrated circuit or an FPGA (hardware implementation), and can also be used in a DSP (software implementation).
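The code-length independence described above can be sketched as follows: the set of (row, column, shift) positions, i.e. the hardware topology, depends only on the base matrix, while only the shift amounts and the effective vector length change with the spreading factor z. The mod-z correction used below is an assumed correction rule for illustration only; the patent keeps the actual corrected values h_ij^b in the modified base matrix storage unit.

```python
def topology(Hb, z):
    """List the (row, col, shift) triples a decoder for base matrix Hb
    would instantiate. The correction `h % z` is an assumption for
    illustration; the patent stores corrected h_ij^b values separately."""
    return [(i, j, h % z)
            for i, row in enumerate(Hb)
            for j, h in enumerate(row)
            if h != -1]

# A toy 2x3 base matrix; -1 marks the all-zero blocks.
Hb = [[0, 3, -1],
      [-1, 2, 5]]
# Same (row, col) positions -- same hardware topology -- for any z;
# only the shift amounts and the vector length differ.
assert [t[:2] for t in topology(Hb, 4)] == [t[:2] for t in topology(Hb, 8)]
```

This is exactly why one hardware structure serves every code length of a given code rate: the wiring follows the non "-1" positions, and z only parameterizes the shift registers.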
Therefore, the algorithm and the decoder provided by the invention do not need to store and access a large parity check matrix; only the base matrix is needed, so the complexity is greatly reduced. Since the parity check matrix need not be stored, there is no need to store the access indices of the sparse matrix elements, which significantly reduces the required storage capacity. The operations can be completed with the base matrix alone, so the step of matrix expansion is omitted. Because the topology of the decoder depends only on the base matrix, LDPC codes with different code rates and different code lengths can use a unified decoder. Since the algorithm is based on vector operations, it is well suited to parallel implementation. In conclusion, the algorithm and the decoder are the best scheme for LDPC codes based on the unit matrix and its cyclic shift matrices, and are of particular significance for the variable-code-length case. The encoder can also be implemented with similar vector operations, in which case the LDPC code based on the unit matrix and its cyclic shift matrices becomes a vector LDPC code.
The present invention may also have various transformations based on the above embodiments, for example, in another embodiment, when the decoder only corresponds to one code length, the element values may be directly read out from the basic matrix storage unit without the basic matrix modification unit, or the corresponding data may be directly configured to the corresponding node operation array.
Claims (15)
1. LDPC code vector decoding method based on unit array and cyclic shift array thereof, which adopts check matrix
H = {(P_ij)_{z×z}}_{m_b×n_b},
Uniquely corresponding to the base matrix
H_b = {h_ij^b}_{m_b×n_b},
∀ i ∈ [0, 1, …, m_b−1],
∀ j ∈ [0, 1, …, n_b−1],
the number of iterations is k, the spreading factor is z, Iset(j) is the set of row indices of the non "-1" elements in column j of H_b, and Jset(i) is the set of column indices of the non "-1" elements in row i of H_b, the method comprising the steps of:
(a) dividing the data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into n_b groups, so that the received-sequence vector array
R = {R_j}_{1×n_b}
has elements R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}];
(b) setting k = 0, obtaining the initial value of the reliability vector array from the received-sequence vector array R, and obtaining the initial value of the transfer information vector matrix, wherein the vectors are vectors of 1×z soft bits;
(c) performing the update operation using the transfer information vector matrix and the reliability vector array obtained in the (k−1)-th iteration together with the non "-1" element values h_ij^b of the base matrix, to obtain the transfer information vector matrix and the reliability vector array after the k-th iteration, wherein the minimum operation unit in all operations is a vector of 1×z soft bits;
(d) carrying out hard decision on the credibility vector array to obtain a hard decision vector array
S = {S_j}_{1×n_b},
where S_j is a 1×z row vector, and then, based on
T_i = Σ_{j=1}^{n_b} P_ij^{−1} S_j^T
calculating the parity check vector array
T = {T_i}_{m_b×1};
(e) judging whether the vector array T is all zeros; if so, decoding succeeds, the hard-decision codeword is output, and the process ends; otherwise, let k = k + 1 and judge whether k is less than the maximum number of iterations: if so, return to step (c); otherwise, decoding fails and the process ends.
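The decision steps (d)–(e) can be sketched directly from the formula T_i = Σ_j P_ij^{−1} S_j^T: per claim 5, multiplication by P_ij^{−1} is a cyclic left shift of the vector elements by h_ij^b bits, and the sum over GF(2) is a bitwise XOR accumulation. The function names are hypothetical, for illustration only.

```python
def parity_check(S, Hb):
    """Compute T_i = sum_j P_ij^{-1} S_j^T over GF(2) for each base-matrix
    row. S is a list of n_b hard-decision vectors (lists of z bits); Hb
    holds the shift values h_ij^b, with -1 marking all-zero blocks."""
    T = []
    for row in Hb:
        acc = None
        for j, h in enumerate(row):
            if h == -1:
                continue
            shifted = S[j][h:] + S[j][:h]  # cyclic left shift by h bits
            acc = shifted if acc is None else [a ^ b for a, b in zip(acc, shifted)]
        T.append(acc)
    return T

def is_codeword(S, Hb):
    """Step (e): decoding succeeds when every T_i is the all-zero vector."""
    return all(all(bit == 0 for bit in Ti) for Ti in parity_check(S, Hb))
```

Because each T_i depends only on the S_j of its own row's non "-1" columns, the m_b checks can run in parallel, matching the hard decision detection module described later.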
2. The vector decoding method of claim 1, wherein the method is a reduced form log-domain vector decoding method, wherein:
in step (b), the received data vector array R is used to complete the initialization of the check-node-to-variable-node information vector matrix
U = {u_ij}_{m_b×n_b}
and the codeword log-likelihood ratio vector array
Q = {Q_j}_{1×n_b}
the calculation of the initial values of all non-zero vectors being completed by the following loop: outer loop j = 0, …, n_b−1; inner loop i ∈ Iset(j); by the formula
Q_j^(0) = 2R_j/σ²,
where σ² is the variance of the noise;
the step (c) is further divided into the following steps:
(c1) updating the check-node-to-variable-node information vector matrix U^(k) of this iteration, for all non-zero vectors, from the check-node-to-variable-node information vector matrix U^(k−1) and the codeword log-likelihood ratio vector array Q^(k−1) of the previous iteration; this step is completed by the following loop: outer loop i = 0, …, m_b−1; inner loop j ∈ Jset(i); by the formula:
wherein Jset(i)\j denotes the set Jset(i) with the column index j excluded;
(c2) calculating all non-zero vectors of the codeword log-likelihood ratio vector array Q^(k) of this iteration from the initial log-likelihood ratio vector array Q^(0) and the check-node-to-variable-node information vector matrix U^(k) of this iteration, i.e., for any j = 0, …, n_b−1, calculating
Q_j^(k) = Q_j^(0) + Σ_{i′∈Iset(j)} u_{i′j}^(k);
and in step (d), the hard decision is performed on the codeword log-likelihood ratio vector array Q^(k).
3. The vector decoding method of claim 1, wherein the method is a general-type log-domain vector decoding method, wherein:
in step (b), the received data vector array R is used to complete the initialization of the variable-node-to-check-node information vector matrix
V = {v_ij}_{m_b×n_b}
and the codeword log-likelihood ratio vector array
Q = {Q_j}_{1×n_b}
the calculation of the initial values of all non-zero vectors being completed by the following loop: outer loop j = 0, …, n_b−1; inner loop i ∈ Iset(j); given by:
v_ij^(0) = Q_j^(0) = 2R_j/σ²,
where σ² is the variance of the noise;
the step (c) is further divided into the following steps:
(c1) updating the check-node-to-variable-node information vector matrix U^(k) of this iteration, for all non-zero vectors, according to V^(k−1) and R^(k−1) of the previous iteration; this step is completed by the following loop: outer loop i = 0, …, m_b−1; inner loop j ∈ Jset(i); by the formula:
(c2) calculating the variable-node-to-check-node information vector matrix V^(k) of this iteration, for all non-zero vectors, from the initial log-likelihood ratio vector array Q^(0) and the check-node-to-variable-node information vector matrix U^(k) of this iteration; this step is completed by the following loop: outer loop j = 0, …, n_b−1; inner loop i = 0, …, m_b−1; by the formula:
simultaneously calculating all non-zero vectors of the codeword log-likelihood ratio vector array Q^(k) of this iteration, i.e., for any j = 0, …, n_b−1, calculating:
Q_j^(k) = Q_j^(0) + Σ_{i′∈Iset(j)} u_{i′j}^(k);
and in step (d), the hard decision is performed on the codeword log-likelihood ratio vector array Q^(k).
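The relation Q_j^(k) = Q_j^(0) + Σ_{i′∈Iset(j)} u_{i′j}^(k) used in step (c2) of claims 2 and 3 gives the variable-node update the convenient form v_ij = Q_j − u_ij. A sketch in the standard log-domain BP form follows; the per-edge formula itself is not legible in this extraction, so that form is an assumption, as are the function and variable names.

```python
def variable_node_update(Q0_j, U_col_j):
    """Variable-node update for column j, assuming the standard
    log-domain BP form:
        Q_j  = Q0_j + sum_{i in Iset(j)} u_ij    (codeword LLR vector)
        v_ij = Q_j - u_ij                        (extrinsic part per edge)
    Q0_j is a 1xz vector (list of z soft values); U_col_j maps each row
    index i in Iset(j) to its 1xz check-to-variable vector u_ij."""
    z = len(Q0_j)
    Qj = [Q0_j[b] + sum(u[b] for u in U_col_j.values()) for b in range(z)]
    V_col_j = {i: [Qj[b] - u[b] for b in range(z)] for i, u in U_col_j.items()}
    return Qj, V_col_j
```

Computing Q_j once and subtracting u_ij per edge is what lets the codeword log-likelihood ratio calculation unit run simultaneously with the node updates, as the embodiments describe.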
4. The vector decoding method of claim 1, wherein the method is a probability domain vector decoding method, wherein:
in the step (b), the received data array R is used for calculating the information vector matrix from the variable nodes to the check nodes
Q⁰ = {Q_ij⁰}_{m_b×n_b},
Q¹ = {Q_ij¹}_{m_b×n_b}
and the vector matrix
ΔQ = {ΔQ_ij}_{m_b×n_b},
and the codeword probability vector arrays
F⁰ = {F_j⁰}_{1×n_b}
And
F¹ = {F_j¹}_{1×n_b}
the calculation of the initial values of all non-zero vectors being completed by the following loop: outer loop j = 0, …, n_b−1; inner loop i ∈ Iset(j); given by:
the step (c) is further divided into the following steps:
(c1) updating the check-node-to-variable-node information vector matrices R^{0(k)} and R^{1(k)} of this iteration, for all non-zero vectors, from ΔQ^(k−1) of the previous iteration; this step is completed by the following loop: outer loop i = 0, …, m_b−1; inner loop j ∈ Jset(i); by the formula:
(c2) calculating the variable-node-to-check-node information vector matrices Q^{0(k)} and Q^{1(k)} of this iteration, for all non-zero vectors, from the initial codeword probability vector arrays F⁰, F¹ and the check-node-to-variable-node information vector matrices R^{0(k)}, R^{1(k)} of this iteration; this step is completed by the following loop: outer loop j = 0, …, n_b−1; inner loop i ∈ Iset(j); given by:
meanwhile, calculating all non-zero vectors of the pseudo-posterior probability vector arrays F^{0(k)} and F^{1(k)}, for the variable nodes taking the values 0 and 1, from the initial codeword probability vector arrays F⁰, F¹ and the check-node-to-variable-node information vector matrices R^{0(k)}, R^{1(k)} of this iteration, i.e., for any j = 0, …, n_b−1, calculating:
wherein α_ij and β_ij are normalization coefficients such that
and in step (d), the hard-decision vector array S is obtained by comparing the magnitudes of F^{0(k)} and F^{1(k)}.
5. The vector decoding method of claim 1, wherein the vector operations comprise vector arithmetic operations, vector cyclic shifts and vector function operations; a vector arithmetic operation (addition, subtraction, multiplication or division) is performed element-wise on the corresponding elements of two vectors; multiplication of a vector by P_ij′ is completed by cyclically right-shifting the vector elements by h_ij^b bits; multiplication of a vector by P_ij′^{−1} is completed by cyclically left-shifting the vector elements by h_ij^b bits; and a vector function operation is performed by applying the function to each element of the vector.
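The shift rule of claim 5 can be illustrated with plain list slicing; a cyclic right shift by h followed by a cyclic left shift by h recovers the original vector, matching P′·P′^{−1} = I. Function names are illustrative only.

```python
def cyc_right(v, h):
    """P' . v : cyclic right shift of the z vector elements by h bits."""
    h %= len(v)
    return v[-h:] + v[:-h] if h else v[:]

def cyc_left(v, h):
    """P'^{-1} . v : cyclic left shift by h bits (the inverse permutation)."""
    h %= len(v)
    return v[h:] + v[:h] if h else v[:]
```

In hardware these are barrel shifters or cyclic shift registers whose shift amount is the stored h_ij^b, which is why no permutation matrix ever needs to be materialized.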
6. The vector decoding method according to claim 2 or 3, wherein the check-node-to-variable-node information vectors and the variable-node-to-check-node information vectors are represented in fixed point, each vector comprising z soft bits and each soft bit being a 6-bit fixed-point binary value.
7. The vector decoding method according to claim 2, 3 or 4, wherein the check node update process of iterative decoding is implemented using the normalized belief propagation algorithm or one of the following approximations of that algorithm: the BP-based algorithm, the APP-based algorithm, the uniform maximum confidence propagation algorithm, the min-sum algorithm, and the min-sum look-up table algorithm.
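The alternatives listed in claim 7 differ mainly in how the check-node output magnitude is formed. A sketch of two of them follows, applied to one soft-bit position of the incoming 1×z vectors: exact BP via the tanh rule, and min-sum with an optional normalization factor α (α = 0.8 is an assumed value for illustration, not taken from the patent).

```python
import math

def check_magnitude(vs, mode="minsum", alpha=0.8):
    """Combine the magnitudes of the incoming values vs (one soft-bit
    position of each incoming vector). 'bp' is the exact tanh rule;
    'minsum' and 'normalized' are approximations named in claim 7
    (alpha is an assumed normalization factor)."""
    if mode == "bp":
        p = 1.0
        for v in vs:
            p *= math.tanh(abs(v) / 2.0)
        return 2.0 * math.atanh(p)
    m = min(abs(v) for v in vs)
    return alpha * m if mode == "normalized" else m

def check_update(vs, mode="minsum", alpha=0.8):
    """Full check-node output: product of signs times combined magnitude."""
    sign = 1
    for v in vs:
        sign = -sign if v < 0 else sign
    return sign * check_magnitude(vs, mode, alpha)
```

The min-sum variants replace the tanh/atanh chain with a comparison tree, which is what makes them attractive for the parallel hardware arrays described in the embodiments; the look-up table variant of claim 7 would replace the tanh evaluation with a stored table.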
8. An LDPC code vector decoding device based on unit array and cyclic shift array thereof is characterized by comprising a basic matrix processing module, an initial value operation module, an iteration operation module, a hard decision detection module and a control module, wherein:
the basic matrix processing module comprises a base matrix storage unit provided with L storage blocks, each storage block being used for storing, of the base matrix
H_b = {h_ij^b}_{m_b×n_b}
one non "-1" element value
h_ij^b ≠ −1,
L is the number of non-1 elements in the base matrix,
∀ i ∈ [0, 1, …, m_b−1],
∀ j ∈ [0, 1, …, n_b−1];
the initial value operation module is used for receiving the input data Y = [y_0, y_1, …, y_{N−1}] and buffering it in n_b memory blocks, calculating the initial value of the reliability vector array and storing it in n_b storage blocks, and obtaining the initial value of the transfer information vector matrix;
the iterative operation module is used for performing the update operation using the transfer information vector matrix and the reliability vector array obtained in the previous iteration and the non "-1" element values h_ij^b of the base matrix, to obtain the transfer information vector matrix and the reliability vector array after this iteration;
the hard decision detection module is used for carrying out hard decision on the credibility vector array obtained by iteration to obtain a hard decision vector array
S = {S_j}_{1×n_b},
which is stored in n_b memory blocks, and then, according to
T_i = Σ_{j=1}^{n_b} P_ij^{−1} S_j^T
calculating the parity check vector array
T = {T_i}_{m_b×1}
and judging whether it is all zeros;
the control module is used for controlling other modules to complete initial value operation, iterative operation and hard decision detection, when the array T is all 0, a hard decision code word is output, decoding is successful, and the decoding is finished; when T is not all 0, judging whether the iteration times is less than the maximum iteration times, if so, continuing the next iteration, if the maximum iteration times is reached, failing to decode, and ending;
wherein all the storage blocks store z soft bits, the operations between the elements of the arrays and matrices are vector operations of size z soft bits, the computing units of each module read and write data directly from the corresponding storage blocks, and the information data transmitted each time is always an integral multiple of z soft bits, where z is the spreading factor.
9. The vector decoding apparatus of claim 8, wherein the size of each memory block is z_max soft bits, where z_max is the spreading factor corresponding to the low-density parity-check code with the maximum code length at the given code rate.
10. The vector decoding device of claim 8, wherein each computing unit is connected to the corresponding memory block by hardware to implement data addressing.
11. The vector decoding apparatus according to claim 8, wherein the basis matrix storage unit in the basis matrix processing module stores therein the element values of the original basis matrix; or, the basic matrix storage unit in the basic matrix processing module is a modified basic matrix storage unit, the processing module further includes an original basic matrix storage unit and a basic matrix modification unit, and the calculation unit of the iterative operation module is further connected to a corresponding storage block of the modified basic matrix storage unit to read data.
12. The vector decoding apparatus of claim 8, wherein the initial value operation module comprises:
a received codeword vector storage unit for buffering the received codeword sequence Y = [y_0, y_1, …, y_{N−1}] as the received-sequence vector array
R = {R_j}_{1×n_b}
stored in n_b memory blocks, each memory block storing one vector R_j = [y_{jz}, y_{jz+1}, …, y_{(j+1)z−1}];
a vector initial value calculation unit for reading out the received-sequence vectors R_j and calculating the initial log-likelihood ratio vector array
Q = {Q_j}_{1×n_b},
Q_j^(0) = 2R_j/σ²,
where σ² is the variance of the noise;
an initial log-likelihood ratio vector storage unit comprising n_b storage blocks for storing the n_b vectors Q_j of the initial log-likelihood ratio vector array.
13. The vector decoding apparatus according to claim 12, wherein the iterative operation module includes a check node to variable node information vector storage unit, a node update processing array, a bidirectional buffer network composed of a read network and a write network, and a codeword log-likelihood ratio calculation unit, wherein:
the check node to variable node information vector storage unit comprises L storage blocks, each storage block being used for storing one check-node-to-variable-node information vector to be transmitted, output by the node update processing array, each such vector corresponding to one non "-1" element of the base matrix;
the node update processing array is formed by m_b computing units, each corresponding to one row of the base matrix; each computing subunit reads data through the read network from the corresponding storage blocks of the check node to variable node information vector storage unit and the codeword log-likelihood ratio vector storage unit to complete one node update operation, and then writes the updated check-node-to-variable-node information vector through the write network into the corresponding storage block of the check node to variable node information vector storage unit;
in the bidirectional buffer network, for the computing subunit in the node update processing array corresponding to a certain non "-1" element of the base matrix, the read network connects the storage blocks in the check node to variable node information vector storage unit corresponding to all the other non "-1" elements in the row of the base matrix where that element is located, and connects the storage blocks in the codeword log-likelihood ratio vector storage unit corresponding to the columns of those other non "-1" elements;
the codeword log-likelihood ratio calculation unit is composed of n_b calculation subunits; each calculation subunit obtains the initial log-likelihood ratio vector and the check-node-to-variable-node information vectors after this iteration from the corresponding storage blocks in the initial log-likelihood ratio vector storage unit and the check node to variable node information vector storage unit, and calculates the codeword log-likelihood ratio vector of this iteration.
14. The vector decoding apparatus according to claim 12, wherein the iterative operation module includes a check node to variable node information vector storage unit, a variable node to check node information vector storage unit, a variable node processing array, a check node processing array, a bidirectional buffer network including a read network a, a write network a, a read network B, and a write network B, and a codeword log-likelihood ratio calculation unit, wherein:
the check node to variable node information vector storage unit comprises L storage blocks, each storage block is used for storing a check node to variable node information vector, and each check node to variable node information vector corresponds to a non-1 element in the basic matrix;
the variable node to check node information vector storage unit comprises L storage blocks, each storage block is used for storing a variable node to check node information vector, and each variable node to check node information vector corresponds to a non-1 element in the basic matrix;
the variable node processing array is composed of n_b computing units, each corresponding to one column of the base matrix; each computing unit comprises several computing subunits corresponding to all the non "-1" elements in the column of the base matrix corresponding to that variable node; each computing subunit reads data through read network B from the corresponding storage blocks of the check node to variable node information vector storage unit and the initial log-likelihood ratio vector storage unit to complete the variable node update operation, and writes the updated variable-node-to-check-node information vector through write network B into the corresponding storage block of the variable node to check node information vector storage unit;
the check node processing array is composed of m_b computing units, each corresponding to one row of the base matrix; each computing subunit reads data through read network A from the corresponding storage block of the variable node to check node information vector storage unit, completes the check node update operation using the value of the base matrix element corresponding to that subunit, and writes the updated check-node-to-variable-node information vector through write network A into the corresponding storage block of the check node to variable node information vector storage unit;
for a calculation subunit in the check node processing array corresponding to a certain non "-1" element of the basic matrix, the read network a connects a corresponding storage block of all other non-1 elements except the element in the row of the basic matrix where the element is located in the variable node to check node information vector storage unit, and the write network a connects a corresponding storage block of the element in the check node to variable node information vector storage unit;
for a calculation subunit corresponding to a certain non "-1" element of the base matrix in the variable node processing array, the read network B connects a corresponding storage block of all other non-1 elements except the element in the column of the base matrix where the element is located in the storage unit from the check node to the variable node information vector, also connects a corresponding storage block of the column of the base matrix where the element is located in the storage unit from the initial log-likelihood ratio vector, and the write network B connects a corresponding storage block of the element in the storage unit from the variable node to the check node information vector;
the codeword log-likelihood ratio calculating unit is composed of n_b calculation subunits; each calculation subunit obtains the initial value of the log-likelihood ratio vector from the corresponding storage block of the initial log-likelihood ratio vector storage unit and the check-node-to-variable-node information vectors produced by the current iteration from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit, and from these calculates the codeword log-likelihood ratio vector of the current iteration.
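The claims fix the data path (which storage blocks each subunit reads and writes) but not a specific update rule. As an illustration only, the following NumPy sketch pairs the unit-array/cyclic-shift base-matrix expansion with a flooding min-sum iteration; the function names, the min-sum rule itself, and the convention that a base-matrix entry of -1 denotes an all-zero z-by-z block while a nonnegative entry s denotes the identity cyclically shifted by s are assumptions for this sketch, not part of the claims.

```python
import numpy as np

def expand_base_matrix(Hb, z):
    """Expand an m_b x n_b base matrix into a binary parity-check matrix H.
    Assumed convention: entry -1 -> z x z zero block; entry s >= 0 ->
    identity matrix cyclically shifted by s columns."""
    mb, nb = Hb.shape
    H = np.zeros((mb * z, nb * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for i in range(mb):
        for j in range(nb):
            if Hb[i, j] >= 0:
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(I, Hb[i, j], axis=1)
    return H

def min_sum_decode(H, llr, iters=20):
    """Flooding min-sum: the check node update takes sign product times
    minimum magnitude over the other incoming messages; the variable node
    update adds the channel LLR to the incoming check messages."""
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    rows, cols = np.nonzero(H)              # one entry per Tanner-graph edge
    v2c = llr[cols].copy()                  # variable-to-check messages
    for _ in range(iters):
        c2v = np.zeros_like(v2c)
        for i in range(m):                  # check node update
            e = np.nonzero(rows == i)[0]
            msgs = v2c[e]
            for k, ek in enumerate(e):
                others = np.delete(msgs, k)
                c2v[ek] = np.prod(np.sign(others)) * np.min(np.abs(others))
        total = llr.copy()                  # codeword LLR = init + sum of c2v
        np.add.at(total, cols, c2v)
        hard = (total < 0).astype(np.uint8)
        if not np.any((H @ hard) % 2):      # parity check array all zero
            return hard, total
        v2c = total[cols] - c2v             # extrinsic variable node update
    return hard, total
```

The per-column extrinsic subtraction `total[cols] - c2v` plays the role of the variable node processing array above, and the early exit on an all-zero syndrome corresponds to the hard decision detection described next.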
15. The vector decoding apparatus of claim 13 or 14, wherein the hard decision detection module comprises:
a codeword log-likelihood ratio vector storage unit comprising n_b storage blocks for storing the n_b codeword log-likelihood ratio vectors Q_j^(k) obtained in each iteration;
a hard decision detection unit for performing a hard decision on the codeword log-likelihood ratio vectors Q generated by decoding to obtain n_b hard-decision codeword vectors, computing the parity check vector array T from them, and judging whether T is all zero;
a hard decision codeword vector storage unit comprising n_b storage blocks for storing the n_b hard-decision codeword vectors obtained by the hard decision.
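The hard decision detection of claim 15 amounts to thresholding the codeword LLRs into bits and checking the syndrome. A minimal sketch, assuming the usual sign convention (negative LLR maps to bit 1) and illustrative function names not taken from the patent:

```python
import numpy as np

def hard_decision_detect(H, Q):
    """Threshold codeword log-likelihood ratios Q into a hard-decision
    codeword vector (LLR < 0 -> bit 1), form the parity check vector
    array T = H.z mod 2, and report whether T is all zero."""
    z = (np.asarray(Q) < 0).astype(np.uint8)
    T = (H @ z) % 2
    return z, T, not T.any()
```

Decoding stops (success) as soon as the returned flag is true; otherwise the iteration continues up to the maximum count, exactly as the all-zero test on T in the claim describes.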
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200510114589A CN100589357C (en) | 2005-10-26 | 2005-10-26 | LDPC code vector decode translator and method based on unit array and its circulation shift array |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1956368A CN1956368A (en) | 2007-05-02 |
CN100589357C true CN100589357C (en) | 2010-02-10 |
Family
ID=38063490
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN200510114589A Expired - Fee Related CN100589357C (en) | 2005-10-26 | 2005-10-26 | LDPC code vector decode translator and method based on unit array and its circulation shift array |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100589357C (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101345601B (en) * | 2007-07-13 | 2011-04-27 | 华为技术有限公司 | Decoding method and decoder |
CN101350695B (en) * | 2007-07-20 | 2012-11-21 | 电子科技大学 | Method and system for decoding low density parity check code |
CN101911503A (en) * | 2007-12-29 | 2010-12-08 | 上海贝尔股份有限公司 | Encoding method and encoding device of LDPC codes |
CN102904581B (en) * | 2011-07-26 | 2017-03-01 | 无锡物联网产业研究院 | The building method of LDPC check matrix and device |
CN105227191B (en) * | 2015-10-08 | 2018-08-31 | 西安电子科技大学 | Based on the quasi-cyclic LDPC code coding method for correcting minimum-sum algorithm |
CN106201781B (en) * | 2016-07-11 | 2019-02-26 | 华侨大学 | A cloud data storage method based on right-regular erasure codes |
CN107733440B (en) * | 2016-08-12 | 2022-12-02 | 中兴通讯股份有限公司 | Multi-edge type structured LDPC processing method and device |
CN108270510B (en) * | 2016-12-30 | 2020-12-15 | 华为技术有限公司 | Communication method and communication equipment based on LDPC code |
CN111492586B (en) | 2017-12-15 | 2022-09-09 | 华为技术有限公司 | Method and device for designing basic matrix of LDPC code with orthogonal rows |
CN110661593B (en) * | 2018-06-29 | 2022-04-22 | 中兴通讯股份有限公司 | Decoder, method and computer storage medium |
CN111106837B (en) * | 2018-10-26 | 2023-09-08 | 大唐移动通信设备有限公司 | LDPC decoding method, decoding device and storage medium |
CN109766214A (en) * | 2019-04-01 | 2019-05-17 | 苏州中晟宏芯信息科技有限公司 | An optimal H-matrix generation method and device |
CN111431543B (en) * | 2020-05-13 | 2023-08-01 | 东南大学 | Variable code length and variable code rate QC-LDPC decoding method and device |
2005
- 2005-10-26 CN CN200510114589A patent/CN100589357C/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
Research on decoding algorithms for LDPC codes. Yang Xingli. China Masters' Theses Full-text Database. 2004 * |
Also Published As
Publication number | Publication date |
---|---|
CN1956368A (en) | 2007-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100589357C (en) | LDPC code vector decode translator and method based on unit array and its circulation shift array | |
JP7372369B2 (en) | Structural LDPC encoding, decoding method and device | |
US8984376B1 (en) | System and method for avoiding error mechanisms in layered iterative decoding | |
US10230396B1 (en) | Method and apparatus for layer-specific LDPC decoding | |
US8990661B1 (en) | Layer specific attenuation factor LDPC decoder | |
CN102412847B (en) | Method and apparatus for decoding low density parity check code using united node processing | |
US7395494B2 (en) | Apparatus for encoding and decoding of low-density parity-check codes, and method thereof | |
US9813080B1 (en) | Layer specific LDPC decoder | |
US9075738B2 (en) | Efficient LDPC codes | |
US7373581B2 (en) | Device, program, and method for decoding LDPC codes | |
US10298261B2 (en) | Reduced complexity non-binary LDPC decoding algorithm | |
US8984365B1 (en) | System and method for reduced memory storage in LDPC decoding | |
US8504895B2 (en) | Using damping factors to overcome LDPC trapping sets | |
US20050283707A1 (en) | LDPC decoder for decoding a low-density parity check (LDPC) codewords | |
US20090319860A1 (en) | Overcoming ldpc trapping sets by decoder reset | |
CN111615793A (en) | Vertical layered finite alphabet iterative decoding | |
CN104868925A (en) | Encoding method, decoding method, encoding device and decoding device of structured LDPC codes | |
WO2018036178A1 (en) | Decoding method for low density parity check code (ldpc) | |
WO2021063217A1 (en) | Decoding method and apparatus | |
Thi et al. | Basic-set trellis min–max decoder architecture for nonbinary LDPC codes with high-order Galois fields |
CN1937413A (en) | Dual-turbo-structure low-density parity-check code decoder |
CN113783576A (en) | Method and apparatus for vertical layered decoding of quasi-cyclic low density parity check codes constructed from clusters of cyclic permutation matrices | |
CN112204888A (en) | QC-LDPC code with high-efficiency coding and good error code flat layer characteristic | |
CN100544212C (en) | High-speed low-density parity-check code decoder with reduced storage requirements |
KR101657912B1 (en) | Method of Decoding Non-Binary Low Density Parity Check Codes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20100210; Termination date: 20151026 |
EXPY | Termination of patent right or utility model ||