CN114499540A - LDPC layered decoding method based on two-way LLR updating - Google Patents


Info

Publication number
CN114499540A
Authority
CN
China
Prior art keywords
sub-matrix
overlapped
LLR
layer
Prior art date
Legal status
Pending
Application number
CN202111550108.4A
Other languages
Chinese (zh)
Inventor
张俊杰
李云峰
李昊
冯智波
谭家乐
陈天杨
宋英雄
陈健
张倩武
曹炳尧
李迎春
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Application filed by University of Shanghai for Science and Technology
Priority to CN202111550108.4A
Publication of CN114499540A


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • H03M13/1111Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs
    • H03M13/1148Structural properties of the code parity-check or generator matrix

Abstract

The invention discloses an LDPC layered decoding method and decoder based on two-way LLR updating. Three buffers respectively cache the variable node information passed from the variable node operation unit to the check node operation unit for the LLR update; after the check node information is obtained by calculation, a two-way adder updates the LLRs of the overlapped sub-matrices and the non-overlapped sub-matrices respectively. The LDPC layered decoding method based on two-way LLR updating guarantees that no data update conflict occurs when the number of pipeline stages does not exceed the number of non-overlapped sub-matrices in a layer, and when the number of pipeline stages exceeds that number, it reduces the number of data update conflicts so that the performance loss of the LDPC decoder is negligible.

Description

LDPC layered decoding method based on two-way LLR updating
Technical Field
The invention relates to the technical field of communication coding and decoding, and in particular to an LDPC layered decoding method based on two-way LLR updating.
Background
LDPC codes are forward error correction codes widely applied in wireless communication, wired communication, aerospace, data storage, and other fields. The throughput of the LDPC decoder determines the upper limit of the data processing rate in a communication system. With the code length and code rate fixed, and with the decoder layered in units of the sub-matrix order, the only ways to raise the throughput of an LDPC decoder are to raise the clock frequency and to shorten the pipeline time. Because the layered decoding algorithm processes the data of one sub-matrix per clock cycle, the number of working clock cycles is fixed; the clock frequency can be raised by inserting pipeline stages on critical paths whose combinational logic delay is too long.
In LDPC layered decoding, non-zero sub-matrices commonly overlap between adjacent layers. In the layered decoding algorithm, the log-likelihood ratio (LLR) of a sub-matrix in one layer must be updated before the same sub-matrix is processed in the next layer. To raise the throughput and clock frequency of the decoder, a certain number of pipeline stages must be inserted, which causes data update conflicts. The existing approach of further splitting each layer, with the sub-matrix order as the unit, reduces the number of data update conflicts but lowers the throughput of the LDPC decoder. Approaches that reorder the sub-matrices within and between layers with optimization algorithms such as genetic algorithms or ant colony algorithms also reduce the number of conflicts, but the computational complexity and the time spent optimizing the processing order are high. Moreover, existing decoders provide only one data path for updating the LLRs, so the LLRs can only be updated in the processing order of the sub-matrices within a layer, and the update order cannot be adjusted to meet the processing requirements of the next layer. In addition, the LLRs must be written to the LLR memory and then read out again, which increases the latency of the LLR update.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is that the existing LDPC layered decoder suffers data update conflicts when updating LLRs in pipelined form, and existing remedies either reduce the throughput of the decoder or incur high computational complexity and long optimization time. The invention provides an LDPC layered decoding method based on two-way LLR updating, which guarantees that no data update conflict occurs when the number of pipeline stages does not exceed the number of non-overlapped sub-matrices in a layer, and which, when the number of pipeline stages exceeds that number, reduces the number of data update conflicts so that the performance loss of the LDPC decoder is negligible.
In order to achieve the above object, the present invention provides an LDPC layered decoding method based on two-way LLR updating, comprising the steps of:
before decoding, preprocessing the processing sequence of the submatrices in the check matrix;
the variable node information obtained by the variable node processing unit is divided into two types according to whether the corresponding sub-matrix needs to be processed in both the current layer and the next layer: the variable node information of sub-matrices that need to be processed in both the current layer and the next layer is stored in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer, and the rest is stored in the non-overlapped sub-matrix buffer;
after the check node processing unit calculates the check node information of the current layer, data is read sequentially from the overlapped and non-overlapped sub-matrix buffers, and two adders update in parallel to obtain the latest LLRs;
if an overlapped sub-matrix has no data update conflict, its updated LLR is fed directly into the barrel shifter through a bypass and participates immediately in the operation of the next layer; this completes one round of decoding.
Further, the processing order of the sub-matrices of the current layer is determined jointly by the overlapped and non-overlapped sub-matrices of the previous layer: the non-overlapped sub-matrices are processed first and the overlapped sub-matrices afterwards, each group in ascending order of sub-matrix index.
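As a sketch of this ordering rule, the following minimal Python function (the function name and set-based representation are illustrative, not part of the patent) computes a layer's processing order from the previous layer's non-zero sub-matrix indices:

```python
def layer_order(prev_layer, cur_layer):
    """Processing order of the current layer's non-zero sub-matrices:
    those NOT shared with the previous layer come first, then the shared
    (overlapped) ones, each group in ascending index order."""
    non_overlapped = sorted(cur_layer - prev_layer)
    overlapped = sorted(cur_layer & prev_layer)
    return non_overlapped + overlapped

# Rows of the base matrix used in the embodiment (non-zero column indices).
row0, row1, row2 = {0, 1, 4, 5, 6}, {1, 3, 4, 5, 7}, {0, 2, 3, 4, 8}
print(layer_order(row0, row1))  # [3, 7, 1, 4, 5]
```

Applied cyclically (row 2 feeding row 0), this reproduces the processing orders worked out in the embodiment below.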
Further, the preprocessing of the sub-matrices in the check matrix before decoding specifically includes:
the processing order of the sub-matrices in the check matrix is pre-ordered and then fixed, and the rounds of iterative decoding are made continuous: in the first iteration the layers are processed in sequence, and as soon as the last layer of the first iteration is finished, the first layer of the second iteration is processed. The processing order of the first layer of each iteration is therefore determined by the last layer of the previous iteration, which also determines the classification of that first layer into overlapped and non-overlapped sub-matrices.
Furthermore, among the preprocessed data, the data that needs to be processed in both the current layer and the next layer is stored in the overlapped sub-matrix buffer and the rest in the non-overlapped sub-matrix buffer; the adjustment-order sub-matrix buffer stores the sub-matrices that were non-overlapped in the previous layer but are overlapped in the current layer.
Further, the two adders operate in parallel without mutual interference and comprise a first adder and a second adder: after the check node operation unit calculates the check node information, the first adder alone updates the LLRs of the non-overlapped sub-matrices, while the information of the overlapped sub-matrices is read from the overlapped sub-matrix buffer and the adjustment-order sub-matrix buffer and the second adder alone updates their LLRs.
Further, during decoding, an overlapped sub-matrix with no data update conflict is buffered in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer; an overlapped sub-matrix with a data update conflict is not stored in any buffer, and the remaining overlapped sub-matrices are stored in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer according to the processing order.
Further, when an overlapped sub-matrix in the current layer has no data update conflict, its LLR is updated by the adder and the updated LLR is fed into the barrel shifter through the bypass; if an overlapped sub-matrix in the current layer has a data update conflict, its LLR is not updated, and the LLR is instead read from the LLR memory during the decoding of the next layer.
Further, the LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information.
Furthermore, before decoding, the method further comprises determining the number of pipeline stages to be inserted and designing the decoder according to that number.
Another preferred embodiment of the present invention provides a decoder implementing the LDPC layered decoding method based on two-way LLR updating, comprising a barrel shifter, a variable node processing unit, a non-overlapping sub-matrix buffer, an overlapping sub-matrix buffer, an adjustment-order sub-matrix buffer, a check node operation unit, a first adder and a second adder. The LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information. A sub-matrix that does not participate in the operation of the next layer is buffered in the non-overlapping sub-matrix buffer; a sub-matrix that does participate in the next layer and whose index is greater than that of a later sub-matrix of the same type is buffered in the adjustment-order sub-matrix buffer; the remaining sub-matrices that participate in the next layer are buffered in order in the overlapping sub-matrix buffer. After the check node operation unit calculates the minimum and second minimum values of the current layer, the overlapped and non-overlapped sub-matrices are updated through the two adders respectively.
Technical effects
The invention provides an LDPC layered decoding method and decoder based on two-way LLR updating. The sub-matrices participating in the operation are divided into overlapped and non-overlapped and cached in separate buffers, the LLRs of the overlapped and non-overlapped sub-matrices are updated by two adders respectively, and the overlapped sub-matrices are fed after updating to the calculation module of the next layer through a bypass. This overcomes the frequent data update conflicts caused by inserting pipeline stages to improve timing in pursuit of high throughput, avoiding degradation of the decoding performance of the LDPC decoder while maintaining the high throughput of the LDPC layered decoder.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram of a conventional LDPC layered decoder;
FIG. 2 is a diagram illustrating the structure of an LDPC layered decoder based on two-way LLR update according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram comparing an LDPC layered decoder based on two-way adder update according to the present invention and a conventional LDPC layered decoder.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular internal procedures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The structure of the traditional LDPC layered decoder is shown schematically in FIG. 1. After a sub-matrix is processed by the barrel shifter and the variable node calculation unit, the calculated variable node information is cached in the sub-matrix buffer. Once the corresponding check node information has been calculated, the adder updates it to obtain the latest LLR value, which is finally stored in the LLR memory. When the throughput of the LDPC decoder is raised, a large number of pipeline stages are inserted on the critical timing path; with this structure it then frequently happens that the LLR of a sub-matrix in the current layer has not yet been updated when the same sub-matrix must participate in decoding in the next layer. These data update conflicts degrade the performance of the LDPC layered decoder.
As shown in FIG. 2, the present invention provides a decoder implementing the LDPC layered decoding method based on two-way LLR updating, comprising a barrel shifter, a variable node processing unit, a non-overlapping sub-matrix buffer, an overlapping sub-matrix buffer, an adjustment-order sub-matrix buffer, a check node operation unit, a first adder and a second adder. The LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information. A sub-matrix that does not continue to participate in the operation of the next layer is cached in the non-overlapping sub-matrix buffer; a sub-matrix that does continue into the next layer and whose index is greater than that of a later sub-matrix of the same type is cached in the adjustment-order sub-matrix buffer; the remaining sub-matrices that must participate in the next layer are cached in order in the overlapping sub-matrix buffer. After the check node operation unit calculates the minimum and second minimum values of the current layer, the overlapped and non-overlapped sub-matrices are updated through the two adders respectively. Unlike the traditional LDPC decoder, the sub-matrices to be updated are cached in two queues according to whether they overlap, the check node information of the overlapped and non-overlapped sub-matrices is calculated by two check node information calculation modules respectively, and the LLRs are updated by two adders. This shortens the LLR update period of the sub-matrices, so that the updated LLRs of the overlapped sub-matrices are available before they enter the update of the next layer.
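The three-way buffering rule described above can be sketched in Python. This is a simplified model under illustrative naming of our own (the patent prescribes hardware, not code), treating each sub-matrix as its column index:

```python
def assign_buffers(order, next_layer):
    """Split one layer's variable node results across the three buffers.

    order      -- processing order of the current layer (list of indices)
    next_layer -- set of non-zero sub-matrix indices of the next layer
    """
    non_overlapping, overlapping, adjust = [], [], []
    reused = [s for s in order if s in next_layer]  # needed again next layer
    for s in order:
        if s not in next_layer:
            non_overlapping.append(s)  # finished after this layer
        elif any(t < s for t in reused[reused.index(s) + 1:]):
            # a smaller-index reused sub-matrix still follows in this
            # layer's order, so this one must be re-sequenced
            adjust.append(s)
        else:
            overlapping.append(s)
    return non_overlapping, overlapping, adjust

# Layer 0 of the embodiment: order 1,5,6,0,4; layer 1 uses {1,3,4,5,7}.
print(assign_buffers([1, 5, 6, 0, 4], {1, 3, 4, 5, 7}))
# ([6, 0], [1, 4], [5])
```

The printed result matches the embodiment's layer-0 buffering: sub-matrices 6 and 0 go to the non-overlapping buffer, 1 and 4 to the overlapping buffer, and 5 to the adjustment-order buffer.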
Another preferred embodiment of the present invention provides an LDPC layered decoding method based on two-way LLR updating, comprising the following steps:
step 100, preprocessing the processing sequence of the submatrices in the check matrix before decoding; specifically, the processing sequence of the submatrices in the check matrix is pre-ordered, the sequence of the submatrices is fixed, each round of iterative decoding is also set to be continuous, each layer of the submatrices is sequentially processed in the first iterative process, the first layer of the submatrices of the second iteration is processed after the last layer of the submatrices is processed in the first iterative process, the processing sequence of the first layer of the submatrices of the next iteration is determined by the last layer of the submatrices of the previous iteration, and the classification of the overlapping submatrices and the non-overlapping submatrices is also determined by the last layer of the submatrices.
Step 200: the variable node information obtained by the variable node processing unit is divided into two types according to whether the corresponding sub-matrix needs to be processed in both the current layer and the next layer. The variable node information of sub-matrices that need to be processed in both layers is stored in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer, and the rest is stored in the non-overlapped sub-matrix buffer; the adjustment-order buffer stores the sub-matrices that were non-overlapped in the previous layer but are overlapped in the current layer. The LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information. Note that the current layer and the next layer are not limited to the same iteration: there is no pause between processing the last layer of the current iteration and the first layer of the next iteration, so the overlap between those two layers must also be considered during decoding.
Step 300: after the check node processing unit calculates the check node information of the current layer, data is read sequentially from the overlapped and non-overlapped sub-matrix buffers, and two adders update in parallel to obtain the latest LLRs. The two adders run in parallel without mutual interference and comprise a first adder and a second adder: after the check node information is calculated by the check node operation unit, the first adder alone updates the LLRs of the non-overlapped sub-matrices, while the information of the overlapped sub-matrices is read selectively from the overlapped sub-matrix buffer and the adjustment-order sub-matrix buffer and the second adder alone updates their LLRs.
Step 400: the updated LLRs of the non-overlapped sub-matrices are stored in the LLR memory; if an overlapped sub-matrix has no data update conflict, its updated LLR is fed directly into the barrel shifter through the bypass and participates immediately in the operation of the next layer. This completes one round of decoding.
The processing order of the sub-matrices of the current layer is determined jointly by the overlapped and non-overlapped sub-matrices of the previous layer: the non-overlapped sub-matrices are processed first and the overlapped sub-matrices afterwards, each group in ascending order of sub-matrix index.
During decoding, an overlapped sub-matrix with no data update conflict is buffered in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer; an overlapped sub-matrix with a data update conflict is not stored in any buffer, and the remaining overlapped sub-matrices are stored in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer according to the processing order.
When an overlapped sub-matrix in the current layer has no data update conflict, its LLR is updated by the adder and the updated LLR is fed into the barrel shifter through the bypass; if an overlapped sub-matrix in the current layer has a data update conflict, its LLR is not updated, and the LLR is instead read from the LLR memory during the decoding of the next layer.
Before decoding, the method further comprises determining the number of pipeline stages to be inserted and designing the decoder according to that number.
The specific steps of an LDPC layered decoding method based on two-way LLR updating according to an embodiment of the present invention are illustrated below, as shown in FIG. 3.
The LDPC base matrix in the example has 3 rows and 9 columns (sub-matrix indices 0 to 8), as shown in equation (1):
[Equation (1): the base matrix of the example; the image is not reproduced here. Its non-zero sub-matrix positions, as used below, are row 0: {0, 1, 4, 5, 6}; row 1: {1, 3, 4, 5, 7}; row 2: {0, 2, 3, 4, 8}.]
In this realization of the LDPC layered decoding method based on two-way LLR updating, the number of pipeline stages is set to 2. Because the number of non-overlapping sub-matrices in every layer of the matrix is not less than the number of pipeline stages, no update conflict occurs.
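This conflict-free condition can be checked mechanically. The sketch below (names and set representation are illustrative, not from the patent) computes the deepest pipeline the stated condition permits, treating the layer sequence as cyclic because decoding wraps from the last layer into the first layer of the next iteration:

```python
def max_conflict_free_pipeline(layers):
    """Deepest conflict-free pipeline under the stated condition: the stage
    count must not exceed, for any layer, the number of sub-matrices NOT
    shared with the next layer (cyclically across iterations)."""
    return min(
        len(layer - layers[(i + 1) % len(layers)])
        for i, layer in enumerate(layers)
    )

# Non-zero sub-matrix indices of the three layers in the embodiment.
layers = [{0, 1, 4, 5, 6}, {1, 3, 4, 5, 7}, {0, 2, 3, 4, 8}]
print(max_conflict_free_pipeline(layers))  # 2 -> the 2-stage pipeline is safe
```

Layer 0 has two sub-matrices not shared with layer 1, and layers 1 and 2 each have three, so the minimum is 2, matching the 2-stage pipeline chosen in the embodiment.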
Before decoding, the processing order of the sub-matrices must be preprocessed. According to the definition above, the processing order of row 0 is determined by the overlap between rows 2 and 0: the 0th and 4th sub-matrices overlap, so the 1st, 5th and 6th sub-matrices are processed first, giving the order 1, 5, 6, 0, 4. The processing order of row 1 is determined by the overlap between rows 0 and 1: the 1st, 4th and 5th sub-matrices overlap, so the 3rd and 7th sub-matrices are processed first, giving the order 3, 7, 1, 4, 5. The processing order of row 2 is determined by the overlap between rows 1 and 2: the 3rd and 4th sub-matrices overlap, so the 0th, 2nd and 8th sub-matrices are processed first, giving the order 0, 2, 8, 3, 4.
The variable node information is buffered after the variable node processing unit as follows. In layer 0, the variable node information of the 6th and 0th sub-matrices is buffered in turn into the non-overlapping sub-matrix buffer, that of the 1st and 4th sub-matrices into the overlapping sub-matrix buffer, and that of the 5th sub-matrix into the adjustment-order sub-matrix buffer. In layer 1, the variable node information of the 7th, 1st and 5th sub-matrices is buffered in turn into the non-overlapping sub-matrix buffer, and that of the 3rd and 4th sub-matrices into the overlapping sub-matrix buffer. In layer 2, the variable node information of the 2nd, 8th and 3rd sub-matrices is buffered in turn into the non-overlapping sub-matrix buffer, and that of the 0th and 4th sub-matrices into the overlapping sub-matrix buffer.
After the check node information is calculated, the two-way LLR updates of the overlapped and non-overlapped sub-matrices proceed as follows. In layer 0, the 6th and 0th sub-matrices read their variable node information and check node information in turn from the non-overlapping sub-matrix buffer, and the updated LLRs are stored in the LLR memory; the 1st, 4th and 5th sub-matrices read their variable node information in turn from the overlapping sub-matrix buffer and the adjustment-order sub-matrix buffer, the check node information is added to obtain the updated LLRs, and these are fed into the barrel shifter through the bypass. In layer 1, the 7th, 1st and 5th sub-matrices read their variable node information and check node information in turn from the non-overlapping sub-matrix buffer, and the updated LLRs are stored in the LLR memory; the 3rd and 4th sub-matrices read their variable node information in turn from the overlapping sub-matrix buffer, the check node information is added to obtain the updated LLRs, and these are fed into the barrel shifter through the bypass. In layer 2, the 2nd, 8th and 3rd sub-matrices read their variable node information and check node information in turn from the non-overlapping sub-matrix buffer, and the updated LLRs are stored in the LLR memory; the 0th and 4th sub-matrices read their variable node information in turn from the overlapping sub-matrix buffer, the check node information is added to obtain the updated LLRs, and these are fed into the barrel shifter through the bypass.
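Per sub-matrix, each adder simply sums the buffered variable node message and the fresh check node message. A deliberately simplified sketch of one layer's two-adder update (one scalar per sub-matrix instead of a vector of LLRs; all names are illustrative assumptions, not the patent's):

```python
def two_way_update(vn_non_overlap, vn_overlap, cn_msgs, llr_memory):
    """One layer of the two-way LLR update.

    First adder path: non-overlapped sub-matrices, results written to the
    LLR memory. Second adder path: overlapped sub-matrices, results
    returned for bypass toward the barrel shifter of the next layer.
    """
    bypass = {}
    for idx, vn in vn_non_overlap.items():   # first adder
        llr_memory[idx] = vn + cn_msgs[idx]
    for idx, vn in vn_overlap.items():       # second adder
        bypass[idx] = vn + cn_msgs[idx]      # never read back from memory
    return bypass

# Layer 0 of the embodiment with made-up message values.
memory = {}
out = two_way_update({6: 1.0, 0: -2.0}, {1: 0.5, 4: 1.5},
                     {6: 0.25, 0: 0.5, 1: -1.0, 4: 2.0}, memory)
print(memory, out)  # {6: 1.25, 0: -1.5} {1: -0.5, 4: 3.5}
```

In hardware the two loops run concurrently on separate adders; the sequential loops here only model which path each sub-matrix takes.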
After the LLR updates are complete, the overlapped sub-matrices are fed into the barrel shifter through the bypass as follows. In layer 0, the 1st, 4th and 5th sub-matrices must participate in decoding in layer 1, so their updated LLRs are fed into the barrel shifter through the bypass and are no longer read from the LLR memory. In layer 1, the 3rd and 4th sub-matrices must likewise participate in decoding in layer 2, so their updated LLRs are fed into the barrel shifter through the bypass and are no longer read from the LLR memory. In layer 2, the 0th and 4th sub-matrices must participate in decoding in layer 0 of the next iteration, so their updated LLRs are fed into the barrel shifter through the bypass and are no longer read from the LLR memory.
The foregoing is a detailed description of the preferred embodiments of the invention. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, technical solutions obtainable by those skilled in the art through logical analysis, reasoning or limited experiment based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A LDPC layered decoding method based on two-way LLR updating is characterized by comprising the following steps:
before decoding, preprocessing the processing sequence of the submatrices in the check matrix;
the variable node information obtained by the variable node processing unit is divided into two types according to whether the corresponding sub-matrix needs to be processed in both the current layer and the next layer: the variable node information of sub-matrices that need to be processed in both the current layer and the next layer is stored in the overlapped sub-matrix buffer or the adjustment-order sub-matrix buffer, and the rest is stored in the non-overlapped sub-matrix buffer;
after the check node processing unit calculates the check node information of the current layer, data is read sequentially from the overlapped and non-overlapped sub-matrix buffers, and two adders update in parallel to obtain the latest LLRs;
if an overlapped sub-matrix has no data-update conflict, its updated LLR is fed directly to the barrel shifter via the bypass path and directly participates in the computation of the next layer; one round of decoding is thereby completed.
2. The LDPC layered decoding method based on two-way LLR updating according to claim 1, characterized in that the processing order of the sub-matrices of the current layer is determined jointly by the overlapped sub-matrices and non-overlapped sub-matrices of the previous layer: the non-overlapped sub-matrices are processed first and the overlapped sub-matrices afterwards, and both the non-overlapped and the overlapped sub-matrices are processed in descending order of sub-matrix index.
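The ordering rule of claim 2 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the example sets are hypothetical and match the layer-0/layer-1 example used earlier in the description.

```python
def processing_order(current, next_layer):
    """Order the sub-matrices of the current layer as in claim 2:
    non-overlapped sub-matrices first, overlapped sub-matrices last,
    each group in descending order of sub-matrix index."""
    overlapped = sorted(current & next_layer, reverse=True)
    non_overlapped = sorted(current - next_layer, reverse=True)
    return non_overlapped + overlapped

# Hypothetical example: layer 0 holds {0, 1, 4, 5}, layer 1 holds {1, 3, 4, 5}.
print(processing_order({0, 1, 4, 5}, {1, 3, 4, 5}))  # -> [0, 5, 4, 1]
```

Processing the overlapped sub-matrices last maximizes the time between their LLR update and their reuse in the next layer, which is what makes the bypass path feasible.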
3. The LDPC layered decoding method based on two-way LLR updating according to claim 1, characterized in that preprocessing the sub-matrices in the check matrix before decoding specifically comprises:
the processing order of the sub-matrices in the check matrix is pre-ordered and then fixed, and each round of iterative decoding is made continuous: in the first iteration the sub-matrices of each layer are processed in sequence, and after the last layer of the first iteration has been processed, the first layer of the second iteration is processed; the processing order of the first layer of each subsequent iteration is determined by the last layer of the previous iteration, which also determines the classification into overlapped and non-overlapped sub-matrices.
4. The LDPC layered decoding method based on two-way LLR updating according to claim 1, characterized in that, after preprocessing, the data to be processed in both the current layer and the next layer are stored in the overlapped sub-matrix buffer and the rest in the non-overlapped sub-matrix buffer, and further comprising an adjusted-order sub-matrix buffer that stores sub-matrices which were non-overlapped in the previous layer but are overlapped in the current layer.
5. The LDPC layered decoding method based on two-way LLR updating according to claim 4, characterized in that the two adders run in parallel without interfering with each other and comprise a first adder and a second adder; after the check node operation unit has calculated the check node information, the non-overlapped sub-matrices update their LLRs independently through the first adder, while the overlapped sub-matrices read their sub-matrix information selectively from the overlapped sub-matrix buffer and the adjusted-order sub-matrix buffer and update their LLRs independently through the second adder.
6. The LDPC layered decoding method based on two-way LLR updating according to claim 1, characterized in that, during decoding, if an overlapped sub-matrix has no data-update conflict, it is buffered in the overlapped sub-matrix buffer or the adjusted-order sub-matrix buffer; if a data-update conflict exists, the overlapped sub-matrix with the conflict is not updated and is not stored in any buffer, while the remaining overlapped sub-matrices are stored in the overlapped sub-matrix buffer or the adjusted-order sub-matrix buffer according to the processing order.
7. The LDPC layered decoding method based on two-way LLR updating according to claim 6, characterized in that, when an overlapped sub-matrix of the current layer has no data-update conflict, its LLR is updated by the adder and the updated LLR is fed to the barrel shifter via the bypass path; if an overlapped sub-matrix of the current layer has a data-update conflict, that sub-matrix does not update its LLR, and the LLR is instead read from the LLR memory during the decoding of the next layer.
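The conflict-handling rule of claims 6 and 7 amounts to routing each overlapped sub-matrix either to the bypass path or back through the LLR memory. A minimal Python sketch (hypothetical helper names, not from the patent):

```python
def route_overlapped(overlapped, conflicting):
    """Split the overlapped sub-matrices of a layer into two groups:
    those without a data-update conflict take the bypass path to the
    barrel shifter; those with a conflict skip the bypass update and
    are re-read from the LLR memory in the next layer (claims 6-7)."""
    bypass, from_memory = [], []
    for idx in overlapped:  # iterate in the layer's processing order
        (from_memory if idx in conflicting else bypass).append(idx)
    return bypass, from_memory

# Hypothetical example: sub-matrix 4 has a pipeline data-update conflict.
print(route_overlapped([5, 4, 1], {4}))  # -> ([5, 1], [4])
```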
8. The LDPC layered decoding method based on two-way LLR updating according to claim 1, characterized in that the LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information.
9. The LDPC layered decoding method based on two-way LLR updating according to claim 1, further comprising determining, before decoding, the number of pipeline stages to be inserted, and designing the decoder according to that number of pipeline stages.
10. A decoder implementing the LDPC layered decoding method based on two-way LLR updating according to any one of claims 1 to 9, comprising a barrel shifter, a variable node processing unit, a non-overlapped sub-matrix buffer, an overlapped sub-matrix buffer, an adjusted-order sub-matrix buffer, a check node operation unit, a first adder and a second adder; the LLRs of the sub-matrices are processed by the barrel shifter and the variable node processing unit to obtain the variable node information; a sub-matrix that does not participate in the operation of the next layer is then buffered in the non-overlapped sub-matrix buffer; a sub-matrix that participates in the operation of the next layer and whose index is greater than those of the sub-matrices of the same kind is buffered in the adjusted-order sub-matrix buffer; the remaining sub-matrices that participate in the operation of the next layer are buffered in sequence in the overlapped sub-matrix buffer; and after the check node operation unit has calculated the minimum value and the second minimum value of the current layer, the overlapped sub-matrices and the non-overlapped sub-matrices are updated in parallel through the two adders.
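The datapath of claim 10 follows the usual layered min-sum schedule. The following simplified Python sketch (scalar check nodes rather than the patent's block-parallel hardware; all names are hypothetical) shows the variable-node subtraction, the min/second-min check-node computation, and the final LLR addition that the two adders perform in parallel in hardware.

```python
def layer_update(llr, var_idx, r_old):
    """One layer of min-sum layered decoding, simplified to a single
    scalar check node (assumes at least two connected variables).
    llr: list of accumulated LLRs, updated in place;
    var_idx: variable indices connected to this layer's check node;
    r_old: previous check-to-variable messages (empty dict on the
    first iteration). Returns the new check-to-variable messages."""
    # Variable node processing: subtract the old check-to-variable message.
    q = {n: llr[n] - r_old.get(n, 0.0) for n in var_idx}
    # Check node processing: overall sign, minimum and second minimum
    # of |q| (the "minimum value and second minimum value" of claim 10).
    mags = sorted((abs(q[n]), n) for n in var_idx)
    (min1, n1), (min2, _) = mags[0], mags[1]
    total_sign = 1.0
    for n in var_idx:
        if q[n] < 0:
            total_sign = -total_sign
    r_new = {}
    for n in var_idx:
        mag = min2 if n == n1 else min1
        sign = total_sign * (-1.0 if q[n] < 0 else 1.0)
        r_new[n] = sign * mag
        # LLR update: in the patented decoder this addition is done by the
        # two parallel adders (overlapped vs non-overlapped sub-matrices).
        llr[n] = q[n] + r_new[n]
    return r_new
```

In the decoder of claim 10, the updated LLRs of overlapped sub-matrices would then take the bypass path to the barrel shifter, while the others are written back to the LLR memory.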
CN202111550108.4A 2021-12-17 2021-12-17 LDPC layered decoding method based on two-way LLR updating Pending CN114499540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111550108.4A CN114499540A (en) 2021-12-17 2021-12-17 LDPC layered decoding method based on two-way LLR updating

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111550108.4A CN114499540A (en) 2021-12-17 2021-12-17 LDPC layered decoding method based on two-way LLR updating

Publications (1)

Publication Number Publication Date
CN114499540A true CN114499540A (en) 2022-05-13

Family

ID=81494742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111550108.4A Pending CN114499540A (en) 2021-12-17 2021-12-17 LDPC layered decoding method based on two-way LLR updating

Country Status (1)

Country Link
CN (1) CN114499540A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115664584A (en) * 2022-07-25 2023-01-31 西安空间无线电技术研究所 High-energy-efficiency LDPC decoder for high-speed satellite link
CN115664584B (en) * 2022-07-25 2024-04-09 西安空间无线电技术研究所 High-energy-efficiency LDPC decoder for high-speed satellite link

Similar Documents

Publication Publication Date Title
US11424762B2 (en) Decoder for low-density parity-check codes
CN110705687B (en) Convolution neural network hardware computing device and method
CN103384153B (en) Quasi-cyclic LDPC code coding method and system
CN101803210B (en) Method, apparatus and device providing semi-parallel low density parity check decoding using a block structured parity check matrix
CN101233693A (en) Encoder and decoder by LDPC encoding
US9413390B1 (en) High throughput low-density parity-check (LDPC) decoder via rescheduling
CN114499540A (en) LDPC layered decoding method based on two-way LLR updating
TW202205815A (en) Method and apparatus for vertical layered decoding of quasi-cyclic low-density parity check codes built from clusters of circulant permutation matrices
CN111144556A (en) Hardware circuit of range batch processing normalization algorithm for deep neural network training and reasoning
CN112734020A (en) Convolution multiplication accumulation hardware acceleration device, system and method of convolution neural network
CN105262493A (en) Decoding method of low-density parity check codes
CN111582465A (en) Convolutional neural network acceleration processing system and method based on FPGA and terminal
US10554226B2 (en) Method for controlling a check node of a NB-LDPC decoder and corresponding check node
CN109450456B (en) Self-adaptive stack decoding method and system based on polarization code
US11003448B2 (en) DSP slice configured to forward operands to associated DSP slices
CN110991609A (en) Line buffer for improving data transmission efficiency
CN116757260A (en) Training method and system for large pre-training model
US20190095783A1 (en) Deep learning apparatus for ann having pipeline architecture
Wu et al. Updating conflict solution for pipelined layered LDPC decoder
CN116401987A (en) Chip time sequence optimization method, system, equipment and medium
CN117795473A (en) Partial and managed and reconfigurable systolic flow architecture for in-memory computation
US10761847B2 (en) Linear feedback shift register for a reconfigurable logic unit
CN112187286A (en) Multi-mode LDPC decoder applied to CCSDS satellite deep space communication
CN112508174A (en) Pre-calculation column-by-column convolution calculation unit for weight binary neural network
CN110929854B (en) Data processing method and device and hardware accelerator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination