WO2020001212A1 - Decoder, decoding method and computer storage medium - Google Patents

Decoder, decoding method and computer storage medium

Info

Publication number
WO2020001212A1
Authority
WO
WIPO (PCT)
Prior art keywords
bit information
soft bit
variable node
decoding
unit
Prior art date
Application number
PCT/CN2019/088398
Other languages
English (en)
French (fr)
Inventor
张鹤
章伟
王红展
余金清
彭理健
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Priority to EP19824821.3A (patent EP3829088B1)
Publication of WO2020001212A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00: Arrangements for detecting or preventing errors in the information received
    • H04L1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045: Arrangements at the receiver end
    • H04L1/0047: Decoding adapted to other signal detection operation
    • H04L1/005: Iterative decoding, including iteration between signal detection and decoding operation
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11: using multiple parity bits
    • H03M13/1102: Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105: Decoding
    • H03M13/1111: Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11: using multiple parity bits
    • H03M13/1102: Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105: Decoding
    • H03M13/1131: Scheduling of bit node or check node processing
    • H03M13/114: Shuffled, staggered, layered or turbo decoding schedules
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11: using multiple parity bits
    • H03M13/1102: Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1148: Structural properties of the code parity-check or generator matrix
    • H03M13/116: Quasi-cyclic LDPC [QC-LDPC] codes, i.e. the parity-check matrix being composed of permutation or circulant sub-matrices
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00: Arrangements for detecting or preventing errors in the information received
    • H04L1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045: Arrangements at the receiver end
    • H04L1/0052: Realisations of complexity reduction techniques, e.g. pipelining or use of look-up tables
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00: Arrangements for detecting or preventing errors in the information received
    • H04L1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056: Systems characterized by the type of code used
    • H04L1/0057: Block codes

Definitions

  • This application relates to, but is not limited to, the technical field of decoders for quasi-cyclic low-density parity-check (QC-LDPC, Quasi-Cyclic Low-Density Parity-Check) codes in wireless communications.
  • QC-LDPC: quasi-cyclic low-density parity-check
  • LDPC: low-density parity-check
  • LDPC codes have been widely used in digital video broadcasting (DVB, Digital Video Broadcasting), wireless local area networks (WLAN, Wireless Local Area Networks), Worldwide Interoperability for Microwave Access (WiMAX) and other communication systems. Facing the large-capacity, low-latency, high-reliability service requirements and the varied application scenarios of future 5G mobile communication systems, designing high-performance, low-cost and flexible LDPC decoders has become a major technical challenge in this field.
  • For example, the block error rate (BLER, Block Error Ratio) performance must reach at least 1E-3, and the throughput must reach 10 Gbps to 20 Gbps.
  • Guaranteeing BLER performance requires reliable and relatively complex decoding algorithms.
  • Improving throughput performance requires low decoding delay.
  • Most current LDPC decoders in the industry use the standard layered decoding method.
  • With this method, memory access conflicts occur during the iterative decoding process, which increases decoding delay, lowers throughput, and makes low-latency requirements difficult to meet.
  • It can be seen that the standard layered decoding method used in related LDPC decoding technology has the technical problem of low decoding efficiency.
  • In view of this, the embodiments of the present application are expected to provide a decoder, a decoding method, and a computer storage medium.
  • An embodiment of the present application provides a decoder including at least one calculation unit, where the calculation unit is configured to: obtain the soft bit information of a variable node n in the base matrix of an LDPC code to be decoded, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determine whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when the number of decoding iterations i is less than the decoding iteration threshold, determine whether the number of decoded layers k of the LDPC code is less than the number of rows a of the base matrix; when the number of decoded layers k is less than the number of rows a, determine the soft bit information of variable node n at layer k of the base matrix from the soft bit information of variable node n at layer k-1, and determine, also from the soft bit information of variable node n at layer k-1, the variable node information from variable node n to check node m at layer k+1;
  • use the variable node information from variable node n to check node m at layer k+1 to determine the check node information from check node m to variable node n at layer k+1; update the number of decoded layers k to k+1 and re-execute the determination of whether k is less than the number of rows a of the base matrix; when the number of decoded layers k is greater than or equal to the number of rows a, update the number of iterations i to i+1 and re-execute the determination of whether i is less than the preset decoding iteration threshold, until the number of decoding iterations i equals the decoding iteration threshold, at which point the soft bit information of variable node n at layer k is determined as the decoding result.
  • An embodiment of the present application further provides a decoding method, applied to at least one calculation unit of a decoder, the method including: obtaining the soft bit information of a variable node n to be decoded in the base matrix of an LDPC code, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determining whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when the number of decoding iterations i is less than the decoding iteration threshold, determining whether the number of decoded layers k of the LDPC code is less than the number of rows a of the base matrix; and, when the number of decoded layers k is less than the number of rows a, determining the soft bit information of variable node n at layer k of the base matrix from the soft bit information of variable node n at layer k-1.
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the decoding method described in one or more of the above embodiments are performed.
  • In the decoder provided by the embodiments of the present application, at least one calculation unit is configured to obtain the soft bit information of a variable node n to be decoded in the base matrix of an LDPC code, then determine whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold; if so, determine whether the number of decoded layers k of the LDPC code is less than the number of rows a of the base matrix.
  • If so, the soft bit information of variable node n at layer k of the base matrix is determined from the soft bit information of variable node n at layer k-1, and the variable node information for the next layer is generated from the same layer k-1 soft bit information; the check node information of the next layer is then generated from it.
  • Because the variable node information and check node information of the next layer are generated in advance, in preparation for generating the soft bit information of variable node n at the next layer, decoding is accelerated while the soft bit information of variable node n at layer k is obtained.
  • FIG. 1 is a schematic diagram of an optional structure of a decoder in an embodiment of the present application
  • FIG. 2 is a schematic diagram of an optional structure of a computing unit in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an optional arrangement of a basic matrix in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an optional storage format of a storage unit in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an optional structure of a storage unit according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an optional structure of a shift unit according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an optional structure of a routing unit according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an optional pipeline for hierarchical decoding according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an optional pipeline for layer overlap decoding in the embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a decoding method according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a computer storage medium in an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an optional structure of the decoder in the embodiment of the present application.
  • As shown, the decoder may include: a format conversion unit, storage units, positive shift units, inverse shift units, a routing unit, a pipeline control unit, and P computing units numbered 0 to P-1 in sequence, where P is the parallelism of the decoder.
  • The number of positive shift units is equal to the number of storage units, the number of inverse shift units is equal to the number of storage units, and the number of storage units is a positive integer greater than or equal to 2. The calculation unit is configured as follows:
  • obtain the soft bit information of a variable node n to be decoded, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determine whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when i is less than the threshold, determine whether the number of decoded layers k is less than the number of rows a of the base matrix; when k is less than a, determine the soft bit information of variable node n at layer k of the base matrix from the soft bit information of variable node n at layer k-1, and determine, from the same layer k-1 soft bit information, the variable node information from variable node n at layer k+1 to check node m.
  • At the first layer of decoding, the soft bit information of variable node n at the preceding layer is the soft bit information of variable node n to be decoded.
  • QC-LDPC codes are generally represented by a base matrix H(a×b) together with the Z-dimensional identity matrix, its cyclic shift matrices, and the zero matrix.
  • Expanding each element of the base matrix into the corresponding Z×Z sub-matrix yields the expanded matrix H(Z*a × Z*b), which is the sparse parity check matrix of the QC-LDPC code.
  • The expanded matrix includes Z*a check nodes and Z*b variable nodes.
  • Z is also called the expansion factor of the base matrix, and Z is a positive integer.
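For illustration only (not part of the patent disclosure), the expansion described above can be sketched as follows, assuming the common convention that a base-matrix entry of -1 denotes the Z×Z zero matrix and an entry s ≥ 0 denotes the identity matrix cyclically shifted by s:

```python
import numpy as np

def expand_base_matrix(H_base, Z):
    """Expand an a x b base matrix into the (Z*a) x (Z*b) sparse
    parity-check matrix of the QC-LDPC code.

    Assumed convention: entry -1 -> Z x Z zero matrix; entry s >= 0
    -> Z x Z identity matrix cyclically shifted by s.
    """
    a, b = H_base.shape
    H = np.zeros((Z * a, Z * b), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for r in range(a):
        for c in range(b):
            s = H_base[r, c]
            if s >= 0:
                # circulant sub-matrix: identity rolled by s (mod Z)
                H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, s % Z, axis=1)
    return H

# hypothetical 2 x 3 base matrix with expansion factor Z = 4
H_base = np.array([[0, 1, -1],
                   [2, -1, 0]])
H = expand_base_matrix(H_base, 4)   # shape (8, 12): Z*a check nodes, Z*b variable nodes
```

The expanded matrix has Z*a = 8 check nodes (rows) and Z*b = 12 variable nodes (columns), matching the description above.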
  • i is defined as the iteration index, running from the first iteration to the maximum of Max_iter iterations;
  • k is defined as the layer index of the base matrix H(a×b), running from layer 0 to layer a-1;
  • m is defined as the check row index within the k-th layer of the base matrix, running from row 1 to row Z;
  • n is the variable node index of the k-th layer, and n' is the variable node index of the (k+1)-th layer;
  • n ∈ N(m) denotes the set of all variable node indexes n connected to check node m, and similarly n' ∈ N(m), where 0 ≤ n ≤ b-1 and 0 ≤ n' ≤ b-1;
  • L_n^(k) is defined as the a priori (soft bit) information of variable node n at layer k; Q_{n,m}^(k) is defined as the variable node information from variable node n to check node m at layer k; R_{m,n}^(k) is defined as the check node information from check node m to variable node n at layer k, the old value being the one stored from the previous iteration.
  • The calculation unit can obtain the soft bit information of variable node n in the base matrix of the LDPC code to be decoded from the preprocessing unit of the decoder; before it can be obtained, the preprocessing unit must first generate the soft bit information of variable node n.
  • In some embodiments, the decoder further includes a preprocessing unit, and the preprocessing unit is connected to the calculation units.
  • The preprocessing unit determining the soft bit information of the variable node n to be decoded may include: obtaining the soft bit information to be decoded, and preprocessing it to obtain the soft bit information of variable node n to be decoded.
  • The preprocessing may include format conversion, storage, shifting, and routing of the soft bit information to be decoded, so that the calculation unit can obtain the soft bit information of variable node n to be decoded.
  • In some embodiments, the preprocessing unit may include a format conversion unit, a routing unit, at least one positive shift unit, and at least one storage unit, where the format conversion unit is connected to the at least one storage unit, the at least one storage unit is connected to the at least one positive shift unit, the at least one positive shift unit is connected to the routing unit, and the routing unit is connected to the at least one calculation unit.
  • The pipeline control unit is connected to the at least one positive shift unit and the at least one calculation unit respectively, and the positive shift units correspond to the storage units one to one. Correspondingly, the preprocessing of the soft bit information to be decoded to obtain the soft bit information of variable node n may include:
  • the format conversion unit divides the soft bit information to be decoded into blocks to obtain at least one soft bit information block to be decoded, and sends each block to the corresponding storage unit for storage; the storage unit receives the corresponding soft bit information blocks and stores them in groups; the pipeline control unit, on determining that a positive shift unit has received soft bit information to be decoded, triggers that positive shift unit, which performs a positive cyclic shift on the soft bit information to obtain the shifted soft bit information to be decoded;
  • the routing unit outputs the shifted soft bit information to at least one calculation unit according to a preset routing manner, so that the calculation unit obtains the soft bit information of variable node n to be decoded.
  • The format conversion unit is configured to divide the input codeword soft bit information of length Z*b into b soft bit information blocks; each block includes Z soft bit values and corresponds to one Z-dimensional expansion sub-matrix.
  • The format conversion unit stores each soft bit information block in a preset format according to the parallelism P and the value of Z, and inputs the converted blocks to the storage units.
  • The format conversion process can handle the soft bit information blocks in a ping-pong manner.
  • The storage units are configured to store the soft bit information blocks to be decoded; each storage unit stores one or more blocks and can simultaneously provide P soft bit values, supporting the parallel calculation of P check rows.
  • Owing to the orthogonality of base matrix columns, the number of storage units n_v may be less than or equal to the number of columns b of the base matrix.
  • The pipeline control unit is configured to trigger a positive shift unit once it has received the soft bit information to be decoded; the positive shift unit is configured to cyclically shift the soft bit information output from the storage unit and output the result.
  • The positive shift stage includes n_v shift units; each shift unit receives the P soft bit values output by the corresponding storage unit and performs a positive cyclic shift on them according to a preset shift value before output.
  • The routing unit is mainly configured to transmit the P soft bit values output by each shift unit, via a fixed routing connection, to the data receiving port of the variable node computing unit (VNU, Variable Node Unit) in a computing unit, or to transfer data in the opposite direction.
  • VNU: variable node computing unit
  • UPD: update unit (Update)
  • In some embodiments, the format conversion unit dividing the soft bit information to be decoded into at least one soft bit information block and sending each block to the corresponding storage unit for storage may include:
  • dividing the soft bit information to be decoded into b soft bit information blocks, where each block corresponds to the element values of the n-th column of the base matrix; and, among the b blocks, storing the blocks that correspond to mutually orthogonal columns of the base matrix in the same storage unit.
  • That is, each soft bit information block to be decoded corresponds to the element values of one column of the base matrix, and it is determined which columns of the base matrix are mutually orthogonal.
  • Columns with orthogonality are any two columns that never have non-"-1" elements in the same row; the soft bit information blocks to be decoded that correspond to such columns are stored in the same storage unit.
  • Each storage unit can group the soft bit information blocks to be decoded before storing them.
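As a hedged illustration of the column-orthogonality rule above (the greedy grouping strategy is an assumption, not necessarily the patent's exact assignment of columns to storage units):

```python
import numpy as np

def columns_orthogonal(H_base, c1, c2):
    """Columns c1 and c2 of the base matrix are 'orthogonal' if no row
    contains a non-(-1) element in both of them; such columns are never
    accessed in the same layer, so their soft-bit blocks can share one
    storage unit."""
    return not np.any((H_base[:, c1] != -1) & (H_base[:, c2] != -1))

def group_columns(H_base):
    """Greedy grouping of mutually orthogonal columns (illustrative
    strategy; the text only requires that co-stored columns be
    orthogonal, not this particular assignment)."""
    groups = []
    for c in range(H_base.shape[1]):
        for g in groups:
            if all(columns_orthogonal(H_base, c, other) for other in g):
                g.append(c)
                break
        else:
            groups.append([c])  # start a new storage unit
    return groups

# hypothetical base matrix: columns 0 and 1 never overlap in any row
H_base = np.array([[0, -1, 2],
                   [-1, 1, -1]])
```

Here `group_columns(H_base)` yields two groups, so two storage units suffice for three base-matrix columns, consistent with n_v ≤ b above.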
  • In some embodiments, the storage unit receiving the corresponding soft bit information blocks and storing them in groups may include:
  • dividing each soft bit information block to be decoded into Z soft bit values, grouping the Z values with the preset decoding parallelism P values per group, and storing the groups in the storage unit;
  • where Z is the expansion factor of the base matrix and Z is a positive integer.
  • That is, each soft bit information block to be decoded is divided into Z soft bit values and then stored as Z/P groups of P values each.
  • The decoding parallelism P above is the number of calculation units in the decoder.
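The Z/P grouping described above can be sketched as follows (illustrative; assumes Z is a multiple of P):

```python
def split_block(block, P):
    """Split one Z-entry soft-bit information block into Z/P groups of
    P values each, matching a storage unit that feeds P parallel
    calculation units per read. Assumes P divides Z."""
    Z = len(block)
    assert Z % P == 0, "decoding parallelism P must divide Z"
    return [block[g * P:(g + 1) * P] for g in range(Z // P)]

# Z = 8 soft bits, parallelism P = 4 -> 2 groups of 4 values
groups = split_block(list(range(8)), P=4)
```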
  • In some embodiments, the positive shift unit performing a positive cyclic shift on the soft bit information to be decoded to obtain the shifted soft bit information may include:
  • determining the shift value of the corresponding storage unit from the element value of the k-th row; and, according to the shift value, performing a positive cyclic shift on the received soft bit information to obtain the shifted soft bit information to be decoded.
  • Determining the shift value of the corresponding storage unit from the k-th row element value can be implemented by a preset algorithm formula, so that the shift value of each storage unit can be set flexibly; the positive shift unit cyclically shifts the received soft bit information of the corresponding storage unit according to that shift value and then outputs the shifted soft bit information to at least one calculation unit according to the preset routing manner, so that the soft bit information output by each shift unit is aligned with the corresponding check node for calculation.
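A minimal sketch of the positive cyclic shift and its inverse (the list-rotation implementation is illustrative; the patent's shift units operate on P soft bits per cycle in hardware):

```python
def cyclic_shift(block, shift):
    """Positive cyclic shift of a Z-entry soft-bit block by `shift`
    positions; the inverse shift unit applies the opposite rotation
    (negative shift) when writing decoding results back to storage."""
    Z = len(block)
    s = shift % Z
    return block[s:] + block[:s]

shifted = cyclic_shift([10, 11, 12, 13], 1)    # rotate left by 1
restored = cyclic_shift(shifted, -1)           # original order again
```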
  • After the soft bit information of variable node n in the base matrix of the LDPC code to be decoded is obtained, since the embodiments of the present application compute iteratively, the calculation unit first determines whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold: if less, the iteration is not yet complete; if equal, the iteration has ended.
  • When the calculation unit determines that the number of decoding iterations i is less than the decoding iteration threshold, the iteration has not been completed. Since decoding proceeds layer by layer in the embodiments of the present application, it is also necessary to determine whether the number of decoded layers k of the LDPC code is less than the number of rows a of the base matrix.
  • The calculation unit determines the soft bit information of variable node n at layer k of the base matrix from the soft bit information of variable node n at layer k-1, and determines, from the same layer k-1 soft bit information, the variable node information from variable node n at layer k+1 to check node m. Specifically, if the number of decoded layers k is determined to be less than the number of rows a of the base matrix, decoding has not yet been completed.
  • While determining the soft bit information of variable node n at layer k, the variable node information from variable node n at layer k+1 to check node m can simultaneously be determined from the already known layer k-1 soft bit information of the base matrix.
  • In some embodiments, determining the soft bit information of variable node n at layer k from the soft bit information of variable node n at layer k-1 may include: taking the soft bit information of variable node n at layer k-1, subtracting the check node information from check node m to variable node n at layer k stored from the previous iteration, and adding the newly computed check node information from check node m to variable node n at layer k; the result is determined as the soft bit information of variable node n at layer k.
  • The soft bit information of variable node n at layer k of the base matrix can be calculated by the following formula:
  • L_n^(k) = L_n^(k-1) - R_{m,n}^(k),old + R_{m,n}^(k),new, where L_n^(k) represents the soft bit information of variable node n at layer k, R_{m,n}^(k),old represents the check node information from check node m to variable node n at layer k stored from the previous iteration, and R_{m,n}^(k),new represents the newly computed check node information from check node m to variable node n at layer k.
  • Here n ∈ N(m).
  • The variable node information from variable node n to check node m at layer k+1 of the base matrix can be calculated by the following formula:
  • Q_{n,m}^(k+1) = L_n^(k-1) - R_{m,n}^(k+1),old, where Q_{n,m}^(k+1) represents the variable node information from variable node n to check node m at layer k+1, L_n^(k-1) represents the soft bit information of variable node n at layer k-1, and R_{m,n}^(k+1),old represents the check node information from check node m to variable node n at layer k+1 stored from the previous iteration.
  • Then the check node information from check node m to variable node n at layer k+1 of the base matrix is determined, in preparation for calculating the soft bit information of the next layer.
  • The check node information from check node m to variable node n at layer k+1 can be calculated by the following formula:
  • R_{m,n}^(k+1),new = f({Q_{j,m}^(k+1) : j ∈ N(m), j ≠ n}).
  • The function f can use one of the min-sum (MS, Min-Sum) algorithm, the normalized min-sum (NMS, Normalized Min-Sum) algorithm, and the offset min-sum (OMS, Offset Min-Sum) algorithm.
  • The input information of the function f is the variable node information of all variable nodes connected to check node m at layer k+1 except variable node n, i.e. j ∈ N(m), j ≠ n.
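A software sketch of one layer of the update described above, using the min-sum family for f (the data structures and the scaling factor alpha are illustrative assumptions; a hardware CNU/VNU computes all messages of a check row in parallel):

```python
import numpy as np

def layered_update(L, R, layer_rows, alpha=1.0):
    """One layer of layered decoding with a (normalized) min-sum f.

    L          : list of soft-bit values (LLRs), one per variable node
    R          : dict (m, n) -> check-to-variable message stored from
                 the previous visit to row m ("old" check node info)
    layer_rows : {check row m: [variable node indices n in N(m)]}
    alpha      : scaling factor (1.0 = plain min-sum; < 1 gives NMS)
    """
    for m, nbrs in layer_rows.items():
        # variable-to-check messages: Q = L - R_old
        Q = {n: L[n] - R.get((m, n), 0.0) for n in nbrs}
        for n in nbrs:
            others = [Q[j] for j in nbrs if j != n]   # j in N(m), j != n
            # min-sum f: product of signs times minimum magnitude
            sign = np.prod(np.sign(others))
            R_new = alpha * sign * min(abs(q) for q in others)
            # soft-bit update: L_new = Q + R_new = L - R_old + R_new
            L[n] = Q[n] + R_new
            R[(m, n)] = R_new
    return L, R
```

With alpha = 1.0 and a single check row, each soft bit is corrected by the sign product and minimum magnitude of the other messages on the row, matching the three formulas above.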
  • The calculation unit updates the number of decoded layers k to k+1 and re-executes the determination of whether k is less than the number of rows a of the base matrix; when k is greater than or equal to a, it updates the number of iterations i to i+1 and re-executes the determination of whether i is less than the preset decoding iteration threshold.
  • That is, the number of decoded layers k is updated to k+1, and control returns to the determination of whether k is less than the number of rows a of the base matrix; if k is greater than or equal to a, the current iteration is complete, the number of iterations i is updated to i+1, and S102 is performed again, until the number of decoding iterations i equals the decoding iteration threshold.
  • When the number of decoding iterations i of the calculation unit equals the decoding iteration threshold, the soft bit information of variable node n at layer k is determined as the decoding result of the soft bit information of variable node n to be decoded.
  • That is, once the iterations are finished, the iteration result is recorded as the decoding result: the soft bit information of variable node n at layer k is determined as the decoding result of the soft bit information of variable node n to be decoded.
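The outer control flow above (iteration-count check, layer-count check, and the per-layer update) can be summarized in a short sketch; `update_layer` stands in for the per-layer computation and is a hypothetical callback, not a unit named in the patent:

```python
def decode(L, base_layers, max_iter, update_layer):
    """Outer decoding control flow sketched from the text above:
    sweep layers 0..a-1 within each iteration until the iteration
    count i reaches the decoding iteration threshold (max_iter)."""
    R = {}                            # stored check-to-variable messages
    i = 0
    while i < max_iter:               # "is i less than the threshold?"
        k = 0
        while k < len(base_layers):   # "is k less than the row count a?"
            update_layer(L, R, base_layers[k])
            k += 1                    # update k to k + 1 and re-check
        i += 1                        # update i to i + 1 and re-check
    return L                          # final soft bits as the decoding result
```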
  • the decoder further includes at least one Inverse shift unit, at least one inverse shift unit corresponds to the storage unit one by one; after the number of decoding iterations i is equal to the decoding iteration threshold, the method further includes:
  • the routing unit receives the decoding result from at least one computing unit, and outputs the decoding result to at least one inverse shift unit according to a preset routing method; the pipeline control unit determines that the inverse shift unit receives the decoding result and triggers the inverse shift.
  • the bit unit and the inverse shift unit output the decoded result to the corresponding storage unit in a reverse cyclic shift according to the shift value.
  • as described above, the shift value of each storage unit can be set flexibly; the inverse shift unit reversely cyclically shifts the received soft bit information of the variable node n according to the shift value of the corresponding storage unit and stores the output in the corresponding storage unit.
  • FIG. 2 is a schematic diagram of an optional structure of a calculation unit in the embodiment of the present application.
  • the calculation unit is configured to complete the calculation of the variable node information of a check row, the calculation of the check node information, the storage of the check node information, and the update calculation of the variable node soft bit information.
  • the P computing units complete the calculation of P check rows in parallel.
  • Each computing unit includes VNU, Check Node Computing Unit (CNU, Check Node Unit), UPD, and Check Node Information Storage Unit (CRAM, Check Random-Access Memory).
  • VNU configured to complete the calculation of variable node information.
  • Each VNU simultaneously receives the check node information of all the variable nodes connected to a check node and completes the calculation of the information of all the variable nodes connected to that check node; that is, the variable node information of the same check row is calculated in parallel.
  • CNU configured to complete the calculation of check node information.
  • Each CNU receives the variable node information output by the VNU at the same time, and completes the calculation of the check node information from the check node to all variable nodes connected to it; that is, all check node information of the same check row is calculated in parallel.
  • CNU outputs the calculated check node information to CRAM for update storage, and at the same time to UPD for update calculation.
  • CRAM, configured to complete the update and storage of check node information: it outputs the check node information obtained in the previous iteration for the VNU to complete the calculation of variable node information, and receives and stores the check node information output by the CNU during the current iteration.
  • UPD configured to complete the calculation of updating the soft bit information of a variable node.
  • FIG. 3 is a schematic diagram of an optional arrangement of the basic matrix in the embodiment of the present application.
  • the basic matrix H in this example is shown in FIG. 3; the basic matrix is H(5×9) with expansion factor Z = 16. That is, the basic matrix H is composed of 5×9 cyclic shift sub-matrices of size 16×16; the non-"-1" elements in the matrix represent the shift values of the cyclic shift sub-matrices, and the sub-matrices at the "-1" positions are zero matrices;
  • the basic matrix includes 5 layers, namely layer0, layer1, layer2, layer3, and layer4; each layer has 16 check rows, and the 16 check rows in each layer adopt a partially parallel computing mode.
  • the partial parallelism used in this example is P = 4.
  • the embodiment of the present application does not specifically limit the partial parallelism P.
  • the first group includes: c0, c4, c8, c12;
  • the second group includes: c1, c5, c9, c13;
  • the third group includes: c2, c6, c10, c14;
  • the fourth group includes: c3, c7, c11, c15.
  • the four check nodes in each group are processed in parallel, and each group is processed serially.
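The grouping above follows a simple stride pattern: group g collects every (Z/P)-th check row starting from row g. A small sketch, assuming this stride-based grouping (function name illustrative):

```python
def check_row_groups(Z, P):
    """Split the Z check rows of one layer into Z // P groups of P rows
    each; the P rows inside a group are processed in parallel, and the
    groups are processed serially."""
    G = Z // P                         # stride between rows of a group
    return [[g + G * r for r in range(P)] for g in range(G)]

check_row_groups(Z=16, P=4)
# reproduces the grouping in the text:
# [[0, 4, 8, 12], [1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15]]
```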
  • FIG. 4 is a schematic diagram of an optional storage format of a storage unit in the embodiment of the present application.
  • each soft bit information block is converted to the corresponding storage format according to the parallelism P and the expansion factor Z.
  • the 16 soft bit information values of each soft bit information block, LLR0 to LLR15, are divided into 4 groups, and the groups are stored at the addresses Addr0, Addr1, Addr2, and Addr3 across RAM0, RAM1, RAM2, and RAM3, respectively, where the first group includes: LLR0, LLR4, LLR8, LLR12; the second group includes: LLR1, LLR5, LLR9, LLR13; the third group includes: LLR2, LLR6, LLR10, LLR14; the fourth group includes: LLR3, LLR7, LLR11, LLR15.
  • each group of soft-bit information can be stored in a "spiral" -like manner.
  • the write address can be obtained by "lookup table” or calculation.
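The "spiral" placement can be reproduced mechanically: group g lives at address g, rotated by g across the P RAMs, so a cyclic shift of the block can later be realized by an address offset plus a rotation. A sketch under the stated assumptions (P RAMs per storage unit, Z/P addresses per block; names illustrative):

```python
def spiral_layout(Z, P):
    """Map the Z LLRs of one soft bit information block to (addr, ram)
    slots: RAM r at address a holds LLR a + (Z // P) * ((r - a) % P),
    i.e. group a rotated by a across the P RAMs."""
    G = Z // P                                # addresses per block
    layout = [[None] * P for _ in range(G)]   # layout[addr][ram]
    for addr in range(G):
        for ram in range(P):
            j = (ram - addr) % P              # position inside the group
            layout[addr][ram] = addr + G * j
    return layout

layout = spiral_layout(Z=16, P=4)
# Addr0 holds LLR0, LLR4, LLR8, LLR12 in RAM0..RAM3;
# Addr1 holds LLR13, LLR1, LLR5, LLR9 (rotated by one); and so on,
# matching the read patterns listed for the storage units below.
```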
  • the codeword to be decoded in this example includes a total of 9 soft-bit information blocks.
  • the soft-bit information storage unit receives the soft-bit information blocks output by the format conversion unit, and each soft-bit information block is stored separately or collectively in a storage unit based on the column orthogonality characteristics of the basic matrix H.
  • the columns corresponding to the variable nodes VN0, VN1, VN2, and VN3 of the basic matrix H are not orthogonal to each other, and the corresponding soft bit information blocks are stored in different storage units, respectively.
  • the columns corresponding to the variable nodes VN4, VN5, VN6, VN7, and VN8 are orthogonal to each other, and the corresponding soft bit information blocks are stored together in the same storage unit.
  • the orthogonality of columns means that any two columns are never both non-"-1" elements at the same position, i.e., no row of the basic matrix contains a non-"-1" element in both columns.
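Column orthogonality as defined here can be checked mechanically. The sketch below uses a made-up 3×3 base matrix for illustration, not the actual base matrix of FIG. 3:

```python
def columns_orthogonal(H, c1, c2):
    """Two base-matrix columns are orthogonal if no row contains a
    non-"-1" element in both of them at the same time."""
    return all(H[r][c1] == -1 or H[r][c2] == -1 for r in range(len(H)))

# Hypothetical base matrix ("-1" marks a zero sub-matrix):
H = [[ 2, -1,  5],
     [-1,  0, -1],
     [ 7, -1, -1]]
columns_orthogonal(H, 0, 1)   # True: never both non-(-1) in any row
columns_orthogonal(H, 0, 2)   # False: row 0 has 2 and 5
```

Soft bit information blocks whose columns pass this check can share one storage unit, which is what reduces the storage unit count below.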
  • FIG. 5 is a schematic diagram of an optional structure of a storage unit in the embodiment of the present application.
  • the storage bit width of each storage unit is equal to 4 times the soft bit information bit width. Storage unit 0 stores the soft bit information block of the variable node VN0;
  • storage unit 1 stores the soft bit information block of the variable node VN1;
  • storage unit 2 stores the soft bit information block of the variable node VN2;
  • storage unit 3 stores the soft bit information block of the variable node VN3;
  • storage unit 4 stores the soft bit information blocks of the variable nodes VN4 to VN8, whose starting storage addresses are Addr0, Addr4, Addr8, Addr12, and Addr16, respectively. The soft bit information of these variable nodes uses a common storage unit; this storage method reduces the storage resource consumption of the decoder.
  • the first calculation of Layer0 corresponds to the first set of check lines c0, c4, c8, and c12.
  • the initial read address of storage unit 0 can be obtained according to the following formula:
  • Init_addr0 = mod(H_{0,0}, P)    (4)
  • the storage unit 1 outputs the soft bit information {VN1LLR10, VN1LLR14, VN1LLR2, VN1LLR6} stored at the address Addr2;
  • the storage unit 2 outputs the soft bit information {VN2LLR13, VN2LLR1, VN2LLR5, VN2LLR9} stored at the address Addr1;
  • the storage unit 3 outputs the soft bit information {VN3LLR7, VN3LLR11, VN3LLR15, VN3LLR3} stored at the address Addr3;
  • the storage unit 4 outputs the soft bit information {VN4LLR0, VN4LLR4, VN4LLR8, VN4LLR12} stored at the address Addr0.
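Combining formula (4) with the reads listed for the four calculations of Layer0, the address each storage unit reads for check-row group g simply advances cyclically from its initial address. A sketch; the per-unit initial addresses below are inferred from the listed reads, not stated explicitly in the text:

```python
def read_addr(init_addr, g, num_addrs):
    """Read address for the g-th check-row group of a layer: start at
    the unit's initial address (formula (4): mod of the shift value)
    and advance cyclically through the num_addrs = Z // P addresses."""
    return (init_addr + g) % num_addrs

# Initial addresses inferred for storage units 0..4 in Layer0:
init = {0: 0, 1: 2, 2: 1, 3: 3, 4: 0}
[read_addr(init[u], 1, 4) for u in range(5)]
# second calculation: Addr1, Addr3, Addr2, Addr0, Addr1,
# matching the reads listed for the second group below.
```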
  • FIG. 6 is a schematic diagram of an optional structure of a shift unit according to an embodiment of the present application. As shown in FIG. 6, it includes five positive shift units (shift unit 0, shift unit 1, shift unit 2, shift unit 3, shift unit 4); each positive shift unit corresponds to a storage unit, receives in parallel the 4 soft bit information values output by the storage unit for each calculation, and completes the rotation operation of the 4 soft bit information values according to a preset shift value.
  • FIG. 7 is a schematic diagram of an optional structure of a routing unit in an embodiment of the present application. As shown in FIG. 7, the soft bit information output by each positive shift unit is permuted to the corresponding check node calculation through fixed connections.
  • because the variable node information of orthogonal columns uses common storage units, the number of storage units is reduced; since the number of positive shift units equals the number of storage units, the number of shift units is reduced accordingly, which lowers the hardware resource consumption of the shift units and also greatly reduces the connection complexity of the shift network.
  • the information replacement method of the variable node and the check node during the first calculation of Layer0 is as follows:
  • VN0LLR4, VN1LLR6, VN2LLR5, VN3LLR11 and VN4LLR0 participate in the calculation of the check line c0;
  • VN0LLR8, VN1LLR10, VN2LLR9, VN3LLR15 and VN4LLR4 participate in the calculation of check line c4;
  • VN0LLR12, VN1LLR14, VN2LLR13, VN3LLR3 and VN4LLR8 participate in the calculation of check line c8;
  • VN0LLR0, VN1LLR2, VN2LLR1, VN3LLR7, and VN4LLR12 participate in the calculation of the check line c12;
  • the second calculation of Layer0 corresponds to the second set of check lines c1, c5, c9, and c13.
  • the read method of soft bit information is as follows:
  • the storage unit 0 outputs the soft bit information {VN0LLR13, VN0LLR1, VN0LLR5, VN0LLR9} stored at the address Addr1;
  • the storage unit 1 outputs the soft bit information {VN1LLR7, VN1LLR11, VN1LLR15, VN1LLR3} stored at the address Addr3;
  • the storage unit 2 outputs the soft bit information {VN2LLR10, VN2LLR14, VN2LLR2, VN2LLR6} stored at the address Addr2;
  • the storage unit 3 outputs the soft bit information {VN3LLR0, VN3LLR4, VN3LLR8, VN3LLR12} stored at the address Addr0;
  • the storage unit 4 outputs the soft bit information {VN4LLR13, VN4LLR1, VN4LLR5, VN4LLR9} stored at the address Addr1.
  • the information replacement method of the variable node and check node for the second calculation of Layer0 is as follows:
  • VN0LLR5, VN1LLR7, VN2LLR6, VN3LLR12 and VN4LLR1 participate in the calculation of the check line c1;
  • VN0LLR9, VN1LLR11, VN2LLR10, VN3LLR0 and VN4LLR5 participate in the calculation of check line c5;
  • VN0LLR13, VN1LLR15, VN2LLR14, VN3LLR4 and VN4LLR9 participate in the calculation of check line c9;
  • VN0LLR1, VN1LLR3, VN2LLR2, VN3LLR8 and VN4LLR13 participate in the calculation of check line c13;
  • the third calculation of Layer0 corresponds to the third set of check lines c2, c6, c10, and c14.
  • the soft bit information is read as follows:
  • the storage unit 0 outputs the soft bit information {VN0LLR10, VN0LLR14, VN0LLR2, VN0LLR6} stored at the address Addr2;
  • the storage unit 1 outputs the soft bit information {VN1LLR0, VN1LLR4, VN1LLR8, VN1LLR12} stored at the address Addr0;
  • the storage unit 2 outputs the soft bit information {VN2LLR7, VN2LLR11, VN2LLR15, VN2LLR3} stored at the address Addr3;
  • the storage unit 3 outputs the soft bit information {VN3LLR13, VN3LLR1, VN3LLR5, VN3LLR9} stored at the address Addr1;
  • the storage unit 4 outputs the soft bit information {VN4LLR10, VN4LLR14, VN4LLR2, VN4LLR6} stored at the address Addr2.
  • the information replacement method of the variable node and check node for the third calculation of Layer0 is as follows:
  • VN0LLR6, VN1LLR8, VN2LLR7, VN3LLR13, and VN4LLR2 participate in the calculation of check line c2;
  • VN0LLR10, VN1LLR12, VN2LLR11, VN3LLR1 and VN4LLR6 participate in the calculation of check line c6;
  • VN0LLR14, VN1LLR0, VN2LLR15, VN3LLR5, and VN4LLR10 participate in the calculation of check line c10;
  • VN0LLR2, VN1LLR4, VN2LLR3, VN3LLR9, and VN4LLR14 participate in the calculation of check line c14;
  • the fourth calculation of Layer0 corresponds to the fourth set of check lines c3, c7, c11, and c15.
  • the soft bit information is read as follows:
  • the storage unit 0 outputs the soft bit information {VN0LLR7, VN0LLR11, VN0LLR15, VN0LLR3} stored at the address Addr3;
  • the storage unit 1 outputs the soft bit information {VN1LLR13, VN1LLR1, VN1LLR5, VN1LLR9} stored at the address Addr1;
  • the storage unit 2 outputs the soft bit information {VN2LLR0, VN2LLR4, VN2LLR8, VN2LLR12} stored at the address Addr0;
  • the storage unit 3 outputs the soft bit information {VN3LLR10, VN3LLR14, VN3LLR2, VN3LLR6} stored at the address Addr2;
  • the storage unit 4 outputs the soft bit information {VN4LLR7, VN4LLR11, VN4LLR15, VN4LLR3} stored at the address Addr3.
  • the information replacement method of the variable node and check node of the fourth calculation of Layer0 is as follows:
  • VN0LLR7, VN1LLR9, VN2LLR8, VN3LLR14 and VN4LLR3 participate in the calculation of check line c3;
  • VN0LLR11, VN1LLR13, VN2LLR12, VN3LLR2 and VN4LLR7 participate in the calculation of check line c7;
  • VN0LLR15, VN1LLR1, VN2LLR0, VN3LLR6 and VN4LLR11 participate in the calculation of the check line c11;
  • VN0LLR3, VN1LLR5, VN2LLR4, VN3LLR10, and VN4LLR15 participate in the calculation of check line c15;
  • One calculation unit completes the calculation of one check line.
  • the decoder of this example needs 4 parallel calculation units to complete the calculation of 4 check rows in parallel each time, and completes the calculation of the 16 check rows in one layer in 4 passes.
  • VNU, CNU, and UPD in each calculation unit use a pipeline method to complete the calculation of all check rows in a layer, and a layered pipeline method is used between layers.
  • FIG. 8 is a schematic diagram of an optional pipeline for layered decoding in the embodiment of the present application. As shown in FIG. 8, the calculation of the k-th layer is based on the calculation results of the (k-1)-th layer.
  • the 4 check rows of data for the first calculation of each layer are first input in parallel to the corresponding calculation units.
  • the first VNU calculation process is recorded as VNU0; the first CNU calculation process is recorded as CNU0; the first UPD calculation process is recorded as UPD0; and so on. When the fourth calculation's VNU3 process, CNU3 process, and UPD3 process are all completed, the pipeline calculation of this layer ends and the pipeline of the next layer starts.
  • VNU, CNU, and UPD calculation units all have long pipeline idle time, which leads to lower pipeline efficiency.
  • Assume that the calculation delay of one VNU pipeline is T_vnu, the calculation delay of one CNU pipeline is T_cnu, the calculation delay of one UPD pipeline is T_upd, and the interval between successive calculations within a single layer is one unit time. In this embodiment, one iteration requires 5 layers, and the decoding delay of one iteration is:
  • T_iter = (T_vnu + T_cnu + T_upd + 4) × 5    (5)
  • the pipeline control unit outputs enable signals: it controls the pipeline process in which the positive shift units read soft bit information from the storage units for shift permutation, controls the pipeline process in which the VNU calculation units perform variable node information calculation, controls the pipeline process in which the CNU calculation units perform check node information calculation, controls the pipeline process in which the UPD calculation units update the soft bit information, and controls the pipeline process in which the inverse shift units shift and permute the new soft bit information and update the storage units.
  • the decoder in this example uses a layered pipeline mode to implement the standard layered decoding process for QC-LDPC codes.
  • the parallelism of the entire decoder is 4, which requires 4 computing units to complete 4 check lines at the same time.
  • the soft bit information of variable nodes with orthogonal columns shares storage units, reducing the number of storage units from 9 to 5 and the number of forward/reverse shift units from 9 to 5; because the number of shift units is reduced, the complexity of the routing unit between the shift units and the calculation units is also greatly reduced. It can thus be seen that the storage method of the decoder in the embodiment of the present application can reduce the storage resource consumption and hardware implementation complexity of the decoder.
  • the decoder in the embodiment of the present application adopts a partially parallel decoding method with a fixed parallelism P and stores the soft-bit information blocks in a "spiral" manner; without increasing the complexity of the decoder, it can decode QC-LDPC codes with different expansion factors Z, providing high decoding flexibility.
  • FIG. 9 is a schematic diagram of an optional pipeline for layer overlapping decoding in the embodiment of the present application.
  • the decoding method can also be applied to the above decoder, in the same manner as the above layered decoding method.
  • this example uses the fast layered decoding mode; for simplicity, only the content that differs from the layered decoding method is described.
  • One calculation unit completes the calculation of one check line.
  • the decoder of this example needs 4 parallel calculation units to complete the calculation of 4 check rows in parallel each time, and completes the calculation of the 16 check rows in one layer in 4 passes.
  • VNU, CNU, and UPD in each calculation unit use a pipelined method to complete the calculation of all check rows in a layer, and a layer-overlapping pipeline method is used between layers.
  • the schematic diagram of the layer-overlapping pipeline is shown in FIG. 9.
  • the VNU calculation, CNU calculation, and UPD calculation of the k-th layer all depend on the soft-bit information calculation results of the (k-2)-th layer. Therefore, it is not necessary to wait for the last UPD pipeline calculation of the (k-1)-th layer to complete, and the VNU pipeline calculation of the k-th layer can be started in advance. In this example, the VNU0 pipeline calculation of layer 1 can be started after the VNU3 pipeline calculation of layer 0 starts; after the VNU3 pipeline calculation of layer 1 starts, the VNU0 pipeline calculation of layer 2 can be started; and so on, after the VNU3 pipeline calculation of layer 3 starts, the VNU0 pipeline of layer 4 can be started; this iterative calculation ends when the UPD3 pipeline calculation of layer 4 is completed.
  • the layer-overlap flow method is adopted, and the flow calculation between layers is continuously performed, which can ensure that the VNU, CNU, and UPD calculation units have no idle waiting time during the entire iterative calculation process, and the pipeline efficiency is high.
  • Assume that the calculation delay of one VNU pipeline is T_vnu, the calculation delay of one CNU pipeline is T_cnu, the calculation delay of one UPD pipeline is T_upd, and each calculation interval within a single layer is one unit time. In this embodiment, one iteration requires a total of 5 layers with 4 pipeline calculations per layer, and the decoding delay of one iteration is:
  • T_iter = T_vnu + T_cnu + T_upd + 4 × 5    (6)
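Equations (5) and (6) make the benefit of layer overlap concrete: in the overlapped mode the VNU/CNU/UPD chain delay is paid only once per iteration instead of once per layer. A small sketch; the delay values in the example are chosen for illustration, not taken from the patent:

```python
def t_standard(t_vnu, t_cnu, t_upd, calcs_per_layer=4, layers=5):
    """Eq. (5): each layer waits for its full VNU/CNU/UPD chain."""
    return (t_vnu + t_cnu + t_upd + calcs_per_layer) * layers

def t_overlapped(t_vnu, t_cnu, t_upd, calcs_per_layer=4, layers=5):
    """Eq. (6): layers overlap, so the chain delay is paid only once."""
    return t_vnu + t_cnu + t_upd + calcs_per_layer * layers

t_standard(3, 2, 2)     # (3 + 2 + 2 + 4) * 5 = 55 unit times
t_overlapped(3, 2, 2)   # 3 + 2 + 2 + 4 * 5 = 27 unit times
```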
  • the pipeline control unit outputs enable signals: it controls the pipeline process in which the positive shift units read soft bit information from the storage units for shift permutation, controls the pipeline process in which the VNU calculation units perform variable node information calculation, controls the pipeline process in which the CNU calculation units perform check node information calculation, controls the pipeline process in which the UPD calculation units update the soft bit information, and controls the pipeline process in which the inverse shift units shift the new soft bit information and update the soft bit information storage units. It can be seen that the pipeline control unit is configured to complete the pipeline control during the iterative decoding process.
  • the layered pipeline is adopted in the standard layered decoding mode, and the layer-overlapping pipeline is adopted in the fast layered decoding mode; by generating the enable signals of all pipeline stages, the pipeline control unit implements the switching function between pipeline modes.
  • the decoder in this example uses the layer overlap pipeline mode to realize the fast hierarchical decoding process of QC-LDPC codes.
  • the parallelism of the entire decoder is 4, which requires 4 computing units to complete 4 check lines at the same time.
  • the soft bit information storage units of variable nodes with orthogonal columns are shared, reducing the number of variable node soft bit information storage units from 9 to 5 and the number of forward/reverse shift units from 9 to 5; because the number of shift units is reduced, the complexity of the routing unit between the shift units and the calculation units is also greatly reduced. It can be seen that, by using the storage method of the decoder in the embodiment of the present application, the storage resource consumption and hardware implementation complexity of the decoder can be reduced.
  • the decoder in the embodiment of the present application uses a partially parallel decoding method with a fixed parallelism P and stores the soft bit information blocks in a "spiral" manner; without increasing decoder complexity, it can decode QC-LDPC codes with different expansion factors Z, and has high decoding flexibility.
  • in summary, the decoder includes at least one calculation unit configured to obtain the to-be-decoded soft bit information of a variable node n in the basic matrix of an LDPC code, then determine whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold; if so, determine whether the number of decoding layers k of the LDPC code is less than the number of rows a of the basic matrix; if so, determine the soft bit information of the k-th layer variable node n of the basic matrix according to the soft bit information of the (k-1)-th layer variable node n, and, also from the soft bit information of the (k-1)-th layer variable node n, determine the variable node information from the (k+1)-th layer variable node n to the check node m; then determine the check node information from the (k+1)-th layer check node m to the variable node n according to the variable node information from the (k+1)-th layer variable node n to the check node m.
  • in this way, while the soft bit information of the k-th layer variable node n is obtained, the variable node information of the next layer is generated, and then the check node information of the next layer is generated; generating the variable node information and check node information of the next layer in advance prepares for generating the soft bit information of the next layer's variable node n, which can speed up decoding.
  • after the soft bit information of the k-th layer variable node n is obtained, k is updated to k + 1 and the next layer is decoded, until layer a decoding is completed; the number of iterations i is then updated to i + 1, and iteration continues until the number of iterations equals the preset number of iterations, yielding the soft bit information. That is, in the embodiment of the present application, at the same time that the soft bit information of the k-th layer variable node n of the basic matrix is generated, the generation of the next layer's variable node information is started, and then the next layer's check node information is generated.
  • on this basis, when decoding the (k+1)-th layer, the soft bit information of the (k+1)-th layer variable node n of the basic matrix can be determined directly from the soft bit information of the k-th layer variable node n.
  • compared with the related art, the calculation of the (k+1)-th layer variable node information and the (k+1)-th layer check node information can be started without waiting for the soft bit information of the k-th layer variable node n to be generated, which accelerates decoding and thus improves the decoding efficiency of LDPC decoding.
  • FIG. 10 is a schematic flowchart of the decoding method in the embodiment of the present application. As shown in FIG. 10, the method is applied to at least one calculation unit of a decoder.
  • the above decoding method may include:
  • S1002 Determine whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold
  • S1005 Determine the check node information from the check node m to the variable node n of the (k+1)-th layer of the base matrix according to the variable node information from the (k+1)-th layer variable node n to the check node m of the basic matrix;
  • S1006 Update the number of decoding layers k to k + 1, and execute again to determine whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix;
  • the number of decoding iterations i is equal to the decoding iteration threshold, and the soft bit information of the k-th layer variable node n is determined as the decoding result of the soft bit information of the variable node n to be decoded;
  • the soft bit information of the variable node n of the (k-1)-th layer is the to-be-decoded soft bit information of the variable node n, where n is an integer greater than or equal to 0 and less than the number of columns b of the basic matrix.
  • the computing unit determines the variable node information from the variable node n of the (k+1)-th layer of the basic matrix to the check node m based on the soft bit information of the variable node n of the (k-1)-th layer: the check node information from the (k+1)-th layer check node m to the variable node n obtained in the previous iteration is subtracted from the soft bit information of the (k-1)-th layer variable node n, and the resulting difference is determined as the variable node information from the (k+1)-th layer variable node n of the basic matrix to the check node m.
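The subtract/add steps described here reduce to simple arithmetic per (variable node, check node) pair. A hedged sketch using illustrative notation (Q for a variable-to-check message, R for a check-to-variable message; names are not from the patent):

```python
def variable_node_message(soft_prev, r_old):
    """Q(n, m): the layer's input soft bit information minus the
    previous iteration's check node information R_old(m, n)."""
    return soft_prev - r_old

def updated_soft_bit(soft_prev, r_old, r_new):
    """Updated soft bit information of variable node n: the old check
    node contribution is replaced by the newly computed one."""
    return soft_prev - r_old + r_new

q = variable_node_message(5.0, 1.5)     # 5.0 - 1.5 = 3.5
l = updated_soft_bit(5.0, 1.5, 2.0)     # 5.0 - 1.5 + 2.0 = 5.5
```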
  • FIG. 11 is a schematic structural diagram of a computer storage medium in the embodiment of the present application.
  • the computer storage medium 110 stores a computer program which, when executed by a processor, implements the steps of the decoding method described in one or more of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Error Detection And Correction (AREA)

Abstract

Embodiments of the present application disclose a decoder. At least one calculation unit of the decoder is configured to: obtain the to-be-decoded soft bit information of a variable node n in a base matrix; determine whether the number of decoding iterations i is less than a decoding iteration threshold; when determining that i is less than the decoding iteration threshold, determine whether the number of decoding layers k is less than the number of rows a of the base matrix; when determining that k is less than a, determine the soft bit information of the k-th layer variable node n according to the soft bit information of the (k-1)-th layer variable node n, and determine the (k+1)-th layer variable node information according to the soft bit information of the (k-1)-th layer variable node n; determine the (k+1)-th layer check node information according to the (k+1)-th layer variable node information; update k to k+1 and re-execute the determination of whether k is less than a; when determining that k is greater than or equal to a, update i to i+1 and re-execute the determination of whether i is less than the decoding iteration threshold, until i equals the decoding iteration threshold. Embodiments of the present application also disclose a decoding method and a computer storage medium.

Description

Decoder, decoding method, and computer storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 201810717244.X, filed on June 29, 2018, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to, but is not limited to, the technical field of decoders for quasi-cyclic low-density parity-check (QC-LDPC, Quasi-Cyclic Low-Density Parity-Check) codes in wireless communication.
BACKGROUND
With the continuous development of multimedia broadcasting, wireless communication, and very-large-scale integration (VLSI, Very Large Scale Integration) technologies, low-density parity-check (LDPC) codes, as the forward error correction (FEC, Forward Error Correction) codes closest to the Shannon limit, have been selected as the data channel coding scheme for enhanced mobile broadband (eMBB, Enhance Mobile Broadband) services in future fifth-generation (5G, 5th-Generation) mobile communication technology.
LDPC codes have been widely applied in communication systems such as digital video broadcasting (DVB, Digital Video Broadcasting), wireless local area networks (WLAN, Wireless Local Area Networks), and worldwide interoperability for microwave access (WiMAX, Worldwide Interoperability for Microwave Access). Facing the large-capacity, low-latency, and high-reliability service requirements of future 5G mobile communication systems, as well as the requirements of various application scenarios, designing high-performance, low-cost, and flexible LDPC decoders has become the main technical challenge in this field.
To achieve high bit error rate and throughput performance in an LDPC decoder, the block error ratio (BLER, Block Error Ratio) performance is required to reach at least below 1E-3, and the throughput is required to reach 10 Gbps to 20 Gbps. Guaranteeing bit error rate performance requires reliable, complex decoding algorithms, while improving throughput performance requires very low decoding latency. Most current LDPC decoders in the industry adopt the standard layered decoding (Standard Layered-decoding) method; decoders using this method suffer from memory access conflicts during iterative decoding, which increases decoding latency and lowers throughput, making it difficult to meet low-latency requirements. It can thus be seen that the standard layered decoding method used in related LDPC decoding techniques has the technical problem of low decoding efficiency.
SUMMARY
In view of this, embodiments of the present application are expected to provide a decoder, a decoding method, and a computer storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides a decoder including at least one calculation unit, wherein the calculation unit is configured to: obtain the to-be-decoded soft bit information of a variable node n in a base matrix of an LDPC code, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determine whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when determining that i is less than the decoding iteration threshold, determine whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is less than a, determine the soft bit information of the k-th layer variable node n of the base matrix according to the soft bit information of the (k-1)-th layer variable node n of the base matrix, and determine the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m according to the soft bit information of the (k-1)-th layer variable node n of the base matrix; determine the check node information from the (k+1)-th layer check node m of the base matrix to the variable node n according to the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m; update the number of decoding layers k to k+1, and re-execute the determination of whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is greater than or equal to a, update the number of iterations i to i+1, and re-execute the determination of whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold, until i equals the decoding iteration threshold, and determine the soft bit information of the k-th layer variable node n as the decoding result of the to-be-decoded soft bit information of the variable node n; wherein the initial value of i is 0, the initial value of k is 0, and when k = 0 or 1, the soft bit information of the (k-1)-th layer variable node n is the to-be-decoded soft bit information of the variable node n.
In a second aspect, an embodiment of the present application further provides a decoding method applied to at least one calculation unit of a decoder, the method including: obtaining the to-be-decoded soft bit information of a variable node n in a base matrix of an LDPC code, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determining whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when determining that i is less than the decoding iteration threshold, determining whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is less than a, determining the soft bit information of the k-th layer variable node n of the base matrix according to the soft bit information of the (k-1)-th layer variable node n of the base matrix, and determining the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m according to the soft bit information of the (k-1)-th layer variable node n of the base matrix; determining the check node information from the (k+1)-th layer check node m of the base matrix to the variable node n according to the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m; updating the number of decoding layers k to k+1, and re-executing the determination of whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is greater than or equal to a, updating the number of iterations i to i+1, and re-executing the determination of whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold, until i equals the decoding iteration threshold, and determining the soft bit information of the k-th layer variable node n as the decoding result of the to-be-decoded soft bit information of the variable node n; wherein the initial value of i is 0, the initial value of k is 0, and when k = 0 or 1, the soft bit information of the (k-1)-th layer variable node n is the to-be-decoded soft bit information of the variable node n.
In a third aspect, an embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the decoding method described in one or more of the above embodiments.
With the decoder, method, and computer storage medium provided by the embodiments of the present application, first, the decoder includes at least one calculation unit configured to: obtain the to-be-decoded soft bit information of a variable node n in the base matrix of an LDPC code, then determine whether the number of decoding iterations i of the LDPC code is less than the preset number of decoding iterations; if so, determine whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; if so, determine the soft bit information of the k-th layer variable node n of the base matrix according to the soft bit information of the (k-1)-th layer variable node n, determine the variable node information from the (k+1)-th layer variable node n to the check node m according to the soft bit information of the (k-1)-th layer variable node n, and then determine the check node information from the (k+1)-th layer check node m to the variable node n according to the variable node information from the (k+1)-th layer variable node n to the check node m. In this way, while the soft bit information of the k-th layer variable node n is obtained, the variable node information of the next layer is generated and then the check node information of the next layer is generated; compared with the prior art, generating the next layer's variable node information and check node information in advance prepares for generating the soft bit information of the next layer's variable node n, which can speed up decoding. After the soft bit information of the k-th layer variable node n is obtained, k is updated to k+1 and the next layer is decoded, until layer a decoding is completed; the number of iterations i is then updated to i+1, and iteration continues until the number of iterations equals the preset number of iterations, thereby obtaining the soft bit information. That is, in the embodiments of the present application, at the same time that the soft bit information of the k-th layer variable node n of the base matrix is generated, the generation of the next layer's variable node information is started, and then the next layer's check node information is generated; on this basis, when decoding the (k+1)-th layer, the soft bit information of the (k+1)-th layer variable node n of the base matrix can be determined directly from the soft bit information of the k-th layer variable node n. Compared with the related art, the calculation of the (k+1)-th layer variable node information and the (k+1)-th layer check node information can be started without waiting for the soft bit information of the k-th layer variable node n to be generated, which accelerates decoding and thus improves the decoding efficiency of LDPC decoding.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an optional structure of a decoder in an embodiment of the present application;
FIG. 2 is a schematic diagram of an optional structure of a calculation unit in an embodiment of the present application;
FIG. 3 is a schematic diagram of an optional arrangement of a base matrix in an embodiment of the present application;
FIG. 4 is a schematic diagram of an optional storage format of a storage unit in an embodiment of the present application;
FIG. 5 is a schematic diagram of an optional structure of a storage unit in an embodiment of the present application;
FIG. 6 is a schematic diagram of an optional structure of a shift unit in an embodiment of the present application;
FIG. 7 is a schematic diagram of an optional structure of a routing unit in an embodiment of the present application;
FIG. 8 is a schematic diagram of an optional pipeline for layered decoding in an embodiment of the present application;
FIG. 9 is a schematic diagram of an optional pipeline for layer-overlapping decoding in an embodiment of the present application;
FIG. 10 is a schematic flowchart of a decoding method in an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a computer storage medium in an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.
The related standard layered decoding method must complete the soft bit information update of the current layer in each iterative calculation before the variable node information of the next layer can be calculated, which affects the decoding speed of LDPC decoding. An embodiment of the present application provides a decoder. FIG. 1 is a schematic diagram of an optional structure of the decoder in the embodiment of the present application. Referring to FIG. 1, the decoder may include: a format conversion unit, storage units, positive shift units, inverse shift units, a routing unit, a pipeline control unit, and P calculation units numbered 0 to P-1, where P is the parallelism of the decoder, the number of positive shift units equals the number of storage units, the number of inverse shift units equals the number of storage units, and the number of storage units is a positive integer greater than or equal to 2; the calculation unit is configured to:
obtain the to-be-decoded soft bit information of a variable node n in the base matrix of an LDPC code, where n is an integer greater than or equal to 0 and less than the number of columns b of the base matrix; determine whether the number of decoding iterations i of the LDPC code is less than a preset decoding iteration threshold; when determining that i is less than the decoding iteration threshold, determine whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is less than a, determine the soft bit information of the k-th layer variable node n of the base matrix according to the soft bit information of the (k-1)-th layer variable node n, and determine the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m according to the soft bit information of the (k-1)-th layer variable node n; determine the check node information from the (k+1)-th layer check node m of the base matrix to the variable node n according to the variable node information from the (k+1)-th layer variable node n of the base matrix to the check node m; update the number of decoding layers k to k+1, and re-execute the determination of whether the number of decoding layers k of the LDPC code is less than the number of rows a of the base matrix; when determining that k is greater than or equal to a, update the number of iterations i to i+1, and re-execute the determination of whether the number of decoding iterations i of the LDPC code is less than the preset decoding iteration threshold, until i equals the decoding iteration threshold, and determine the soft bit information of the k-th layer variable node n as the decoding result of the to-be-decoded soft bit information of the variable node n; wherein the initial value of i is 0, the initial value of k is 0, and when k = 0 or 1, the soft bit information of the (k-1)-th layer variable node n is the to-be-decoded soft bit information of the variable node n.
Here, a QC-LDPC code is generally represented by a base matrix H(a×b) together with the Z-dimensional identity matrix, its cyclic shift matrices, and zero matrices; expanding the base matrix H(a×b) with the Z-dimensional identity matrix, its cyclic shift matrices, and zero matrices yields the expanded matrix H(Z·a×Z·b), which is the sparse parity-check matrix of the QC-LDPC code. This matrix includes Z·a check nodes and Z·b variable nodes; Z is also called the expansion factor of the base matrix, and Z is a positive integer.
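The expansion just described (base matrix entries replaced by Z×Z cyclically shifted identities or zero matrices) can be sketched as follows. This is an illustrative implementation under one common shift convention (entry s shifts the identity's ones s columns to the right); the actual convention used in the patent is not specified here:

```python
def expand_base_matrix(H_base, Z):
    """Expand an a x b base matrix into the (Z*a) x (Z*b) parity-check
    matrix: an entry s >= 0 becomes the Z x Z identity cyclically
    shifted by s, and -1 becomes the Z x Z zero matrix."""
    a, b = len(H_base), len(H_base[0])
    H = [[0] * (Z * b) for _ in range(Z * a)]
    for i in range(a):
        for j in range(b):
            s = H_base[i][j]
            if s >= 0:                       # -1 stays a zero block
                for r in range(Z):
                    H[Z * i + r][Z * j + (r + s) % Z] = 1
    return H
```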
For ease of description, define i as the iteration index, representing the 1st through the maximum Max_iter-th iteration; define k as the layer index of the base matrix H(a×b), representing layer 0 through layer a-1; define m as the check row index within one layer of the base matrix, representing row 1 through row Z; define n as the variable node index of the k-th layer and n' as the variable node index of the (k+1)-th layer, where n∈N(m) denotes the set of indices of all variable nodes connected to check node m, and likewise n'∈N(m), with 0 ≤ n ≤ b-1 and 0 ≤ n' ≤ b-1. Further define L_k(n) as the a priori information of variable node n at layer k; define Q_k(n, m) as the variable node information from variable node n to check node m at layer k; define R_k^old(m, n) as the old check node information from check node m to variable node n at layer k; and likewise define R_k^new(m, n) as the new check node information from check node m to variable node n at layer k.
Specifically, the calculation unit may obtain the to-be-decoded soft bit information of the variable node n in the base matrix of the LDPC code from a preprocessing unit of the decoder; before this, the preprocessing unit must first determine and generate the to-be-decoded soft bit information of the variable node n. To generate it, in an optional embodiment, the decoder further includes a preprocessing unit connected to the calculation unit; before the calculation unit obtains the to-be-decoded soft bit information of the variable node n, the preprocessing unit correspondingly determines the to-be-decoded soft bit information of the variable node n, which may include: obtaining the to-be-decoded soft bit information; and preprocessing the to-be-decoded soft bit information to obtain the to-be-decoded soft bit information of the variable node n.
其中,上述预处理可以包括对待译码的软比特信息进行格式转换、存储、移位和路由,从而使得计算单元可以获取到变量节点n的待译码的软比特信息。为了实现对待译码的软比特信息的预处理,在一种可选的实施例中,预处理单元可以包括格式转换单元、路由单元、至少一个正移位单元和至少一个存储单元;其中,上述格式转换单元连接至上述至少一个存储单元,上述至少一个存储单元连接至上述至少一个正移位单元,上述至少一个正移位单元连接至上述路由单元,上述路由单元连接至上述至少一 个计算单元,上述流水线控制单元分别连接至上述至少一个正移位单元和上述至少一个计算单元,正移位单元与存储单元一一对应;相应地,预处理单元对待译码的软比特信息进行预处理,得到变量节点n的待译码的软比特信息,可以包括:
格式转换单元对待译码的软比特信息进行分块,得到至少一个待译码的软比特信息块,将至少一个待译码的软比特信息块发送至对应的存储单元进行存储;存储单元接收对应的待译码的软比特信息块,进行分组存储;流水线控制单元确定正移位单元接收到待译码的软比特信息,触发正移位单元,正移位单元对待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息;
路由单元按照预设的路由方式,将移位后的待译码的软比特信息输出至至少一个计算单元,使得计算单元获取到变量节点n的待译码的软比特信息。
具体来说,格式转换单元,配置为将输入的长度为Z*b的待译码码字的软比特信息分为b个软比特信息块,每个软比特信息块包含Z个软比特信息,对应一个Z维的扩展子矩阵,同时,格式转换单元根据并行度P和Z值,将每个软比特信息块以预设格式进行存储,将完成格式转换后的软比特信息块输入到存储单元,其中,格式转换过程可以通过乒乓方式对软比特信息块进行处理。
存储单元,配置为存储待译码的软比特信息块,每个存储单元存储一个或多个软比特信息块,并且能够同时提供P个软比特信息,进行P个校验行的并行计算,存储单元的个数n_v根据基础矩阵列的正交特性,可以小于或等于基础矩阵列数b。
流水线控制单元配置为在确定正移位单元接收到待译码的软比特信息时,触发正移位单元;正移位单元配置为将存储单元输出的软比特信息进行移位旋转并输出,正移位单元包含n_v个移位单元,每个移位单元接收对应存储单元输出的P个软比特信息,并根据预设的移位值完成对P个软比特信息的正循环移位输出。
路由单元,主要配置为将每个移位单元输出的P个软比特信息按照固定的路由连接方式传送到计算单元中变量节点计算单元(VNU,Variable Node Unit)的数据接收端口,或者配置为将变量节点软比特信息更新单元(UPD,Update)各个数据端口输出的软比特信息,按照固定的路由连接方式传送到每个逆移位单元,每个逆移位单元接收P个软比特信息。
其中,为了节省存储单元的个数,减少硬件资源的消耗,在一种可选的实施例中,格式转换单元对待译码的软比特信息进行分块,得到至少一个待译码的软比特信息块,将至少一个待译码的软比特信息块发送至对应的存储单元进行存储,可以包括:
根据基础矩阵的列数b,将待译码的软比特信息分成b个待译码的软比特信息块;其中,每个待译码的软比特信息块与基础矩阵的第n列的元素值相对应;从b个待译码的软比特信息块中,将基础矩阵中具有正交性的列对应的待译码的软比特信息块,存储至同一存储单元中。
也就是说,在得到b个待译码的软比特信息块之后,每个待译码的软比特信息块与基础矩阵每列的元素值相对应,判断基础矩阵中哪些列之间具有正交性,其中,具有正交性的列指的是任意两列之间不同时为非“-1”元素,那么,将两列之间不同时为非“-1”元素对应的待译码的软比特信息块存储至同一个存储单元中。
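列正交性的判断与分组可用如下假设性的Python示意说明(H为任意取值的示例基础矩阵,分组采用简单贪心策略,函数命名均为本示例假设,并非本申请的具体实现):

```python
# 假设性示意:两列正交 = 任意一行中,两列不同时为非"-1"元素;
# 同组(两两正交)的软比特信息块可存入同一存储单元。
def columns_orthogonal(H, n1, n2):
    """判断基础矩阵H的第n1列与第n2列是否正交。"""
    return all(not (row[n1] != -1 and row[n2] != -1) for row in H)

def group_orthogonal_columns(H):
    """贪心分组:依次将每一列放入第一个与其完全正交的组,否则新建一组。"""
    groups = []
    for n in range(len(H[0])):
        for g in groups:
            if all(columns_orthogonal(H, n, m) for m in g):
                g.append(n)
                break
        else:
            groups.append([n])
    return groups

# 示例:第2、3列相互正交,可共用一个存储单元;第0、1列与其他列均不正交
H = [
    [0, 2, -1, 6],
    [1, 3, 5, -1],
]
groups = group_orthogonal_columns(H)   # [[0], [1], [2, 3]]
```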
每一个存储单元可以将待译码的软比特信息块进行分组,然后再存储,在一种可选的实施例中,存储单元接收对应的待译码的软比特信息块,进行分组存储,可以包括:
将待译码的软比特信息块分成Z个待译码的软比特信息,按照每组的个数为预设的译码并行度P,将Z个待译码的软比特信息进行分组,并存储至存储单元中;
其中,Z为基础矩阵的扩展因子,且Z为正整数。
也就是说,上述将每个待译码的软比特信息块分成Z个待译码的软比特信息,然后按照每组P个,将每个待译码的软比特信息块分成Z/P组,存储至存储单元中。
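按并行度P对一个软比特信息块分组的过程,可用如下假设性的Python示意(组内元素间隔取Z/P,与下文实例中{LLR0,LLR4,LLR8,LLR12}等分组方式一致;函数命名为本示例假设):

```python
# 假设性示意:将长度为Z的软比特信息块分成Z/P组,每组P个,供P个校验行并行计算。
def group_block(llrs, P):
    """第g组包含下标 g, g+Z/P, g+2Z/P, ... 的Z/P间隔元素,共P个。"""
    Z = len(llrs)
    assert Z % P == 0
    G = Z // P                       # 组数 Z/P
    return [[llrs[g + j * G] for j in range(P)] for g in range(G)]

# Z=16、P=4 时分成4组,与实例中第一组{LLR0,LLR4,LLR8,LLR12}等一致
grouped = group_block(list(range(16)), 4)
```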
其中,上述译码并行度P为译码器中计算单元的个数。
为了实现各个计算单元的计算过程,在一种可选的实施例中,正移位单元对待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息,包括:
根据基础矩阵第k行的元素值,确定对应的存储单元的移位值;根据移位值,对接收到的待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息。
其中,上述根据第k行的元素值确定对应存储单元的移位值,可以通过预设的算法公式来实现,这样可以灵活地设置每个存储单元的移位值,正移位单元按照对应存储单元的移位值对接收到的对应的存储单元的待译码的软比特信息进行正循环移位,然后按照预设的路由方式,将移位后的待译码的软比特信息输出至至少一个计算单元,从而将每个移位单元输出的软比特信息置换到对应的校验节点进行计算。
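正循环移位本身可用如下假设性的Python示意(移位方向为本示例的假设,实际方向取决于硬件约定):

```python
# 假设性示意:对P个软比特信息做正循环移位。
def positive_cyclic_shift(vals, shift):
    """输出下标j处取输入下标 (j+shift) mod P 的值(方向为本示例假设)。"""
    P = len(vals)
    return [vals[(j + shift) % P] for j in range(P)]

shifted = positive_cyclic_shift([0, 4, 8, 12], 1)   # [4, 8, 12, 0]
```

在该约定下,移位1个位置即把{LLR0,LLR4,LLR8,LLR12}置换为{LLR4,LLR8,LLR12,LLR0},使每个软比特被送往对应的校验行。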
在获取到LDPC码的基础矩阵中变量节点n的待译码的软比特信息之后,由于本申请实施例中采用迭代的方法进行计算,所以,首先计算单元判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值,若小于,说明迭代还未完成,若等于,说明迭代结束。
计算单元若确定译码迭代次数i小于译码迭代阈值,说明迭代还未完成,由于在本申请实施例中是按层进行译码的,所以还需要判断LDPC码的译码层数k是否小于基础矩阵的行数a。
计算单元在确定译码层数k小于基础矩阵的行数a时,根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k层变量节点n的软比特信息,且根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息。具体来说,确定译码层数k小于基础矩阵的行数a时,说明译码还未完成;在具体实施过程中,在确定基础矩阵第k层变量节点n的软比特信息时,可以同时根据已经知晓的基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息。
在一种可选的实施例中,计算单元确定译码层数k小于基础矩阵的行数a时,根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k层变量节点n的软比特信息,且根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,可以包括:将基础矩阵第k-1层变量节点n的软比特信息,减去上一次迭代中基础矩阵第k层校验节点m到变量节点n的旧校验节点信息,加上本次迭代中基础矩阵第k层校验节点m到变量节点n的新校验节点信息,得到的值确定为基础矩阵第k层变量节点n的软比特信息。
具体来说,基础矩阵第k层变量节点n的软比特信息可以通过以下公式计算:

$$L_n^{(k)} = L_n^{(k-1)} - R_{m,n}^{(k),old} + R_{m,n}^{(k),new} \qquad (1)$$

其中,$L_n^{(k)}$ 表示第k层变量节点n的软比特信息,$L_n^{(k-1)}$ 表示第k-1层变量节点n的软比特信息,$R_{m,n}^{(k),old}$ 表示上一次迭代中基础矩阵第k层校验节点m到变量节点n的校验节点信息,$R_{m,n}^{(k),new}$ 表示本次迭代中基础矩阵第k层校验节点m到变量节点n的校验节点信息,这里,n∈N(m)。
基础矩阵第k+1层变量节点n到校验节点m的变量节点信息可以通过以下公式计算:

$$Q_{n,m}^{(k+1)} = L_n^{(k-1)} - R_{m,n}^{(k+1),old} \qquad (2)$$

其中,$Q_{n,m}^{(k+1)}$ 表示基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,$L_n^{(k-1)}$ 表示基础矩阵第k-1层变量节点n的软比特信息,$R_{m,n}^{(k+1),old}$ 表示上一次迭代中基础矩阵第k+1层校验节点m到变量节点n的校验节点信息。
在根据基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,确定基础矩阵第k+1层校验节点m到变量节点n的校验节点信息时,在计算得到第k+1层变量节点n到校验节点m的变量节点信息之后,基础矩阵第k+1层校验节点m到变量节点n的校验节点信息可以通过以下公式计算:

$$R_{m,n}^{(k+1),new} = f\left(\{\,Q_{j,m}^{(k+1)} : j\in N(m),\ j\neq n\,\}\right) \qquad (3)$$

其中,$R_{m,n}^{(k+1),new}$ 表示基础矩阵第k+1层校验节点m到变量节点n的校验节点信息,函数f可以采用最小和(MS,Min-Sum)算法、归一化最小和(NMS,Normalized Min-Sum)算法、偏移量最小和(OMS,Offset Min-Sum)算法中的一种,函数f的输入信息为除去变量节点n以外的所有与第k+1层校验节点m相连的变量节点信息 $Q_{j,m}^{(k+1)}$,其中j∈N(m),j≠n。
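上述软比特更新、变量节点信息计算与校验节点信息计算三个公式,可用如下假设性的Python示意(函数f取最小和MS算法;变量命名为本示例假设,并非本申请的具体实现):

```python
# 假设性示意:一层内针对某一对 (m, n) 的三项计算。
def min_sum(Q_in, n):
    """校验节点信息:对除n以外与校验节点m相连的变量节点信息,
    取符号之积与最小幅值(最小和算法)。"""
    others = [q for j, q in Q_in.items() if j != n]
    sign = -1 if sum(q < 0 for q in others) % 2 else 1
    return sign * min(abs(q) for q in others)

def layer_update(L_prev, R_old_k, R_new_k, R_old_k1):
    """第k层软比特更新,以及第k+1层变量节点信息的提前生成。"""
    L_k = L_prev - R_old_k + R_new_k     # 第k层变量节点n的软比特信息
    Q_k1 = L_prev - R_old_k1             # 第k+1层变量节点n到校验节点m的变量节点信息
    return L_k, Q_k1

R = min_sum({0: -2.0, 1: 3.0, 2: -5.0}, n=1)   # 两个负号相乘为正,最小幅值为2.0
L_k, Q_k1 = layer_update(1.5, 0.5, 1.0, 0.25)  # (2.0, 1.25)
```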
计算单元将译码层数k更新为k+1,重新执行判断LDPC码的译码层数k是否小于基础矩阵的行数a,确定译码层数k大于或等于基础矩阵的行数a时,将迭代次数i更新为i+1,重新执行判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值;
在完成第k层译码之后,将译码层数k更新为k+1,返回判断LDPC码的译码层数k是否小于基础矩阵的行数a,若k大于或等于基础矩阵的行数a,完成本次迭代,迭代次数i更新为i+1,返回执行判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值的步骤,直至译码迭代次数i等于译码迭代阈值。
计算单元在译码迭代次数i等于译码迭代阈值时,将第k层变量节点n的软比特信息确定为变量节点n的待译码的软比特信息的译码结果。
这里,译码迭代次数i等于译码迭代阈值时,说明完成了整个迭代过程,那么,将迭代结果记为译码结果,即,将第k层变量节点n的软比特信息确定为变量节点n的待译码的软比特信息的译码结果。
其中,k的初始值为0,且当k=0、1或者2时,第k-2层的变量节点n到校验节点m的变量节点信息为变量节点n的待译码的软比特信息。
另外,计算单元在完成译码之后,得到变量节点n的软比特信息,需要将变量节点n的软比特信息存储起来。在一种可选的实施例中,译码器还包括至少一个逆移位单元,至少一个逆移位单元与存储单元一一对应;在直至译码迭代次数i等于译码迭代阈值之后,该方法还包括:
路由单元接收来自至少一个计算单元的译码结果,按照预设的路由方式,将译码结果输出至至少一个逆移位单元;流水线控制单元确定逆移位单元接收到译码结果,触发逆移位单元,逆移位单元按照移位值,将译码结果逆循环移位输出至对应的存储单元。
其中,可以灵活地设置每个存储单元的移位值,逆移位单元按照对应存储单元的移位值,对接收到的变量节点n的软比特信息进行逆循环移位,输出至对应的存储单元存储起来。
下面举实例来对上述一个或多个实施例中译码器进行说明。
图2为本申请实施例中计算单元的一种可选的结构示意图,如图2所示,计算单元配置为完成一个校验行的变量节点信息的计算,校验节点信息的计算,校验节点信息的存储以及变量节点软比特信息的更新计算,P个计算单元并行完成P个校验行的并行计算。每个计算单元包括VNU,校验节点计算单元(CNU,Check Node Unit)、UPD和校验节点信息存储单元(CRAM,Check Random-Access Memory)。
VNU,配置为完成变量节点信息的计算,每个VNU同时接收到CRAM输出的与该校验节点相连的所有变量节点的校验节点信息,并同时完成与该校验节点相连的所有变量节点信息的计算,即同一校验行的变量节点信息全并行计算。
CNU,配置为完成校验节点信息的计算,每个CNU同时接收到VNU输出的变量节点信息,完成该检验节点到与其相连的所有变量节点的校验节点信息的计算,即同一校验行的所有校验节点信息全并行计算,CNU将计算得到的校验节点信息输出到CRAM进行更新存储,同时输出到UPD用于更新计算。
CRAM,配置为完成校验节点信息的更新和存储,输出前一次迭代计算得到的校验节点信息用于VNU完成变量节点信息的计算,接收当前迭代过程中的CNU输出的校验节点信息并更新存储。
UPD,配置为完成变量节点软比特信息的更新计算。接收CNU输出的新的校验节点信息,从CRAM中读取旧的校验节点信息,全并行完成该校验行的所有变量节点软比特信息的更新计算。
基于上述译码器,图3为本申请实施例中基础矩阵的一种可选的排布示意图,在本实例中的基础矩阵H如图3所示,该基础矩阵为H(5×9),扩展因子Z=16,即,基础矩阵H由5×9个16×16的循环移位子矩阵构成,矩阵中的非“-1”元素表示该循环移位子矩阵的移位值,“-1”元素位置的子矩阵为零阵;该基础矩阵包括5层,即layer0、layer1、layer2、layer3和layer4,每层有16个校验行,每层中的16个校验行采用部分并行计算方式,该实例中采用的部分并行度P=4,本申请实施例对部分并行度P不做具体限制,每一层中的16个校验节点c0~c15被分成Z/P=4组,第一组包括:c0、c4、c8、c12;第二组包括:c1、c5、c9、c13;第三组包括:c2、c6、c10、c14;第四组包括:c3、c7、c11、c15。每组内的四个校验节点并行处理,每组串行处理。
图4为本申请实施例中存储单元的一种可选的存储格式示意图,如图4所示,将每个软比特信息块根据并行度P和扩展因子Z进行相应的存储格式转换,本实例将每个软比特信息块的16个软比特信息:LLR0~LLR15,分为4组,每组分别存储在RAM0、RAM1、RAM2、RAM3的地址Addr0、Addr1、Addr2、Addr3,其中,第一组包括:LLR0、LLR4、LLR8、LLR12;第二组包括:LLR1、LLR5、LLR9、LLR13;第三组包括:LLR2、LLR6、LLR10、LLR14;第四组包括:LLR3、LLR7、LLR11、LLR15。
为了能够以较快的速度将串行或并行输入的软比特信息块写入RAM0~3中,每组的软比特信息之间可以采用类似“螺旋”的方式进行存储,每个软比特信息的写地址可以采用“查找表”或者计算的方式得到。本实例的待译码码字中共包含9个软比特信息块,软比特信息存储单元接收格式转换单元输出的软比特信息块,并将每个软比特信息块根据基础矩阵H的列正交特性,单独或共同存储在存储单元中。
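这种“螺旋”存储可用如下假设性的Python示意说明(映射公式是依据下文各地址读出内容反推的假设,仅覆盖Z=16、P=4的本实例,并非本申请的具体实现):

```python
# 假设性示意:将Z个软比特按"螺旋"方式写入P个RAM的Z/P个地址,
# 使每个地址一次可读出同组的P个软比特信息。
def spiral_store(llrs, P):
    """ram[bank][addr]:地址addr存放第addr组,组内元素按addr循环旋转后写入各bank。"""
    Z = len(llrs)
    G = Z // P                                   # 地址数(组数)
    ram = [[None] * G for _ in range(P)]
    for addr in range(G):
        for bank in range(P):
            # 映射公式为本示例依据读出示例反推的假设
            ram[bank][addr] = llrs[addr + G * ((bank - addr) % P)]
    return ram

def read_addr(ram, addr):
    """一次读出某地址上的P个软比特信息。"""
    return [bank[addr] for bank in ram]

ram = spiral_store(list(range(16)), 4)
llrs_at_0 = read_addr(ram, 0)   # [0, 4, 8, 12],对应{LLR0, LLR4, LLR8, LLR12}
```

在此映射下,初始读地址按式(4)取Init_addr0=mod(4,4)=0时,读出的即为{LLR0,LLR4,LLR8,LLR12},与下文各地址的读出示例一致。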
本实例中基础矩阵H的变量节点VN0、VN1、VN2、VN3所对应的列互不正交,所对应的软比特信息块分别存储在不同的存储单元中,变量节点VN4、VN5、VN6、VN7、VN8所对应的列相互正交,所对应的软比特信息块共同存储在同一个存储单元中。
其中,列的正交性指的是任意两列之间相同位置不同时为非“-1”元素。
图5为本申请实施例中存储单元的一种可选的结构示意图,如图5所示,每个存储单元的存储位宽等于4倍软比特信息位宽,存储单元0,存储变量节点VN0的软比特信息块;存储单元1,存储变量节点VN1的软比特信息块;存储单元2,存储变量节点VN2的软比特信息块;存储单元3,存储变量节点VN3的软比特信息块;存储单元4,存储变量节点VN4、VN5、VN6、VN7、VN8的软比特信息块,每个软比特信息块的起始存储地址分别为Addr0、Addr4、Addr8、Addr12、Addr16,由于对列正交的变量节点信息采用共同的存储单元,这种存储方式降低了译码器的存储资源消耗。
每个存储单元每次输出一个地址上存储的4个软比特信息,5个存储单元同时输出20个软比特信息,并行参加4个校验行的计算,共需要Z/P=4次完成一层共16个校验的计算。
Layer0的第1次计算对应第一组校验行c0、c4、c8和c12,存储单元0的初始读地址根据下式可以得到:
Init_addr0 = mod(H_{0,0}, P)     (4)
计算为Init_addr0=mod(4,4)=0;因此输出地址Addr0存储的软比特信息{VN0LLR0,VN0LLR4,VN0LLR8,VN0LLR12}。
以此类推:
存储单元1输出地址Addr2存储的软比特信息{VN1LLR10,VN1LLR14,VN1LLR2,VN1LLR6};
存储单元2输出地址Addr1存储的软比特信息{VN2LLR13,VN2LLR1,VN2LLR5,VN2LLR9};
存储单元3输出地址Addr3存储的软比特信息{VN3LLR7,VN3LLR11,VN3LLR15,VN3LLR3};
存储单元4输出地址Addr0存储的软比特信息{VN4LLR0,VN4LLR4,VN4LLR8,VN4LLR12}。
通过正移位单元和路由单元完成每个校验节点软比特信息到对应校验行的信息置换。图6为本申请实施例中移位单元的一种可选的结构示意图,如图6所示,包含5个正移位单元(移位单元0,移位单元1,移位单元2,移位单元3,移位单元4),每个正移位单元对应一个存储单元,并行接收存储单元每次计算输出的4个软比特信息,并根据预设移位值完成4个软比特信息的循环移位操作。
图7为本申请实施例中路由单元的一种可选的结构示意图,如图7所示,采用固定连接的方式,将每个正移位单元输出的软比特信息置换到对应的校验节点进行计算。
由于对列正交的变量节点信息采用共同的存储单元,存储单元个数减少;又因为正移位单元的个数等于存储单元的个数,移位单元个数随之减少,降低了移位单元的硬件资源消耗,移位网络的连接复杂度也大大降低。
具体来说,变量节点和校验节点在Layer0的第1次计算过程中的信息置换方式如下:
软比特信息VN0LLR4、VN1LLR6、VN2LLR5、VN3LLR11和VN4LLR0参与校验行c0的计算;
软比特信息VN0LLR8、VN1LLR10、VN2LLR9、VN3LLR15和VN4LLR4参与校验行c4的计算;
软比特信息VN0LLR12、VN1LLR14、VN2LLR13、VN3LLR3和VN4LLR8参与校验行c8的计算;
软比特信息VN0LLR0、VN1LLR2、VN2LLR1、VN3LLR7和VN4LLR12参与校验行c12的计算;
这样就完成了一次译码计算过程中的软比特信息的读取,以及变量节点和校验节点间的信息置换,以此类推:
Layer0的第2次计算对应第2组校验行c1、c5、c9和c13,软比特信息读取方式如下:
存储单元0输出地址Addr1存储的软比特信息{VN0LLR13,VN0LLR1,VN0LLR5,VN0LLR9};
存储单元1输出地址Addr3存储的软比特信息{VN1LLR7,VN1LLR11,VN1LLR15,VN1LLR3};
存储单元2输出地址Addr2存储的软比特信息{VN2LLR10,VN2LLR14,VN2LLR2,VN2LLR6};
存储单元3输出地址Addr0存储的软比特信息{VN3LLR0,VN3LLR4,VN3LLR8,VN3LLR12};
存储单元4输出地址Addr1存储的软比特信息{VN4LLR13,VN4LLR1,VN4LLR5,VN4LLR9}。
Layer0的第2次计算的变量节点和校验节点的信息置换方式如下:
软比特信息VN0LLR5、VN1LLR7、VN2LLR6、VN3LLR12和VN4LLR1参与校验行c1的计算;
软比特信息VN0LLR9、VN1LLR11、VN2LLR10、VN3LLR0和VN4LLR5参与校验行c5的计算;
软比特信息VN0LLR13、VN1LLR15、VN2LLR14、VN3LLR4和VN4LLR9参与校验行c9的计算;
软比特信息VN0LLR1、VN1LLR3、VN2LLR2、VN3LLR8和VN4LLR13参与校验行c13的计算;
Layer0的第3次计算对应第3组校验行c2、c6、c10和c14,软比特信息读取方式如下:
存储单元0输出地址Addr2存储的软比特信息{VN0LLR10,VN0LLR14,VN0LLR2,VN0LLR6};
存储单元1输出地址Addr0存储的软比特信息{VN1LLR0,VN1LLR4,VN1LLR8,VN1LLR12};
存储单元2输出地址Addr3存储的软比特信息{VN2LLR7,VN2LLR11,VN2LLR15,VN2LLR3};
存储单元3输出地址Addr1存储的软比特信息{VN3LLR13,VN3LLR1,VN3LLR5,VN3LLR9};
存储单元4输出地址Addr2存储的软比特信息{VN4LLR10,VN4LLR14,VN4LLR2,VN4LLR6}。
Layer0的第3次计算的变量节点和校验节点的信息置换方式如下:
软比特信息VN0LLR6、VN1LLR8、VN2LLR7、VN3LLR13和VN4LLR2参与校验行c2的计算;
软比特信息VN0LLR10、VN1LLR12、VN2LLR11、VN3LLR1和VN4LLR6参与校验行c6的计算;
软比特信息VN0LLR14、VN1LLR0、VN2LLR15、VN3LLR5和VN4LLR10参与校验行c10的计算;
软比特信息VN0LLR2、VN1LLR4、VN2LLR3、VN3LLR9和VN4LLR14参与校验行c14的计算;
Layer0的第4次计算对应第4组校验行c3、c7、c11和c15,软比特信息读取方式如下:
存储单元0输出地址Addr3存储的软比特信息{VN0LLR7,VN0LLR11,VN0LLR15,VN0LLR3};
存储单元1输出地址Addr1存储的软比特信息{VN1LLR13,VN1LLR1,VN1LLR5,VN1LLR9};
存储单元2输出地址Addr0存储的软比特信息{VN2LLR0,VN2LLR4,VN2LLR8,VN2LLR12};
存储单元3输出地址Addr2存储的软比特信息{VN3LLR10,VN3LLR14,VN3LLR2,VN3LLR6};
存储单元4输出地址Addr3存储的软比特信息{VN4LLR7,VN4LLR11,VN4LLR15,VN4LLR3}。
Layer0的第4次计算的变量节点和校验节点的信息置换方式如下:
软比特信息VN0LLR7、VN1LLR9、VN2LLR8、VN3LLR14和VN4LLR3参与校验行c3的计算;
软比特信息VN0LLR11、VN1LLR13、VN2LLR12、VN3LLR2和VN4LLR7参与校验行c7的计算;
软比特信息VN0LLR15、VN1LLR1、VN2LLR0、VN3LLR6和VN4LLR11参与校验行c11的计算;
软比特信息VN0LLR3、VN1LLR5、VN2LLR4、VN3LLR10和VN4LLR15参与校验行c15的计算;
一个计算单元完成一个校验行的计算,本实例的译码器需要4个并行的计算单元每次计算并行完成4个校验行的计算,分4次完成一层内16个校验行的计算。其中,每个计算单元中的VNU、CNU以及UPD,采用流水线方式完成一层内的所有校验行的计算,层与层之间采用分层流水方式,图8为本申请实施例中分层译码的一种可选的流水线示意图,如图8所示,第k层的计算基于第k-1层的计算结果进行,每一层第一次计算的4个校验行数据首先并行输入到各自对应的计算单元进行计算,第一次VNU计算过程,记为VNU0;第一次CNU计算过程,记为CNU0;第一次UPD计算过程,记为UPD0;以此类推,当第四次计算的VNU4过程、CNU4过程和UPD4过程全部完成后,结束该层流水计算,并启动下一层流水。
采用分层流水方式可以保证每一层的计算都是基于前一层的最新结果来进行,即标准分层译码方法。在层与层之间的流水计算过程中,VNU、CNU和UPD计算单元都存在较长的流水空闲时间,导致流水线效率降低;具体来说,一次VNU流水计算时延为T_vnu,一次CNU流水计算时延为T_cnu,一次UPD流水计算时延为T_upd,单层内每次计算间隔时间为单位时间1,那么,在本实施例中一次迭代共需要5层流水,一次迭代译码时延为:
T_iter = (T_vnu + T_cnu + T_upd + 4) × 5      (5)
流水线控制单元输出使能信号,控制正移位单元从存储单元读取软比特信息进行移位置换的流水过程,控制VNU计算单元进行变量节点信息计算的流水过程,控制CNU计算单元进行校验节点信息计算的流水过程,控制UPD计算单元进行软比特信息更新的流水过程,控制逆移位单元对新软比特信息进行移位置换并更新存储单元的流水过程。
本实例中的译码器采用了分层流水线模式实现QC-LDPC码的标准分层译码过程,整个译码器的并行度为4,即需要4个计算单元同时完成4个校验行的并行计算。对正交列的变量节点软比特信息共用存储单元,将存储单元个数从9个减少到5个,同时正/逆移位单元个数也从9个减少到5个;由于移位单元个数减少,移位单元与计算单元之间的路由单元复杂度也大大降低。由此可见,采用本申请实施例译码器的存储方式,可以降低译码器的存储资源消耗及硬件实现复杂度。另外,本申请实施例的译码器采用固定并行度P的部分并行译码方式,对软比特信息块采用“螺旋”方式进行存储,在不增加译码器复杂度的前提下,能够对不同扩展因子Z的QC-LDPC码进行译码,译码灵活度高。
图9为本申请实施例中层交叠译码的一种可选的流水线示意图,该译码方法同样可以应用于上述译码器中。在本实例中,采用与上述分层译码方法相同的基础矩阵H、相同的扩展因子Z、相同的并行度P,因此本实例与分层译码方法只在译码模式上不同,本实例采用了快速分层译码模式。为简单起见,本实例只描述与分层译码方法不同的内容。
一个计算单元完成一个校验行的计算,本实例的译码器需要4个并行的计算单元每次计算并行完成4个校验行的计算,分4次完成一层内16个校验行的计算,每个计算单元中的VNU、CNU以及UPD,采用流水线方式完成一层内的所有校验行的计算,层与层之间采用层交叠流水方式,层交叠流水示意图如图9所示。
根据快速分层译码方法,相邻两层之间的计算不存在依赖关系,即,第k层的VNU计算、CNU计算以及UPD计算都依赖于第k-2层的软比特信息计算结果,因此不需要等待第k-1层的最后一次UPD流水计算完成,可以提前启动第k层的VNU流水计算;本实例中,第0层的VNU3流水计算开始后,即可启动第1层的VNU0流水计算;同理,第1层的VNU3流水计算开始后,即可启动第2层的VNU0流水计算;以此类推,第3层的VNU3流水计算开始后,即可启动第4层的VNU0流水计算;直至第4层UPD3流水计算完成,结束本次迭代计算。
采用层交叠流水方式,层与层之间的流水计算连续进行,可以保证VNU、CNU和UPD计算单元在整个迭代计算过程中不存在空闲等待时间,流水线效率高。若一次VNU流水计算时延为T_vnu,一次CNU流水计算时延为T_cnu,一次UPD流水计算时延为T_upd,单层内每次计算间隔时间为单位时间1,那么,在本实施例中一次迭代共需要5层、每层4次流水,一次迭代译码时延为:
T_iter = T_vnu + T_cnu + T_upd + 4 × 5      (6)
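式(5)与式(6)的时延对比可用如下假设性的Python示意(T_vnu等取值仅为示例假设):

```python
# 假设性示意:按式(5)、式(6)计算一次迭代的译码时延并对比。
def iter_latency_layered(T_vnu, T_cnu, T_upd, n_group, n_layer):
    """式(5):分层流水,每层须等上一层流水全部完成后才能启动。"""
    return (T_vnu + T_cnu + T_upd + n_group) * n_layer

def iter_latency_overlapped(T_vnu, T_cnu, T_upd, n_group, n_layer):
    """式(6):层交叠流水,各层连续启动,计算单元无空闲等待时间。"""
    return T_vnu + T_cnu + T_upd + n_group * n_layer

# 示例取值:T_vnu=2, T_cnu=3, T_upd=4,每层4次计算、共5层
T_std = iter_latency_layered(2, 3, 4, 4, 5)      # (2+3+4+4)*5 = 65
T_ovl = iter_latency_overlapped(2, 3, 4, 4, 5)   # 2+3+4+4*5 = 29
```

可见在该示例取值下,层交叠流水的一次迭代时延明显小于分层流水。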
流水线控制单元输出使能信号,控制正移位单元从存储单元读取软比特信息进行移位置换的流水过程,控制VNU计算单元进行变量节点信息计算的流水过程,控制CNU计算单元进行校验节点信息计算的流水过程,控制UPD计算单元进行软比特信息更新的流水过程,控制逆移位单元对新软比特信息进行移位置换并更新软比特信息存储单元的流水过程,可见,流水线控制单元,配置为完成译码迭代过程中的流水线控制,在标准分层译码模式下采用分层流水线,在快速分层译码模式下采用层交叠流水线,通 过生成流水线各级使能信号,来实现流水线模式的切换功能。
本实例中的译码器采用了层交叠流水线模式实现QC-LDPC码的快速分层译码过程,整个译码器的并行度为4,即需要4个计算单元同时完成4个校验行的并行计算,对正交列的变量节点软比特信息共用存储单元,将变量节点软比特信息存储单元个数从9个减少到5个,同时正/逆移位单元个数也从9个减少到5个,由于移位单元个数减少,移位单元与计算单元之间的路由单元复杂度也大大降低,由此可见,采用本申请实施例译码器的存储方式,可以降低译码器的存储资源消耗及硬件实现复杂度,另外,本申请实施例的译码器采用固定并行度P的部分并行译码方式,对软比特信息块采用“螺旋”方式进行存储,在不增加译码器复杂度的前提下,能够对不同扩展因子Z的QC-LDPC码进行译码,译码灵活度高。
本申请实施例所提供的译码器包括至少一个计算单元,该计算单元配置为获取LDPC码的基础矩阵中变量节点n的待译码的软比特信息;然后判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值,若小于,再判断LDPC码的译码层数k是否小于基础矩阵的行数a;若小于,根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k层变量节点n的软比特信息,且根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息;再根据基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,确定基础矩阵第k+1层校验节点m到变量节点n的校验节点信息。这样,在得到第k层变量节点n的软比特信息的同时,就生成下一层的变量节点信息,进而生成下一层的校验节点信息;与相关技术相比,提前生成下一层的变量节点信息和校验节点信息,为生成下一层变量节点n的软比特信息做好准备,从而可以加快译码速度。在得到第k层变量节点n的软比特信息之后,将k更新为k+1,进行下一层译码,直至完成a层译码;再将迭代次数i更新为i+1,进行下一次迭代,直至迭代次数等于预设的译码迭代阈值,从而得到译码结果。也就是说,在本申请实施例中,基础矩阵第k层变量节点n的软比特信息生成的同时,就启动并生成下一层的变量节点信息,进而生成下一层的校验节点信息;在此基础上,在进行第k+1层译码时,就可以直接根据第k层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n的软比特信息。与相关技术相比,不需要等待第k层的变量节点n的软比特信息生成完毕,就可以启动第k+1层的变量节点信息和第k+1层的校验节点信息的计算,加快了译码的速度,进而提高LDPC译码器的译码效率。
基于同一发明构思,本申请实施例还提供一种译码方法,图10为本申请实施例中译码方法的流程示意图,如图10所示,该方法应用于一译码器的至少一个计算单元中,其中,上述译码方法可以包括:
S1001:获取LDPC码的基础矩阵中变量节点n的待译码的软比特信息;
S1002:判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值;
S1003:确定译码迭代次数i小于译码迭代阈值时,判断LDPC码的译码层数k是否小于基础矩阵的行数a;
S1004:确定译码层数k小于基础矩阵的行数a时,根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k层变量节点n的软比特信息,且根据基础矩阵第k-1层变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息;
S1005:根据基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,确定基础矩阵第k+1层校验节点m到变量节点n的校验节点信息;
S1006:将译码层数k更新为k+1,重新执行判断LDPC码的译码层数k是否小于基础矩阵的行数a;
S1007:确定译码层数k大于或等于基础矩阵的行数a时,将迭代次数i更新为i+1,重新执行判断LDPC码的译码迭代次数i是否小于预设的译码迭代阈值,
S1008:译码迭代次数i等于译码迭代阈值,将第k层变量节点n的软比特信息确定为变量节点n的待译码的软比特信息的译码结果;
其中,i的初始值为0,k的初始值为0,且当k=0或者1时,第k-1层的变量节点n的软比特信息为变量节点n的待译码的软比特信息;其中,n为大于或等于0且小于基础矩阵的列数b的整数。
在一种可选的实施例中,计算单元根据基础矩阵第k-1层的变量节点n的软比特信息,确定基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,包括:
将基础矩阵第k-1层变量节点n的软比特信息,减去上一次迭代中基础矩阵第k+1层校验节点m到变量节点n的校验节点信息,得到的差值确定为基础矩阵第k+1层变量节点n到校验节点m的变量节点信息。
基于前述实施例,本实施例提供一种计算机存储介质,图11为本申请实施例中的计算机存储介质的结构示意图,如图11所示,该计算机存储介质110存储有计算机程序,上述计算机程序被处理器执行时实现如上述一个或多个实施例中所述的译码方法的步骤。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,这些均属于本申请的保护之内。

Claims (11)

  1. 一种译码器,包括至少一个计算单元,其中,所述计算单元配置为:
    获取低密度奇偶校验LDPC码的基础矩阵中变量节点n的待译码的软比特信息;其中,n为大于或等于0且小于所述基础矩阵的列数b的整数;
    判断所述LDPC码的译码迭代次数i是否小于预设的译码迭代阈值;
    确定译码迭代次数i小于所述译码迭代阈值时,判断所述LDPC码的译码层数k是否小于所述基础矩阵的行数a;
    确定译码层数k小于所述基础矩阵的行数a时,根据所述基础矩阵第k-1层变量节点n的软比特信息,确定所述基础矩阵第k层变量节点n的软比特信息,且根据所述基础矩阵第k-1层变量节点n的软比特信息,确定所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息;
    根据所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,确定所述基础矩阵第k+1层校验节点m到变量节点n的校验节点信息;
    将译码层数k更新为k+1,重新执行判断所述LDPC码的译码层数k是否小于所述基础矩阵的行数a;
    确定译码层数k大于或等于所述基础矩阵的行数a时,将迭代次数i更新为i+1,重新执行判断所述LDPC码的译码迭代次数i是否小于预设的译码迭代阈值,直至译码迭代次数i等于所述译码迭代阈值,将所述第k层变量节点n的软比特信息确定为所述变量节点n的待译码的软比特信息的译码结果;
    其中,i的初始值为0,k的初始值为0,且当k=0或者1时,第k-1层的变量节点n的软比特信息为所述变量节点n的待译码的软比特信息。
  2. 根据权利要求1所述的译码器,其中,所述计算单元根据所述基础矩阵第k-1层的变量节点n的软比特信息,确定所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息中,所述计算单元配置为:
    将所述基础矩阵第k-1层变量节点n的软比特信息,减去上一次迭代中所述基础矩阵第k+1层校验节点m到变量节点n的校验节点信息,得到的差值确定为所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息。
  3. 根据权利要求1所述的译码器,其中,所述译码器还包括预处理单元,所述预处理单元与所述计算单元相连接,在所述计算单元获取变量节点n的待译码的软比特信息之前,所述预处理单元配置为:
    获取待译码的软比特信息;
    对所述待译码的软比特信息进行预处理,得到变量节点n的待译码的软比特信息。
  4. 根据权利要求3所述的译码器,其中,所述预处理单元包括格式转换单元、路由单元、流水线控制单元、至少一个正移位单元和至少一个存储单元;其中,所述格式转换单元连接至所述至少一个存储单元,所述至少一个存储单元连接至所述至少一个正移位单元,所述至少一个正移位单元连接至所述路由单元,所述路由单元连接至所述至少一个计算单元,所述流水线控制单元分别连接至所述至少一个正移位单元和所述至少一个计算单元,所述正移位单元与所述存储单元一一对应;
    所述格式转换单元配置为:对所述待译码的软比特信息进行分块,得到至少一个待译码的软比特信息块,将至少一个待译码的软比特信息块发送至对应的存储单元进行存储;
    所述存储单元配置为:接收待译码的软比特信息块,进行分组存储;
    所述流水线控制单元配置为确定所述正移位单元接收到待译码的软比特信息,触发所述正移位单元,所述正移位单元配置为对待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息;
    所述路由单元配置为:按照预设的路由方式,将移位后的待译码的软比特信息输出至所述至少一个计算单元,使得所述计算单元获取到变量节点n的待译码的软比特信息。
  5. 根据权利要求4所述的译码器,其中,所述格式转换单元对所述待译码的软比特信息进行分块,得到至少一个待译码的软比特信息块,将至少一个待译码的软比特信息块发送至对应的存储单元进行存储中,所述格式转换单元配置为:
    根据所述基础矩阵的列数b,将所述待译码的软比特信息分成b个待译码的软比特信息块;其中,每个待译码的软比特信息块与所述基础矩阵的第n列的元素值相对应;
    从b个待译码的软比特信息块中,将所述基础矩阵中具有正交性的列对应的待译码的软比特信息块,存储至同一存储单元中。
  6. 根据权利要求4所述的译码器,其中,所述存储单元接收对应的待译码的软比特信息块,进行分组存储中,所述存储单元配置为:
    将待译码的软比特信息块分成Z个待译码的软比特信息,按照每组的个数为预设的译码并行度P,将Z个待译码的软比特信息进行分组,并存储至所述存储单元中;
    其中,Z为所述基础矩阵的扩展因子,且Z为正整数。
  7. 根据权利要求4所述的译码器,其中,所述正移位单元对待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息中,所述正移位单元配置为:
    根据所述基础矩阵第k行的元素值,确定对应的移位值;
    根据所述移位值,对接收到的待译码的软比特信息进行正循环移位,得到移位后的待译码的软比特信息。
  8. 根据权利要求4所述的译码器,其中,所述译码器还包括至少一个逆移位单元,所述至少一个逆移位单元分别连接至所述至少一个存储单元、所述路由单元和所述流水线控制单元,所述至少一个逆移位单元与所述至少一个存储单元一一对应;
    所述路由单元配置为:在直至译码迭代次数i等于所述译码迭代阈值之后,接收来自所述至少一个计算单元的所述译码结果,按照预设的路由方式,将所述译码结果输出至所述至少一个逆移位单元;
    所述流水线控制单元配置为:确定所述逆移位单元接收到所述译码结果,触发所述逆移位单元,所述逆移位单元配置为:按照对应存储单元的移位值,将所述译码结果逆循环移位输出至所述对应的存储单元。
  9. 一种译码方法,所述方法应用于一译码器的至少一个计算单元中, 所述方法包括:
    获取低密度奇偶校验LDPC码的基础矩阵中变量节点n的待译码的软比特信息;其中,n为大于或等于0且小于所述基础矩阵的列数b的整数;
    判断所述LDPC码的译码迭代次数i是否小于预设的译码迭代阈值;
    确定译码迭代次数i小于所述译码迭代阈值时,判断所述LDPC码的译码层数k是否小于所述基础矩阵的行数a;
    确定译码层数k小于所述基础矩阵的行数a时,根据所述基础矩阵第k-1层变量节点n的软比特信息,确定所述基础矩阵第k层变量节点n的软比特信息,且根据所述基础矩阵第k-1层变量节点n的软比特信息,确定所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息;
    根据所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,确定所述基础矩阵第k+1层校验节点m到变量节点n的校验节点信息;
    将译码层数k更新为k+1,重新执行判断所述LDPC码的译码层数k是否小于所述基础矩阵的行数a;
    确定译码层数k大于或等于所述基础矩阵的行数a时,将迭代次数i更新为i+1,重新执行判断所述LDPC码的译码迭代次数i是否小于预设的译码迭代阈值,直至译码迭代次数i等于所述译码迭代阈值,将所述第k层变量节点n的软比特信息确定为所述变量节点n的待译码的软比特信息的译码结果;
    其中,i的初始值为0,k的初始值为0,且当k=0或者1时,第k-1层的变量节点n的软比特信息为所述变量节点n的待译码的软比特信息。
  10. 根据权利要求9所述的方法,其中,所述根据所述基础矩阵第k-1层的变量节点n的软比特信息,确定所述基础矩阵第k+1层变量节点n到校验节点m的变量节点信息,包括:
    将所述基础矩阵第k-1层变量节点n的软比特信息,减去上一次迭代中所述基础矩阵第k层校验节点m到变量节点n的校验节点信息,加上本次迭代中所述基础矩阵第k层校验节点m到变量节点n的校验节点信息,得到的值确定为所述基础矩阵第k层变量节点n的软比特信息。
  11. 一种计算机存储介质,所述计算机存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求9至10中任一项所述的译码方法的步骤。
PCT/CN2019/088398 2018-06-29 2019-05-24 译码器、译码方法和计算机存储介质 WO2020001212A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19824821.3A EP3829088B1 (en) 2018-06-29 2019-05-24 Decoder, decoding method, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810717244.XA CN110661593B (zh) 2018-06-29 2018-06-29 一种译码器、方法和计算机存储介质
CN201810717244.X 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020001212A1 true WO2020001212A1 (zh) 2020-01-02

Family

ID=68985419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088398 WO2020001212A1 (zh) 2018-06-29 2019-05-24 译码器、译码方法和计算机存储介质

Country Status (3)

Country Link
EP (1) EP3829088B1 (zh)
CN (1) CN110661593B (zh)
WO (1) WO2020001212A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826283A (zh) * 2021-01-27 2022-07-29 华为技术有限公司 译码方法、装置、设备以及计算机可读存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1825770A (zh) * 2005-02-26 2006-08-30 美国博通公司 解码ldpc编码信号的加速消息传递解码器和方法
CN1956368A (zh) * 2005-10-26 2007-05-02 中兴通讯股份有限公司 基于单位阵及其循环移位阵的ldpc码向量译码装置和方法
CN101867449A (zh) * 2010-06-04 2010-10-20 深圳国微技术有限公司 基于地面数字电视的高效ldpc译码器
US20130173982A1 (en) * 2011-12-29 2013-07-04 Korea Advanced Institute Of Science And Technology (Kaist) Method of decoding ldpc code for producing several different decoders using parity-check matrix of ldpc code and ldpc code system including the same
CN103973315A (zh) * 2013-01-25 2014-08-06 中兴通讯股份有限公司 一种低密度奇偶校验码译码装置及其译码方法

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US20100037121A1 (en) * 2008-08-05 2010-02-11 The Hong Kong University Of Science And Technology Low power layered decoding for low density parity check decoders
CN101958718B (zh) * 2009-07-14 2013-03-27 国民技术股份有限公司 用于ldpc码的改进型半并行译码器和译码方法
CN101615913B (zh) * 2009-07-17 2011-04-27 清华大学 Ldpc码的快速收敛译码方法
CN102664638A (zh) * 2012-05-31 2012-09-12 中山大学 基于分层nms算法的多码长ldpc码译码器的fpga实现方法
CN104868925B (zh) * 2014-02-21 2019-01-22 中兴通讯股份有限公司 结构化ldpc码的编码方法、译码方法、编码装置和译码装置
CN106849959B (zh) * 2016-12-30 2020-08-11 深圳忆联信息系统有限公司 数据处理方法及译码器

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN1825770A (zh) * 2005-02-26 2006-08-30 美国博通公司 解码ldpc编码信号的加速消息传递解码器和方法
CN1956368A (zh) * 2005-10-26 2007-05-02 中兴通讯股份有限公司 基于单位阵及其循环移位阵的ldpc码向量译码装置和方法
CN101867449A (zh) * 2010-06-04 2010-10-20 深圳国微技术有限公司 基于地面数字电视的高效ldpc译码器
US20130173982A1 (en) * 2011-12-29 2013-07-04 Korea Advanced Institute Of Science And Technology (Kaist) Method of decoding ldpc code for producing several different decoders using parity-check matrix of ldpc code and ldpc code system including the same
CN103973315A (zh) * 2013-01-25 2014-08-06 中兴通讯股份有限公司 一种低密度奇偶校验码译码装置及其译码方法


Also Published As

Publication number Publication date
CN110661593A (zh) 2020-01-07
EP3829088A1 (en) 2021-06-02
EP3829088A4 (en) 2021-08-04
EP3829088B1 (en) 2024-01-17
CN110661593B (zh) 2022-04-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19824821

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019824821

Country of ref document: EP

Effective date: 20210127