GB2431833A - Decoding low-density parity-check codes using subsets of bit node messages and check node messages - Google Patents

Info

Publication number
GB2431833A
GB2431833A GB0521858A GB0521858A GB2431833A GB 2431833 A GB2431833 A GB 2431833A GB 0521858 A GB0521858 A GB 0521858A GB 0521858 A GB0521858 A GB 0521858A GB 2431833 A GB2431833 A GB 2431833A
Authority
GB
United Kingdom
Prior art keywords
algorithm
bit
subset
check
messages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0521858A
Other versions
GB2431833B (en)
GB0521858D0 (en)
Inventor
Thierry Lestable
Sung Eun Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to GB0521858A priority Critical patent/GB2431833B/en
Publication of GB0521858D0 publication Critical patent/GB0521858D0/en
Priority to US11/586,759 priority patent/US8006161B2/en
Priority to KR1020060104732A priority patent/KR101021465B1/en
Publication of GB2431833A publication Critical patent/GB2431833A/en
Application granted granted Critical
Publication of GB2431833B publication Critical patent/GB2431833B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102 Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105 Decoding
    • H03M13/1111 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1117 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using approximations for check node processing, e.g. an outgoing message is depending on the signs and the minimum over the magnitudes of all incoming messages according to the min-sum rule
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/35 Unequal or adaptive error protection, e.g. by providing a different level of protection according to significance of source information or by adapting the coding according to the change of transmission channel characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045 Arrangements at the receiver end
    • H04L1/0047 Decoding adapted to other signal detection operation
    • H04L1/005 Iterative decoding, including iteration between signal detection and decoding operation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0057 Block codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Error Detection And Correction (AREA)

Abstract

A method of decoding low-density parity-check (LDPC) codes uses an iterative decoding structure in which, for each iteration (referring to Fig. 3b), a plurality of bit nodes and a plurality of check nodes are updated <B>302</B>. For each bit node, a set of check node messages is used for updating that bit node, and for each check node <B>306</B> a set of bit node messages is used for updating that check node <B>306</B>. Each check node update <B>302</B> includes selecting a first subset of bit node messages from the set of bit node messages used for updating each check node <B>306</B>, and selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages. A second subset of bit node messages, excluding the first subset of bit node messages, used for updating the check node <B>306</B> is then identified, and a second algorithm is selected for use in updating a second subset of check node messages corresponding to the second subset of bit node messages. The first subset of bit node messages may include a set of most contributing bit node messages, or the smallest bit node messages, or the least reliable bit node messages. A more computationally demanding algorithm may be used in association with the most contributing bit nodes in the check node update process. The remaining bit nodes and their messages may be updated using a less computationally demanding algorithm.

Description

<p>Decoding Low-Density Parity-Check Codes</p>
<p>The present invention relates to decoding Low-Density Parity-Check (LDPC) error control codes. It relates to decoders for use in digital systems, such as transmitters, receivers and transceivers using LDPC codes, and to methods of, and apparatus for, transmitting and receiving a coded signal, and to a signal thus produced.</p>
<p>Background</p>
<p>Low-density parity-check (LDPC) codes are error control codes that have the ability to provide impressive performance gains, near the Shannon capacity (or near Shannon limit error performance), for communications systems. Due to their performance, these codes are being included in future industry standards such as DVB-S2, IEEE 802.16e, or IEEE 802.11n (for example, the TGnSync & WWise IEEE 802.11n proposal at http://www.wwise.org/technicalproposal.htm).</p>
<p>LDPC codes belong to the family of linear block codes. The encoder for a rate R_c = k/n block code takes k information bits, denoted μ = [μ_1, ..., μ_k], and encodes them into n code bits, denoted x = [x_1, ..., x_n], called a codeword. This codeword is generated/encoded by the linear operation x = μ·G, where G is a k by n generator matrix that defines the encoder. The codeword x is said to be in the code set C, which is the set of all codewords for the encoder defined by G. Unless otherwise stated, for simplicity all elements and operations of the encoder/decoder are defined on the Galois field GF(2); other, higher-order Galois fields may also be used.</p>
<p>The generator matrix G has a dual matrix H, which is an m = n − k by n parity check matrix. These matrices, G and H, are orthogonal, i.e. G·H^T = 0. The parity check matrix H defines a set of m parity check equations that are useful in the decoding process. The decoder can determine whether a received codeword estimate z is a valid codeword x = μ·G in the code set C by computing z·H^T before decoding the codeword estimate. Hence, z is a valid codeword when z·H^T = (μ·G)·H^T = 0; otherwise, if z·H^T ≠ 0, an error is detected. The decoder should be able to find the closest or the most likely codeword that was transmitted for each received codeword estimate. For more details on decoding of general linear block codes see Shu Lin and Daniel Costello, "Error Control Coding: Fundamentals and Applications", Prentice Hall.</p>
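<p>By way of illustration only (not part of the patent text), the following Python/NumPy sketch shows GF(2) encoding and the syndrome check just described. The small systematic G and H are hypothetical examples constructed so that G·H^T = 0 holds over GF(2).</p>
<pre>
import numpy as np

# Hypothetical systematic (n=6, k=3) example: G = [I | P], H = [P^T | I], so (G @ H.T) % 2 == 0.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])        # k-by-n generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])      # m-by-n parity check matrix, m = n - k
assert not np.any((G @ H.T) % 2)                # G and H are orthogonal over GF(2)

mu = np.array([1, 0, 1])                        # k information bits
x = (mu @ G) % 2                                # codeword x = mu . G over GF(2)

print("syndrome of valid codeword:", (x @ H.T) % 2)       # all zero -> valid codeword

x_err = x.copy()
x_err[0] ^= 1                                   # flip one code bit
print("syndrome after a bit error:", (x_err @ H.T) % 2)   # non-zero -> error detected
</pre>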
<p>LDPC codes are defined by an m by n sparse parity check matrix in which there is a lower density of ones compared to zeros. In general, the goal is to find a sparse generator matrix and a sparse parity check matrix for encoding and decoding.</p>
<p>These codes lend themselves to iterative decoding structures to realise near-Shannon capacity performance. The impressive performance of LDPC codes is realised by applying soft decision iterative decoding techniques such as the belief propagation (BP) algorithm as described by Jinghu Chen and Marc P.C. Fossorier, "Decoding Low-Density Parity Check Codes with Normalized APP-Based Algorithm", IEEE, 2001; Ajay Dholakia et al., "Capacity-Approaching Codes: Can They Be Applied to the Magnetic Recording Channel?", IEEE Communications Magazine, pp 122-130, February 2004; and Engling Yeo et al., "Iterative Decoder Architectures", IEEE Communications Magazine, pp 132-140, August 2003.</p>
<p>The soft decision iterative decoding algorithms that are based on the BP algorithm work on the concept of bipartite Tanner graphs as described in Frank R. Kschischang, "Codes Defined on Graphs", IEEE Communications Magazine, pp 118-125, August 2003, and Tom Richardson, "The Renaissance of Gallager's Low-Density Parity-Check Codes", IEEE Communications Magazine, pp 126-131, August 2003.</p>
<p>A bipartite Tanner graph is a set of graph vertices decomposed into two disjoint sets where these disjoint sets of vertices are a plurality of bit nodes (also called variable nodes or symbol nodes) and a plurality of parity check nodes, hereinafter called check nodes. The graph is bipartite as no two graph vertices within the same set are adjacent. That is, bit nodes are not connected via graph edges to each other, similarly with the check nodes.</p>
<p>Bipartite Tanner graphs are defined by the m by n parity check matrix H. Each bit node is connected via graph edges to a set of check nodes defined by M(i) = {j | H_ji = 1} for bit node i, where 1 ≤ j ≤ m, and m is the number of check nodes (or number of rows in H). Similarly, each check node is connected via graph edges to a set of bit nodes defined by N(j) = {i | H_ji = 1} for check node j, where 1 ≤ i ≤ n, and n is the number of bit nodes (or the number of columns of H and the number of code bits per codeword).</p>
<p>The iterative decoding algorithms that are based on the concept of bipartite Tanner graphs are called message passing algorithms. Conceptually, these algorithms use the structure of the Tanner graph to "pass" messages, along graph edges, from the bit nodes to check nodes and vice versa. Typically, in implementing an iterative algorithm, the messages can be variables stored in a memory, for example a set of messages for each bit node i that are passed to a set of check nodes can be stored as variables in an array, and operated on as such. The use of the terms, "pass" or "message passing" and the like, are for illustrative purposes to describe the iterative decoding algorithms that follow with reference to Tanner graphs.</p>
<p>The messages are used to iteratively estimate the code bits of the codeword to be estimated. In one iteration of an iterative decoding algorithm, each bit node passes messages, called bit node messages, that represent an estimate of its respective code bit to all neighbouring check nodes (i.e. those check nodes associated/connected with that bit node via edges). Each neighbouring check node also receives additional bit node messages from other neighbouring bit nodes. The neighbouring check nodes pass back to the original bit node a combination of all these bit node messages via messages called check node messages. This process occurs for every bit node, hence each bit node receives a check node message from each of its neighbouring check nodes and can combine these to form an estimate of its respective code bit. Overall, in each iteration an estimate of a codeword is produced.</p>
<p>Although Tanner graphs can operate on messages based in the Galois field GF(2), in soft decision iterative decoding algorithms these messages typically represent probabilities or log-likelihood ratios thereof. Hence these messages and operations are not limited to being defined on GF(2), but can be defined on other fields such as, for example, the field of real numbers R. More specifically in terms of LDPC codes, each bit node i passes a set of bit node messages to the set of check nodes M(i), which are used in each check node to update the check node messages. Similarly, each check node j passes a set of check node messages to the set of bit nodes N(j). In each iteration of these algorithms, there are two main processes called the bit node update process, which updates the bit node messages, and the check node update process, which updates the check node messages.</p>
<p>The bit node and check node messages are considered the information obtainable from the received codeword about the transmitted codeword, which is called extrinsic bit node and check node messages (these terms may hereinafter be used interchangeably). In essence, these messages represent the reliability or the belief for the estimate of the n code bits of the transmitted/received codeword.</p>
<p>In the bit node update process, the bit nodes receive a priori information related to the channel. Hence, in an additive white Gaussian noise (AWGN) channel, for example, bit node i receives the log likelihood ratio of the i-th bit of the received codeword estimate, given by the intrinsic information I_i = (2/σ²)·y_i, where σ² = N_0/2 is the variance of the AWGN and N_0/2 is the power spectral density. This is used to initialise the BP algorithm (or for that matter most of the other algorithms as well).</p>
<p>The bit node message that is passed from bit node i to check node j is denoted as T_ij. This message T_ij is updated by summing the set of check node messages passed from the set of check nodes M(i) to bit node i, but excluding the check node message passed from check node j to bit node i. This update process requires fewer computational resources (or has a lower complexity) compared with the check node update process.</p>
<p>It is the check node update process that contributes the greatest complexity to the computation of the BP algorithm. The computational burden comes from a non-linear function, Φ(·) (defined below in equation (3)), used in each check node message update.</p>
<p>In the check node update, the check node message that is passed from check node j to bit node i is denoted by E_ji. Each check node message E_ji is updated by summing Φ(|T_i'j|) over the set of bit node messages passed from the set of bit nodes N(j) to check node j, but excluding the bit node message T_ij passed from bit node i to check node j. Finally, E_ji is updated by reapplying Φ(·) to the non-linear summation of Φ(|T_i'j|).</p>
<p>In an attempt to simplify matters, the sign of E_ji and the magnitude of E_ji are computed separately. However, it is the magnitude of E_ji that contributes towards the overall complexity of the check node update process.</p>
<p>The iterations of the BP algorithm continue until either a predetermined or maximum number of iterations is achieved, or until the algorithm has converged, that is, the bit node messages have converged. A soft decision (denoted T_i) for bit node i is calculated by summing the intrinsic information I_i and the set of check node messages that are passed from the set of check nodes M(i) to bit node i. A hard decision is formed for each of the soft decisions T_i, producing z_i and giving the estimated codeword z = [z_1, ..., z_n]. The parity check equations, z·H^T, are applied to check whether the estimated codeword is in error and, depending on the error correction capability of the code, the codeword can be decoded accordingly, giving the information estimate. The complexity of the decoding process is primarily due to the non-linear function used in the check node update process. Hence, reducing the complexity of this process is the focus of current research in the field of LDPC codes. Several examples of reduced complexity LDPC decoders are disclosed in US 2005/0204271 A1 and US 2005/0138519 A1.</p>
<p>A simplified LDPC decoding process is disclosed in US 2005/0204271 A1, which describes an iterative LDPC decoder that uses a "serial schedule" for processing the bit node and check node messages. Schedules are updating rules indicating the order of passing messages between nodes in the Tanner graph.</p>
<p>Additionally, an approximation to the BP algorithm called the Min-Max algorithm is used in the iterative decoding process in order to reduce the complexity of the check node update process.</p>
<p>This is achieved by using a property of the non-linear function Φ(·), in which small values of |T_ij| contribute the most to the summation of Φ(|T_ij|). So, for each check node j, the smallest magnitude bit node message is selected from the set of bit node messages passed to check node j. Only this bit node message is used in the approximation for updating check node j. That is, the least reliable bit node message is used and the most reliable bit node messages are discarded.</p>
<p>However, although a reduction in complexity is achieved, the result is a dramatic reduction in bit and frame error rate performance compared to the BP algorithm. This is due to the fact that the discarded extrinsic information has not been used in the check node update.</p>
<p>US 2005/0138519 A1 discloses an LDPC decoding process that simplifies the iterative BP algorithm by using a reduced set of bit node messages, i.e. a predetermined number, λ > 1, of bit node messages from the set of bit nodes N(j) that are passed to check node j are used in the check node update process. Hence, only the bit nodes that pass the bit node messages with the lowest magnitude levels (the smallest or least reliable bit node messages) to check node j are identified and used in the check node update process.</p>
<p>The BP algorithm is then applied to this reduced set of bit node messages in the check node update process. This simplification results in what is called the Lambda-Min algorithm; for more details see F. Guilloud, E. Boutillon, J.L. Danger, "Lambda-Min Decoding Algorithm of Regular and Irregular LDPC Codes", 3rd International Symposium on Turbo Codes and Related Topics, Brest, France, pp 451-454, Sept. 2003. However, there is only a slight decrease in complexity compared with the BP algorithm, and typically a large gap in complexity and performance between the λ and λ+1 type Lambda-Min algorithms, leading to what is called poor granularity in terms of both complexity and performance, where a desired performance is not achievable because the computational resources are not available.</p>
<p>The present invention aims to overcome or at least alleviate one or more of the aforementioned problems.</p>
<p>The invention in various aspects is defined in the appended claims.</p>
<p>The invention in one aspect relates to decreasing the complexity of the decoding process, by focusing on the check node complexity, but maintaining near Shannon capacity performance. An iterative decoding process is realised by identifying and prioritising the most contributing bit nodes, and selecting more computational resources to be used to calculate the check node messages, i.e. representing its belief, reliability, probability or log-likelihood ratios thereof, as compared with the remaining bit nodes, which are also used in the check node update process. It has been realised that this gives a lower computational complexity sub-optimal decoding process with an excellent trade-off between complexity and performance, and granularity thereof, as compared with the previous sub-optimal decoding processes.</p>
<p>In one aspect, the present invention provides a method for iteratively decoding low-density parity-check (LDPC) codes. The method includes iteratively updating a plurality of bit nodes, and iteratively updating a plurality of check nodes.</p>
<p>For each bit node there is a set of check nodes and for each check node there is a set of bit nodes; for each bit node, a set of check node messages is used for updating that bit node, and for each check node, a set of bit node messages is used for updating that check node. In each iteration of the iterative decoding process, each check node update for each check node further includes identifying a first subset of bit node messages from the set of bit node messages that is used to update each check node, and selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages. The method further includes selecting a second subset of bit node messages, excluding the first subset of bit node messages, used to update each check node, and selecting a second algorithm for use in updating a second subset of check node messages corresponding to the second subset of bit node messages.</p>
<p>In a preferred embodiment of the invention, the first subset of bit node messages includes a set of the most contributing bit node messages, or the smallest bit node messages, or the least reliable bit node messages.</p>
<p>In a preferred embodiment of the invention, the first algorithm is a Log-Likelihood-Ratio Belief Propagation algorithm and the second algorithm is the Min-Sum algorithm. In another preferred embodiment of the invention, the second algorithm is a Lambda-Min algorithm. In a further embodiment of the invention, the first algorithm is the Lambda-Min algorithm and the second algorithm is the Min-Sum algorithm, or vice versa.</p>
<p>Aspects of the present invention provide the advantage of allowing more computational resources, i.e. more complexity, to be allocated by using a more computationally demanding algorithm for the most contributing bit nodes in the check node update process, without discarding the contributions of the remaining bit nodes: less complexity is allocated to the remaining bit nodes by using a less computationally demanding algorithm to update the check node messages associated with these bit nodes (and their messages) during each iteration of the check node update process within an LDPC iterative decoder.</p>
<p>Aspects of the present invention provide the advantage of allowing gradations of computational complexity to be allocated during the iterative decoding process, allowing the selection of the first and second algorithms, (and other algorithms), to be used for, or if necessary adapted during, the iterative decoding process. This provides an iterative decoding process that is able to adapt to the computational requirements of the hardware/software that the LDPC decoder requires for a given communications link for example, or for a given performance or quality of service.</p>
<p>In another aspect of the invention, an apparatus is provided for use in iteratively decoding LDPC codes, adapted to implement the aforesaid method and embodiments thereof. In a further aspect, a computer program is provided for iteratively decoding LDPC codes which, when executed, implements the aforesaid method and embodiments thereof.</p>
<p>These and other aspects, preferred features and embodiments will be described further, by way of example only, with reference to the accompanying drawings in which: Figure 1 illustrates an example of an LDPC decoder/encoder for use in a communications system.</p>
<p>Figure 2 illustrates an example of the relationship between the parity check matrix and the corresponding Tanner Graph.</p>
<p>Figure 3a illustrates the update of bit node messages for bit node i using check node messages sent from the set of check nodes M(i) to calculate each code bit of the codeword estimate.</p>
<p>Figure 3b illustrates the update of the check node messages using bit node messages sent to check node j from the set of bit nodes N(j).</p>
<p>Figure 4 illustrates a plot of the non-linear function Φ(x) for 0 < x ≤ 6.</p>
<p>Figure 5 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Belief Propagation and Min-Sum algorithms.</p>
<p>Figure 6a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 5.</p>
<p>Figure 6b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 5.</p>
<p>Figure 7 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Belief Propagation and Lambda-Min algorithms.</p>
<p>Figure 8a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 7.</p>
<p>Figure 8b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 7.</p>
<p>Figure 9 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Lambda-Min and Min-Sum algorithms.</p>
<p>Figure 10a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 9.</p>
<p>Figure 10b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 9.</p>
<p>Specific Description of the Preferred Embodiments</p>
<p>Firstly, a description is given of a simplified communication system for use with the LDPC decoder. This is followed by a description of the use of bipartite Tanner graphs in iterative decoding of LDPC codes. Thereafter, iterative decoding algorithms such as the belief propagation, Min-Sum and Lambda-Min algorithms are described. The preferred embodiments of the iterative LDPC decoder are described where appropriate in relation to, but not limited to, the use of these three algorithms.</p>
<p>An illustration of a communications system 100 for use with a low-density parity-check (LDPC) decoder is shown in Figure 1. The communications system 100 includes a transmitter unit 104 that communicates a coded signal to a receiver unit 112. A brief overview of the communication system 100 is now given, followed by a detailed description of its components.</p>
<p>The transmitter unit 104 receives information bits from a data source 102, and has an LDPC code encoder 106 and a modulation/transmit unit 108. The coded signal is transmitted to the receiver unit 112 via a communications channel 110. The receiver unit 112 has the corresponding components necessary for receiving the coded signal; they are a demodulator/matched filter unit 114 and an iterative LDPC decoder unit 116, where an estimate of the transmitted coded information is forwarded to a data sink 122.</p>
<p>The data source 102 generates information bits grouped into a block of k information bits denoted by μ = [μ_1, ..., μ_k] for encoding at the LDPC encoder 106.</p>
<p>The LDPC encoder 106 has a code rate of R_c = k/n and encodes the block of k information bits μ into a block of n code bits, i.e. into a codeword x = [x_1, ..., x_n].</p>
<p>In general the codeword x is generated by x = μ·G, where G is the k by n generator matrix that defines the encoder 106. The codeword x is in the code set C, which is the set of all codewords of the encoder 106 defined by G. All elements and operations performed on μ = [μ_1, ..., μ_k], x = [x_1, ..., x_n], and G are defined, in this example, on the Galois field GF(2). The use of higher dimensions, e.g. GF(q^r) where q and r are integers, is possible. For example, codes with symbols from GF(2^m), m > 0, are most widely used.</p>
<p>The codeword x is modulated for transmission over the communications channel 110 as the n-dimensional transmitted signal s. For simplicity the communications channel 110 is assumed to be an additive white Gaussian noise (AWGN) channel. In this channel, the noise is a random variable having a normal distribution with zero mean and variance σ² = N_0/2, where N_0/2 is the power spectral density. Any other communications channel may be applied, such as multipath channels, etc. The transmitted signal s is corrupted by the AWGN, giving the n-dimensional received signal r = s + n, where n is the n-dimensional vector of AWGN samples. The receiver unit 112 receives the noisy signal r and demodulates this signal by matched filtering 114, giving the n-dimensional codeword y = [y_1, ..., y_n].</p>
<p>The LDPC decoder 116 has two units: the iterative decoder unit 118, which produces an estimated codeword z = [z_1, ..., z_n], and the error detection and decode unit 120, which checks and then decodes the estimated codeword z = [z_1, ..., z_n] into an estimate of the k information bits sent by the transmitter. The estimated information bits are forwarded to the data sink 122.</p>
<p>As mentioned previously, the generator matrix G has a dual matrix H, which is the m = n − k by n parity check matrix, where G·H^T = 0. The parity check matrix H defines a set of m parity check equations that are useful in the decoding process. LDPC codes are defined by an m by n sparse parity check matrix in which there is a lower density of ones compared to zeros. In general, the goal is to find a sparse generator matrix and a sparse parity check matrix for encoding and decoding.</p>
<p>Once the iterative decoding unit 118 has estimated the codeword, the decoder unit 120 determines if z is a valid codeword in the code set by computing z·H^T. If z is a valid codeword then z·H^T = μ·(G·H^T) = 0 and hence z can be decoded into the k information bits; otherwise an error is detected. An example of a parity check matrix H 200 (this is an example only, using a random matrix H) is shown in Figure 2. The first row of H corresponds to the first parity check equation p_1, formed as the modulo-2 (XOR) sum in GF(2) of the code bits selected by the ones in that row. If z = [1,0,1,1], then p_1 = 1, p_2 = 1, and p_3 = 0, and an error would be detected; otherwise the codeword is accepted. Depending on the number of errors the decoder unit 120 can correct (which is a function of the coding type and rate used), the codeword may be decoded correctly into the estimated information bits or an error may be detected.</p>
<p>It is the task of the iterative decoder unit 118 to determine/find the closest or the most likely codeword that was transmitted for each received coded signal. The iterative decoding structures used in the LDPC decoder 116 are outlined in the following description. However, to understand the concept of iterative decoding, an overview of the bipartite Tanner graph that is used in the following iterative decoding algorithms is given.</p>
<p>Referring now to Figure 2, an example of a parity check matrix H 200 with its corresponding bipartite Tanner graph 202 is shown. The iterative decoder unit 118 uses soft decision iterative decoding algorithms that work on the concept of message passing (as described previously) between the bit nodes 204 (204a-204d) and check nodes 206 (206a-206c) over the edges of the bipartite Tanner graph 202.</p>
<p>In a preferred embodiment, the parity check matrix H 200 is stored in memory at the decoder 116. The bipartite Tanner Graph 202 described in Figure 2 is a representation for describing the following iterative decoding algorithms.</p>
<p>Bipartite Tanner graphs 202 consist of a plurality of bit nodes 204 (n bit nodes) and a plurality of parity check nodes 206 (m = n − k check nodes). These graphs 202 are defined by the parity check matrix H 200. Each bit node 204 (e.g. bit node 204c) is connected via graph edges (e.g. edges 208) to a set of check nodes (e.g. 206a and 206c) defined by M(i) = {j | H_ji = 1} for bit node i, where 1 ≤ j ≤ m and m is the number of check nodes 206. Similarly, each check node (e.g. check node 206b) is connected via graph edges (e.g. edges 210) to a set of bit nodes (e.g. 204b and 204d) defined by N(j) = {i | H_ji = 1} for check node j, where 1 ≤ i ≤ n, and n is the number of bit nodes 204.</p>
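<p>As an informal illustration (not from the patent), the sets M(i) and N(j) can be read directly from the columns and rows of H. The 3-by-4 matrix below is a hypothetical stand-in for the H 200 of Figure 2, which is not reproduced numerically in the text.</p>
<pre>
import numpy as np

# Hypothetical 3-by-4 parity check matrix: m = 3 check nodes, n = 4 bit nodes.
H = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 1]])
m, n = H.shape

# M(i): check nodes connected to bit node i (ones in column i of H)
M = {i: [j for j in range(m) if H[j, i] == 1] for i in range(n)}
# N(j): bit nodes connected to check node j (ones in row j of H)
N = {j: [i for i in range(n) if H[j, i] == 1] for j in range(m)}

print("M(i):", M)   # len(M[i]) = column weight = degree of bit node i
print("N(j):", N)   # len(N[j]) = row weight    = degree of check node j
</pre>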
<p>For example, Figure 2 illustrates that there are graph edges (e.g. 208 and 210) wherever there is a 1 in H; for example, the third column of H corresponds to all the edges that connect bit node 204c to the corresponding check nodes, i.e. check nodes 206a and 206c, as there is a 1 in the first and third rows of that column, where H_ji denotes the element in row j and column i. The number of 1's in a column of H, or the column weight, indicates the number of graph edges and check nodes connected to that bit node. The number of 1's in a row of H, or the row weight, indicates the number of graph edges and bit nodes connected to that check node. In addition, the set of bit nodes N(2) for check node 206b includes bit nodes 204b and 204d, and the set of check nodes M(3) for bit node 204c includes check nodes 206a and 206c. The edges 208, 210 of the bipartite Tanner graph 202 are used in the iterative decoding algorithms to pass (send) messages from the bit nodes 204 to the check nodes 206 and vice versa, as has been previously described and will be seen in more detail in the following description.</p>
<p>The messages that are passed between check nodes 206 and bit nodes 204, and vice-versa, are stored as variables in a memory, for example a set of messages for each bit node i that are passed to a set of check nodes can be stored as variables in an array. The use of the terms, "pass" or "message passing" and the like, are for illustrative purposes to describe the iterative decoding algorithms that follow with reference to bipartite Tanner graphs.</p>
<p>BELIEF PROPAGATION ALGORITHM</p> <p>In the iterative BP algorithm, each bit node i passes a set of bit node messages to the set of check nodes M(i), which are used in each check node to update the check node messages. Similarly, each check node j passes a set of check node messages to the set of bit nodes N(j). In each iteration of this algorithm, there are two main processes: the bit node update process illustrated in Figure 3a, which updates the bit node messages, and the check node update process illustrated in Figure 3b, which updates the check node messages. These messages are used to estimate the code bits and represent the probability or log-likelihood ratio thereof, the reliability, or the belief that the bit and check nodes have for the estimate of each code bit of the received codeword. In the log-likelihood ratio BP algorithm (LLR-BP) the messages that are passed between bit nodes and check nodes are based on probabilities, i.e. log-likelihood ratios thereof.</p>
<p>The iterative LLR-BP algorithm is now described with reference to Figures 3a and 3b.</p>
<p>Bit Node Update of LLR-BP</p> <p>Referring to Figure 3a, the bit node update process 300 for updating the bit node message T_ij in each iteration is illustrated for bit node i 304 and the set of check nodes M(i), where T_ij is passed from bit node i 304 to check node j 306. Bit node i 304 receives a priori information related to the channel (e.g. from channel measurements). For example, in an AWGN channel, bit node i 304 receives the log likelihood ratio of the i-th bit of the received codeword estimate y = [y_1, ..., y_i, ..., y_n], given by what is called the intrinsic information I_i = (2/σ²)·y_i, where σ² = N_0/2 is the variance of the AWGN and N_0/2 is the power spectral density.</p>
<p>In the first iteration, the intrinsic information is used to initialise the LLR-BP algorithm, where all bit node messages T_ij, for i ∈ N(j) and 1 ≤ j ≤ m, that are passed from the set of bit nodes N(j) (not shown) to check node j 306 are set to T_ij = I_i = (2/σ²)·y_i.</p>
<p>All check node messages E_ji, for j ∈ M(i) and 1 ≤ i ≤ n, that are passed from the set of check nodes M(i) to bit node i 304 are set to zero.</p>
<p>In subsequent iterations, the message T_ij is updated by summing, with the intrinsic information, the set of check node messages E_j'i, for j' ∈ M(i)\j and 1 ≤ i ≤ n, that are passed from the set of check nodes M(i) to bit node i 304, but excluding the check node message E_ji passed from check node j 306 to bit node i 304 (not shown). The summation is given by: T_ij = I_i + Σ_{j'∈M(i)\j} E_j'i (1). In each iteration, a posteriori probabilities (APP) or soft decisions T_i for bit node i 304 can be calculated by summing the intrinsic information I_i and the set of check node messages E_ji, for j ∈ M(i), that are passed from the set of check nodes M(i) to bit node i 304. However, generally, these soft decisions could be calculated in the final iteration. The soft decisions are given by the following equation: T_i = I_i + Σ_{j∈M(i)} E_ji (2). A hard decision z_i of T_i is formed for each of the n bit nodes, producing an estimate of the transmitted codeword z. The parity check equations z·H^T are applied to check if the estimated codeword is in error. If not, the codeword is decoded accordingly. Otherwise, the error may be corrected depending on the error correction capabilities of the code.</p>
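<p>A minimal sketch (an illustration under stated assumptions, not the patent's implementation) of the initialisation, the bit node update of equation (1) and the soft/hard decisions of equation (2). The edge messages are kept in dictionaries T[(i, j)] (bit node i to check node j) and E[(j, i)] (check node j to bit node i); the hard-decision convention z_i = 1 when the soft LLR is negative assumes the mapping 0 → +1, 1 → −1 at the modulator.</p>
<pre>
import numpy as np

def init_messages(H, y, sigma2):
    """Initialise T (bit-to-check) with the intrinsic LLRs and E (check-to-bit) with zeros."""
    m, n = H.shape
    I = 2.0 * y / sigma2                        # intrinsic information I_i = (2/sigma^2) y_i
    T = {(i, j): I[i] for j in range(m) for i in range(n) if H[j, i]}
    E = {(j, i): 0.0 for j in range(m) for i in range(n) if H[j, i]}
    return I, T, E

def bit_node_update(H, I, T, E):
    """Equation (1): T_ij = I_i + sum of E_j'i over M(i) excluding j."""
    m, n = H.shape
    for i in range(n):
        Mi = [j for j in range(m) if H[j, i]]
        for j in Mi:
            T[(i, j)] = I[i] + sum(E[(jp, i)] for jp in Mi if jp != j)

def decisions(H, I, E):
    """Equation (2): soft decisions T_i, then hard decisions z_i (LLR < 0 -> bit 1)."""
    m, n = H.shape
    T_soft = np.array([I[i] + sum(E[(j, i)] for j in range(m) if H[j, i]) for i in range(n)])
    z = (T_soft < 0).astype(int)
    return T_soft, z
</pre>
<p>The parity check z·H^T mod 2 can then be evaluated as in the earlier encoding sketch to decide whether to stop iterating.</p>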
<p>The bit node update process requires few computational resources compared with the check node update process, which is now described.</p>
<p>Check Node Update of LLR-BP</p> <p>Referring to Figure 3b, the check node update process 302 is illustrated for updating the check node message E_ji in each iteration for check node j 306 and the set of bit nodes N(j), where the check node message E_ji is passed from check node j 306 to bit node i 304. As the bit and check node messages are represented by log-likelihood ratios, it is the check node update process that contributes the greatest complexity to the computation of the LLR-BP algorithm. The computational burden comes from the use of the non-linear function Φ(·) in each check node message update, which is given by: Φ(x) = −log[tanh(x/2)] (3). An example of the shape of the function Φ(·) is shown in Figure 4. As can be seen, small values of x produce large values of Φ(x).</p>
<p>In an attempt to simplify matters, the sign of E_ji and the magnitude of E_ji are computed separately. However, it is the magnitude of E_ji that contributes towards the overall complexity of the check node update process.</p>
<p>Each check node message E_ji is updated by summing Φ(|T_i'j|) over the set of bit node messages passed from the set of bit nodes N(j) to check node j 306, but excluding the bit node message T_ij from bit node i 304 to check node j 306. Finally, E_ji is updated by reapplying Φ(·) to the non-linear summation of Φ(|T_i'j|). The magnitude of E_ji is given by: |E_ji| = Φ( Σ_{i'∈N(j)\i} Φ(|T_i'j|) ) (4), where the sign processing, or Sign(E_ji), is given by: Sign(E_ji) = Π_{i'∈N(j)\i} Sign(T_i'j) (5). The iterations of the LLR-BP algorithm continue until either a predetermined or maximum number of iterations is achieved, or until the algorithm has converged, that is, the bit node messages have converged.</p>
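<p>A sketch of the check node update of equations (3)-(5), reusing the edge-message dictionaries of the previous sketch. Clipping very small arguments of Φ(·) to avoid log(0) is an implementation assumption, not something specified in the text.</p>
<pre>
import numpy as np

def phi(x):
    """Non-linear function of equation (3): phi(x) = -log(tanh(x/2)), x > 0."""
    x = np.maximum(x, 1e-12)                    # clip to avoid log(0)  (implementation choice)
    return -np.log(np.tanh(x / 2.0))

def check_node_update_llrbp(H, T, E):
    """Equations (4) and (5): |E_ji| = phi(sum of phi(|T_i'j|) over N(j) excluding i);
    the sign is the product of the signs of the other incoming messages."""
    m, n = H.shape
    for j in range(m):
        Nj = [i for i in range(n) if H[j, i]]
        for i in Nj:
            others = [T[(ip, j)] for ip in Nj if ip != i]
            mag = phi(sum(phi(abs(t)) for t in others))
            sgn = 1.0
            for t in others:
                sgn = -sgn if t < 0 else sgn
            E[(j, i)] = sgn * mag
</pre>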
<p>The complexity of the decoding process is primarily due to the non-linear function used on the bit node messages (which are log-likelihood ratios) in the check node update process.</p>
<p>Instead of the summation carried out in equations (1), (2) or (4) of the bit node update and check node update processes, the simplification in the following section may be used.</p>
<p>Bit Node Update of Simplified LLR-BP</p> <p>The bit node update process of the LLR-BP algorithm can be simplified by computing the soft decision T_i first. Referring to Figure 3a, the bit node message T_ij (the extrinsic information) that is passed from bit node i 304 to check node j 306 is updated by: T_ij = I_i + Σ_{j'∈M(i)\j} E_j'i = (I_i + Σ_{j'∈M(i)} E_j'i) − E_ji = T_i − E_ji (6). This slightly reduces the complexity of the bit node update process, as the sum Σ_{j∈M(i)} E_ji is only required to be computed once per bit node i 304, for 1 ≤ i ≤ n.</p>
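<p>A corresponding sketch of the simplified bit node update of equation (6), reusing the I, T and E structures from the earlier sketches: the full sum is formed once per bit node and the excluded term is removed by subtraction.</p>
<pre>
def bit_node_update_simplified(H, I, T, E):
    """Equation (6): T_i computed once per bit node, then T_ij = T_i - E_ji."""
    m, n = H.shape
    for i in range(n):
        Mi = [j for j in range(m) if H[j, i]]
        T_i = I[i] + sum(E[(j, i)] for j in Mi)   # full sum, computed once per bit node i
        for j in Mi:
            T[(i, j)] = T_i - E[(j, i)]           # exclude E_ji by subtraction
</pre>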
<p>Check Node Update of Simplified BP</p> <p>Referring to Figure 3b, the check node update process of the LLR-BP algorithm is simplified by computing the summation of all non-linear functions Φ(|T_ij|) over i ∈ N(j). The processing of Sign(E_ji) is kept the same; however, the magnitude |E_ji| is simplified as follows: S_j = Σ_{i∈N(j)} Φ(|T_ij|) (7), |E_ji| = Φ( S_j − Φ(|T_ij|) ) (8). The non-linear function Φ(·) is applied to a full summation once per check node j 306. However, although this goes some way to reduce the complexity of the LLR-BP algorithm, it is still computationally intensive and difficult to implement on hardware having limited computing capabilities, for example mobile phones, TV receivers, or any other wireless transceiver.</p>
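<p>A sketch of the simplified check node update of equations (7) and (8): the full non-linear sum S_j is formed once per check node and the own term Φ(|T_ij|) is subtracted for each outgoing message (phi(·) as in the earlier sketch; the sign processing of equation (5) is unchanged).</p>
<pre>
import numpy as np

def phi(x):
    x = np.maximum(x, 1e-12)                    # clip to avoid log(0)  (implementation choice)
    return -np.log(np.tanh(x / 2.0))

def check_node_update_simplified(H, T, E):
    """Equations (7) and (8): S_j = sum of phi(|T_ij|) over N(j); |E_ji| = phi(S_j - phi(|T_ij|))."""
    m, n = H.shape
    for j in range(m):
        Nj = [i for i in range(n) if H[j, i]]
        S_j = sum(phi(abs(T[(i, j)])) for i in Nj)        # full sum, once per check node
        sign_all = 1.0
        for i in Nj:
            sign_all = -sign_all if T[(i, j)] < 0 else sign_all
        for i in Nj:
            own_sign = -1.0 if T[(i, j)] < 0 else 1.0
            mag = phi(S_j - phi(abs(T[(i, j)])))
            E[(j, i)] = (sign_all * own_sign) * mag       # divide out the own sign (signs are +/-1)
</pre>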
<p>MIN-SUM ALGORITHM</p>
<p>In order to reduce the complexity of the check node update process, the Min-Sum algorithm has been proposed in the literature as a sub-optimal algorithm that can be used in the soft decision iterative decoding of LDPC codes instead of the LLR-BP algorithm.</p>
<p>The bit node update process in the Min-Sum algorithm is the same as that of the LLR-BP algorithm. It is the check node update process that is greatly simplified, at the cost of decreased performance.</p>
<p>For each check node j, the check node update process can be simplified by observing that small values of |T_ij| result in large values of Φ(|T_ij|), see Figure 4.</p>
<p>Hence small values of |T_ij| contribute more to the summation Σ_{i'∈N(j)\i} Φ(|T_i'j|) in equations (4) or (8) of the LLR-BP algorithm than larger values of |T_ij|. Small values of |T_ij| represent a lower reliability bit node message, and these are more important to the final decision than larger values of |T_ij|. The complexity of the call to the function Φ(·) can be avoided by considering the following approximation: Σ_{i'∈N(j)\i} Φ(|T_i'j|) ≈ Φ(|T_{n0,j}|) (10), where n0 = argmin_{i'∈N(j)\i} {|T_i'j|}. Substituting equation (10) into equation (4) or (8) and exploiting the property Φ[Φ(x)] = x, the update for E_ji is given by |E_ji| = min_{i'∈N(j)\i} |T_i'j| = |T_{n0,j}| (11). In summary, for each check node j, the update of the check node message E_ji passed to bit node i selects the smallest magnitude bit node message from the set of bit nodes N(j)\i, that is, excluding bit node i. Although the Min-Sum algorithm achieves a dramatic reduction in complexity, it will be seen shortly that this also results in a dramatic reduction in bit and frame error rate performance as compared with the LLR-BP algorithm.</p>
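<p>A sketch of the Min-Sum check node update of equation (11), operating on the same structures as the earlier sketches: each outgoing magnitude is simply the smallest incoming magnitude, excluding the message from the destination bit node, with the sign processing of equation (5).</p>
<pre>
def check_node_update_min_sum(H, T, E):
    """Equation (11): |E_ji| = min over i' in N(j) excluding i of |T_i'j|; sign as in equation (5)."""
    m, n = H.shape
    for j in range(m):
        Nj = [i for i in range(n) if H[j, i]]
        for i in Nj:
            others = [T[(ip, j)] for ip in Nj if ip != i]
            mag = min(abs(t) for t in others)   # smallest magnitude bit node message
            sgn = 1.0
            for t in others:
                sgn = -sgn if t < 0 else sgn
            E[(j, i)] = sgn * mag
</pre>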
<p>HYBRID LDPC DECODER (SET #1)</p> <p>It has been realised that valuable information is lost by using the Min-Sum algorithm due to the discarded extrinsic information T_ij, ∀i ∈ N(j)\n0, for check node j.</p>
<p>Instead, a preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from bit nodes by prioritising (in terms of computational resources or complexity) the most contributing bit node n0 (where n0 = argmin_{i∈N(j)} {|T_ij|}) to update the extrinsic check node message being passed to that bit node n0, over the less contributory bit nodes that update the remaining extrinsic check node messages E_ji, ∀i ∈ N(j)\n0.</p>
<p>In general, this is achieved by: * Identifying, for each check node j, 1 ≤ j ≤ m, the least reliable bit node by finding the smallest bit node message |T_ij|, ∀i ∈ N(j), among the set of bit nodes N(j), that is n0 = argmin_{i∈N(j)} {|T_ij|}.</p>
<p>* Selecting a first algorithm for use in the check node update process by, for example, allocating computational resources to calculate the check node message E_{j,n0} that is passed from check node j to bit node n0.</p>
<p>* Identifying the remaining set of bit nodes, ∀i ∈ N(j)\n0.</p>
<p>* Selecting a second algorithm for use in the check node update process, for example allocating further computational resources, for the remaining bit nodes ∀i ∈ N(j)\n0 to calculate the remaining check node messages E_ji.</p>
<p>The result is a check node update process in which the selection of a first algorithm allocates more computational resources, complexity, and thus more accuracy to estimating the extrinsic check node message sent from a given check node to the bit node generating the least reliable bit node message (also seen as generating the smallest or most contributing bit node message), while still taking into account the contribution from the other bit nodes in the check node update process through the selection of a second algorithm.</p>
<p>Referring to Figure 5, a preferred embodiment of the invention for updating the check node message 500 is illustrated that uses the LLR-BP update rules, or the LLR-BP algorithm, as the first algorithm to update the check node passing the extrinsic check node message to the least reliable bit node 502. It uses the Min-Sum update rules, or the Min-Sum algorithm, as the second algorithm to update the check nodes passing extrinsic check node messages to the most reliable bit nodes (or remaining bit nodes).</p>
<p>This results in the following check node update process, for each check node j, 1 ≤ j ≤ m: * Identifying: a. the least reliable or smallest bit node 502 among N(j): n0 = argmin_{i∈N(j)} {|T_ij|}; b. the remaining set of bit nodes, ∀i ∈ N(j)\n0.</p>
<p>* Selecting a first and second algorithm for use in the check node update process for check node j 306 to update the extrinsic check node messages as follows: a. For the check node message being passed to bit node n0 502, use as the first algorithm the LLR-BP algorithm (using either equation (4) or (8)), i.e.: |E_{j,n0}| = Φ( Σ_{i∈N(j)\n0} Φ(|T_ij|) ). b. For check node messages being passed to the remaining bit nodes, use as the second algorithm the Min-Sum algorithm: ∀i ∈ N(j)\n0: |E_ji| = min_{i'∈N(j)\i} |T_i'j| = |T_{n0,j}|.</p> <p>Referring now to Figures 6a and 6b, a comparison of the error performance of this preferred embodiment is made with respect to the reference LLR-BP and Min-Sum algorithms. The communication system model uses the IEEE 802.16e standard for communication and the LDPC code used is a rate-R_c LDPC code having a frame length of Zf=96 bits. The simulated bit error rate (BER) vs SNR (Eb/N0) for this system is shown in Figure 6a and the simulated frame error rate (FER) vs SNR (Eb/N0) for this system is shown in Figure 6b.</p>
<p>Referring to Figure 6a, the performance gain of the preferred embodiment is 0.1 dB at a BER = 1e-4 over that of the Min-Sum algorithm. This improved performance gain is exceptional considering the only slight increase in computational complexity.</p>
<p>The computational complexity of the reference algorithms and this preferred embodiment is shown in Table 1. The complexity of the preferred embodiment is given in the row labelled SET #1 and is shown to be less than that of the LLR-BP algorithm and closer to that of the Min-Sum algorithm.</p>
<p>The complexity is given in terms of the degree of the check and bit nodes (also called variable nodes), denoted d_c and d_v respectively. The degree of a check node is the row weight of the parity check matrix H, which represents the number of bit nodes that pass bit node messages to that check node. Similarly, the degree of a bit node (or variable node) is the column weight of the parity check matrix H, which is the number of check node messages that are passed to a bit node.</p>
<p>Referring to Figure 6b, the performance degradation of the preferred embodiment is only 0.2 dB compared with that of the LLR-BP algorithm.</p>
<p>The bit node update process in this preferred embodiment is the same as that of the LLR-BP algorithm. It is clear that this preferred embodiment allows for low complexity decoding of LDPC codes with a performance gain of 0.1 dB with respect to the Min-Sum algorithm, while maintaining a complexity close to it.</p>
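<p>For illustration only, the SET #1 check node update described above can be sketched as follows, with the LLR-BP rule applied to the message sent to the least reliable bit node n0 and the Min-Sum rule applied to the remaining messages (phi(·) and the edge-message dictionaries as in the earlier sketches; the sign processing of equation (5) is assumed for both branches).</p>
<pre>
import numpy as np

def phi(x):
    x = np.maximum(x, 1e-12)                    # clip to avoid log(0)  (implementation choice)
    return -np.log(np.tanh(x / 2.0))

def check_node_update_set1(H, T, E):
    """Hybrid SET #1: LLR-BP for the message to the least reliable bit node n0, Min-Sum for the rest."""
    m, n = H.shape
    for j in range(m):
        Nj = [i for i in range(n) if H[j, i]]
        n0 = min(Nj, key=lambda i: abs(T[(i, j)]))         # least reliable (smallest) bit node
        for i in Nj:
            others = [T[(ip, j)] for ip in Nj if ip != i]
            sgn = 1.0
            for t in others:
                sgn = -sgn if t < 0 else sgn
            if i == n0:                                    # first algorithm: LLR-BP, equation (4)
                mag = phi(sum(phi(abs(t)) for t in others))
            else:                                          # second algorithm: Min-Sum, equation (11)
                mag = min(abs(t) for t in others)          # equals |T_{n0,j}| for i != n0
            E[(j, i)] = sgn * mag
</pre>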
<p>[Table 1: Complexity of various embodiments and iterative decoders — per-iteration counts of simple operations and of calls to the non-linear function Φ(·) for the variable (bit) node messages, the tentative decoding and the check node messages, for the LLR-BP, Min-Sum, Lambda-Min, A-Min*, Corrected Min-Sum and Compensated Lambda-Min algorithms and the SET #1, SET #2 and SET #3 embodiments, expressed in terms of the check node degree d_c and the bit node degree d_v, together with energy and cycle estimates; the detailed entries are not recoverable from the source text.]</p>
<p>LAMBDA-MIN ALGORITHM</p>
<p>The Min-Sum algorithm sacrifices performance for complexity, whereas the Lambda-Min algorithm attempts to attain the performance of the LLR-BP algorithm at a higher complexity by using only a predetermined number, λ > 1, of bit node messages passed from the set of bit nodes N(j) to check node j during the check node update process.</p>
<p>The bit node update process in the Lambda-Min algorithm is the same as that of the BP algorithm.</p>
<p>As with the Min-Sum algorithm, for each check node j, the check node update process is simplified by observing that small values of |T_ij| result in large values of Φ(|T_ij|), as seen in Figure 4. Hence small values of |T_ij| contribute more to the summation Σ_{i'∈N(j)\i} Φ(|T_i'j|) in equations (4) or (8) of the BP algorithm than larger values of |T_ij|. As opposed to the Min-Sum algorithm, the Lambda-Min algorithm updates the extrinsic check node messages passed from the check nodes to the bit nodes by relying on just the λ > 1 least reliable bit nodes, and hence the most contributing bit nodes are used within the aggregated sum of either equation (4) or (8).</p>
<p>In particular, the check node update process of check node j, 1 ≤ j ≤ m, is given as follows: * Identify the set of λ bit nodes that pass the λ lowest-magnitude bit node messages to check node j: N_λ(j) = {i ∈ N(j) | λ lowest |T_ij|}. * The check node messages for check node j are updated by only using the N_λ(j) bit node messages: |E_ji| = Φ( Σ_{i'∈N_λ(j)\i} Φ(|T_i'j|) ).</p> <p>HYBRID LDPC DECODER (SET #2)</p> <p>It has been realised that valuable information is still lost by using the Lambda-Min algorithm due to the discarded extrinsic information T_ij, ∀i ∈ N(j)\N_λ(j), that is not used in the check node update process for check node j.</p>
<p>Instead, a preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from bit nodes by prioritising (in terms of computational resources or complexity), for each check node j, the most contributing bit node n0 (where n0 = argmin_{i∈N(j)} {|T_ij|}) to update the extrinsic check node message E_{j,n0} being passed to bit node n0, over the less contributory bit nodes that update the remaining extrinsic check node messages E_ji, ∀i ∈ N(j)\n0.</p>
<p>The result is the allocation of more computational resources, complexity, and thus more accuracy to the extrinsic check node message sent by the check node to the bit node that generates less reliable edge messages, while still taking into account the contribution from other bit nodes. This can result in either similar performance with less computational complexity or improved performance with slightly more computational complexity.</p>
<p>Referring to Figure 7, an illustration is shown of a preferred embodiment of the invention for the check node update process 700 that uses the LLR-BP algorithm to update the check nodes 306 passing extrinsic check node messages to the least reliable bit node 502, and then uses the Lambda-Min algorithm to update the check nodes 306 passing extrinsic check node messages to a subset of the remaining bit nodes; in this preferred embodiment, the (λ−1) least reliable remaining bit nodes are used. However, the preferred embodiment is not limited to using only the remaining least reliable bit nodes; other selections of the remaining bit nodes are possible.</p>
<p>This preferred embodiment results in the following check node update process for each check node j 306: * Identify: a. The least reliable bit node 502 (the bit node with the smallest bit node message) among N(j): n0 = argmin_{i∈N(j)} {|T_ij|}. b. The (λ−1) least reliable bit nodes among N(j)\n0, giving N_λ(j) = {i ∈ N(j) | λ lowest |T_ij|}. * In the check node update process, for check node j 306, update the extrinsic check node messages as follows by selecting a first and second algorithm: a. For the check node message being passed to bit node n0 502, use the LLR-BP algorithm as the first algorithm: |E_{j,n0}| = Φ( Σ_{i∈N(j)\n0} Φ(|T_ij|) ). b. For check node messages being passed to the remaining bit nodes 304, use the Lambda-Min algorithm as the second algorithm: ∀i ∈ N(j)\n0: |E_ji| = Φ( Σ_{i'∈N_λ(j)\i} Φ(|T_i'j|) ).</p> <p>Referring now to Figures 8a and 8b, a comparison of the error performance is made with respect to the reference Lambda-Min and Min-Sum algorithms and this preferred embodiment, which is labelled SET #2. The communication system model uses the IEEE 802.16e standard and an LDPC encoder/decoder is used having a rate-R_c LDPC code with a frame length of Zf=96 bits. The Lambda-Min algorithm is shown for λ = 3 and 4. However, the preferred embodiment uses λ = 3.</p>
<p>The simulated bit error rate (BER) vs SNR (Eb/N0) for this system is shown in Figure 8a and the simulated frame error rate (FER) vs SNR (Eb/N0) for this system is shown in Figure 8b.</p>
<p>Referring to Figure 8a, the performance gain of the preferred embodiment is at least 0.3 dB at a BER = 1e-4 over that of the Min-Sum algorithm; in fact, the performance is the same as that of the Lambda-Min algorithms. For lower BER < 1e-4, the performance actually lies between that of the Lambda-Min algorithms with λ = 3 and 4. The advantage is that the preferred embodiment provides an intermediate solution both in terms of performance and complexity, as seen from the complexity analysis in Table 1, row labelled SET #2.</p>
<p>The computational complexity of the reference algorithms and this preferred embodiment is shown in Table 1. The complexity of the preferred embodiment is given in the row labelled SET #2. This is shown to be less than the complexity of the BP algorithm and closer in complexity to that of the Lambda-Min algorithms. For the same λ used in both the preferred embodiment and the Lambda-Min algorithm, there is in fact less complexity in the number of simple operations, while only slightly more complexity in the calls to Φ(·). However, as seen in the BER and FER shown in Figures 8a and 8b, the preferred embodiment outperforms the same-λ Lambda-Min algorithm for a given BER and FER.</p>
<p>The bit node update process in this preferred embodiment is the same as that of the BP algorithm.</p>
<p>It is clear that this preferred embodiment allows for lower complexity decoding of LDPC codes while having a performance close to that of the LLR-BP algorithm. Moreover, there is more granularity in terms of complexity and performance. That is, there is a reduced gap in performance between the λ-min and (λ+1)-min versions of the preferred embodiment, and a reduced increase in complexity, compared with the gap in complexity and performance between the λ-min and (λ+1)-min Lambda-Min algorithms.</p>
<p>This allows the complexity, or gradations of computational complexity, to be fine tuned and allocated before or during the iterative decoding process, by selecting the first and second algorithms (and other algorithms) to be used for, or if necessary adapted during, the iterative decoding process. This provides an iterative decoding process that is able to adapt to the computational requirements of the hardware/software on which the LDPC decoder runs, for example for a given communications link/channel, or for a given performance or quality of service.</p>
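<p>By way of illustration only, a simple selection policy could map a complexity budget to a pair of algorithms for the first and second subsets. The budget thresholds, the function name and the returned labels below are assumptions made for this sketch and do not come from the patent.</p>

    def select_algorithms(complexity_budget):
        """Illustrative policy for picking the (first, second) check node
        update algorithms before or during decoding."""
        if complexity_budget >= 0.8:
            return ("LLR-BP", "Lambda-Min")   # highest accuracy (SET #2 style)
        if complexity_budget >= 0.4:
            return ("Lambda-Min", "Min-Sum")  # intermediate (SET #3 style)
        return ("Min-Sum", "Min-Sum")         # cheapest option

    # Example: tighten the budget as the iterations proceed.
    for iteration, budget in enumerate([0.9, 0.6, 0.3]):
        print(iteration, select_algorithms(budget))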
<p>Furthermore, the Lambda-Min algorithm, for the same λ as in the preferred embodiment, requires an increased amount of storage/memory.</p>
<p>HYBRID LDPC DECODER (SET #3)
A further preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from the bit nodes and prioritising, in terms of computational resources or complexity, the set of most contributing bit nodes $N_\lambda(j) = \{i \in N(j) : \lambda \text{ lowest } |T_{ji}|\}$, so that the extrinsic check node messages $E_{ji}$ passed to those bit nodes are updated with more accuracy than the remaining extrinsic check node messages $E_{ji}, \forall i \in N(j) \setminus N_\lambda(j)$. The result is the allocation of more computational resources, complexity, and thus more accuracy to the extrinsic check node messages sent by the check node to the bit nodes that generate the least reliable bit node messages (i.e. the smallest magnitude bit node messages), while still taking into account the contribution from the other bit nodes by the selection of a second algorithm. This can result in intermediate performance with reduced computational complexity compared with the LLR-BP and Lambda-Min algorithms.</p>
<p>Referring to Figure 9, an illustration is shown of a preferred embodiment of the invention in which the check node update process 900 uses, as the first algorithm, the Lambda-Min algorithm to update the check nodes 306 passing extrinsic check node messages to the least reliable bit nodes 304, which are the λ least reliable bit nodes.</p>
<p>In addition, the preferred embodiment uses, as the second algorithm, the Min-Sum algorithm to update the check nodes 306 passing extrinsic check node messages to the remaining bit nodes.</p>
<p>This results in the following check node update process for each check node j, 1 ≤ j ≤ m:
* Identify:
a. The λ least reliable bit nodes (the bit nodes passing the smallest bit node messages to check node j 306) among N(j): $N_\lambda(j) = \{i \in N(j) : \lambda \text{ lowest } |T_{ji}|\}$
b. The least reliable bit node among N(j)\N_λ(j): $n_0 = \arg\min_{i \in N(j) \setminus N_\lambda(j)} |T_{ji}|$
* In the check node update process, update the extrinsic check node messages using a first and second algorithm as follows:
a. For check node messages being passed to the set of bit nodes N_λ(j), use the Lambda-Min algorithm as the first algorithm: $\forall i \in N_\lambda(j): \; E_{ji} = \Phi\big( \sum_{i' \in N_\lambda(j) \setminus i} \Phi(T_{ji'}) \big)$
b. For check node messages being passed to the remaining bit nodes, use the Min-Sum algorithm as the second algorithm: $\forall i \in N(j) \setminus N_\lambda(j): \; E_{ji} = \prod_{i' \in N(j) \setminus i} \mathrm{sign}(T_{ji'}) \cdot \min_{i' \in N(j) \setminus i} |T_{ji'}|$
Referring now to Figures 10a and 10b, a comparison of the error performance is made with respect to the reference Lambda-Min and Min-Sum algorithms and this preferred embodiment, which is labelled SET #3. The communication system model uses the IEEE 802.16e standard and an LDPC encoder/decoder having an LDPC code of rate R_c with a frame length of Z_f = 96 bits. The Lambda-Min algorithm is shown for λ = 3 and λ = 4. The preferred embodiment uses λ = 3.</p>
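<p>Again for illustration only, the following Python sketch shows a SET #3 style hybrid update in which a Lambda-Min style update serves the λ least reliable bit nodes and a Min-Sum update serves the rest. The function names, the dictionary layout of the messages and the explicit sign handling are assumptions made for this sketch, not details taken from the patent.</p>

    import math

    def phi(x):
        # phi(x) = -ln(tanh(x/2)); self-inverse for x > 0.
        return -math.log(math.tanh(max(x, 1e-12) / 2.0))

    def hybrid_check_update_set3(T, lam=3):
        """SET #3 style update for one check node.
        T   : dict {bit index: incoming bit node message T_ji}
        lam : number of least reliable bit nodes handled by Lambda-Min (>= 2)
        Returns dict {bit index: extrinsic check node message E_ji}."""
        ordered = sorted(T, key=lambda i: abs(T[i]))   # by reliability
        lam_set = set(ordered[:lam])                   # lam least reliable

        E = {}
        for i in T:
            others = [k for k in T if k != i]
            sign = 1
            for k in others:
                sign *= 1 if T[k] >= 0 else -1
            if i in lam_set:
                # First algorithm (Lambda-Min): use only the other members
                # of the least reliable subset.
                used = [k for k in lam_set if k != i]
                E[i] = sign * phi(sum(phi(abs(T[k])) for k in used))
            else:
                # Second algorithm (Min-Sum): sign product times the smallest
                # magnitude among all other incoming messages.
                E[i] = sign * min(abs(T[k]) for k in others)
        return E

    # Example usage with one check node of degree six.
    T = {0: 0.7, 1: -2.2, 2: 1.1, 3: -0.3, 4: 2.9, 5: -1.6}
    print(hybrid_check_update_set3(T, lam=3))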
<p>The simulated BER vs SNR (E_b/N_0) for this system is shown in Figure 10a and the simulated FER vs SNR (E_b/N_0) for this system is shown in Figure 10b.</p>
<p>Referring to Figure 10a, the performance gain of the preferred embodiment over the Min-Sum algorithm is 0.15 dB at a BER of 1e-4. The performance is between that of the Min-Sum algorithm and the Lambda-Min algorithms. In addition, as the SNR increases, the performance of the preferred embodiment approaches that of the Lambda-Min algorithm for λ = 3. Similar FER performance is attained, as can be seen in Figure 10b. The preferred embodiment is an intermediate solution between the full Lambda-Min and Min-Sum algorithms in terms of both performance and complexity (see Table 1, row labelled SET #3).</p>
<p>Hence, the preferred embodiment provides a low-complexity iterative LDPC decoder that gives a good trade-off between performance and complexity when compared with the existing Lambda-Min and Min-Sum algorithms.</p>
<p>It will be clear to the skilled person that other iterative algorithms may be combined with any of the preferred embodiments or combinations thereof. In addition, any combination of all three of the LLR-BP, Lambda-Min and Min-Sum algorithms can be applied to the present invention, where any one of these algorithms uses the least reliable bit nodes while the remaining algorithms operate on the remaining subsets of the most reliable bit nodes.</p>
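<p>As a sketch of how such a three-way combination could be organised, the helper below partitions the bit nodes of one check node into reliability groups and assigns one algorithm label to each group; the group sizes and the label-to-algorithm mapping are illustrative assumptions, not values from the patent.</p>

    def partition_by_reliability(T, sizes):
        """Split the bit nodes of one check node into consecutive reliability
        groups of the given sizes (least reliable first); the last group
        receives whatever is left over."""
        ordered = sorted(T, key=lambda i: abs(T[i]))
        groups, start = [], 0
        for size in sizes[:-1]:
            groups.append(ordered[start:start + size])
            start += size
        groups.append(ordered[start:])
        return groups

    # Example: assign one algorithm per reliability group of a degree-6 check node.
    T = {0: 0.7, 1: -2.2, 2: 1.1, 3: -0.3, 4: 2.9, 5: -1.6}
    least, middle, rest = partition_by_reliability(T, sizes=[1, 2, 3])
    plan = {"LLR-BP": least, "Lambda-Min": middle, "Min-Sum": rest}
    print(plan)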
<p>It will be clear to the skilled person that, other than the modulation and RF components, the blocks making up the embodiment at the transmitter and receiver units comprise suitably programmed microprocessors, digital signal processor devices (DSPs), field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs) executing signal processing. Separate dedicated hardware devices or software routines may be used for the LDPC encoder and decoder operations (specifically the iterative algorithms applied in the soft decision iterative LDPC decoder for estimating the received codeword) in transceiver, transmitter or receiver units.</p>
<p>The term "messages" can be variables (or temporary variables) that are stored in memory in such a fashion to represent the value of the message and the relationship between the messages passed from check nodes to bit nodes, and vice-versa. The use of the terms, "pass" or "message passing" and the like, can be used to describe the accessing and performing operations on these variables stored in memory (and the like).</p>
<p>It will be apparent from the foregoing that many other embodiments or variants of the above are possible. The present invention extends to any and all such variants, and to any novel subject matter or combination thereof disclosed in the foregoing.</p>

Claims (30)

<p>CLAIMS</p>
    <p>1. A method for iteratively decoding low density parity check codes comprising, for each iteration, the steps of: updating a plurality of bit nodes; and updating a plurality of check nodes; wherein for each bit node a set of check node messages are used for updating that bit node, and for each check node a set of bit node messages are used for updating that check node; characterised in that for each check node update the method further includes the steps of: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and selecting a second algorithm for use in updating a second subset of check node messages corresponding to the second subset of bit node messages.</p>
    <p>2. The method of claim 1 wherein the first subset of bit node messages comprises a set of most contributing bit node messages.</p>
    <p>3. The method of claims 1 or 2, wherein the first algorithm for use in the check node update process includes the steps of: forming a summation, over the first subset of bit node messages, of a function having as an operand each bit node message in the first subset of bit node messages; and updating the first subset of check node messages, where for each check node message in the first subset of check node messages the said each check node message is updated by the function having as an operand the summation with the contribution of the respective bit node message corresponding to each said check node message in the first subset of bit node messages removed.</p>
    <p>4. The method of claim 3, wherein the function is a non-linear function defined by a natural logarithm of a hyperbolic tangent of the operand divided by two.</p>
    <p>5. The method of any of claims 1 to 4 wherein the first algorithm is a Log-Likelihood-Ratio Belief Propagation algorithm.</p>
    <p>6. The method of any of claims 1 to 5, wherein the second algorithm for use in the check node update process includes the steps of: identifying the most contributing bit node message from the second subset of bit node messages; and updating the second subset of check node messages with the most contributing bit node message.</p>
    <p>7. The method of any of claims 1 to 6, wherein the second algorithm is a Min-Sum algorithm.</p>
    <p>8. The method of any of claims 1 to 5, wherein the second algorithm for use in the check node update process includes the steps of: identifying from the second subset of bit node messages, a reduced subset of the one or more most contributing bit node messages; and updating the first subset of check node messages using the reduced subset of the one or more most contributing bit node messages.</p>
    <p>9. The method of any of claims 1 to 5 or 8, wherein the second algorithm is a Lambda-Min algorithm.</p>
    <p>10. The method of any of claims 1 to 2 or 5 to 7, wherein the first algorithm for use in the check node update process includes the steps of: identifying from the first subset of bit node messages, a reduced subset of the most contributing bit node messages; and updating the first subset of check node messages using the reduced subset of most contributing bit node messages.</p>
    <p>11. The method of claims 1 or 2 wherein the first algorithm is a Lambda-Min algorithm and the second algorithm is a Min-Sum algorithm.</p>
    <p>12. The method of claims 1 or 2 wherein the first algorithm is a Min-Sum algorithm and the second algorithm is a Lambda-Min algorithm.</p>
    <p>13. The method of any preceding claim further including the steps of: adjusting the check node update process, by selecting the first and second algorithm in accordance with a required accuracy or LDPC code performance requirement.</p>
    <p>14. An apparatus for use in iteratively decoding LDPC codes adapted to implement the method of any of claims 1 to 13.</p>
    <p>15. A computer program for iteratively decoding LDPC codes, which when executed, implements the method of any of claims 1 to 14.</p>
    <p>16. A hybrid check node apparatus for use in iteratively decoding low density parity check codes comprising, for each iteration and each check node: means for identifying a first subset of bit node messages from a set of bit node messages for use in updating each check node; means for selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages; means for identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and means for selecting a second algorithm for use in updating a second subset of check node messages corresponding to the second subset of bit node messages.</p>
    <p>17. The hybrid check node apparatus of claim 16 wherein the first subset of bit node messages comprises a set of most contributing bit node messages.</p>
    <p>18. The hybrid check node apparatus of claims 16 or 17, further comprising means for implementing the first algorithm for use in the check node update process that includes means for: forming a summation, over the first subset of bit node messages, of a function having as an argument each bit node message in the first subset of bit node messages; and updating the first subset of check node messages, where for each check node message in the first subset of check node messages the said each check node message is updated by the function having as an argument the summation with the contribution of the respective bit node message corresponding to each said check node message in the first subset of bit node messages removed.</p>
    <p>19. The hybrid check node apparatus of claim 18, wherein the function is a non-linear function defined by a natural logarithm of a hyperbolic tangent of the operand divided by two.</p>
    <p>20. The hybrid check node apparatus of any of claims 16 to 19, further comprising means for implementing the first algorithm as a Log-Likelihood-Ratio Belief Propagation algorithm.</p>
    <p>21. The hybrid check node apparatus of any of claims 16 to 20, further comprising means for implementing the second algorithm for use in the check node update process that includes means for: identifying the most contributing bit node message from the second subset of bit node messages; and updating the second subset of check node messages with the most contributing bit node message.</p>
    <p>22. The hybrid check node apparatus of any of claims 16 to 21, further comprising means for implementing the second algorithm as a Min-Sum algorithm.</p>
    <p>23. The hybrid check node apparatus of any of claims 16 to 20, further comprising means for implementing the second algorithm for use in the check node update process including means for: identifying from the second subset of bit node messages, a reduced subset of the one or more most contributing bit node messages; and updating the first subset of check node messages using the reduced subset of the one or more most contributing bit node messages.</p>
    <p>24. The hybrid check node apparatus of any of claims 16 to 20 or 23, wherein the second algorithm is a Lambda-Min algorithm.</p>
    <p>25. The hybrid check node apparatus of any of claims 16 or 17 or 21 or 23, further comprising means for implementing the first algorithm for use in the check node update process including means for: identifying from the first subset of bit node messages, a reduced subset of the most contributing bit node messages; and updating the first subset of check node messages using the reduced subset of most contributing bit node messages.</p>
    <p>26. The hybrid check node apparatus of claims 16 or 17, further comprising means for implementing the first algorithm as a Lambda-Min algorithm and the second algorithm as a Min-Sum algorithm.</p>
    <p>27. The hybrid check node apparatus of claims 16 or 17, further comprising means for implementing the first algorithm as a Min-Sum algorithm and the second algorithm as a Lambda-Min algorithm.</p>
    <p>28. The hybrid check node apparatus of any of claims 16 to 27, further including means for adjusting the check node update process, by selecting the first and second algorithm in accordance with a required accuracy or LDPC code performance requirement.</p>
    <p>29. An LDPC decoder substantially as hereinbefore described having reference to any of Figures 1 to 10.</p>
    <p>30. A method of updating check nodes suitable for a process of decoding low density parity check codes, the method comprising the steps of: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; using a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and using a second algorithm for use in updating a second subset of check node messages corresponding to the second subset of bit node messages.</p>
GB0521858A 2005-10-26 2005-10-26 Decoding low-density parity check codes Expired - Fee Related GB2431833B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0521858A GB2431833B (en) 2005-10-26 2005-10-26 Decoding low-density parity check codes
US11/586,759 US8006161B2 (en) 2005-10-26 2006-10-26 Apparatus and method for receiving signal in a communication system using a low density parity check code
KR1020060104732A KR101021465B1 (en) 2005-10-26 2006-10-26 Apparatus and method for receiving signal in a communication system using a low density parity check code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0521858A GB2431833B (en) 2005-10-26 2005-10-26 Decoding low-density parity check codes

Publications (3)

Publication Number Publication Date
GB0521858D0 GB0521858D0 (en) 2005-12-07
GB2431833A true GB2431833A (en) 2007-05-02
GB2431833B GB2431833B (en) 2008-04-02

Family

ID=35515778

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0521858A Expired - Fee Related GB2431833B (en) 2005-10-26 2005-10-26 Decoding low-density parity check codes

Country Status (1)

Country Link
GB (1) GB2431833B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8196005B2 (en) * 2006-03-29 2012-06-05 Stmicroelectronics N.V. Method and device for decoding LDPC encoded codewords with a fast convergence speed
CN102611459A (en) * 2011-01-19 2012-07-25 Jvc建伍株式会社 Decoding device and decoding method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9793923B2 (en) * 2015-11-24 2017-10-17 Texas Instruments Incorporated LDPC post-processor architecture and method for low error floor conditions
CN114421972B (en) * 2022-01-27 2022-11-22 石家庄市经纬度科技有限公司 Decoding method of multi-system LDPC code

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050154957A1 (en) * 2004-01-12 2005-07-14 Jacobsen Eric A. Method and apparatus for decoding forward error correction codes
US20050229087A1 (en) * 2004-04-13 2005-10-13 Sunghwan Kim Decoding apparatus for low-density parity-check codes using sequential decoding, and method thereof

Also Published As

Publication number Publication date
GB2431833B (en) 2008-04-02
GB0521858D0 (en) 2005-12-07

Similar Documents

Publication Publication Date Title
US8006161B2 (en) Apparatus and method for receiving signal in a communication system using a low density parity check code
US7500172B2 (en) AMP (accelerated message passing) decoder adapted for LDPC (low density parity check) codes
Kienle et al. Low complexity stopping criterion for LDPC code decoders
US10419027B2 (en) Adjusted min-sum decoder
WO2017086414A1 (en) Quantized belief propagation decoding of ldpc codes with mutual information-maximizing lookup tables
CN101107782A (en) ECC decoding method
Zimmermann et al. Reduced complexity LDPC decoding using forced convergence
Chen et al. Performance analysis of practical QC-LDPC codes: From DVB-S2 to ATSC 3.0
Sridharan et al. Convergence analysis for a class of LDPC convolutional codes on the erasure channel
KR20200127783A (en) Appartus and method for decoding of low-density parity check codes in wireles communication system
Lian et al. Adaptive decoding algorithm with variable sliding window for double SC-LDPC coding system
GB2431833A (en) Decoding low-density parity-check codes using subsets of bit node messages and check node messages
US8019020B1 (en) Binary decoding for correlated input information
Nam-Il et al. Early termination scheme for 5G NR LDPC code
Zhang et al. Protograph-based low-density parity-check Hadamard codes
Andreadou et al. Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes for deep space and high data rate applications
Olaniyi et al. Machine Learning for Channel Coding: A Paradigm Shift from FEC Codes
GB2431836A (en) Decoding low-density parity-check codes using subsets of bit node messages and check node messages
Fang et al. Superposition Construction of Globally-Coupled LDPC Codes for MIMO Communication
GB2431835A (en) Decoding low-density parity-check codes using subsets of bit node messages and check node messages
GB2431834A (en) Decoding low-density parity-check codes using subsets of bit node messages and check node messages
Sy et al. Extended non-binary low-density parity-check codes over erasure channels
Baviskar et al. LDPC based error resilient audio signal processing for wireless communication
Ruan et al. Near optimal decoding of polar-based turbo product codes
Zhong et al. A Classified Normalized BP-Based Algorithm with 2-Dimensional Correction for LDPC Codes.

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20191026