GB2431834A — Decoding low-density parity-check codes using subsets of bit node messages and check node messages
Publication number: GB2431834A (application GB0521859A)
Authority: GB (United Kingdom)
Legal status: Withdrawn
Classifications

 H—ELECTRICITY
 H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
 H03M13/1117—Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms, using approximations for check node processing, e.g. an outgoing message depending on the signs and the minimum over the magnitudes of all incoming messages according to the min-sum rule
 H03M13/11—Block codes using multiple parity bits
 H03M13/35—Unequal or adaptive error protection
 H03M13/6502—Reduction of hardware complexity or efficient processing
 H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
 H04L1/005—Iterative decoding, including iteration between signal detection and decoding operation
 H04L1/0057—Block codes
Description
<p>Decoding Low-Density Parity-Check Codes</p>
<p>The present invention relates to decoding Low-Density Parity-Check (LDPC) error control codes. It relates to decoders for use in digital systems, such as transmitters, receivers and transceivers using LDPC codes, and to methods of, and apparatus for, transmitting and receiving a coded signal, and to a signal thus produced.</p>
<p>Background</p>
<p>Low-density parity-check (LDPC) codes are error control codes that can provide impressive performance gains, near the Shannon capacity (or near-Shannon-limit error performance), for communications systems. Due to their performance, these codes are being included in industry standards such as DVB-S2, IEEE 802.16e, and IEEE 802.11n (for example, the TGnSync and WWiSE IEEE 802.11n proposals at http://www.wwise.org/technicalproposal.htm).</p>
<p>LDPC codes belong to the family of linear block codes. The encoder for a rate R_c = k/n block code has k information bits, denoted by μ = [μ_1, ..., μ_k], that are encoded into n code bits, denoted by x = [x_1, ..., x_n], called a codeword. This codeword is generated/encoded by the linear operation x = μ·G, where G is a k by n generator matrix that defines the encoder. The codeword x is said to be in the code set C, which is the set of all codewords for the encoder defined by G. Unless otherwise stated, for simplicity all elements and operations of the encoder/decoder are defined on the Galois Field GF(2); higher-order Galois Fields may also be used.</p>
<p>The generator matrix G has a dual matrix H, which is an m = n − k by n parity check matrix. These matrices, G and H, are orthogonal, i.e. G·H^T = 0. The parity check matrix H defines a set of m parity check equations that are useful in the decoding process. The decoder can determine whether a received codeword estimate ẑ is a valid codeword x = μ·G in the code set C by computing ẑ·H^T before decoding the codeword estimate. Hence, ẑ is a valid codeword when ẑ·H^T = (μ·G)·H^T = 0; otherwise, if ẑ·H^T ≠ 0, an error is detected. The decoder should be able to find the closest or the most likely codeword that was transmitted for each received codeword estimate. For more details on decoding of general linear block codes see Shu Lin and Daniel Costello, "Error Control Coding: Fundamentals and Applications", Prentice Hall.</p>
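<p>The parity check described above can be sketched in a few lines of code. The following is an illustrative sketch only (not from the patent), with hypothetical function names, computing the syndrome ẑ·H^T over GF(2) for a codeword estimate.</p>

```python
# Illustrative sketch (hypothetical helper names): syndrome computation
# over GF(2). H is a list of rows; z is a list of 0/1 code bits.

def syndrome(H, z):
    """Compute s = z . H^T over GF(2); z is a valid codeword iff s is all zeros."""
    return [sum(row[i] * z[i] for i in range(len(z))) % 2 for row in H]

def is_valid_codeword(H, z):
    return all(s == 0 for s in syndrome(H, z))
```

<p>For example, with a small (hypothetical) 3 by 4 parity check matrix, a codeword satisfying all three parity check equations yields an all-zero syndrome, while a corrupted word yields a non-zero syndrome and an error is detected.</p>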
<p>LDPC codes are defined by an m by n sparse parity check matrix in which there is a lower density of ones compared to zeros. In general, the goal is to find a sparse generator matrix and a sparse parity check matrix for encoding and decoding.</p>
<p>These codes lend themselves to iterative decoding structures to realise near-Shannon-capacity performance. The impressive performance of LDPC codes is realised by applying soft decision iterative decoding techniques such as the belief propagation (BP) algorithm as described by Jinghu Chen and Marc P.C. Fossorier, "Decoding Low-Density Parity Check Codes with Normalized APP-Based Algorithm", 2001, IEEE; Ajay Dholakia et al., "Capacity-Approaching Codes: Can They Be Applied to the Magnetic Recording Channel?", IEEE Communications Magazine, pp 122-130, February 2004; and Engling Yeo et al., "Iterative Decoder Architectures", IEEE Communications Magazine, pp 132-140, August 2003.</p>
<p>The soft decision iterative decoding algorithms based on the BP algorithm work on the concept of bipartite Tanner graphs, as described in Frank R. Kschischang, "Codes Defined on Graphs", IEEE Communications Magazine, pp 118-125, August 2003, and Tom Richardson, "The Renaissance of Gallager's Low-Density Parity-Check Codes", IEEE Communications Magazine, pp 126-131, August 2003.</p>
<p>A bipartite Tanner graph is a set of graph vertices decomposed into two disjoint sets where these disjoint sets of vertices are a plurality of bit nodes (also called variable nodes or symbol nodes) and a plurality of parity check nodes, hereinafter called check nodes. The graph is bipartite as no two graph vertices within the same set are adjacent. That is, bit nodes are not connected via graph edges to each other, similarly with the check nodes.</p>
<p>Bipartite Tanner graphs are defined by the m by n parity check matrix H. Each bit node is connected via graph edges to a set of check nodes defined by M(i) = { j | H_ji = 1 } for bit node i, where 1 ≤ j ≤ m and m is the number of check nodes (or the number of rows of H). Similarly, each check node is connected via graph edges to a set of bit nodes defined by N(j) = { i | H_ji = 1 } for check node j, where 1 ≤ i ≤ n, and n is the number of bit nodes (or the number of columns of H and the number of code bits per codeword).</p>
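<p>The neighbour sets M(i) and N(j) defined above can be derived mechanically from H. The following sketch (hypothetical helper, not from the patent) builds both sets for a given parity check matrix.</p>

```python
# Sketch (hypothetical helper): derive the Tanner graph neighbour sets
# M(i) = checks connected to bit i, N(j) = bits connected to check j.

def neighbour_sets(H):
    m, n = len(H), len(H[0])
    M = {i: [j for j in range(m) if H[j][i] == 1] for i in range(n)}
    N = {j: [i for i in range(n) if H[j][i] == 1] for j in range(m)}
    return M, N
```

<p>Note that the column weight of H gives the size of M(i) and the row weight gives the size of N(j), matching the edge counts in the Tanner graph.</p>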
<p>The iterative decoding algorithms that are based on the concept of bipartite Tanner graphs are called message passing algorithms. Conceptually, these algorithms use the structure of the Tanner graph to "pass" messages, along graph edges, from the bit nodes to check nodes and vice versa. Typically, in implementing an iterative algorithm, the messages can be variables stored in a memory; for example, the set of messages for each bit node i that are passed to a set of check nodes can be stored as variables in an array, and operated on as such. The use of the terms "pass" or "message passing" and the like is for illustrative purposes to describe the iterative decoding algorithms that follow with reference to Tanner graphs.</p>
<p>The messages are used to iteratively estimate the code bits of the codeword to be estimated. In one iteration of an iterative decoding algorithm, each bit node passes messages, called bit node messages, that represent an estimate of its respective code bit to all neighbouring check nodes (i.e. those check nodes associated/connected with that bit node via edges). Each neighbouring check node also receives additional bit node messages from other neighbouring bit nodes. Each neighbouring check node passes back to the original bit node a combination of all these bit node messages via messages called check node messages. This process occurs for every bit node; hence each bit node receives a check node message from each of its neighbouring check nodes and can combine these to form an estimate of its respective code bit. Overall, in each iteration an estimate of a codeword is produced.</p>
<p>Although Tanner graphs can operate on messages based in the Galois Field GF(2), in soft decision iterative decoding algorithms these messages typically represent probabilities or log-likelihood ratios thereof. Hence these messages and operations are not limited to being defined on GF(2), but can be defined on other fields such as, for example, the field of real numbers ℝ.</p>
<p>More specifically in terms of LDPC codes, each bit node i passes a set of bit node messages to the set of check nodes M(i), which are used in each check node to update the check node messages. Similarly, each check node j passes a set of check node messages to the set of bit nodes N(j). In each iteration of these algorithms, there are two main processes: the bit node update process, which updates the bit node messages, and the check node update process, which updates the check node messages.</p>
<p>The bit node and check node messages are considered the information obtainable from the received codeword about the transmitted codeword, and are called extrinsic bit node and check node messages (these terms may hereinafter be used interchangeably). In essence, these messages represent the reliability or the belief for the estimate of the n code bits of the transmitted/received codeword.</p>
<p>In the bit node update process, the bit nodes receive a priori information related to the channel. Hence, in an additive white Gaussian noise (AWGN) channel, for example, bit node i receives the log-likelihood ratio of the ith bit of the received codeword estimate, given by the intrinsic information I_i = (2/σ²)·y_i (equivalently 4·y_i/N_0), where σ² = N_0/2 is the variance of the AWGN and N_0/2 is the power spectral density. This is used to initialise the BP algorithm (or, for that matter, most of the other algorithms as well).</p>
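<p>The intrinsic LLR initialisation above can be sketched directly. This is an illustrative sketch under the stated AWGN/BPSK assumptions (function name is ours, not the patent's).</p>

```python
# Sketch: intrinsic LLR initialisation for an AWGN channel,
# I_i = (2 / sigma^2) * y_i, equivalently 4*y_i/N0 with sigma^2 = N0/2.

def intrinsic_llrs(y, noise_var):
    """Map received samples y to intrinsic log-likelihood ratios."""
    return [(2.0 / noise_var) * yi for yi in y]
```

<p>With σ² = 0.5 (i.e. N_0 = 1), a received sample of 0.5 maps to an LLR of 2.0, and a sample of −1.0 maps to −4.0.</p>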
<p>The bit node message that is passed from bit node i to check node j is denoted as T_ij. This message T_ij is updated by summing the set of check node messages passed from the set of check nodes M(i) to bit node i, but excluding the check node message passed from check node j to bit node i. This update process requires fewer computational resources (or has a lower complexity) compared with the check node update process.</p>
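<p>The bit node update described above (sum all incoming check messages plus the intrinsic information, minus the message from the target check node) can be sketched as follows. This is an illustrative sketch with hypothetical names, not the patent's implementation.</p>

```python
# Sketch: bit node update for bit node i. T[j] = I_i + sum over j' in M(i)
# of E[j'][i], excluding j itself (the extrinsic principle).
# E is a dict: E[j][i] is the message from check node j to bit node i.

def update_bit_node(i, I_i, M_i, E):
    total = I_i + sum(E[j][i] for j in M_i)
    # subtracting E[j][i] excludes check node j's own message
    return {j: total - E[j][i] for j in M_i}
```

<p>Computing the full sum once and subtracting each term is a common optimisation over recomputing the exclusive sum per edge.</p>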
<p>It is the check node update process that contributes the greatest complexity to the computation of the BP algorithm. The computational burden comes from a non-linear function, Φ(·) (defined below in equation (3)), used in each check node message update.</p>
<p>In the check node update, the check node message that is passed from check node j to bit node i is denoted by E_ji. Each check node message E_ji is updated by summing Φ(T_ij) over the set of bit node messages passed from the set of bit nodes N(j) to check node j, but excluding the bit node message T_ij passed from bit node i to check node j. Finally, E_ji is updated by reapplying Φ(·) to the non-linear summation of Φ(T_ij).</p>
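<p>The check node update just described can be sketched as below, assuming the common form Φ(x) = −log(tanh(x/2)), which is its own inverse for x > 0 (the patent's equation (3) is not reproduced here, so this form is an assumption). Names are ours.</p>

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)); an involution on (0, inf)."""
    return -math.log(math.tanh(x / 2.0))

# Sketch: BP check node update for check node j. T[i][j] is the message
# from bit node i to check node j; returns E[i] for each i in N(j).
def update_check_node(j, N_j, T):
    E = {}
    for i in N_j:
        others = [T[ip][j] for ip in N_j if ip != i]   # exclude T_ij itself
        sign = 1.0
        for t in others:
            sign *= 1.0 if t >= 0 else -1.0
        mag = phi(sum(phi(abs(t)) for t in others))    # Phi of sum of Phi's
        E[i] = sign * mag
    return E
```

<p>Because Φ is an involution, a check node of degree two simply forwards the other incoming message, which is a convenient sanity check.</p>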
<p>In an attempt to simplify matters, the sign of E_ji and the magnitude of E_ji are computed separately. However, it is the magnitude of E_ji that contributes towards the overall complexity of the check node update process.</p>
<p>The iterations of the BP algorithm continue until either a predetermined or maximum number of iterations is reached, or until the algorithm has converged, that is, the bit node messages have converged. A soft decision (denoted T_i) for bit node i is calculated by summing the intrinsic information I_i and the set of check node messages that are passed from the set of check nodes M(i) to bit node i. A hard decision is formed for each of the soft decisions T_i, producing z_i and giving the estimated codeword ẑ = [z_1, ..., z_n]. The parity check equations are applied, ẑ·H^T, to check if the estimated codeword is in error and, depending on the error correction capability of the code, the codeword can be decoded accordingly, giving the information estimate μ̂. The complexity of the decoding process is primarily due to the non-linear function used in the check node update process. Hence, reducing the complexity of this process is the focus of current research in the field of LDPC codes. Several examples of reduced complexity LDPC decoders are disclosed in US 2005/0204271 A1 and US 2005/0138519 A1.</p>
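<p>The soft-to-hard decision step above can be sketched as follows (illustrative only; the LLR sign convention, positive meaning bit 0, is an assumption, and the names are ours).</p>

```python
# Sketch: soft decision T_i = I_i + sum of all check messages to bit i,
# followed by a hard decision z_i under the convention LLR >= 0 -> bit 0.
# M[i] lists the check nodes of bit i; E[j][i] is the check-to-bit message.

def hard_decisions(I, M, E):
    z = []
    for i, I_i in enumerate(I):
        soft = I_i + sum(E[j][i] for j in M[i])
        z.append(0 if soft >= 0 else 1)
    return z
```

<p>The resulting ẑ is then checked against the parity check equations as described earlier; if ẑ·H^T = 0 the iterations can stop early.</p>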
<p>A simplified LDPC decoding process is disclosed in US 2005/0204271 A1, which describes an iterative LDPC decoder that uses a "serial schedule" for processing the bit node and check node messages. Schedules are updating rules indicating the order of passing messages between nodes in the Tanner graph.</p>
<p>Additionally, an approximation to the BP algorithm called the Min-Max algorithm is used in the iterative decoding process in order to reduce the complexity of the check node update process.</p>
<p>This is achieved by using a property of the non-linear function Φ(·), in which small values of T_ij contribute the most to the summation of Φ(T_ij). So, for each check node j, the smallest magnitude bit node message is selected from the set of bit node messages passed to check node j. Only this bit node message is used in the approximation for updating check node j. That is, the least reliable bit node message is used and the most reliable bit node messages are discarded.</p>
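<p>A closely related approximation is the min-sum rule (named in the classification above), where each outgoing magnitude is simply the minimum incoming magnitude over the other messages. The following is a hedged sketch of that rule, with our own function names, for comparison with the full BP update.</p>

```python
# Sketch of the min-sum check node rule: outgoing magnitude is the
# minimum magnitude over the other incoming bit node messages; the sign
# is the product of the other signs. T[i][j] is the bit-to-check message.

def min_sum_check_update(j, N_j, T):
    E = {}
    for i in N_j:
        others = [T[ip][j] for ip in N_j if ip != i]
        sign = 1.0
        for t in others:
            sign *= 1.0 if t >= 0 else -1.0
        E[i] = sign * min(abs(t) for t in others)
    return E
```

<p>This replaces the two Φ evaluations per message with a comparison, which is why it is so much cheaper, at the cost of the performance loss the text describes.</p>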
<p>However, although a reduction in complexity is achieved, the result is a dramatic reduction in bit and frame error rate performance compared to the BP algorithm. This is due to the fact that the discarded extrinsic information has not been used in the check node update.</p>
<p>US 2005/0138519 A1 discloses an LDPC decoding process that simplifies the iterative BP algorithm by using a reduced set of bit node messages, i.e. a predetermined number, λ > 1, of bit node messages from the set of bit nodes N(j) that are passed to check node j are used in the check node update process. Hence, only the bit nodes that pass the bit node messages with the lowest magnitude levels (smallest or least reliable bit node messages) to check node j are identified and used in the check node update process.</p>
<p>The BP algorithm is then applied to this reduced set of bit node messages in the check node update process. This simplification results in what is called the Lambda-Min algorithm; for more details see F. Guilloud, E. Boutillon, J.-L. Danger, "Lambda-Min Decoding Algorithm of Regular and Irregular LDPC Codes", 3rd International Symposium on Turbo Codes and Related Topics, Brest, France, pp 451-454, Sept. 2003. However, there is only a slight decrease in complexity compared with the BP algorithm, and there is typically a large gap in complexity and performance between the λ and λ+1 type Lambda-Min algorithms, leading to what is called poor granularity in terms of both complexity and performance, where a desired performance is not achievable because the computational resources are not available.</p>
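<p>The Lambda-Min idea described above can be sketched as below: the full BP rule is evaluated, but only over the λ least reliable incoming messages excluding the target bit node. This is an illustrative sketch of the general idea, not the cited paper's exact algorithm; names and the Φ form are assumptions.</p>

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)); self-inverse for x > 0."""
    return -math.log(math.tanh(x / 2.0))

# Sketch of a Lambda-Min style check node update: for each outgoing edge,
# keep only the lam smallest-magnitude messages among the others and
# apply the BP rule to that reduced set.
def lambda_min_check_update(j, N_j, T, lam):
    E = {}
    for i in N_j:
        others = sorted((T[ip][j] for ip in N_j if ip != i), key=abs)[:lam]
        sign = 1.0
        for t in others:
            sign *= 1.0 if t >= 0 else -1.0
        E[i] = sign * phi(sum(phi(abs(t)) for t in others))
    return E
```

<p>When λ is at least the check node degree minus one, this reduces to the full BP update, which makes the complexity/performance trade-off between successive λ values easy to see.</p>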
<p>The present invention aims to overcome or at least alleviate one or more of the aforementioned problems.</p>
<p>The invention in various aspects is defined in the appended claims.</p>
<p>The invention in one aspect relates to decreasing the complexity of the decoding process, by focusing on the check node complexity, while maintaining near-Shannon-capacity performance. An iterative decoding process is realised by identifying and prioritising the most contributing bit nodes and allocating more computational resources to calculating their check node messages, i.e. those representing their belief, reliability, probability or log-likelihood ratios thereof, as compared with the remaining bit nodes, which are also used in the check node update process. It has been realised that this gives a lower computational complexity sub-optimal decoding process with an excellent trade-off between complexity and performance, and granularity thereof, as compared with the previous sub-optimal decoding processes.</p>
<p>In one aspect, the present invention provides a method for iteratively decoding lowdensity paritycheck (LDPC) codes. The method includes iteratively updating a plurality of bit nodes, and iteratively updating a plurality of check nodes.</p>
<p>For each bit node there is a set of check nodes, and for each check node there is a set of bit nodes; for each bit node, a set of check node messages is used for updating that bit node, and for each check node, a set of bit node messages is used for updating that check node. In each iteration of the iterative decoding process, each check node update further includes identifying a first subset of bit node messages from the set of bit node messages that is used to update that check node, and selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages. The method further includes selecting a second subset of bit node messages, excluding the first subset of bit node messages, used to update that check node, and selecting a second algorithm for use in updating a second subset of check node messages corresponding to the second subset of bit node messages.</p>
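<p>One way the split check node update described above might look is sketched below, pairing the full BP rule (first algorithm) for the λ least reliable messages with the cheaper min-sum rule (second algorithm) for the rest, which is one of the combinations named in the embodiments. This is our hedged sketch, not the patent's implementation; all names and the Φ form are assumptions.</p>

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)); self-inverse for x > 0."""
    return -math.log(math.tanh(x / 2.0))

# Hedged sketch of a two-subset check node update: the first subset
# (the lam smallest-magnitude, i.e. most contributing, messages) gets the
# full BP magnitude; the second subset gets the min-sum magnitude.
def hybrid_check_update(j, N_j, T, lam):
    ranked = sorted(N_j, key=lambda i: abs(T[i][j]))
    first = set(ranked[:lam])                  # first subset of bit nodes
    E = {}
    for i in N_j:
        others = [T[ip][j] for ip in N_j if ip != i]
        sign = 1.0
        for t in others:
            sign *= 1.0 if t >= 0 else -1.0
        if i in first:
            mag = phi(sum(phi(abs(t)) for t in others))   # first algorithm: BP
        else:
            mag = min(abs(t) for t in others)             # second algorithm: min-sum
        E[i] = sign * mag
    return E
```

<p>The point of the split is granularity: λ tunes how much of the expensive rule is used per check node, without discarding the remaining messages entirely.</p>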
<p>In a preferred embodiment of the invention, the first subset of bit node messages includes a set of most contributing bit node messages, or the smallest bit node messages, or the least reliable bit node messages.</p>
<p>In a preferred embodiment of the invention, the first algorithm is a Log-Likelihood-Ratio Belief Propagation algorithm and the second algorithm is the Min-Sum algorithm. In another preferred embodiment of the invention, the second algorithm is a Lambda-Min algorithm. In a further embodiment of the invention, the first algorithm is the Lambda-Min algorithm and the second algorithm is the Min-Sum algorithm, or vice versa.</p>
<p>Aspects of the present invention provide the advantage of allowing more computational resources, i.e. more complexity, to be allocated by using a more computationally demanding algorithm for the most contributing bit nodes in the check node update process, while not discarding the contributions of the remaining bit nodes: less complexity is allocated by using a less computationally demanding algorithm to update the check node messages associated with those remaining bit nodes (and their messages) during each iteration of the check node update process within an LDPC iterative decoder.</p>
<p>Aspects of the present invention provide the advantage of allowing gradations of computational complexity to be allocated during the iterative decoding process, allowing the selection of the first and second algorithms, (and other algorithms), to be used for, or if necessary adapted during, the iterative decoding process. This provides an iterative decoding process that is able to adapt to the computational requirements of the hardware/software that the LDPC decoder requires for a given communications link for example, or for a given performance or quality of service.</p>
<p>In another aspect of the invention, an apparatus is provided for use in iteratively decoding LDPC codes, adapted to implement the aforesaid method and embodiments thereof. In a further aspect, a computer program is provided for iteratively decoding LDPC codes which, when executed, implements the aforesaid method and embodiments thereof.</p>
<p>These and other aspects, preferred features and embodiments will be described further, by way of example only, with reference to the accompanying drawings in which: Figure 1 illustrates an example of an LDPC decoder/encoder for use in a communications system.</p>
<p>Figure 2 illustrates an example of the relationship between the parity check matrix and the corresponding Tanner Graph.</p>
<p>Figure 3a illustrates the update of bit node messages for bit node i using check node messages sent from the set of check nodes M(i) to calculate each code bit of the codeword estimate.</p>
<p>Figure 3b illustrates the update of the check node messages using bit node messages sent to check node j from the set of bit nodes N(j).</p>
<p>Figure 4 illustrates a plot of the non-linear function Φ(x) for 0 < x ≤ 6.</p>
<p>Figure 5 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Belief Propagation and Min-Sum algorithms.</p>
<p>Figure 6a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 5.</p>
<p>Figure 6b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 5.</p>
<p>Figure 7 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Belief Propagation and Lambda-Min algorithms.</p>
<p>Figure 8a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 7.</p>
<p>Figure 8b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 7.</p>
<p>Figure 9 illustrates an example of a preferred embodiment of the iterative decoding process using a check node update process based on the Lambda-Min and Min-Sum algorithms.</p>
<p>Figure 10a illustrates simulation results depicting the bit-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 9.</p>
<p>Figure 10b illustrates simulation results depicting the frame-error-rate performance versus signal-to-noise ratio (Eb/No) for the embodiment of Figure 9.</p>
<p>Specific Description of the Preferred Embodiments</p>
<p>Firstly, a description is given of a simplified communication system for use with the LDPC decoder. This is followed by a description of the use of bipartite Tanner graphs in iterative decoding of LDPC codes. Thereafter, iterative decoding algorithms such as the belief propagation, Min-Sum and Lambda-Min algorithms are described. The preferred embodiments of the iterative LDPC decoder are described where appropriate in relation to, but not limited to, the use of these three algorithms.</p>
<p>An illustration of a communications system 100 for use with a low-density parity-check (LDPC) decoder is shown in Figure 1. The communications system 100 includes a transmitter unit 104 that communicates a coded signal to a receiver unit 112. A brief overview of the communication system 100 is now given, followed by a detailed description of its components.</p>
<p>The transmitter unit 104 receives information bits from a data source 102, and has an LDPC code encoder 106 and a modulation/transmit unit 108. The coded signal is transmitted to the receiver unit 112 via a communications channel 110. The receiver unit 112 has the corresponding components necessary for receiving the coded signal; they are a demodulator/matched filter unit 114 and an iterative LDPC decoder unit 116, from which an estimate of the transmitted coded information is forwarded to a data sink 122.</p>
<p>The data source 102 generates information bits grouped into a block of k information bits, denoted by μ = [μ_1, ..., μ_k], for encoding at the LDPC encoder 106.</p>
<p>The LDPC encoder 106 has a code rate of R_c = k/n and encodes the block of k information bits μ into a block of n code bits, i.e. into a codeword x = [x_1, ..., x_n].</p>
<p>In general the codeword x is generated by x = μ·G, where G is the k by n generator matrix that defines the encoder 106. The codeword x is in the code set C, which is the set of all codewords of the encoder 106 defined by G. All elements and operations performed on μ = [μ_1, ..., μ_k], x = [x_1, ..., x_n], and G are defined, in this example, on the Galois Field GF(2). The use of higher dimensions, e.g. GF(q^r) where q and r are integers, is possible. For example, codes with symbols from GF(2^m), m > 0, are most widely used.</p>
<p>The codeword x is modulated for transmission over the communications channel 110 as the n-dimensional transmitted signal s. For simplicity the communications channel 110 is assumed to be an additive white Gaussian noise (AWGN) channel. In this channel, the noise is a random variable having a normal distribution with zero mean and variance σ² = N_0/2, where N_0/2 is the power spectral density. Any other communications channel may be applied, such as multipath channels, etc. The transmitted signal s is corrupted by the AWGN, giving the n-dimensional received signal r = s + n, where n is the n-dimensional vector of AWGN samples. The receiver unit 112 receives the noisy signal r and demodulates this signal by matched filtering 114, giving the n-dimensional codeword estimate y. The LDPC decoder 116 has two units: the iterative decoder unit 118, which produces an estimated codeword ẑ = [z_1, ..., z_n], and the error detection and decode unit 120, which checks then decodes the estimated codeword ẑ into an estimate of the k information bits μ̂ = [μ̂_1, ..., μ̂_k] sent by the transmitter. The information bits μ̂ are forwarded to the data sink 122.</p>
<p>As mentioned previously, the generator matrix G has a dual matrix H, which is the m = n − k by n parity check matrix, where G·H^T = 0. The parity check matrix H defines a set of m parity check equations that are useful in the decoding process. LDPC codes are defined by an m by n sparse parity check matrix in which there is a lower density of ones compared to zeros. In general, the goal is to find a sparse generator matrix and a sparse parity check matrix for encoding and decoding.</p>
<p>Once the iterative decoding unit 118 has estimated the codeword ẑ, the decoder unit 120 determines if ẑ is a valid codeword in the code set by computing ẑ·H^T. If ẑ is a valid codeword then ẑ·H^T = μ·(G·H^T) = 0 and hence ẑ can be decoded into the k information bits μ̂; otherwise an error is detected. An example of a parity check matrix H 200 (this is an example only, using a random matrix H) is shown in Figure 2. The first row of H corresponds to the first parity check equation, which can be written as the modulo-2 sum (xor, ⊕, in GF(2)) of the code bits z_i for which the first row of H contains a 1. If ẑ = [1,0,1,1], then p_1 = 1, p_2 = 1, and p_3 = 0, and an error would be detected; otherwise the codeword ẑ is accepted. Depending on the number of errors the decoder unit 120 can correct (which is a function of the coding type and rate used), the codeword ẑ may be decoded correctly into μ̂ = [μ̂_1, ..., μ̂_k], or an error may be detected.</p>
<p>It is the task of the iterative decoder unit 118 to determine/find the closest or the most likely codeword that was transmitted for each received coded signal. The iterative decoding structures used in the LDPC decoder 116 are outlined in the following description. However, to understand the concept of iterative decoding, an overview of the bipartite Tanner graph that is used in the following iterative decoding algorithms is given.</p>
<p>Referring now to Figure 2, an example of a parity check matrix H 200 with its corresponding bipartite Tanner graph 202 is shown. The iterative decoder unit 118 uses soft decision iterative decoding algorithms that work on the concept of message passing (as described previously) between the bit nodes 204 (204a–204d) and check nodes 206 (206a–206c) over the edges of the bipartite Tanner graph 202.</p>
<p>In a preferred embodiment, the parity check matrix H 200 is stored in memory at the decoder 116. The bipartite Tanner Graph 202 described in Figure 2 is a representation for describing the following iterative decoding algorithms.</p>
<p>Bipartite Tanner graphs 202 consist of a plurality of bit nodes 204 (n bit nodes) and a plurality of parity check nodes 206 (m = n − k check nodes). These graphs 202 are defined by the parity check matrix H 200. Each bit node 204 (e.g. bit node 204c) is connected via graph edges (e.g. edges 208) to a set of check nodes (e.g. 206a and 206c) defined by M(i) = { j : H_{j,i} = 1 } for bit node i, where 1 ≤ j ≤ m and m is the number of check nodes 206. Similarly, each check node (e.g. check node 206b) is connected via graph edges (e.g. edges 210) to a set of bit nodes (e.g. 204b and 204d) defined by N(j) = { i : H_{j,i} = 1 } for check node j, where 1 ≤ i ≤ n and n is the number of bit nodes 204.</p>
<p>For example, Figure 2 illustrates that there are graph edges (e.g. 208 and 210) wherever there is a 1 in H. For example, the third column of H corresponds to all the edges that connect bit node 204c to the corresponding check nodes, i.e. check nodes 206a and 206c, as there is a 1 in those rows of the third column, where H_{j,i} denotes the element of row j and column i. The number of 1s in a column of H, or the column weight, indicates the number of graph edges and check nodes connected to that bit node. The number of 1s in a row of H, or the row weight, indicates the number of graph edges and bit nodes connected to that check node. In addition, the set of bit nodes N(2) for check node 206b includes bit nodes 204b and 204d, and the set of check nodes M(3) for bit node 204c includes check nodes 206a and 206c. The edges 208, 210 of the bipartite Tanner graph 202 are used in the iterative decoding algorithms to pass (send) messages from the bit nodes 204 to the check nodes 206 and vice versa, as has been previously described and will be seen in more detail in the following description.</p>
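<p>The neighbour sets M(i) and N(j), and the column and row weights, can be read directly off the parity check matrix. A minimal sketch, using a small hypothetical matrix:</p>

```python
def node_sets(H):
    """Build the Tanner-graph neighbour sets from an m-by-n parity
    check matrix H (given as a list of rows): M[i] lists the check
    nodes connected to bit node i (ones in column i), and N[j] lists
    the bit nodes connected to check node j (ones in row j)."""
    m, n = len(H), len(H[0])
    M = [[j for j in range(m) if H[j][i]] for i in range(n)]
    N = [[i for i in range(n) if H[j][i]] for j in range(m)]
    return M, N

# Column weight of bit i is len(M[i]); row weight of check j is len(N[j]).
H = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 1]]
M, N = node_sets(H)
```

<p>For this H, bit node 2 is connected to check nodes 0 and 2, and check node 2 is connected to bit nodes 0, 2 and 3.</p>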
<p>The messages that are passed between check nodes 206 and bit nodes 204, and viceversa, are stored as variables in a memory, for example a set of messages for each bit node i that are passed to a set of check nodes can be stored as variables in an array. The use of the terms, "pass" or "message passing" and the like, are for illustrative purposes to describe the iterative decoding algorithms that follow with reference to bipartite Tanner graphs.</p>
<p>BELIEF PROPAGATION ALGORITHM</p>
<p>In the iterative BP algorithm, each bit node i passes a set of bit node messages to the set of check nodes M(i), which are used in each check node to update the check node messages. Similarly, each check node j passes a set of check node messages to the set of bit nodes N(j). In each iteration of this algorithm, there are two main processes: the bit node update process illustrated in Figure 3a, which updates the bit node messages, and the check node update process illustrated in Figure 3b, which updates the check node messages. These messages are used to estimate the code bits and represent the probability (or log-likelihood ratio thereof), reliability, or belief that the bit and check nodes have in the estimate of each code bit of the received codeword. In the log-likelihood ratio BP algorithm (LLR-BP), the messages passed between bit nodes and check nodes are based on probabilities, i.e. log-likelihood ratios thereof.</p>
<p>The iterative LLR-BP algorithm is now described with reference to Figures 3a and 3b.</p>
<p>Bit Node Update of LLR-BP Referring to Figure 3a, the bit node update process 300 for updating the bit node message T_{i→j} in each iteration is illustrated for bit node i 304 and the set of check nodes M(i), where T_{i→j} is passed from bit node i 304 to check node j 306. Bit node i 304 receives a priori information related to the channel (e.g. from channel measurements). For example, in an AWGN channel, bit node i 304 receives the log-likelihood ratio of the ith bit of the received codeword estimate ȳ = [y₁, ..., y_i, ..., y_n], given by what is called the intrinsic information I_i = (2/σ²)·y_i, where σ² = N₀/2 is the variance of the AWGN and N₀/2 is the power spectral density.</p>
<p>In the first iteration, the intrinsic information is used to initialise the LLR-BP algorithm, where all bit node messages T_{i→j}, for i ∈ N(j) and 1 ≤ j ≤ m, that are passed from the set of bit nodes N(j) (not shown) to check node j 306 are set to I_i = (2/σ²)·y_i.</p>
<p>All check node messages E_{j→i}, for j ∈ M(i) and 1 ≤ i ≤ n, that are passed from the set of check nodes M(i) to bit node i 304 are set to zero.</p>
<p>In subsequent iterations, the message T_{i→j} is updated by summing, with the intrinsic information, the set of check node messages E_{j'→i}, for j' ∈ M(i) and 1 ≤ i ≤ n, that are passed from the set of check nodes M(i) to bit node i 304, but excluding the check node message E_{j→i} passed from check node j 306 to bit node i 304 (not shown). The summation is given by: T_{i→j} = I_i + Σ_{j' ∈ M(i)\j} E_{j'→i} (1) In each iteration, a posteriori probabilities (APP) or soft decisions T_i for bit node i 304 can be calculated by summing the intrinsic information I_i and the set of check node messages E_{j→i}, for j ∈ M(i), that are passed from the set of check nodes M(i) to bit node i 304. However, generally, these soft decisions need only be calculated in the final iteration. The soft decisions are given by the following equation: T_i = I_i + Σ_{j ∈ M(i)} E_{j→i} (2) A hard decision z_i of T_i is formed for each of the i bit nodes, producing an estimate ẑ of the transmitted codeword. The parity check equations ẑ·Hᵀ are applied to check if the estimated codeword is in error. If not, the codeword is decoded accordingly. Otherwise, the error may be corrected depending on the error correction capabilities of the code.</p>
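<p>The bit node update of equations (1) and (2) can be sketched for a single bit node as follows; the variable names are illustrative only:</p>

```python
def bit_node_update(I_i, E_in):
    """Bit node update for one bit node i.
    I_i  : intrinsic channel information for bit i
    E_in : dict {j: E_{j->i}} of incoming check node messages
    Returns the outgoing messages T_{i->j} (equation (1)) and the
    a posteriori soft decision T_i (equation (2))."""
    T_i = I_i + sum(E_in.values())                  # equation (2)
    # Each outgoing message excludes its own check node's contribution,
    # computed here by subtraction from the full sum, as in equation (6).
    T_out = {j: T_i - E for j, E in E_in.items()}   # equation (1)
    return T_out, T_i
```

<p>The hard decision z_i is then simply the sign of T_i.</p>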
<p>The bit node update process requires few computational resources compared with the check node update process, which is now described.</p>
<p>Check Node Update of LLR-BP Referring to Figure 3b, the check node update process 302 is illustrated for updating the check node message E_{j→i} in each iteration for check node j 306 and the set of bit nodes N(j), where check node message E_{j→i} is passed from check node j 306 to bit node i 304. As the bit and check node messages are represented by log-likelihood ratios, it is the check node update process that contributes the greatest complexity to the computation of the LLR-BP algorithm. The computational burden comes from the use of the non-linear function Φ(·), used in each check node message update, which is given by: Φ(x) = −log[tanh(x/2)] (3) An example of the shape of the function Φ(·) is shown in Figure 4. As can be seen, small values of x produce large values of Φ(x).</p>
<p>In an attempt to simplify matters, the sign of E_{j→i} and the magnitude of E_{j→i} are computed separately. However, it is the magnitude of E_{j→i} that contributes towards the overall complexity of the check node update process.</p>
<p>Each check node message E_{j→i} is updated by summing Φ(|T_{i'→j}|) over the set of bit node messages passed from the set of bit nodes N(j) to check node j 306, but excluding the bit node message T_{i→j} from bit node i 304 to check node j 306. Finally, E_{j→i} is updated by reapplying Φ(·) to the summation of the Φ(|T_{i'→j}|). The magnitude of E_{j→i} is given by: |E_{j→i}| = Φ( Σ_{i' ∈ N(j)\i} Φ(|T_{i'→j}|) ) (4) where the sign processing, or sign(E_{j→i}), is given by: sign(E_{j→i}) = Π_{i' ∈ N(j)\i} sign(T_{i'→j}) (5) The iterations of the LLR-BP algorithm continue until either a predetermined or maximum number of iterations is reached, or until the algorithm has converged, that is, the bit node messages have converged.</p>
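<p>The check node update of equations (3) to (5) can be sketched as follows; a minimal illustration with hypothetical names:</p>

```python
import math

def phi(x):
    """Equation (3): Phi(x) = -log(tanh(x/2)); self-inverse for x > 0."""
    return -math.log(math.tanh(x / 2.0))

def check_node_update(T_in):
    """LLR-BP check node update for one check node j.
    T_in: dict {i: T_{i->j}} of incoming bit node messages.
    Returns {i: E_{j->i}}, combining equations (4) and (5)."""
    E = {}
    for i in T_in:
        others = [T for ii, T in T_in.items() if ii != i]
        mag = phi(sum(phi(abs(T)) for T in others))   # equation (4)
        sgn = 1
        for T in others:                              # equation (5)
            if T < 0:
                sgn = -sgn
        E[i] = sgn * mag
    return E
```

<p>Because Φ is decreasing and self-inverse, each outgoing magnitude is bounded by the smallest incoming magnitude among the other bit nodes, which is the observation the Min-Sum approximation below exploits.</p>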
<p>The complexity of the decoding process is primarily due to the non-linear function applied to the bit node messages (which are log-likelihood ratios) in the check node update process.</p>
<p>Instead of the summation carried out in equations (1), (2) or (4) of the bit node update and check node update processes, the simplification in the following section may be used.</p>
<p>Bit Node Update of Simplified LLR-BP The bit node update process of the LLR-BP algorithm can be simplified by computing the soft decision T_i first. Referring to Figure 3a, the bit node message T_{i→j} (the extrinsic information), that is passed from bit node i 304 to check node j 306, is updated by: T_{i→j} = I_i + Σ_{j' ∈ M(i)\j} E_{j'→i} = ( I_i + Σ_{j ∈ M(i)} E_{j→i} ) − E_{j→i} = T_i − E_{j→i} (6) This slightly reduces the complexity of the bit node update process, as the sum Σ_{j ∈ M(i)} E_{j→i} is only required to be computed once per bit node i 304, for 1 ≤ i ≤ n.</p>
<p>Check Node Update of Simplified BP Referring to Figure 3b, the check node update process of the LLR-BP algorithm is simplified by computing the summation of the non-linear functions over all i ∈ N(j). The processing of sign(E_{j→i}) is kept the same; however, the magnitude |E_{j→i}| is simplified as follows: S_j = Σ_{i ∈ N(j)} Φ(|T_{i→j}|) (7) |E_{j→i}| = Φ( S_j − Φ(|T_{i→j}|) ) (8) The non-linear function Φ(·) is performed in a full summation once per check node j 306. However, although this goes some way towards reducing the complexity of the LLR-BP algorithm, it is still computationally intensive and difficult to implement on hardware having limited computing capabilities, for example mobile phones, TV receivers, or any other wireless transceiver.</p>
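<p>The single-pass simplification of equations (7) and (8) can be sketched as follows (magnitudes only; the sign processing of equation (5) is unchanged):</p>

```python
import math

def phi(x):
    return -math.log(math.tanh(x / 2.0))

def check_magnitudes_simplified(T_in):
    """Compute all |E_{j->i}| for one check node with a single full
    summation S_j (equation (7)), then subtract each bit's own term
    before reapplying Phi (equation (8))."""
    phis = {i: phi(abs(T)) for i, T in T_in.items()}
    S_j = sum(phis.values())                       # equation (7), one pass
    return {i: phi(S_j - phis[i]) for i in T_in}   # equation (8)
```

<p>This is numerically equivalent to the exclusion sum of equation (4), but only one Φ-summation is performed per check node instead of one per outgoing message.</p>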
<p>MIN-SUM ALGORITHM</p>
<p>In order to reduce the complexity of the check node update process, the Min-Sum algorithm has been proposed in the literature as a sub-optimal algorithm that can be used in the soft decision iterative decoding of LDPC codes instead of the LLR-BP algorithm.</p>
<p>The bit node update process in the Min-Sum algorithm is the same as that of the LLR-BP algorithm. It is the check node update process that is greatly simplified, at the cost of decreased performance.</p>
<p>For each check node j, the check node update process can be simplified by observing that small values of |T_{i→j}| result in large values of Φ(|T_{i→j}|), see Figure 4.</p>
<p>Hence small values of |T_{i→j}| contribute more to the summation Σ_{i' ∈ N(j)\i} Φ(|T_{i'→j}|) of equations (4) or (8) of the LLR-BP algorithm than larger values of |T_{i→j}|. Small values of |T_{i→j}| represent a lower reliability bit node message, and these are more important to the final decision than larger values. The complexity of the calls to the function Φ(·) can be avoided by considering the following approximation: Σ_{i' ∈ N(j)\i} Φ(|T_{i'→j}|) ≈ Φ(|T_{n₀→j}|) (10) where n₀ = Arg min_{i' ∈ N(j)\i} {|T_{i'→j}|}. Substituting equation (10) into equation (4) or (8) and exploiting the property Φ(Φ(x)) = x, the update for |E_{j→i}| is given by: |E_{j→i}| = min_{i' ∈ N(j)\i} {|T_{i'→j}|} = |T_{n₀→j}| (11) In summary, for each check node j, the update of the check node message passed to bit node i selects the smallest magnitude bit node message |T_{i'→j}| from the set of bit nodes N(j)\i that excludes the bit node i. Although the Min-Sum algorithm achieves a dramatic reduction in complexity, it will be seen shortly that this also results in a dramatic reduction in bit and frame error rate performance as compared with the LLR-BP algorithm.</p>
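<p>The Min-Sum update of equation (11) replaces the Φ-sums with a minimum; a minimal sketch, magnitudes only:</p>

```python
def min_sum_magnitudes(T_in):
    """Min-Sum check node update, magnitudes only (equation (11)):
    each outgoing |E_{j->i}| is the smallest incoming magnitude,
    excluding the target bit node i itself."""
    mags = {i: abs(T) for i, T in T_in.items()}
    return {i: min(m for ii, m in mags.items() if ii != i) for i in mags}
```

<p>Note that every bit node except the least reliable one receives the same value, |T_{n₀→j}|, which is why the discarded extrinsic information motivates the hybrid decoders below.</p>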
<p>HYBRID LDPC DECODER (SET #1) It has been realised that valuable information is lost by using the Min-Sum algorithm, due to the discarded extrinsic information T_{i→j}, ∀i ∈ N(j)\n₀, for check node j.</p>
<p>Instead, a preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from bit nodes by prioritising (in terms of computational resources or complexity) the most contributing bit node n₀ (where n₀ = Arg min_{i ∈ N(j)} {|T_{i→j}|}) to update the extrinsic check node message E_{j→n₀} being passed to that bit node n₀, over the less contributory bit nodes that update the remaining extrinsic check node messages E_{j→i}, ∀i ∈ N(j)\n₀.</p>
<p>In general, this is achieved by: * Identifying, for each check node j, 1 ≤ j ≤ m, the least reliable bit node by finding the smallest bit node message |T_{i→j}|, ∀i ∈ N(j), among the set of bit nodes N(j), that is n₀ = arg min_{i ∈ N(j)} {|T_{i→j}|}.</p>
<p>* Selecting a first algorithm for use in the check node update process by, for example, allocating computational resources to calculate the check node message E_{j→n₀} that is passed from check node j to bit node n₀.</p>
<p>* Identifying the remaining set of bit nodes, ∀i ∈ N(j)\n₀.</p>
<p>* Selecting a second algorithm for use in the check node update process, for example allocating further computational resources, for the remaining bit nodes ∀i ∈ N(j)\n₀ to calculate the remaining check node messages E_{j→i}.</p>
<p>The result is a check node update process in which a first algorithm is selected to allocate more computational resources, complexity, and thus more accuracy to estimating the extrinsic check node message sent from a given check node to the bit node generating the least reliable bit node message (also seen as the smallest or most contributing bit node message), while still taking into account the contribution from the other bit nodes in the check node update process through the selection of a second algorithm.</p>
<p>Referring to Figure 5, a preferred embodiment of the invention for updating the check node message 500 is illustrated that uses the LLR-BP update rules, or the LLR-BP algorithm, as the first algorithm to update the check node passing the extrinsic check node message to the least reliable bit node 502. It uses the Min-Sum update rules, or the Min-Sum algorithm, as the second algorithm to update the check nodes passing extrinsic check node messages to the more reliable (or remaining) bit nodes.</p>
<p>This results in the following check node update process, for each check node j, 1 ≤ j ≤ m: * Identifying: a. the least reliable or smallest bit node 502 among N(j): n₀ = arg min_{i ∈ N(j)} {|T_{i→j}|} b. the remaining set of bit nodes, ∀i ∈ N(j)\n₀.</p>
<p>* Selecting a first and second algorithm for use in the check node update process for check node j 306 to update the extrinsic check node messages as follows: a. For the check node message being passed to bit node n₀ 502, use as the first algorithm the LLR-BP algorithm (using either equation (4) or (8)), i.e.: n₀: |E_{j→n₀}| = Φ( Σ_{i' ∈ N(j)\n₀} Φ(|T_{i'→j}|) ) b. For check node messages being passed to the remaining bit nodes, use as the second algorithm the Min-Sum algorithm: ∀i ∈ N(j)\n₀: |E_{j→i}| = min_{i' ∈ N(j)\i} {|T_{i'→j}|} = |T_{n₀→j}| Referring now to Figures 6a and 6b, a comparison of the error performance of this preferred embodiment is made with respect to the reference LLR-BP and Min-Sum algorithms. The communication system model uses the IEEE 802.16e standard for communication, and the LDPC code used is a rate R_c LDPC code having a frame length of Z_f = 96 bits. The simulated bit error rate (BER) vs SNR (Eb/N0) for this system is shown in Figure 6a and the simulated frame error rate (FER) vs SNR (Eb/N0) for this system is shown in Figure 6b.</p>
<p>Referring to Figure 6a, the performance gain of the preferred embodiment is 0.1 dB at a BER of 1e-4 over that of the Min-Sum algorithm. This performance gain is exceptional considering the only slight increase in computational complexity.</p>
<p>The computational complexity of the reference algorithms and this preferred embodiment is shown in Table 1. The complexity of the preferred embodiment is given in the row labelled SET #1 and is shown to be less than that of the LLR-BP algorithm and closer to that of the Min-Sum algorithm.</p>
<p>The complexity is given in terms of the degrees of the check and bit nodes (also called variable nodes), denoted d_c and d_v respectively. The degree of a check node is the row weight of the parity check matrix H, which represents the number of bit nodes that pass bit node messages to that check node. Similarly, the degree of a bit (or variable) node is the column weight of the parity check matrix H, which is the number of check node messages that are passed to a bit node.</p>
<p>Referring to Figure 6b, the performance degradation of the preferred embodiment is only 0.2 dB compared with that of the LLR-BP algorithm.</p>
<p>The bit node update process in this preferred embodiment is the same as that of the LLR-BP algorithm. It is clear that this preferred embodiment allows for low complexity decoding of LDPC codes with a performance gain of 0.1 dB with respect to the Min-Sum algorithm, while maintaining a complexity close to it.</p>
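<p>The SET #1 hybrid update rules above can be sketched as follows (magnitudes only; the names are illustrative): the least reliable bit node n₀ receives the full LLR-BP sum, while every other bit node receives the Min-Sum value.</p>

```python
import math

def phi(x):
    return -math.log(math.tanh(x / 2.0))

def set1_magnitudes(T_in):
    """SET #1 hybrid check node update, magnitudes only: LLR-BP
    (equation (4)) for the least reliable bit node n0, Min-Sum
    (equation (11)) for the remaining bit nodes."""
    mags = {i: abs(T) for i, T in T_in.items()}
    n0 = min(mags, key=mags.get)       # n0 = arg min |T_{i->j}|
    E = {}
    for i in mags:
        if i == n0:                    # first algorithm: LLR-BP
            E[i] = phi(sum(phi(m) for ii, m in mags.items() if ii != i))
        else:                          # second algorithm: Min-Sum
            E[i] = mags[n0]            # smallest magnitude, and n0 != i
    return E
```

<p>Only one full Φ-sum is computed per check node, for the message sent to n₀; the remaining messages cost only comparisons.</p>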
<p>[Table 1 is garbled in this extraction. It compares, per decoding iteration and in terms of the check and bit node degrees, the number of simple operations, non-linear function calls Φ(·), and multiplications required for the variable (bit) node messages and for the check node messages of the LLR-BP, Min-Sum, Lambda-Min, A-Min*, Corrected Min-Sum and Compensated Lambda-Min decoders, and of the SET #1, SET #2 and SET #3 embodiments.] TABLE 1: Complexity of various embodiments and Iterative Decoders.</p>
<p>LAMBDA-MIN ALGORITHM</p>
<p>The Min-Sum algorithm sacrifices performance for complexity, whereas the Lambda-Min algorithm attempts to attain the performance of the LLR-BP algorithm, at a higher complexity, by using only a predetermined number, λ > 1, of the bit node messages passed from the set of bit nodes N(j) to check node j during the check node update process.</p>
<p>The bit node update process in the Lambda-Min algorithm is the same as that of the BP algorithm.</p>
<p>As with the Min-Sum algorithm, for each check node j, the check node update process is simplified by observing that small values of |T_{i→j}| result in large values of Φ(|T_{i→j}|), as seen in Figure 4. Hence small values of |T_{i→j}| contribute more to the summation of Φ(|T_{i'→j}|) in equations (4) or (8) of the BP algorithm than larger values of |T_{i→j}|.</p>
<p>As opposed to the Min-Sum algorithm, the Lambda-Min algorithm updates the extrinsic check node messages passed from the check nodes to bit nodes by relying on just the λ > 1 least reliable bit nodes; hence the most contributing bit nodes are used within the aggregated sum of either equation (4) or (8).</p>
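<p>The Lambda-Min check node update can be sketched as follows (magnitudes only); λ and the names are illustrative:</p>

```python
import math

def phi(x):
    return -math.log(math.tanh(x / 2.0))

def lambda_min_magnitudes(T_in, lam):
    """Lambda-Min check node update, magnitudes only: the Phi-sum of
    equation (4) is restricted to N_lam(j), the lam least reliable
    (smallest magnitude) incoming bit node messages."""
    mags = {i: abs(T) for i, T in T_in.items()}
    N_lam = sorted(mags, key=mags.get)[:lam]   # lam smallest magnitudes
    E = {}
    for i in mags:
        subset = [ii for ii in N_lam if ii != i]
        E[i] = phi(sum(phi(mags[ii]) for ii in subset))
    return E
```

<p>For a bit node outside N_λ(j) all λ selected messages enter the sum; for a bit node inside it, its own message is excluded, leaving λ − 1 terms.</p>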
<p>In particular, the check node update process of check node j, 1 ≤ j ≤ m, is given as follows: * Identify the subset of λ bit nodes that pass the λ smallest magnitude bit node messages to check node j: N_λ(j) = { i ∈ N(j) : λ lowest |T_{i→j}| } * The check node messages for check node j are updated using only the N_λ(j) bit node messages: |E_{j→i}| = Φ( Σ_{i' ∈ N_λ(j)\i} Φ(|T_{i'→j}|) ) HYBRID LDPC DECODER (SET #2) It has been realised that valuable information is still lost by using the Lambda-Min algorithm, due to the discarded extrinsic information T_{i→j}, ∀i ∈ N(j)\N_λ(j), that is not used in the check node update process for check node j.</p>
<p>Instead, a preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from bit nodes by prioritising (in terms of computational resources or complexity), for each check node j, the most contributing bit node n₀ (where n₀ = Arg min_{i ∈ N(j)} {|T_{i→j}|}) to update the extrinsic check node message being passed to bit node n₀, over the less contributory bit nodes that update the remaining extrinsic check node messages E_{j→i}, ∀i ∈ N(j)\n₀.</p>
<p>The result is the allocation of more computational resources, complexity, and thus more accuracy to the extrinsic check node message sent by the check node to the bit node that generates less reliable edge messages, while still taking into account the contribution from other bit nodes. This can result in either similar performance with less computational complexity or improved performance with slightly more computational complexity.</p>
<p>Referring to Figure 7, an illustration is shown of a preferred embodiment of the invention for the check node update process 700 that uses the LLR-BP algorithm to update the check nodes 306 passing extrinsic check node messages to the least reliable bit node 502, and then uses the Lambda-Min algorithm to update the check nodes 306 passing extrinsic check node messages to a subset of the remaining bit nodes N_λ(j); in this preferred embodiment, the (λ − 1) least reliable remaining bit nodes are used. However, the preferred embodiment is not limited to using only the remaining least reliable bit nodes; other selections of the remaining bit nodes are possible.</p>
<p>This preferred embodiment results in the following check node update process for each check node j 306: * Identify: a. The least reliable bit node 502 (the bit node with the smallest bit node message) among N(j): n₀ = arg min_{i ∈ N(j)} {|T_{i→j}|} b. The (λ − 1) least reliable bit nodes among N(j)\n₀: N_λ(j) = { i ∈ N(j)\n₀ : (λ − 1) lowest |T_{i→j}| } * In the check node update process, for check node j 306, update the extrinsic check node messages as follows by selecting a first and second algorithm: a. For the check node message being passed to bit node n₀ 502, use the LLR-BP algorithm as the first algorithm: |E_{j→n₀}| = Φ( Σ_{i' ∈ N(j)\n₀} Φ(|T_{i'→j}|) ) b. For check node messages being passed to the remaining bit nodes 304, use the Lambda-Min algorithm as the second algorithm: ∀i ∈ N(j)\n₀: |E_{j→i}| = Φ( Σ_{i' ∈ ({n₀} ∪ N_λ(j))\i} Φ(|T_{i'→j}|) ) Referring now to Figures 8a and 8b, a comparison of the error performance is made between the reference Lambda-Min and Min-Sum algorithms and this preferred embodiment, which is labelled SET #2. The communication system model uses the IEEE 802.16e standard, and an LDPC encoder/decoder is used having a rate R_c LDPC code with a frame length of Z_f = 96 bits. The Lambda-Min algorithm is shown for λ = 3 and 4; the preferred embodiment uses λ = 3.</p>
<p>The simulated bit error rate (BER) vs SNR (Eb / N0) for this system is shown in Figure 8a and the simulated frame error rate (FER) vs SNR (Eb / N0) for this system is shown in Figure 8b.</p>
<p>Referring to Figure 8a, the performance gain of the preferred embodiment is at least 0.3 dB at a BER of 1e-4 over that of the Min-Sum algorithm; in fact, the performance is the same as that of the Lambda-Min algorithms. For lower BER &lt; 1e-4, the performance lies between those of the Lambda-Min algorithms with λ = 3 and 4. The advantage is that the preferred embodiment provides an intermediate solution, both in terms of performance and complexity, as seen from the complexity analysis in Table 1, row labelled SET #2.</p>
<p>The computational complexity of the reference algorithms and this preferred embodiment is shown in Table 1. The complexity of the preferred embodiment is given in the row labelled SET #2. This is shown to be less than the complexity of the BP algorithm and closer in complexity to that of the Lambda-Min algorithms. For the same λ used in both the preferred embodiment and the Lambda-Min algorithm, there is in fact less complexity in the number of simple operations, and only slightly more complexity in the calls to Φ(·). However, as seen in the BER and FER shown in Figures 8a and 8b, the preferred embodiment outperforms the Lambda-Min algorithm with the same λ for a given BER and FER.</p>
<p>The bit node update process in this preferred embodiment is the same as that of the BP algorithm.</p>
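<p>The SET #2 hybrid rule can be sketched as follows (magnitudes only). The exact subset used for the remaining bit nodes is garbled in the source; this sketch assumes the λ least reliable bit nodes (n₀ together with N_λ(j)), excluding the target bit, and all names are illustrative:</p>

```python
import math

def phi(x):
    return -math.log(math.tanh(x / 2.0))

def set2_magnitudes(T_in, lam):
    """SET #2 hybrid check node update, magnitudes only: a full LLR-BP
    sum (equation (4)) for the least reliable bit node n0, and for every
    other bit node a Lambda-Min sum restricted to the lam least reliable
    bit nodes (an assumed reconstruction), excluding the target bit."""
    mags = {i: abs(T) for i, T in T_in.items()}
    order = sorted(mags, key=mags.get)
    n0, N_lam = order[0], order[:lam]   # N_lam includes n0
    E = {}
    for i in mags:
        if i == n0:    # first algorithm: LLR-BP over all other bits
            subset = [ii for ii in mags if ii != i]
        else:          # second algorithm: Lambda-Min over the lam smallest
            subset = [ii for ii in N_lam if ii != i]
        E[i] = phi(sum(phi(mags[ii]) for ii in subset))
    return E
```

<p>Compared with plain Lambda-Min, only the message sent to n₀ costs a full Φ-sum; the other messages reuse the small λ-subset.</p>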
<p>It is clear that this preferred embodiment allows for lower complexity decoding of LDPC codes while having a performance close to that of the LLR-BP algorithm. Moreover, there is more granularity in terms of complexity and performance. That is, there is a reduced gap in performance between the λ-min and (λ+1)-min versions of the preferred embodiment, and a reduced increase in complexity, as opposed to the granularity in terms of complexity and performance between the λ-min and (λ+1)-min Lambda-Min algorithms.</p>
<p>This allows the fine tuning of the complexity, or gradations of computational complexity, to be allocated before or during the iterative decoding process, allowing the first and second algorithms (and other algorithms) to be selected for, or if necessary adapted during, the iterative decoding process. This provides an iterative decoding process that is able to adapt to the computational capabilities of the hardware/software on which the LDPC decoder runs, for a given communications link/channel, or for a given performance or quality of service.</p>
<p>Furthermore, the Lambda-Min algorithm, for the same λ as in the preferred embodiment, requires an increased amount of storage/memory.</p>
<p>HYBRID LDPC DECODER (SET #3) A further preferred embodiment of the invention updates the check nodes, in each iteration, by exploiting the contributions from bit nodes by prioritising, in terms of computational resources or complexity, the set of most contributing bit nodes N_λ(j) = { i ∈ N(j) : λ lowest |T_{i→j}| } to update the extrinsic check node messages E_{j→i} being passed to those bit nodes, over the less contributory bit nodes that update the remaining extrinsic check node messages E_{j→i}, ∀i ∈ N(j)\N_λ(j).</p>
<p>The result is the allocation of more computational resources, complexity, and thus more accuracy to the extrinsic check node messages sent by the check node to the bit nodes that generate the less reliable bit node messages (i.e. the smallest magnitude bit node messages), while still taking into account the contribution from the other bit nodes by selection of a second algorithm. This can result in intermediate performance with reduced computational complexity compared with the LLR-BP and Lambda-Min algorithms.</p>
<p>Referring to Figure 9, an illustration is shown of a preferred embodiment of the invention in which the check node update process 900 uses, as the first algorithm, the Lambda-Min algorithm to update the check nodes 306 passing extrinsic check node messages to the least reliable bit nodes 304, which are the λ least reliable bit nodes.</p>
<p>In addition, the preferred embodiment uses, as the second algorithm, the Min-Sum algorithm to update the check nodes 306 passing extrinsic check node messages to the remaining bit nodes.</p>
<p>This results in the following check node update process for each check node j, 1 ≤ j ≤ m: * Identify: a. The λ least reliable bit nodes (the bit nodes passing the smallest bit node messages to check node j 306) among N(j): N_λ(j) = { i ∈ N(j) : λ lowest |T_{i→j}| } b. The least reliable bit node among N(j)\N_λ(j): n₀ = Arg min_{i ∈ N(j)\N_λ(j)} {|T_{i→j}|} * In the check node update process, update the extrinsic check node messages using a first and second algorithm as follows: a. For check node messages being passed to the set of bit nodes N_λ(j), use the Lambda-Min algorithm as the first algorithm: ∀i ∈ N_λ(j): |E_{j→i}| = Φ( Σ_{i' ∈ N_λ(j)\i} Φ(|T_{i'→j}|) ) b. For check node messages being passed to the remaining bit nodes, use the Min-Sum algorithm as the second algorithm: ∀i ∈ N(j)\N_λ(j): |E_{j→i}| = min_{i' ∈ N(j)\i} {|T_{i'→j}|} Referring now to Figures 10a and 10b, a comparison of the error performance is made between the reference Lambda-Min and Min-Sum algorithms and this preferred embodiment, which is labelled SET #3. The communication system model uses the IEEE 802.16e standard and an LDPC encoder/decoder having a rate R_c LDPC code with a frame length of Z_f = 96 bits. The Lambda-Min algorithm is shown for λ = 3 and 4. The preferred embodiment uses λ = 3.</p>
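<p>The SET #3 hybrid rule can be sketched as follows (magnitudes only; names are illustrative). The Min-Sum branch is assumed to take the smallest magnitude among the other bit nodes, as in equation (11):</p>

```python
import math

def phi(x):
    return -math.log(math.tanh(x / 2.0))

def set3_magnitudes(T_in, lam):
    """SET #3 hybrid check node update, magnitudes only: Lambda-Min
    over the lam least reliable bit nodes N_lam(j); plain Min-Sum
    (equation (11)) for every remaining bit node."""
    mags = {i: abs(T) for i, T in T_in.items()}
    N_lam = sorted(mags, key=mags.get)[:lam]
    E = {}
    for i in mags:
        if i in N_lam:   # first algorithm: Lambda-Min
            subset = [ii for ii in N_lam if ii != i]
            E[i] = phi(sum(phi(mags[ii]) for ii in subset))
        else:            # second algorithm: Min-Sum
            E[i] = min(m for ii, m in mags.items() if ii != i)
    return E
```

<p>Note that no Φ-sum is ever taken over more than λ − 1 terms, which is where the complexity saving over LLR-BP and full Lambda-Min comes from.</p>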
<p>The simulated BER vs SNR (Eb/N0) for this system is shown in Figure 10a and the simulated FER vs SNR (Eb/N0) for this system is shown in Figure 10b.</p>
<p>Referring to Figure 10a, the performance gain of the preferred embodiment over the Min-Sum algorithm is 0.15 dB for a BER of 1e-4. The performance is between that of the Min-Sum algorithm and the Lambda-Min algorithms. In addition, as the SNR increases, the performance of the preferred embodiment approaches that of the Lambda-Min algorithm for λ = 3. Similar FER performance is attained, as can be seen in Figure 10b. The preferred embodiment is an intermediate solution between the full Lambda-Min and Min-Sum algorithms, both in terms of performance and complexity (see Table 1, row labelled SET #3).</p>
<p>Hence, the preferred embodiment provides a low-complexity iterative LDPC decoder that gives a good trade-off between performance and complexity when compared with the existing Lambda-Min and Min-Sum algorithms.</p>
<p>It will be clear to the skilled person that other iterative algorithms may be combined with any of the preferred embodiments or combinations thereof. In addition, any combination of all three LLR-BP, Lambda-Min and Min-Sum algorithms can be applied to the present invention, where any of these algorithms uses the least reliable bit nodes while the remaining algorithms operate on the remaining subsets of the most reliable bit nodes.</p>
<p>It will be clear to the skilled person that, other than the modulation and RF components, the blocks making up the embodiment at the transmitter and receiver units comprise suitably programmed microprocessors, digital signal processor devices (DSPs), field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs) executing signal processing. Separate dedicated hardware devices or software routines may be used for the LDPC encoder and decoder operations (specifically the iterative algorithms applied in the soft decision iterative LDPC decoder for estimating the received codeword) in transceiver, transmitter or receiver units.</p>
<p>The term "messages" can be variables (or temporary variables) that are stored in memory in such a fashion to represent the value of the message and the relationship between the messages passed from check nodes to bit nodes, and viceversa. The use of the terms, "pass" or "message passing" and the like, can be used to describe the accessing and performing operations on these variables stored in memory (and the like).</p>
<p>It will be apparent from the foregoing that many other embodiments or variants of the above are possible. The present invention extends to any and all such variants, and to any novel subject matter or combination thereof disclosed in the foregoing.</p>
 <p>CLAIMS</p><p>1. A method for iteratively decoding low density parity check codes comprising, for each iteration, the steps of: updating a plurality of bit nodes; and updating a plurality of check nodes; wherein for each bit node a set of check node messages is used for updating that bit node, and for each check node a set of bit node messages is used for updating said each check node; wherein for each check node update the method further includes the steps of: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and selecting a second algorithm for updating a second subset of check node messages corresponding to the second subset of bit node messages, wherein the second algorithm further includes the steps of: identifying the most contributing bit node message from the second subset of bit node messages; and updating the second subset of check node messages with the most contributing bit node message.</p><p>2. The method of claim 1, wherein the first subset of bit node messages comprises a set of most contributing bit node messages.</p><p>3. The method of claims 1 or 2, wherein the first algorithm includes the steps of: forming a summation over a combination of the first subset of bit node messages; and updating the first subset of check node messages, where for each check node message in the first subset of check node messages, the said each check node message is updated by the summation with the contribution of the bit node message corresponding to said each check node message removed.</p><p>4. 
The method of claim 3, wherein the combination of the first subset of bit node messages is formed by a function having as an operand each bit node message in the first subset of bit node messages.</p><p>5. The method of claim 4, wherein each check node message in the first subset of check node messages is updated by the function having as an operand the summation with the contribution of the bit node message corresponding to that check node message removed.</p><p>6. The method of claim 4 or 5, wherein the function is a nonlinear function defined by a natural logarithm of a hyperbolic tangent of the operand divided by two.</p><p>7. The method of any of claims 1 to 6, wherein the first algorithm is a Log-Likelihood-Ratio Belief Propagation algorithm.</p><p>8. The method of any of claims 1 to 7, wherein the second algorithm is a Min-Sum algorithm.</p><p>9. The method of any preceding claim, further including the step of: adjusting the check node update process by selecting the first and second algorithms in accordance with a required accuracy or LDPC code performance requirement.</p><p>10. An apparatus for use in iteratively decoding LDPC codes adapted to implement the method of any of claims 1 to 9.</p><p>11. A computer program for iteratively decoding LDPC codes which, when executed, implements the method of any of claims 1 to 9.</p><p>12. 
A hybrid check node apparatus for use in iteratively decoding low-density parity-check codes comprising, for each iteration and each check node, means for: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; selecting a first algorithm for use in updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and selecting a second algorithm for updating a second subset of check node messages corresponding to the second subset of bit node messages, wherein the second algorithm further includes means for: identifying the most contributing bit node message from the second subset of bit node messages; and updating the second subset of check node messages with the most contributing bit node message.</p><p>13. The hybrid check node apparatus of claim 12, wherein the first subset of bit node messages comprises a set of most contributing bit node messages.</p><p>14. The hybrid check node apparatus of claim 12 or 13, wherein the first algorithm includes means for: forming a summation over a combination of the first subset of bit node messages; and updating the first subset of check node messages, where each check node message in the first subset of check node messages is updated by the summation with the contribution of the bit node message corresponding to that check node message removed.</p><p>15. The hybrid check node apparatus of claim 14, wherein the first algorithm includes means for combining the first subset of bit node messages with a function having as an operand each bit node message in the first subset of bit node messages.</p><p>16. 
The hybrid check node apparatus of claim 15, wherein the first algorithm includes means for updating each check node message in the first subset of check node messages, each check node message being updated by the function having as an operand the summation with the contribution of the bit node message corresponding to that check node message removed.</p><p>17. The hybrid check node apparatus of claim 15 or 16, wherein the function is a nonlinear function defined by a natural logarithm of a hyperbolic tangent of the operand divided by two.</p><p>18. The hybrid check node apparatus of any of claims 12 to 17, wherein the first algorithm includes means for implementing a Log-Likelihood-Ratio Belief Propagation algorithm.</p><p>19. The hybrid check node apparatus of any of claims 12 to 18, wherein the second algorithm includes means for implementing a Min-Sum algorithm.</p><p>20. The hybrid check node apparatus of any of claims 12 to 19, further including means for: adjusting the check node update process by selecting the first and second algorithms in accordance with a required accuracy or LDPC code performance requirement.</p><p>21. 
A method of updating check nodes suitable for a process of decoding low-density parity-check codes, the method comprising the steps of: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; using a first algorithm for updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and using a second algorithm for updating a second subset of check node messages corresponding to the second subset of bit node messages, wherein the second algorithm further includes the steps of: identifying the most contributing bit node message from the second subset of bit node messages; and updating the second subset of check node messages with the most contributing bit node message.</p><p>22. A method for iteratively decoding low-density parity-check codes comprising, for each iteration, the steps of: updating a plurality of bit nodes; and updating a plurality of check nodes; wherein for each bit node a set of check node messages is used for updating that bit node, and for each check node a set of bit node messages is used for updating each check node; characterised in that for each check node update the method further includes the steps of: identifying a first subset of bit node messages from the set of bit node messages for use in updating each check node; using a first algorithm for updating a first subset of check node messages corresponding to the first subset of bit node messages; identifying a second subset of bit node messages, excluding the first subset of bit node messages, for use in updating each check node; and using a second algorithm for updating a second subset of check node messages corresponding to the second subset of bit node messages.</p><p>23. 
The method of claim 22, wherein the first algorithm is a Log-Likelihood-Ratio Belief Propagation algorithm.</p><p>24. The method of claim 22 or 23, wherein the second algorithm is a Min-Sum algorithm.</p>
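The hybrid check node update recited in the claims can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions, not the patented implementation: the function names, the clipping constants, the sign handling, and the choice of the `k` smallest-magnitude inputs as the "most contributing" first subset are all assumptions introduced here (small-magnitude LLRs dominate the tanh rule, but the claims do not fix the selection criterion). The self-inverse kernel `phi(x) = -ln(tanh(x/2))` is, up to sign, the nonlinear function recited in claims 6 and 17.

```python
import numpy as np

def phi(x):
    # Self-inverse kernel phi(x) = -ln(tanh(x/2)) for x > 0.  Up to sign,
    # this is the "natural logarithm of a hyperbolic tangent of the
    # operand divided by two" of claims 6 and 17.  Clipping avoids
    # log(0) and overflow at the extremes (constants are assumptions).
    x = np.clip(x, 1e-9, 30.0)
    return -np.log(np.tanh(x / 2.0))

def hybrid_check_node_update(v_msgs, k):
    """Hybrid update for one check node (illustrative sketch).

    v_msgs : LLR messages from the connected bit nodes.
    k      : size of the first subset; the k smallest-magnitude inputs
             are taken as the "most contributing" messages (assumption).
    Returns one check-to-bit message per incoming message.
    """
    v = np.asarray(v_msgs, dtype=float)
    signs = np.sign(v)
    mags = np.abs(v)
    order = np.argsort(mags)              # smallest magnitude first
    first, second = order[:k], order[k:]
    total_sign = np.prod(signs)
    out = np.empty_like(v)

    # First subset: exact LLR-BP rule restricted to the first subset
    # (claims 3-6): sum phi over the subset magnitudes, remove the
    # target's own contribution, map back through the kernel.
    s = phi(mags[first]).sum()
    for i in first:
        out[i] = total_sign * signs[i] * phi(s - phi(mags[i]))

    # Second subset: Min-Sum approximation (claims 1 and 8) -- every
    # message takes the magnitude of the single most contributing
    # (smallest-magnitude) input.
    if len(second) > 0:
        m = mags[order[0]]
        for i in second:
            out[i] = total_sign * signs[i] * m
    return out
```

With `k` equal to the check node degree this reduces to a pure LLR-BP update over all inputs, while a small `k` serves most edges through the cheap Min-Sum path; varying `k` is one way to realise the accuracy/complexity adjustment contemplated by claims 9 and 20.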
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

GB0521859A GB2431834A (en)  2005-10-26  2005-10-26  Decoding low-density parity-check codes using subsets of bit node messages and check node messages 
Applications Claiming Priority (3)
Application Number  Priority Date  Filing Date  Title 

GB0521859A GB2431834A (en)  2005-10-26  2005-10-26  Decoding low-density parity-check codes using subsets of bit node messages and check node messages 
US11/586,759 US8006161B2 (en)  2005-10-26  2006-10-26  Apparatus and method for receiving signal in a communication system using a low density parity check code 
KR1020060104732A KR101021465B1 (en)  2005-10-26  2006-10-26  Apparatus and method for receiving signal in a communication system using a low density parity check code 
Publications (2)
Publication Number  Publication Date 

GB0521859D0 (en)  2005-12-07 
GB2431834A (en)  2007-05-02 
Family
ID=35515779
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

GB0521859A Withdrawn GB2431834A (en)  2005-10-26  2005-10-26  Decoding low-density parity-check codes using subsets of bit node messages and check node messages 
Country Status (1)
Country  Link 

GB (1)  GB2431834A (en) 
Citations (2)
Publication number  Priority date  Publication date  Assignee  Title 

US20050154957A1 (en) *  2004-01-12  2005-07-14  Jacobsen Eric A.  Method and apparatus for decoding forward error correction codes 
US20050229087A1 (en) *  2004-04-13  2005-10-13  Sunghwan Kim  Decoding apparatus for low-density parity-check codes using sequential decoding, and method thereof 

Also Published As
Publication number  Publication date 

GB0521859D0 (en)  2005-12-07 
Legal Events
Date  Code  Title  Description 

WAP  Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) 