WO2003032498A1 - Evaluating and optimizing error-correcting codes using projective analysis - Google Patents

Evaluating and optimizing error-correcting codes using projective analysis

Info

Publication number
WO2003032498A1
Authority
WO
WIPO (PCT)
Prior art keywords
projective
error
projected
message
code
Prior art date
Application number
PCT/JP2002/010169
Other languages
French (fr)
Inventor
Jonathan S. Yedidia
Erik B. Sudderth
Jean-Philippe Bouchaud
Original Assignee
Mitsubishi Denki Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Denki Kabushiki Kaisha filed Critical Mitsubishi Denki Kabushiki Kaisha
Priority to JP2003535338A priority Critical patent/JP4031432B2/en
Priority to EP02775267A priority patent/EP1433261A1/en
Publication of WO2003032498A1 publication Critical patent/WO2003032498A1/en


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1191Codes on graphs other than LDPC codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/01Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes

Definitions

  • the present invention relates generally to the field of error-correcting codes for data storage and data transmission, and more particularly to evaluating and optimizing error-correcting codes.
  • a source 110 intends to transmit a block of k bits, denoted by a vector u 111, to a destination 150.
  • the source 110 adds redundancy bits to the source symbols by passing them through an encoder 120.
  • the output of the encoder is a block of N bits, denoted by a vector x 121.
  • the block of N bits passes through a channel 130, subject to noise 135, where the block is possibly corrupted into another block of N output symbols, denoted by a vector y.
  • the output of the channel is then decoded 140 into a received block denoted by vector v of k bits 141 for the destination 150.
  • ideally, the received block 141 will match the transmitted block 111, but decoding failures will sometimes occur.
  • a decoding failure called a block error occurs when at least one bit in the received block disagrees with a bit in the transmitted block.
  • the block error rate is the probability that at least one bit of the transmitted block will be received in error, averaged over the probability distribution of transmitted blocks.
  • a better measure of merit is the bit error rate, which is the probability that any given bit will be received in error.
  • a main objective of evaluating error-correcting codes is to determine their bit-error rates.
  • Error-correcting codes described by sparse generalized parity-check matrices have recently been the subject of intense theoretical interest. These types of codes were first described by R. G. Gallager, in "Low-density parity check codes," Vol. 21, Research Monograph Series, MIT Press, 1963, but were not properly appreciated until recently. In the last decade, however, a variety of improved codes defined by sparse generalized parity check matrices have been described, such as turbocodes, irregular low-density parity check (LDPC) codes, Kanter-Saad codes, repeat-accumulate codes, and irregular repeat-accumulate codes. These improved codes have three particularly noteworthy advantages.
  • LDPC irregular low-density parity check
  • the codes can be decoded efficiently using message-passing iterative decoding methods, which are sometimes called "belief propagation” (BP) methods.
  • BP message-passing iterative decoding
  • the performance of these codes can often be theoretically evaluated using a density evolution method, at least in the infinite block-length limit.
  • the density evolution method typically demonstrates that BP decoding correctly recovers all data blocks that have a noise level below some threshold level, and that threshold level is often not far from the Shannon limit.
  • the density evolution method dates back to R. G. Gallager's work, already cited above.
  • the density evolution method was re-introduced in the context of the memory-less BEC by M. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. Spielman, and N. Stemann, in "Practical Loss-Resilient Codes," Proceedings 29th Annual ACM Symposium on the Theory of Computing, 1997, pp. 150-159.
  • the name "density evolution" was actually introduced when that method was generalized to other channels, including the memory-less binary symmetric channel (BSC), by T. Richardson and R. Urbanke in "The Capacity of Low-Density Parity Check Codes Under Message-Passing Decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
  • Codes with intermediate block-length are important for many applications. Therefore, there is a need for a practical method to directly evaluate the performance of arbitrary intermediate block- length error-correcting codes as decoded by BP decoding methods. Aside from its utility as a part of a code design method, such a method could be used for code verification.
  • the performance of BP decoding of parity-check codes is currently normally judged by "Monte Carlo" simulations, which randomly generate thousands or millions of noisy blocks.
  • the method can be used when the channel is a binary erasure channel or a binary symmetric channel. Therefore, those channels, parity-check codes, iterative decoding methods, and the density evolution method are described in greater detail.
  • BEC Binary Erasure Channel
  • BSC Binary Symmetric Channel
  • a binary erasure channel is a binary input channel with two input symbols, 0 and 1, and with three output symbols: 0, 1, and an erasure, which can be represented by a question mark "?.”
  • a bit that passes through the channel will be received correctly with probability 1-x, and will be received as an erasure with probability x.
  • a binary symmetric channel is a binary input channel with two input symbols, 0 and 1, and with two output symbols: 0, and 1. A bit will pass through the channel and be correctly received in its transmitted state with probability 1-x, and will be incorrectly inverted into the other state with probability x.
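The two channel models can be illustrated with a short simulation. The sketch below is our own illustrative Python (the function names `bec` and `bsc` are not from the patent); erasures are represented by None:

```python
import random

def bec(bits, x, rng=random):
    # Binary erasure channel: each bit is erased (received as None)
    # independently with probability x, else received correctly.
    return [None if rng.random() < x else b for b in bits]

def bsc(bits, x, rng=random):
    # Binary symmetric channel: each bit is inverted independently
    # with probability x, else received correctly.
    return [b ^ 1 if rng.random() < x else b for b in bits]

print(bec([0, 1, 0, 1], 1.0))   # x = 1: every bit erased -> [None, None, None, None]
print(bsc([0, 1], 1.0))         # x = 1: every bit inverted -> [1, 0]
```

With 0 < x < 1 the outputs are random, which is why code performance is usually estimated by averaging over many such noisy blocks.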
  • the method should be applicable to a memory-less version of the BEC and BSC.
  • a memory-less channel each bit is erased or inverted independently of every other bit.
  • Many practical channels are memory-less to a good approximation.
  • the memory-less BEC and memory-less BSC are excellent test-beds for evaluating and designing new error-correcting codes, even when they are not always realistic practical models.
  • the probability of erasure for the BEC or the inversion probability for the BSC is identical for every bit, because that is the normal realistic situation.
  • it will be convenient to let the erasure probability or inversion probability depend explicitly on the bit position within the block.
  • the bits in a block are indexed with the letter i, and the erasure probability in the BEC, or the inversion probability in the BSC, of the i-th bit is taken to be x_i.
  • Linear block binary error-correcting codes can be defined in terms of a parity check matrix.
  • in a parity check matrix A, the columns represent transmitted variable bits, and the rows define linear constraints or checks between the variable bits. More specifically, the matrix A defines a set of valid vectors or codewords z, such that each component of z is either 0 or 1, and Az = 0, where the arithmetic is modulo 2.
  • a parity check matrix has N columns and N-k rows, then the matrix usually defines an error-correcting code of block-length N, and transmission rate k/N. If some of the rows are linearly dependent, then some of the parity checks will be redundant and the code will actually have a higher transmission rate.
  • as shown in Figure 2, there is a corresponding bipartite graph for each parity check matrix; see R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533-547, 1981.
  • a Tanner graph is a bipartite graph with two types of nodes: variable nodes (1-6) denoted by circles 201, and check nodes A, B, and C denoted by squares 202. In a bipartite graph, each check node is connected to all the variable nodes participating in the check. For example, the parity check matrix
  • the graphs representing codes typically include thousands of nodes connected in any number of different ways, and contain many loops (cycles). Evaluating codes defined by such graphs, or designing codes that perform optimally, is very difficult.
  • Error-correcting codes defined by parity check matrices are linear. This means that each codeword is a linear combination of other codewords. For this parity check matrix, there are 2^k possible codewords, each of length N. For the example given above, the codewords are 000000, 001011, 010110, 011101, 100110, 101101, 110011, 111000. Because of the linearity property, any of the codewords are representative, given that the channel is symmetric between inputs 0 and 1, as in the BEC or BSC.
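The codeword count 2^k can be checked by brute-force enumeration of every length-N vector satisfying Az = 0 (mod 2). The sketch below is our own; the 3-check, 6-variable matrix H is a hypothetical example, since the patent's example matrix is not reproduced in this text:

```python
from itertools import product

def codewords(H):
    # Enumerate all codewords z with H z = 0 (mod 2) by brute force.
    # H is a list of rows; each row is a list of 0/1 entries.
    n = len(H[0])
    return [z for z in product((0, 1), repeat=n)
            if all(sum(h * zi for h, zi in zip(row, z)) % 2 == 0 for row in H)]

# Hypothetical 3-check, 6-variable parity check matrix (illustration only).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
print(len(codewords(H)))   # 3 independent checks on 6 bits: 2^(6-3) = 8 codewords
```

Brute force is exponential in N and only useful for toy codes; it is shown here purely to make the 2^k counting argument concrete.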
  • Generalized parity check matrices define many of the modern error-correcting codes, such as turbo-codes, Kanter-Saad codes, and repeat-accumulate codes.
  • additional columns are added to a parity check matrix to represent "hidden" variable nodes.
  • Hidden variable nodes participate in parity checks and help constrain the possible code-words of a code, but they are not sent through the channel.
  • the receiver of a block must decode the bit values without any direct information — in a sense, one can consider all hidden nodes to arrive "erased.”
  • the advantage of hiding variable nodes is that one improves the transmission rate of the code.
  • a good notation for the hidden state variables is a horizontal line above the corresponding columns in the parity- check matrix, e.g., one can write
  • Good examples of such decoding methods are the belief propagation (BP) decoding method for the BEC, and the "Gallager A" decoding method for the BSC, describe in greater detail below.
  • Other examples of such decoders are the quantized belief propagation decoders described in detail by T. Richardson and R. Urbanke in "The Capacity of Low-Density Parity Check Codes Under Message-Passing Decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
  • BP decoding works by passing discrete messages between the nodes of the bipartite graph. Each variable node i sends a message m_ia to each connected check node a. The message represents the state of the variable node i. In general, the message can be in one of three states: 1, 0, or ?, but because the all-zeros codeword is always transmitted, the possibility that m_ia has a bit value of one can be ignored.
  • check-to-bit messages can, in principle, take on the bit values 0, 1, or ?, but again only the two messages 0 and ? are relevant when the all-zeros codeword is transmitted.
  • a message m_ia from a variable node i to a check node a is equal to a non-erasure received message, because such messages are always correct in the BEC, or to an erasure when all incoming messages are erasures.
  • a message m_ai from a check node a to a variable node i is an erasure when any incoming message from another node participating in the check is an erasure. Otherwise, the message takes the value of the modulus-2 sum of all incoming messages from other nodes participating in the check.
  • BP decoding is iterative.
  • the iterations are indexed by an integer t, which must be greater than or equal to one.
  • the check-to-variable messages are determined by the standard rule mentioned above.
  • a variable node can be considered decoded if any of its incoming messages is a non-erasure. Such messages must always be correct, so the bit is decoded to the value indicated by the message.
  • messages can only change under the BP decoding process from erasure messages to non-erasure messages, so the iterative decoding process must eventually converge.
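On the BEC these message-passing rules are equivalent to a "peeling" procedure: any check with exactly one erased participant determines that bit as the modulo-2 sum of the others, and since messages only change from erasure to non-erasure, the process converges. A minimal sketch (our own code, with a hypothetical parity check matrix):

```python
def bp_erasure_decode(H, y, max_iters=100):
    # BP (peeling) decoding on the BEC.  y[i] is 0, 1, or None (erasure).
    # Non-erasure messages are always correct, so a check with a single
    # erased neighbor can resolve it; unresolved bits remain None.
    z = list(y)
    for _ in range(max_iters):
        progress = False
        for row in H:
            idx = [i for i, h in enumerate(row) if h]
            unknown = [i for i in idx if z[i] is None]
            if len(unknown) == 1:
                i = unknown[0]
                z[i] = sum(z[j] for j in idx if j != i) % 2
                progress = True
        if not progress:
            break
    return z

# Hypothetical rate-1/2 code; codeword 110011 with bits 2 and 4 erased.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
print(bp_erasure_decode(H, [1, None, 0, None, 1, 1]))   # -> [1, 1, 0, 0, 1, 1]
```

If no check ever has a single erased neighbor (a "stopping set" of erasures), the loop makes no progress and those bits stay None, which is exactly a decoding failure.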
  • the decoding method is initialized by each variable node sending a message to every connected check node.
  • the message is 0 or 1 depending on the bit value that was received through the channel.
  • each check node sends a message to the connected variable nodes.
  • this message is 0 or 1 and is interpreted as a command about the state that the variable nodes should be in.
  • the message is the modulus-2 sum of the messages that the check node receives from the other variable nodes to which it is connected.
  • each variable node continues to send the received bit value to the connected check nodes, unless the variable node receives sufficient contradictory messages.
  • the variable node sends the message it received from all the other check nodes.
  • the Gallager A decoding method is iterated until some criterion, like a fixed number of iterations, is reached. At every iteration, each bit is decoded to the bit value that the variable node receives from the channel, unless all the incoming messages from the connected check nodes agree on the other bit value.
  • Density evolution is a method for evaluating a parity check code that uses iterative message-passing decoding as described above. Specifically, density evolution can determine the average bit error rate of a code. Density evolution is now described for the case of BP decoding in the BEC. Similar density evolution methods have been derived for other iterative message-passing decoders such that each message can only be in a finite number of discrete states, for example the Gallager A decoding method, as described above, or the quantized belief propagation decoders. In general, the density evolution methods are represented as sets of rules relating the probabilities that each of the messages used in the decoder is in each of its states.
  • a probability, averaged over all possible received blocks, that each message is an erasure is considered.
  • the iterations are indexed by an integer t.
  • a real number p_ia(t), which represents the probability that a message m_ia is an erasure at iteration t, is associated with each message m_ia from variable nodes to check nodes.
  • a real number q_ai(t), which represents the probability that the message m_ai is an erasure at iteration t, is associated with each message m_ai from check nodes to variable nodes.
  • probabilities p_ia(t) and q_ai(t) are determined in a way that is exact, as long as the bipartite graph representing the error-correcting code has no loops.
  • this rule includes operands x and q, and a multiplication operator.
  • This rule can be derived from the fact that for a message m_ia to be an erasure, the variable node must be erased during transmission, and all incoming messages from other check nodes must be erasures as well. Of course, if the incoming messages q_ai(t) are statistically dependent, then the rule is not correct.
  • the density evolution rules (3) and (4) are evaluated by iteration.
  • b_i(t) is the probability of a failure to decode at variable node i, from the rule
  • the density evolution rules (3, 4, and 5) are exact when the code has a bipartite graph representation without loops. It is very important to understand that the density evolution rules are not exact when a bipartite graph represents a practical code that does have loops, because in that case, the BP messages are not independent, in contradiction with the assumptions underlying rules (3, 4, and 5).
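For a regular ensemble in which every variable node has degree d_v, every check node has degree d_c, and all x_i equal x, rules of this form collapse to the scalar iteration p(t+1) = x (1 - (1 - p(t))^(d_c-1))^(d_v-1). The sketch below is our own code, not the patent's; it reproduces the loop-free (infinite block-length) behavior, including the well-known BP threshold of about 0.4294 for the (3,6) ensemble:

```python
def density_evolution_bec(x, dv, dc, iters=200):
    # Density evolution for a regular (dv, dc) LDPC ensemble on the BEC
    # with erasure probability x.  Returns the erasure probability of a
    # variable-to-check message after `iters` iterations.
    p = x
    for _ in range(iters):
        q = 1.0 - (1.0 - p) ** (dc - 1)   # check-to-variable erasure prob.
        p = x * q ** (dv - 1)             # variable-to-check erasure prob.
    return p

print(density_evolution_bec(0.40, 3, 6))   # below threshold: driven to (essentially) 0
print(density_evolution_bec(0.45, 3, 6))   # above threshold: stalls at a nonzero fixed point
```

This is exactly the computation that is exact only for loop-free graphs; the projective method described below removes that restriction for finite codes.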
  • This code has four codewords: 0000, 0011, 1101, and 1110. If the 0000 message is transmitted, then there are sixteen possible received messages: 0000, 000?, 00?0, 00??, 0?00, and so on.
  • the probability of receiving a message with n_e erasures is x^n_e (1 - x)^(N - n_e), where we have taken all the x_i to be equal to the same value x.
  • x^2 (1 - x)^2 + 3x^3 (1 - x) + x^4 = x^2 + x^3 - x^4.
  • x_c ≈ 0.51757.
  • the present invention provides a method for evaluating the performance of an error-correcting code, represented by an arbitrary generalized parity check matrix, and decoded by an iterative message-passing method that sends discrete messages, subject to a memory-less binary erasure channel or a memory-less binary symmetric channel.
  • a channel noise level x_i represents the probability of erasure for that bit for the binary erasure channel, or the inversion probability for that bit for the binary symmetric channel.
  • a set of rules corresponding to the prior-art density evolution method for evaluating the performance of the error-correcting code is provided.
  • the outputs of the rules are evaluated as real numbers during each iteration.
  • the real number represents the probability that a message is in a particular state during an iteration of the decoder.
  • the invention transforms the prior-art density evolution rules into "projective" rules.
  • the transformation replaces the real numbers representing the possible states of each message at every iteration of the decoder, by "projected" polynomials.
  • a projected polynomial is a polynomial in the x_i, where no term of the polynomial has an exponent of order greater than one.
  • Furthermore, ordinary operators in the set of rules corresponding to the prior art density evolution method are replaced with "projective" operators. In a projective operation on two projected polynomials, any resulting exponents of an order greater than one are reduced down to the order of one.
  • the projective transformation results in a set of iterative update rules for the projected polynomials.
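One concrete realization of these operations (our own illustrative data structure, not mandated by the patent) stores a projected polynomial as a mapping from sets of variable indices to integer coefficients; because an index can appear at most once in a set, taking the union of index sets during multiplication automatically reduces every exponent greater than one down to one:

```python
from itertools import product

def padd(f, g):
    # Ordinary addition of projected polynomials, represented as dicts
    # mapping frozensets of variable indices to integer coefficients.
    h = dict(f)
    for term, c in g.items():
        h[term] = h.get(term, 0) + c
        if h[term] == 0:
            del h[term]
    return h

def pmul(f, g):
    # Projective multiplication: x_i * x_i -> x_i, i.e. term-by-term
    # products take the *union* of the variable index sets.
    h = {}
    for (t1, c1), (t2, c2) in product(f.items(), g.items()):
        t = t1 | t2
        h[t] = h.get(t, 0) + c1 * c2
        if h[t] == 0:
            del h[t]
    return h

x1 = {frozenset({1}): 1}        # the polynomial x_1
print(pmul(x1, x1))             # {frozenset({1}): 1}, i.e. x_1, not x_1^2
```

With E_1 = x_1x_3 and E_2 = x_2, the union rule P(E_1 ∪ E_2) = P(E_1) + P(E_2) - P(E_1) ⊗ P(E_2) then yields x_1x_3 + x_2 - x_1x_2x_3, even though E_1 and E_2 share the variable context.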
  • a set of parameters of interest including the number of iterations, and the variable bits of interest, or specific messages of interest in the decoding method, are specified. Iterating for the specified number of iterations provides exact results for the error rates for the code at the bits or messages of interest.
  • each projected polynomial is only represented by a small number of leading positive terms, and projective multiplication is modified so that only positive leading terms in a projected polynomial are retained.
  • Such a representation is called a "stopping set representation,” and is exact for the case of the memory-less BEC.
  • the projective method can require an amount of memory and computation time that grows exponentially with the block-length of the code.
  • the projected polynomials are approximated by projected polynomials containing a limited number of terms.
  • a lower bound on the probability represented by a projected polynomial is obtained by retaining only a sub-set of the terms in the exact projected polynomial.
  • a lower bound is obtained even more efficiently by retaining only a sub-set of the terms in the stopping set representation of the projected polynomial.
  • an upper bound is obtained by using the stopping set representation of projected polynomials, and successively replacing pairs of terms by another term containing the intersection of the nodes involved in the pair of terms.
  • the present invention can be used to optimize error- correcting codes by searching for the error-correcting code of a specified data block size and transmission rate with the best performance in terms of decoding failure as a function of noise.
  • the decoding failure rates for transmitted variable bits are used to guide the search for the optimal code.
  • Exact patterns of noisy transmitted bits that cause a decoding failure can also be obtained by the method, and are also used to guide the search for an optimal code.
  • Figure 1 is a block diagram for the problem of decoding an encoded message transmitted through a noisy channel
  • Figure 2 is a bipartite graph representing a simple error-correcting code
  • Figure 3 is a block diagram of a projective analysis method for evaluating the performance of error-correcting codes according to the invention
  • Figure 4 is a bipartite graph to be evaluated according to the invention.
  • Our invention evaluates the performance of error-correcting codes (ECC) decoded by a message-passing decoding process for which every message can be in a finite number of different discrete states.
  • ECC error-correcting codes
  • Our method tracks, at each iteration of a decoding process, the probability that each message sent by the decoding process is in any of its possible states.
  • our method is exact for any code represented by a parity-check matrix, including codes whose bipartite graph representation contains cycles (loops).
  • the operations that are performed on the projected polynomials must be "projective" in the sense that any terms in the resulting polynomials that contain exponents greater than one have those exponents reduced to one.
  • the probabilities of interest are retained as projected polynomials until their actual value is desired, at which point they are evaluated by replacing all the parameters x_i with the appropriate values.
  • Figure 3 shows the projective evaluation method 300 of our invention.
  • a given error-correcting code 301 is represented by a generalized parity check matrix.
  • a given channel 302 is either the binary erasure channel (BEC) or the binary symmetric channel (BSC.)
  • BEC binary erasure channel
  • BSC binary symmetric channel
  • a given decoder 303 uses an iterative message- passing method such that each message can be in a finite number of discrete states.
  • construct 310 a set of density evolution rules 311 representing a density evolution (D.E.) method for the code to be decoded after transmission through the channel.
  • D.E. density evolution
  • This set of iterative density evolution rules 311 represents relations between probabilities of states of every message in the decoder 303.
  • the set of rules also depends explicitly on parameters x_i, which represent the probability that the i-th bit is erased in the BEC, or inverted in the BSC.
  • the set of rules 311 derived for the density evolution method, as described above, are transformed 320 into a corresponding set of projective analysis (P.A.) rules 321 for the method and system 300 according to our invention.
  • the transformation 320 replaces each real-valued variable, that is, an operand representing a probability in the density evolution rules 311, with a transformed variable in the projective rules 321.
  • the operands in the projective analysis rules 321 are in the form of "projected polynomials,” described in further detail below.
  • each operator in the density evolution rules 311 is replaced with a "projective" operation that will also be further described below.
  • the resulting projective rules 321 are then used to evaluate 330 the code 301, given the channel 302 and the decoder 303.
  • the projective rules 321 can be used to obtain error rates at any bit, or the probabilities that any message is in any of its states, at any iteration of the decoding process. If desired, further information can be obtained, including the exact patterns of bit erasures or inversions that cause a decoding failure.
  • the evaluation 330 iteratively applies the rules 321 to update the values of the projected polynomials.
  • the projected polynomials represent the probability that a message will be in any of its states, or the probability that any of the bits will be decoded erroneously.
  • the iteration stops when a termination condition is reached.
  • error rates are evaluated as real numbers for selected or all bits of the code 301. Thus, for example, bit error rates averaged over all bits can be determined.
  • the results 360 of the evaluation 330 can then be passed to an optimizer 370 to generate an optimized error-correcting code 390, as described in further detail below.
  • Projected polynomials are a "field" under ordinary addition and projective multiplication. This means that we can use the commutative, associative, and distributive laws.
  • the discrete values 1 or 0 indicate whether or not a received bit was erased.
  • the probability of a fundamental event is a multiplication of terms such as x_i or (1 - x_i).
  • E_1 ∩ E_2 denotes the event such that both the events E_1 and E_2 occur.
  • the event E_1 ∩ E_2 is the set of fundamental events that belong to the intersection of the fundamental events in E_1 and in E_2.
  • Rule (28) is only valid for statistically independent events, but rule (27), using a projective multiplication instead of an ordinary multiplication, is valid for any two events, subject to our scenario that events are constructed from fundamental events that are joint states of N statistically independent binary variables.
  • E_1 ∪ E_2 denotes the event where either of the events E_1 or E_2, or both, occur.
  • the event E_1 ∪ E_2 includes the fundamental events that belong to the union of the fundamental events in E_1 and in E_2.
  • Rule (29) follows from rule (27) and the general laws of probability.
  • rules (27) and (29) exactly determine the probabilities that two events will both occur, or that at least one of them will occur, even if the two events are statistically dependent. These rules are sufficient to exactly determine the performance of error-correcting codes decoded by iterative decoding methods in the BEC or BSC.
  • Our method 300 can be applied to any parity-check code, decoded by an iterative message-passing decoding method, with a finite number of discrete states for each message, in the memory-less BEC and memory-less BSC channels.
  • the probability p_ia(t) is a real number
  • p_ia(t) is a projected polynomial as a function of the x_i
  • a projected polynomial q_ai(t) to represent the probability that the message m_ai is an erasure at iteration t
  • the projected polynomial b_i(t) to represent the probability that the node i is decoded as an erasure
  • our evaluation method 300 is exact for any parity-check code when the probabilities in the rules 321 are represented by projected polynomials, rather than real numbers, and when ordinary multiplications are replaced by projective multiplications.
  • to determine the values of the projected polynomials p_ia(t) from the projected polynomials q_ai(t), we use rule (30).
  • rule (30) can be understood as follows.
  • a variable node i sends an erasure message to a check node a when it is erased by the channel, which occurs with probability x_i, and all incoming messages from neighboring check nodes are also erasures.
  • rule (27) calls for a projective multiplication.
  • Rule (30) is correct even when the different incoming messages q_ai(t) are statistically dependent.
  • Rule (31) is correct because a message from a check node a to a variable node i is an erasure when any of the incoming messages to node a is an erasure.
  • Rule (32) is correct because a variable node is decoded incorrectly only if it is erased by the channel and all incoming messages are also erasures.
  • to apply the projective rules 321, we iterate rules (30, 31, and 32) and specify a set of input parameters 331.
  • the set of parameters of interest can include the number of iterations L, the set of variable bits of interest B, or the set of specific messages of interest M in the decoding method 303.
  • the evaluation 330 produces exact results 360 for the code 301.
  • we can input a code 301, channel 302, and decoder 303, and obtain as output an exact prediction of the bit error rate at any selected nodes of the bipartite graph, i.e., bit positions.
  • Figure 4 shows how the projective method 300 can correctly evaluate the performance of an error-correcting code that is evaluated incorrectly by the prior art density evolution method.
  • This parity-check code has two variable bits 401, and two constraining check bits 402, one of which is clearly redundant.
  • the redundant constraint has been introduced to induce a loop in the bipartite graph 400 for the code. Larger practical codes normally have loops even when there are no redundant constraints.
  • probabilities of erasure may be, and normally are, equal, but we allow them to be different so that we can compare in detail with the results from the projective method.
  • the probability of receiving a "00" block is (1 - x_1)(1 - x_2); the probability of receiving a "0?" block is (1 - x_1)x_2; the probability of receiving a "?0" block is x_1(1 - x_2).
  • the first variable node initially sends the two check nodes a '0' message, while the second variable node sends the two check nodes an erasure message. Then, at the first iteration, the check nodes send the first variable node an erasure message, and send the second variable node a '0' message.
  • the block is successfully decoded after one iteration, because the first bit will receive a '0' message over the channel, and the second bit will receive a '0' message from both check nodes connected to the first bit. No further erasure messages are sent.
  • the analysis for the "?0" block is similar, except that the role of the first and second bit is reversed.
  • the offending power of two is reduced back down to one.
  • the density evolution method result becomes progressively more incorrect because it multiplies probabilities of erasures as if they were independent, and therefore with each iteration they incorrectly appear to become less probable.
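The two-bit loop code makes this concrete. In the sketch below (our own code; it re-declares the set-based projected-polynomial product so the example is self-contained), bit 1 fails only when it is erased and both incoming check messages are erasures, and both of those messages are erasures exactly when bit 2 is erased. Projective multiplication collapses x_1 · x_2 · x_2 to the exact answer x_1 x_2, whereas density evolution would keep the square:

```python
# Projected polynomials as {frozenset of variable indices: coefficient}.
def pmul(f, g):
    h = {}
    for t1, c1 in f.items():
        for t2, c2 in g.items():
            t = t1 | t2          # exponent capping: x_i * x_i -> x_i
            h[t] = h.get(t, 0) + c1 * c2
    return h

x1 = {frozenset({1}): 1}
x2 = {frozenset({2}): 1}

# Bit 1 fails iff it is erased AND both check-to-bit messages are
# erasures; each of those messages is an erasure iff bit 2 is erased.
b1_projective = pmul(x1, pmul(x2, x2))
print(b1_projective)             # {frozenset({1, 2}): 1}, i.e. x_1 x_2

# Density evolution would instead treat the two identical incoming
# messages as independent and compute x_1 * x_2**2, underestimating
# the failure probability whenever x_2 < 1.
```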
  • a similar phenomenon occurs in the density evolution analysis of regular Gallager codes.
  • the projected polynomials can be stored in a more efficient form, which we call the "stopping set representation," and a corresponding change is made in the projective operations.
  • a general projected polynomial that arises in the projective method for BP decoding on the BEC can be expanded into a sum of terms, each of which is a product of the x_i, with integer coefficients.
  • a "minimal term" in a projected polynomial is a term whose set of nodes contains no subset that is itself the set of nodes depended upon by another term in the projected polynomial.
  • m(x_1, x_2, x_3) = x_1 x_3 + x_2 - x_1 x_2 x_3
  • the terms x_1 x_3 and x_2 are minimal terms, but the term x_1 x_2 x_3 is not a minimal term
  • the full projected polynomial can be reconstructed from a list of the minimal te ⁇ ns.
  • the search according to the invention improves on the results obtained using the prior art density evolution process.
  • information about the bit error rate at every node. For example, it might make sense to "strengthen" a weak variable node with a high bit error rate by adding additional parity check nodes, or we can "weaken" strong nodes with a low bit error rate by turning them into hidden nodes, thus increasing the transmission rate.
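The minimal-term bookkeeping described in the points above can be made concrete. In the following sketch (names are ours, not the patent's), a term of a projected polynomial is represented only by the set of variable-node indices it depends on, and a term is kept only when no other term depends on a strict subset of its nodes:

```python
def minimal_terms(terms):
    """Keep only the minimal terms: a term is dropped if some other term
    depends on a strict subset of its variable nodes."""
    supports = [frozenset(t) for t in terms]
    return [s for s in supports if not any(o < s for o in supports)]

# Example from the text: m(x1, x2, x3) = x1*x3 + x2 - x1*x2*x3.
# The supports are {1,3}, {2}, and {1,2,3}; since {2} is a strict subset
# of {1,2,3}, the term x1*x2*x3 is not minimal and is dropped.
print(minimal_terms([{1, 3}, {2}, {1, 2, 3}]))
```

The full projected polynomial can then be regenerated from this list, as the text notes, which is what makes the stopping set representation more compact.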


Abstract

A method evaluates and optimizes an error-correcting code to be transmitted through a noisy channel and to be decoded by an iterative message-passing decoder. The error-correcting code is represented by a parity check matrix which is modeled as a bipartite graph having variable nodes and check nodes. A set of message passing rules is provided for the decoder. The decoder is analyzed to obtain a set of density evolution rules including operators and operands which are then transformed to projective operators and projected operands to generate a set of projective message passing rules. The projective message passing rules are applied iteratively to the error-correcting code modeled by the bipartite graph until a termination condition is reached. Error rates of selected bits of the error-correcting code are then determined by evaluating the corresponding operands. The error rates can be passed to an optimizer to optimize the error-correcting code.

Description

DESCRIPTION
EVALUATING AND OPTIMIZING ERROR-CORRECTING CODES USING PROJECTIVE ANALYSIS
TECHNICAL FIELD
The present invention relates generally to the field of error-correcting codes for data storage and data transmission, and more particularly to evaluating and optimizing error-correcting codes.
BACKGROUND ART
A fundamental problem in the field of data storage and communication is the evaluation of error-correcting codes (ECC). The general framework for the problem that our invention addresses is shown in Figure 1. A source 110 intends to transmit a block of k bits, denoted by a vector u 111, to a destination 150. The source 110 adds redundancy bits to the source symbols by passing them through an encoder 120. The output of the encoder is a block of N bits, denoted by a vector x 121. The block of N bits passes through a channel 130, subject to noise 135, where the block is possibly corrupted into another block of N output symbols, denoted by a vector y. The output of the channel is then decoded 140 into a received block denoted by vector v of k bits 141 for the destination 150.
Ideally, the received block 141 will match the transmitted block 111. However, in practical conditions, some decoding failures will sometimes occur. A decoding failure called a block error occurs when at least one bit in the received block disagrees with a bit in the transmitted block. The block error rate is the probability that at least one bit of the transmitted block will be received in error, averaged over the probability distribution of transmitted blocks. In many circumstances, a better measure of merit is the bit error rate, which is the probability that any given bit will be received in error. A main objective of evaluating error-correcting codes is to determine their bit-error rates.
Error-correcting codes described by sparse generalized parity-check matrices have recently been the subject of intense theoretical interest. These types of codes were first described by R. G. Gallager, in "Low-density parity check codes," Vol. 21, Research Monograph Series, MIT Press, 1963, but were not properly appreciated until recently. In the last decade, however, a variety of improved codes defined by sparse generalized parity check matrices have been described, such as turbocodes, irregular low-density parity check (LDPC) codes, Kanter-Saad codes, repeat-accumulate codes, and irregular repeat-accumulate codes. These improved codes have three particularly noteworthy advantages. First, the codes can be decoded efficiently using message-passing iterative decoding methods, which are sometimes called "belief propagation" (BP) methods. Second, the performance of these codes can often be theoretically evaluated using a density evolution method, at least in the infinite block-length limit. Third, by using the density evolution method, it can be demonstrated that these codes are nearly optimal codes, when decoded using BP. In particular, in the infinite block-length limit, the density evolution method typically demonstrates that BP decoding correctly recovers all data blocks that have a noise level below some threshold level, and that threshold level is often not far from the Shannon limit. For a collection of reports on such codes and their associated BP decoding method, see the Special Issue on Codes on Graphs and Iterative Algorithms, IEEE Transactions on Information Theory, February, 2001.
The density evolution method dates back to R. G. Gallager's work, already cited above. The density evolution method was re-introduced in the context of the memory-less BEC by M. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. Spielman, and V. Stemann, in "Practical Loss-Resilient Codes," Proceedings of the 29th Annual ACM Symposium on the Theory of Computing, 1997, pp. 150-159. The name "density evolution" was actually introduced when that method was generalized to other channels, including the memory-less binary symmetric channel (BSC), by T. Richardson and R. Urbanke in "The Capacity of Low-Density Parity Check Codes Under Message-Passing Decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001. An important drawback of the density evolution method is that it only becomes exact when the graphical representation of the code, known as the Tanner graph or bipartite graph, has no cycles. Fortunately, it has been shown for a variety of codes, that in the limit where the block-length N of the code approaches infinity, the presence of cycles in the bipartite graph representation can be ignored, and the results of the density evolution method become exact. Because all the best-performing codes do have cycles in their bipartite graph representations, this means that in practice, application of the density evolution method is restricted to some codes in the limit where their block-length N approaches infinity.
Given a method to measure the performance of a code, one can design error-correcting codes that optimize the performance. A preferred prior art way for designing improved codes that will be decoded with BP decoding, has been to optimize certain classes of codes for the infinite block-length limit using the density evolution method, and hope that a scaled-down version still results in a near optimal code. See, for example, T. Richardson and R. Urbanke, in "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, Feb. 2001.
The problem with this method is that even for very large block-lengths, such as blocks of length N < 10 , one is still noticeably far from the infinite-block-length limit. In particular, many decoding failures are found at noise levels far below the threshold level predicted by infinite block-length calculations. Furthermore, there may not necessarily even exist a way to scale down the codes derived from the density evolution method. For example, the best known irregular LDPC codes, at a given rate in the N → ∞ limit, often have bits that participate in hundreds or even thousands of parity checks, which makes no sense when the overall number of parity checks is 100 or less.
Codes with intermediate block-length, for example block-length less than 10 , are important for many applications. Therefore, there is a need for a practical method to directly evaluate the performance of arbitrary intermediate block-length error-correcting codes as decoded by BP decoding methods. Aside from its utility as a part of a code design method, such a method could be used for code verification. The performance of BP decoding of parity-check codes is currently normally judged by "Monte Carlo" simulations, which randomly generate thousands or millions of noisy blocks.
Unfortunately, such simulations become impractical as a code-verification technique when the decoding failure rate is required to be extraordinarily small, as in, for example, magnetic disk drive or fiber-optical channel applications. This is a serious problem in evaluating turbo-codes or LDPC codes, which often suffer from an "error-floor" phenomenon, which is hard to detect if the error-floor is at a sufficiently low decoding failure rate.
Furthermore, it is desired that the method can be used when the channel is a binary erasure channel or a binary symmetric channel. Therefore, those channels, parity-check codes, iterative decoding methods, and the density evolution method are described in greater detail.
The Binary Erasure Channel (BEC) and Binary Symmetric Channel (BSC)
A binary erasure channel (BEC) is a binary input channel with two input symbols, 0 and 1, and with three output symbols: 0, 1, and an erasure, which can be represented by a question mark "?." A bit that passes through the channel will be received correctly with probability 1-x, and will be received as an erasure with probability x.
A binary symmetric channel (BSC) is a binary input channel with two input symbols, 0 and 1, and with two output symbols: 0, and 1. A bit will pass through the channel and be correctly received in its transmitted state with probability 1-x, and will be incorrectly inverted into the other state with probability x.
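As an illustration, both channels are easy to simulate. The sketch below (function names are ours, not the patent's) erases or inverts each bit independently, which is exactly the memory-less property described next:

```python
import random

def bec(bits, x, rng):
    """Memory-less binary erasure channel: each bit is independently
    replaced by the erasure symbol '?' with probability x."""
    return ['?' if rng.random() < x else b for b in bits]

def bsc(bits, x, rng):
    """Memory-less binary symmetric channel: each bit is independently
    inverted with probability x."""
    return [b ^ 1 if rng.random() < x else b for b in bits]

rng = random.Random(0)
block = [0, 1, 1, 0, 1, 0, 0, 0]
print(bec(block, 0.3, rng))
print(bsc(block, 0.1, rng))
```

At the extremes the behavior is deterministic: with x = 0 the block passes through unchanged, and with x = 1 every bit is erased (BEC) or inverted (BSC).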
The method should be applicable to a memory-less version of the BEC and BSC. In a memory-less channel, each bit is erased or inverted independently of every other bit. Many practical channels are memory-less to a good approximation. In any case, the memory-less BEC and memory-less BSC are excellent test-beds for evaluating and designing new error-correcting codes, even when they are not always realistic practical models. One could assume that the probability of erasure for the BEC or the inversion probability for the BSC is identical for every bit, because that is the normal realistic situation. However, it will be convenient to let the erasure probability or inversion probability depend explicitly on the bit position within the block. Thus, the bits in a block are indexed with the letter i, and the erasure probability in the BEC, or the inversion probability in the BSC, of the i-th bit is taken to be x_i. The probability x_i for all the bits can ultimately be set equal at the end of the analysis, if so desired.
Parity Check Codes
Linear block binary error-correcting codes can be defined in terms of a parity check matrix. In a parity check matrix A, the columns represent transmitted variable bits, and the rows define linear constraints or checks between the variable bits. More specifically, the matrix A defines a set of valid vectors or codewords z, such that each component of z is either 0 or 1, and
Az = 0,   (1)
where all multiplications and additions are modulo 2.
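Equation (1) is straightforward to check by machine. The sketch below uses a hypothetical 2x4 parity check matrix (one valid choice for the four-codeword example code analyzed later in this description, with check 1 involving bits 1 and 2, and check 2 involving bits 2, 3, and 4):

```python
from itertools import product

def is_codeword(H, z):
    """Test A z = 0 (mod 2): every row of the parity check matrix must
    sum to zero over the bits it involves."""
    return all(sum(h * b for h, b in zip(row, z)) % 2 == 0 for row in H)

# Hypothetical small parity check matrix: check 1 = bits {1,2},
# check 2 = bits {2,3,4}.
H = [[1, 1, 0, 0],
     [0, 1, 1, 1]]

# Enumerating all 2^4 binary vectors recovers the 2^k = 4 codewords.
codewords = [z for z in product([0, 1], repeat=4) if is_codeword(H, z)]
print(codewords)  # [(0, 0, 0, 0), (0, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]
```

With N = 4 columns and N - k = 2 independent rows, the matrix defines 2^k = 4 codewords, matching the general counting rule stated below.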
If a parity check matrix has N columns and N-k rows, then the matrix usually defines an error-correcting code of block-length N, and transmission rate k/N. If some of the rows are linearly dependent, then some of the parity checks will be redundant and the code will actually have a higher transmission rate. As shown in Figure 2, there is a corresponding bipartite graph for each parity check matrix, see R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Info. Theory, IT-27, pages 533-547, 1981. A Tanner graph is a bipartite graph with two types of nodes: variable nodes (1-6) denoted by circles 201, and check nodes A, B, and C denoted by squares 202. In a bipartite graph, each check node is connected to all the variable nodes participating in the check. For example, the parity check matrix
[parity check matrix A — equation image not reproduced]
is represented by the bipartite graph shown in Figure 2.
In practical applications, the graphs representing codes typically include thousands of nodes connected in any number of different ways, and contain many loops (cycles). Evaluating codes defined by such graphs, or designing codes that perform optimally, is very difficult.
Error-correcting codes defined by parity check matrices are linear. This means that each codeword is a linear combination of other codewords. For a check matrix with k independent information bits, there are 2^k possible codewords, each of length N. For the example given above, the codewords are 000000, 001011, 010110, 011101, 100110, 101101, 110011, 111000. Because of the linearity property, any of the codewords are representative, given that the channel is symmetric between inputs 0 and 1, as in the BEC or BSC.
For the purposes of evaluating a code, it is normally assumed that the all-zeros codeword is transmitted.
Generalized Parity Check Matrices
Generalized parity check matrices define many of the modern error-correcting codes, such as turbo-codes, Kanter-Saad codes, and repeat-accumulate codes. In a generalized parity check matrix, additional columns are added to a parity check matrix to represent "hidden" variable nodes. Hidden variable nodes participate in parity checks and help constrain the possible codewords of a code, but they are not sent through the channel. Thus, the receiver of a block must decode the bit values without any direct information — in a sense, one can consider all hidden nodes to arrive "erased." The advantage of hiding variable nodes is that one improves the transmission rate of the code. A good notation for the hidden state variables is a horizontal line above the corresponding columns in the parity-check matrix, e.g., one can write
[generalized parity check matrix with a horizontal line above the first column — equation image not reproduced]
to indicate a code where the first variable node is a hidden node. To indicate that a variable node is a hidden node, an open circle is used, rather than a filled-in circle. Such a graph, which generalizes bipartite graphs, is called a "Wiberg graph," see N. Wiberg, "Codes and decoding on general graphs," Ph.D. Thesis, University of Linkoping, 1996, and N. Wiberg et al., "Codes and iterative decoding on general graphs," Euro. Trans. Telecomm., Vol. 6, pages 513-525, 1995.
Iterative Message-Passing Decoding
It is desired to provide a method for evaluating the performance of an iterative message-passing decoder for a code where each possible state of each message can take on a finite number of discrete values. Good examples of such decoding methods are the belief propagation (BP) decoding method for the BEC, and the "Gallager A" decoding method for the BSC, described in greater detail below. Other examples of such decoders are the quantized belief propagation decoders described in detail by T. Richardson and R. Urbanke in "The Capacity of Low-Density Parity Check Codes Under Message-Passing Decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
Belief Propagation Decoding in the BEC
It is important to note that the BEC never inverts bits from 0 to 1, or vice versa. If the all-zeros codeword is transmitted, the received word must therefore consist entirely of zeros and erasures. For the case of the BEC, BP decoding works by passing discrete messages between the nodes of the bipartite graph. Each variable node i sends a message m_ia to each connected check node a. The message represents the state of the variable node i. In general, the message can be in one of three states: 1, 0, or ?, but because the all-zeros codeword is always transmitted, the possibility that m_ia has a bit value of one can be ignored.
Similarly, there is a message m_ai sent from each check node a to all the variable nodes connected to the check node. These messages are interpreted as directives from the check node a to the variable node i about what state the variable node should be in. This message is based on the states of the other variable nodes connected to the check node. The check-to-bit messages can, in principle, take on the bit values 0, 1, or ?, but again only the two messages 0 and ? are relevant when the all-zeros codeword is transmitted.
In the BP decoding process for the BEC, a message m_ia from a variable node i to a check node a is equal to a non-erasure received message, because such messages are always correct in the BEC, or to an erasure when all incoming messages are erasures. A message m_ai from a check node a to a variable node i is an erasure when any incoming message from another node participating in the check is an erasure. Otherwise, the message takes the value of the modulus-2 sum of all incoming messages from other nodes participating in the check.
BP decoding is iterative. The iterations are indexed by an integer t, which must be greater than or equal to one. At the first iteration, when t = 1 , the variable-to- check node messages are initialized so that all variable nodes that are not erased by the channel send out messages equal to the corresponding received bit. Then, the check-to-variable messages are determined by the standard rule mentioned above. At the end of the first iteration, a variable node can be considered decoded if any of its incoming messages is a non-erasure. Such messages must always be correct, so the bit is decoded to the value indicated by the message.
At each subsequent iteration, one first updates all the messages from variable nodes to check nodes, and then one updates all the messages from check nodes to variable nodes, and then checks each bit to see whether it has been decoded. One stops iterating when some criterion is reached, for example, after a fixed number of iterations, or after the messages converge to stationary states. For the particularly simple BEC, messages can only change under the BP decoding process from erasure messages to non-erasure messages, so the iterative decoding process must eventually converge.
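For the BEC, the message-passing rules above have a simple operational consequence: a check node can supply a non-erasure message to an erased variable node exactly when all of its other participants are known. The sketch below (names are ours) exploits this by repeatedly solving any check with a single erased participant; because erasure decoding only ever turns erasures into known values, this reaches the same final answer as the message-passing schedule:

```python
def bp_erasure_decode(checks, received, max_iters=100):
    """BP decoding over the BEC. `checks` lists, for each check node, the
    indices of its variable nodes; `received` holds 0, 1, or '?'.
    Non-erasure values are always correct on the BEC, so decoding
    repeatedly solves any check with exactly one erased participant."""
    bits = list(received)
    for _ in range(max_iters):
        progress = False
        for check in checks:
            erased = [i for i in check if bits[i] == '?']
            if len(erased) == 1:
                # the erased bit is the mod-2 sum of the known bits
                bits[erased[0]] = sum(bits[i] for i in check if i != erased[0]) % 2
                progress = True
        if not progress:
            return bits
    return bits

# Hypothetical 4-bit code with checks {1,2} and {2,3,4} (0-indexed below):
checks = [[0, 1], [1, 2, 3]]
print(bp_erasure_decode(checks, [0, '?', 0, '?']))    # decodes fully
print(bp_erasure_decode(checks, ['?', '?', 0, '?']))  # stuck: erasures remain
```

In the second call every check contains two erasures, so no message can resolve anything and the decoder stops, which is exactly the convergence behavior described above.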
The "Gallager A" Decoding Method for the BSC
The "Gallager A" decoding method for the BSC was first described by R. G. Gallager, in "Low-density parity check codes," Vol. 21, Research Monograph Series, MIT Press, 1963. It works as follows. As in BP decoding for the BEC, there are two classes of messages: messages from variable nodes to check nodes; and messages from check nodes to the variable nodes. However, the meaning of the messages is slightly different.
The decoding method is initialized by each variable node sending a message to every connected check node. The message is 0 or 1 depending on the bit value that was received through the channel. In turn, each check node sends a message to connected variable nodes. This message is 0 or 1 and is interpreted as a command about the state that the variable node should be in. In particular, the message is the modulus-2 sum of the messages that the check node receives from the other variable nodes to which it is connected.
In further iterations of the Gallager A decoding method, each variable node continues to send the received bit value to the connected check nodes, unless the variable node receives sufficient contradictory messages. In particular, if all the other connected check nodes, aside from the check node that is the recipient of the message, send a variable node a message that contradicts the bit value it received from the channel, then the variable node sends the message it received from all the other check nodes.
The Gallager A decoding method is iterated until some criterion, like a fixed number of iterations, is reached. At every iteration, each bit is decoded to the bit value that the variable node receives from the channel, unless all the incoming messages from the connected check nodes agree on the other bit value.
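A minimal sketch of the Gallager A rules follows (names are ours). Note that on very small graphs the method can misbehave, because the "all other checks contradict" rule assumes checks see mostly independent evidence; the tiny hypothetical code below, in which bit 2 participates in three checks and the other bits in one each, is chosen so that a single inverted degree-one bit is corrected:

```python
def gallager_a(checks, received, iters=10):
    """Gallager A decoding over the BSC. A variable node keeps sending its
    received bit unless every *other* connected check says the opposite;
    a check sends each bit the mod-2 sum of the other incoming bits."""
    n = len(received)
    var_checks = [[a for a, c in enumerate(checks) if i in c] for i in range(n)]
    v2c = {(a, i): received[i] for a, c in enumerate(checks) for i in c}
    for _ in range(iters):
        c2v = {(a, i): sum(v2c[(a, j)] for j in checks[a] if j != i) % 2
               for a, c in enumerate(checks) for i in c}
        for a, c in enumerate(checks):
            for i in c:
                others = [c2v[(b, i)] for b in var_checks[i] if b != a]
                flip = bool(others) and all(m != received[i] for m in others)
                v2c[(a, i)] = 1 - received[i] if flip else received[i]
    # decode: the received value, unless all incoming messages contradict it
    return [1 - received[i]
            if var_checks[i] and all(c2v[(a, i)] != received[i]
                                     for a in var_checks[i])
            else received[i]
            for i in range(n)]

# Hypothetical code: checks {1,2}, {2,3}, {2,4} (1-indexed in this comment)
checks = [[0, 1], [1, 2], [1, 3]]
print(gallager_a(checks, [0, 0, 0, 0]))  # already a codeword
print(gallager_a(checks, [1, 0, 0, 0]))  # single inverted bit is corrected
```

In the second call, the single check connected to bit 1 reports the mod-2 sum of the other (correct) bits, contradicting the received 1, so the bit is decoded back to 0; the hub bit keeps its received value because its three checks disagree.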
Density Evolution
Density evolution is a method for evaluating a parity check code that uses iterative message-passing decoding as described above. Specifically, density evolution can determine the average bit error rate of a code. Density evolution is now described for the case of BP decoding in the BEC. Similar density evolution methods have been derived for other iterative message-passing decoders such that each message can only be in a finite number of discrete states, for example the Gallager A decoding method, as described above, or the quantized belief propagation decoders. In general, the density evolution methods are represented as sets of rules relating the probabilities that each of the messages used in the decoder is in each of its states.
For the case of the density evolution method for BP decoding in the BEC, a probability, averaged over all possible received blocks, that each message is an erasure, is considered. The iterations are indexed by an integer t. A real number p_ia(t), which represents the probability that a message m_ia is an erasure at iteration t, is associated with each message m_ia from variable nodes to check nodes. Similarly, a real number q_ai(t), which represents the probability that the message m_ai is an erasure at iteration t, is associated with each message m_ai from check nodes to variable nodes. In the density evolution method, probabilities p_ia(t) and q_ai(t) are determined in a way that is exact, as long as the bipartite graph representing the error-correcting code has no loops.
A "rule" that determines the probability p_ia(t + 1) is
p_ia(t + 1) = x_i Π_{j ∈ N(i)\a} q_ji(t)   (3)
where j ∈ N(i)\a represents all check nodes directly connected to variable node i, except for the check node a. Note that in density evolution, this rule includes operands x and q, and a multiplication operator. This rule can be derived from the fact that for a message m_ia to be an erasure, the variable node must be erased during transmission, and all incoming messages from other check nodes must be erasures as well. Of course, if the incoming messages q_ji(t) are statistically dependent, then the rule is not correct. However,
in the density evolution method, such dependencies are systematically ignored. In a bipartite graph with no loops, each incoming message is in fact independent of all other messages, so the density evolution method is exact.
Similarly, the rule
q_ai(t) = 1 - Π_{j ∈ N(a)\i} (1 - p_ja(t))   (4)
can be derived from the fact that a message m_ai can only be in a non-erasure state when all incoming messages are in a non-erasure state, again ignoring statistical dependencies between the incoming messages p_ja(t).
The density evolution rules (3) and (4) are evaluated by iteration. The appropriate initialization is
p_ia(t = 1) = x_i
for all messages from variable nodes to check nodes. At each iteration t, it is possible to determine b_i(t), which is the probability of a failure to decode at variable node i, from the rule
b_i(t) = x_i Π_{a ∈ N(i)} q_ai(t)   (5)
In other words, the rules (3, 4, and 5) enable one to evaluate the code in terms of its bit error rate.
Exact Solution of a Small Code
As stated above, the density evolution rules (3, 4, and 5) are exact when the code has a bipartite graph representation without loops. It is very important to understand that the density evolution rules are not exact when a bipartite graph represents a practical code that does have loops, because in that case, the BP messages are not independent, in contradiction with the assumptions underlying rules (3, 4, and 5).
Consider, as an example of a bipartite graph with no loops, the error-correcting code defined by a parity check matrix
A = [ 1 1 0 0 ]
    [ 0 1 1 1 ]   (6)
and represented by a corresponding bipartite graph shown in Figure 2. This code has four codewords: 0000, 0011, 1101, and 1110. If the 0000 message is transmitted, then there are sixteen possible received messages: 0000, 000?, 00?0, 00??, 0?00, and so on. The probability of receiving a message with n_e erasures is x^(n_e) (1 - x)^(4 - n_e), where we have taken all the x_i to be equal to the same value x.
It is easy to determine the exact probability that a given bit remains an erasure after t iterations of decoding have completed by summing over the decoding results for all the sixteen possible received messages weighted by their probabilities. For example, after decoding to convergence, the first bit will only fail to decode to a 0 when one of the following messages is received: ???0, ??0?, or ????, so the exact probability that the first bit will not decode to a 0 is 2x^3(1 - x) + x^4 = 2x^3 - x^4.
If the focus is on the last bit, then the bit will ultimately be correctly decoded, unless one of the following messages is sent: 00??, 0???, ?0??, ??0? or ????. Therefore, the overall probability that the fourth bit is not correctly decoded is x^2(1 - x)^2 + 3x^3(1 - x) + x^4 = x^2 + x^3 - x^4.
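These exact error rates can be verified by brute force: enumerate all 2^4 erasure patterns of the all-zeros codeword, decode each one with a fixed number of parallel BP iterations, and sum the probabilities of the patterns that leave the bit erased. A sketch (code structure ours; the checks used, {1,2} and {2,3,4}, are the reconstruction of this small code):

```python
from itertools import product

def decode_parallel(checks, bits, iters):
    """One BP iteration resolves, simultaneously, every check that has
    exactly one erased participant (parallel message schedule)."""
    bits = list(bits)
    for _ in range(iters):
        new = list(bits)
        for c in checks:
            erased = [i for i in c if bits[i] == '?']
            if len(erased) == 1:
                new[erased[0]] = sum(bits[i] for i in c if i != erased[0]) % 2
        bits = new
    return bits

def exact_bit_failure(checks, n, bit, x, iters):
    """Exact probability that `bit` is still erased after `iters`
    iterations, summed over all 2^n erasure patterns of the all-zeros
    codeword, each weighted by x^(n_e) (1-x)^(n-n_e)."""
    total = 0.0
    for pattern in product([0, '?'], repeat=n):
        ne = pattern.count('?')
        if decode_parallel(checks, pattern, iters)[bit] == '?':
            total += x ** ne * (1 - x) ** (n - ne)
    return total

checks = [[0, 1], [1, 2, 3]]
x = 0.3
print(exact_bit_failure(checks, 4, 3, x, iters=1))  # ~ 2x^2 - x^3 = 0.153
print(exact_bit_failure(checks, 4, 3, x, iters=2))  # ~ x^2 + x^3 - x^4 = 0.1089
```

After one iteration the 0?0? pattern is still undecoded at the fourth bit, which is why the one-iteration value is larger; after two iterations it resolves, reproducing the closed-form expression above.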
In the density evolution method, applied to this code, the values for the following variables:
p_11(t), p_21(t), p_22(t), p_32(t), p_42(t), q_11(t), q_12(t), q_22(t), q_23(t), q_24(t), b_1(t), b_2(t), b_3(t), b_4(t), are determined by
p_11(t) = x   (7)
p_21(t + 1) = x q_22(t)   (8)
p_22(t + 1) = x q_12(t)   (9)
p_32(t) = x   (10)
p_42(t) = x   (11)
q_11(t) = p_21(t)   (12)
q_12(t) = p_11(t)   (13)
q_22(t) = 1 - (1 - p_32(t))(1 - p_42(t))   (14)
q_23(t) = 1 - (1 - p_22(t))(1 - p_42(t))   (15)
q_24(t) = 1 - (1 - p_22(t))(1 - p_32(t))   (16)
and
b_1(t) = x q_11(t)   (17)
b_2(t) = x q_12(t) q_22(t)   (18)
b_3(t) = x q_23(t)   (19)
b_4(t) = x q_24(t)   (20)
with the initial conditions that
p_11(t = 1) = p_21(t = 1) = p_22(t = 1) = p_32(t = 1) = p_42(t = 1) = x   (21)
Solving these rules yields the exact bit error rates at every iteration. For example, one can find that b_4(t = 1) = 2x^2 - x^3 and b_4(t ≥ 2) = x^2 + x^3 - x^4. These results correspond to the fact that the 00??, 0?0?, 0???, ?0??, ??0? and ???? messages will not be decoded to a zero at the fourth bit after the first iteration, so that the probability of decoding failure at the fourth bit is 2x^2(1 - x)^2 + 3x^3(1 - x) + x^4 = 2x^2 - x^3; but after two or more iterations, the 0?0? message is decoded correctly, so the probability of decoding failure at the fourth bit is x^2(1 - x)^2 + 3x^3(1 - x) + x^4 = x^2 + x^3 - x^4.
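Rules (3)-(5) can be iterated mechanically on any bipartite graph; the sketch below (names are ours) reproduces the b_4 values just derived when run on the small code with all x_i = x = 0.3:

```python
import math

def density_evolution(checks, x, iters):
    """Density evolution for BP decoding on the BEC (rules (3)-(5)).
    `checks` lists variable indices per check; `x` holds the per-bit
    erasure probabilities. Exact only when the graph has no loops."""
    n = len(x)
    var_checks = [[a for a, c in enumerate(checks) if i in c] for i in range(n)]
    p = {(i, a): x[i] for a, c in enumerate(checks) for i in c}  # p_ia(1) = x_i
    b = list(x)
    for _ in range(iters):
        # rule (4): q_ai(t) = 1 - prod_{j in N(a)\i} (1 - p_ja(t))
        q = {(a, i): 1.0 - math.prod(1.0 - p[(j, a)] for j in c if j != i)
             for a, c in enumerate(checks) for i in c}
        # rule (5): b_i(t) = x_i prod_{a in N(i)} q_ai(t)
        b = [x[i] * math.prod(q[(a, i)] for a in var_checks[i]) for i in range(n)]
        # rule (3): p_ia(t+1) = x_i prod_{j in N(i)\a} q_ji(t)
        p = {(i, a): x[i] * math.prod(q[(c2, i)] for c2 in var_checks[i] if c2 != a)
             for a, c in enumerate(checks) for i in c}
    return b

checks = [[0, 1], [1, 2, 3]]  # the small code: checks {1,2} and {2,3,4}
print(density_evolution(checks, [0.3] * 4, 1)[3])  # ~ 2x^2 - x^3 = 0.153
print(density_evolution(checks, [0.3] * 4, 2)[3])  # ~ x^2 + x^3 - x^4 = 0.1089
```

Because this graph has no loops, these values match the brute-force enumeration over all sixteen received messages exactly, as the text asserts.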
The Large Block-Length Limit
If all local neighborhoods in the bipartite graph are identical, the density evolution rules can be simplified. For example, consider a regular Gallager code, which is represented by a sparse random parity check matrix characterized by the restriction that each row contains exactly d_c ones, and each column contains exactly d_v ones. In that case, it can be assumed that all the p_ia(t) are equal to the same value p(t), all the q_ai(t) are equal to the same value q(t), and all the b_i(t) are equal to the same value b(t). Then,
p(t + 1) = x q(t)^(d_v - 1)   (22)
q(t) = 1 - (1 - p(t))^(d_c - 1)   (23)
and
b(t) = x q(t)^(d_v)   (24)
which are the density evolution rules for (d_v, d_c) regular Gallager codes, valid in the N → ∞ limit.
The intuitive reason that these rules are valid, in the infinite block-length limit, is that as N → ∞, the size of typical loops in the bipartite graph representation of a regular Gallager code goes to infinity. As a result, all incoming messages to a node are independent, and a regular Gallager code behaves like a code defined on a graph without loops. Solving rules (22, 23, and 24) for specific values of d_v and d_c yields a solution p(t → ∞) = q(t → ∞) = b(t → ∞) = 0, below a critical erasure
value, known as the "threshold," of x_c. This means that decoding is perfect for erasure probabilities below the threshold. Above x_c, b(t → ∞) has a non-zero limit, which corresponds to decoding failures. The value x_c is easy to determine numerically. For example, if d_v = 3 and d_c = 5, then x_c ≈ 0.51757.
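The threshold behavior is easy to observe numerically by iterating rules (22) and (23); a sketch (names and iteration count are ours):

```python
def de_fixed_point(x, dv=3, dc=5, iters=5000):
    """Iterate the regular-Gallager-code density evolution rules:
    q = 1 - (1 - p)^(dc - 1), then p <- x * q^(dv - 1), from p = x."""
    p = x
    for _ in range(iters):
        q = 1.0 - (1.0 - p) ** (dc - 1)
        p = x * q ** (dv - 1)
    return p

print(de_fixed_point(0.50))  # below x_c ~ 0.51757: erasures iterate to zero
print(de_fixed_point(0.53))  # above x_c: a non-zero fixed point (failures)
```

Scanning x for the largest value at which the iteration still collapses to zero recovers the threshold x_c numerically.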
These determinations of the threshold at infinite block-length can be generalized to irregular Gallager codes, or other codes like irregular repeat-accumulate codes that have a finite number of different classes of nodes with different neighborhoods. In this generalization, one can derive a system of rules, typically with one rule for the messages leaving each class of node. By solving the system of rules, one can again find a critical threshold x_c, below which decoding is perfect. Such codes can thus be optimized, in the N → ∞ limit, by finding the code that has maximal noise threshold x_c. This is the way the prior-art density evolution method has been utilized to evaluate error-correcting codes.
Drawbacks of the Density Evolution Method
Unfortunately, the conventional density evolution method is erroneous for codes with finite block-lengths whose graphical representation has loops. One might think that it is possible to solve rules (3, 4 and 5) for any finite code, and hope that ignoring the presence of loops is not too important a mistake. However, this does not work out, as can be seen by considering regular Gallager codes. Rules (3, 4, and 5) for a finite block-length regular Gallager code have exactly the same solutions as one would find in the infinite-block-length limit, so one would not predict any finite-size effects. However, it is known that the real performance of finite-block-length regular Gallager codes is considerably worse than that predicted by such a naive method. In practice, the application of the density evolution method is limited to codes whose block-length is very large, so that the magnitude of the error is not too great.
Therefore, there is a need for a method to correctly evaluate finite length error- correcting codes that do not suffer from the problems of the prior art evaluation methods.
DISCLOSURE OF INVENTION
Summary of the Invention
The present invention provides a method for evaluating the performance of an error-correcting code, represented by an arbitrary generalized parity check matrix, and decoded by an iterative message-passing method that sends discrete messages, subject to a memory-less binary erasure channel or a memory-less binary symmetric channel.
Associated with the i-th variable bit of the code is a channel noise level x_i, which represents the probability of erasure for that bit for the binary erasure channel, or the inversion probability for that bit for the binary symmetric channel.
A set of rules corresponding to the prior-art density evolution method for evaluating the performance of the error-correcting code is provided. In the prior art density evolution method, the outputs of the rules are evaluated as real numbers during each iteration. The real number represents the probability that a message is in a particular state during an iteration of the decoder.
The invention transforms the prior-art density evolution rules into "projective" rules. The transformation replaces the real numbers, representing the possible states of each message at every iteration of the decoder, by "projected" polynomials. A projected polynomial is a polynomial in the x_i, where no term of the polynomial has an exponent of order greater than one. Furthermore, ordinary operators in the set of rules corresponding to the prior art density evolution method are replaced with "projective" operators. In a projective operation on two projected polynomials, any resulting exponents of an order greater than one are reduced down to the order of one.
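A minimal sketch of one projective operation (the representation is ours): store a projected polynomial as a map from each term's set of variable indices to its integer coefficient; multiplying two terms then unions their index sets, which is exactly the reduction of any exponent above one back down to one:

```python
def projective_multiply(f, g):
    """Projective product of two projected polynomials. Each polynomial
    maps a frozenset of variable indices (a term's support) to an integer
    coefficient; x_i * x_i collapses to x_i via the set union."""
    result = {}
    for s1, c1 in f.items():
        for s2, c2 in g.items():
            s = s1 | s2  # exponents above one are reduced to one
            result[s] = result.get(s, 0) + c1 * c2
    return {s: c for s, c in result.items() if c != 0}

# (x1 + x2)^2 = x1^2 + 2 x1 x2 + x2^2, which projects to x1 + 2 x1 x2 + x2:
f = {frozenset([1]): 1, frozenset([2]): 1}
print(projective_multiply(f, f))
```

Integer coefficients may be negative, so terms such as the -x1 x2 x3 in the earlier example cancel correctly under projective addition of coefficients.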
The projective transformation results in a set of iterative update rules for the projected polynomials. A set of parameters of interest, including the number of iterations, and the variable bits of interest, or specific messages of interest in the decoding method, are specified. Iterating for the specified number of iterations provides exact results for the error rates for the code at the bits or messages of interest.
For the case of the memory-less BEC, a more efficient implementation can be used, wherein each projected polynomial is only represented by a small number of leading positive terms, and projective multiplication is modified so that only positive leading terms in a projected polynomial are retained. Such a representation is called a "stopping set representation," and is exact for the case of the memory-less BEC.
The projective method can require an amount of memory and computation time that grows exponentially with the block-length of the code. To obtain results using a reasonable amount of memory and computation time for codes with long block-lengths, the projected polynomials are approximated by projected polynomials containing a limited number of terms. A lower bound on the probability represented by a projected polynomial is obtained by retaining only a sub-set of the terms in the exact projected polynomial. For the BEC, a lower bound is obtained even more efficiently by retaining only a sub-set of the terms in the stopping set representation of the projected polynomial. For the BEC, an upper bound is obtained by using the stopping set representation of projected polynomials, and successively replacing pairs of terms by another term containing the intersection of the nodes involved in the pair of terms.
In a practical application, the present invention can be used to optimize error-correcting codes by searching for the error-correcting code of a specified data block size and transmission rate with the best performance in terms of decoding failure as a function of noise. The decoding failure rates for transmitted variable bits are used to guide the search for the optimal code. Exact patterns of noisy transmitted bits that cause a decoding failure can also be obtained by the method, and are also used to guide the search for an optimal code.

BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram for the problem of decoding an encoded message transmitted through a noisy channel;
Figure 2 is a bipartite graph representing a simple error-correcting code;
Figure 3 is a block diagram of a projective analysis method for evaluating the performance of error-correcting codes according to the invention;
Figure 4 is a bipartite graph to be evaluated according to the invention;
BEST MODE FOR CARRYING OUT THE INVENTION
Introduction
Our invention evaluates the performance of error-correcting codes (ECC) decoded by a message-passing decoding process for which every message can be in a finite number of different discrete states. Our method tracks, at each iteration of a decoding process, the probability that each message sent by the decoding process is in any of its possible states. In contrast to the prior-art density evolution method, our method is exact for any code represented by a parity-check matrix, including codes whose bipartite graph representation contains cycles (loops).
Our method represents each probability of interest by a "projected polynomial," in contrast to the prior-art density evolution method, which uses real numbers to represent the probabilities. By a projected polynomial we mean a polynomial in the variables x_i, where no term of the polynomial has any exponent of order greater than one.
Moreover, the operations that are performed on the projected polynomials must be "projective" in the sense that any terms in the resulting polynomials that contain exponents greater than one have those exponents reduced to one. The probabilities of interest are retained as projected polynomials until their actual value is desired, at which point they are evaluated by replacing all the parameters x_i with the appropriate values.
System Overview
Figure 3 shows the projective evaluation method 300 of our invention. A given error-correcting code 301 is represented by a generalized parity check matrix. A given channel 302 is either the binary erasure channel (BEC) or the binary symmetric channel (BSC). A given decoder 303 uses an iterative message-passing method such that each message can be in a finite number of discrete states. For the given code 301, channel 302, and decoder 303, we construct 310 a set of density evolution rules 311 representing a density evolution (D.E.) method for the code to be decoded after transmission through the channel.
This set of iterative density evolution rules 311 represents relations between probabilities of states of every message in the decoder 303. The set of rules also depends explicitly on parameters x_i, which represent the probability that the ith bit is erased in the BEC, or inverted in the BSC. The set of rules 311 derived for the density evolution method, as described above, is transformed 320 into a corresponding set of projective analysis (P.A.) rules 321 for the method and system 300 according to our invention.
The transformation 320 replaces each real-valued variable, that is, an operand representing a probability in the density evolution rules 311, with a transformed variable in the projective rules 321. The operands in the projective analysis rules 321 are in the form of "projected polynomials," described in further detail below. Also, each operator in the density evolution rules 311 is replaced with a "projective" operation that is also further described below.
The resulting projective rules 321 are then used to evaluate 330 the code 301, given the channel 302 and the decoder 303. The projective rules 321 can be used to obtain error rates at any bit, or the probabilities that any message is in any of its states, at any iteration of the decoding process. If desired, further information can be obtained, including the exact patterns of bit erasures or inversions that cause a decoding failure.
The evaluation 330 iteratively applies the rules 321 to update the values of the projected polynomials. The projected polynomials represent the probability that a message will be in any of its states, or the probability that any of the bits will be decoded erroneously. The iteration stops when a termination condition is reached. Then, error rates are evaluated as real numbers for selected or all bits of the code 301. Thus, for example, bit error rates averaged over all bits can be determined. The results 360 of the evaluation 330 can then be passed to an optimizer 370 to generate an optimized error-correcting code 390, as described in further detail below.
Projected Operands and Projective Operations
We first describe the concepts of a "projected polynomial" and "projective multiplication" as applied to projected polynomials. A projected polynomial is a polynomial such that no term of the polynomial has an exponent of order greater than one. For example, m(x1,x2,x3) = 1 - x1x2 + x2x3 - 2x1x2x3 is a projected polynomial in x1, x2, and x3, but p(x1,x2,x3) = 1 - x1x2 + x2x3 - x1^2x2x3 is not, even though it is a polynomial, because in the x1^2x2x3 term, the x1 variable is squared. The addition of two projected polynomials necessarily results in another projected polynomial. However, the multiplication of two projected polynomials might not result in a projected polynomial. To take a simple example, x1 and x1x2 are both projected polynomials, but their product, using ordinary multiplication, is x1^2x2, which is not a projected polynomial. Therefore, we use "projective multiplication" when multiplying projected polynomials. In this operation, we multiply two projected polynomials in the ordinary way, and then reduce any resulting exponent greater than one down to one.
The operation of reducing exponents in a polynomial is called a "projection" because it reduces the space of polynomials down to the sub-space of projected polynomials. We denote the projective multiplication of two projected polynomials m1 and m2 by m1 ⊗ m2, and call such a product a "projective multiplication."
Projected polynomials form a commutative ring under ordinary addition and projective multiplication. This means that we can use the commutative, associative, and distributive laws.
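As an illustrative sketch (our own construction, not part of the patent's disclosure), projective multiplication can be implemented by representing each projected polynomial as a map from sets of variable indices to integer coefficients; taking the union of index sets when multiplying terms is exactly the projection that reduces exponents above one down to one. The function names `pmul` and `padd` are assumptions for the example.

```python
# A projected polynomial is stored as {frozenset_of_variable_indices: coeff};
# the empty frozenset is the constant term.
def pmul(m1, m2):
    """Projective multiplication: x_i * x_i collapses to x_i via set union."""
    out = {}
    for s1, c1 in m1.items():
        for s2, c2 in m2.items():
            s = s1 | s2                       # union = reduce exponents to one
            out[s] = out.get(s, 0) + c1 * c2
    return {s: c for s, c in out.items() if c}

def padd(m1, m2):
    """Ordinary addition of projected polynomials."""
    out = dict(m1)
    for s, c in m2.items():
        out[s] = out.get(s, 0) + c
        if not out[s]:
            del out[s]
    return out

x1 = {frozenset({1}): 1}                      # the polynomial x1
x1x2 = {frozenset({1, 2}): 1}                 # the polynomial x1*x2
one_minus_x1 = {frozenset(): 1, frozenset({1}): -1}
```

With this encoding, x1 ⊗ x1x2 evaluates to x1x2 rather than x1^2x2, matching the projection described above.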
Projective Evaluation for Determining Probabilities of Errors
Our projective analysis is useful for determining the probability of events that are possible states of statistically independent binary random variables, for example, in the practical application of evaluating and optimizing error-correcting codes as described herein.
Consider N statistically independent binary random variables, Y1, Y2, ..., YN, which take on the values 0 or 1. As an example, in the application of the evaluation of error-correcting codes in the BEC as described by our method, the discrete values 1 or 0 indicate whether or not a received bit was erased.
We denote the probability that Y_i = 1 by x_i and the probability that Y_i = 0 by 1 - x_i.
There are 2^N elements of the joint sample space for Y1, Y2, ..., YN, each including a setting for each random variable. We refer to the elements of the joint sample space as "fundamental events."
Because the binary random variables are statistically independent, the probability of a fundamental event is a multiplication of terms such as x_i or (1 - x_i). For example, if N = 3, one of the eight fundamental events is (Y1 = 1, Y2 = 0, Y3 = 1), and its probability is x1(1 - x2)x3. The probability of a fundamental event is always a projected polynomial as a function of the variables x_i.
We denote the probability of a fundamental event F by P(F). It will be useful to note that for any fundamental event F,

P(F) ⊗ P(F) = P(F), (25)

and that for any two different fundamental events F1 and F2,

P(F1) ⊗ P(F2) = 0. (26)

For example, if N = 1, P(Y1 = 1) = x1 and P(Y1 = 0) = 1 - x1, while x1 ⊗ (1 - x1) = 0, x1 ⊗ x1 = x1, and (1 - x1) ⊗ (1 - x1) = 1 - x1, which is consistent with rules (25) and (26).
Following normal usage in probability theory, we define an "event" E to be a set of fundamental events. For example, when N = 3, the event that Y1 = Y2 = Y3 consists of the set of two fundamental events (Y1 = 0, Y2 = 0, Y3 = 0) and (Y1 = 1, Y2 = 1, Y3 = 1). The probability of an event P(E) is the sum of projected polynomials; for example, P(Y1 = Y2 = Y3) = x1x2x3 + (1 - x1)(1 - x2)(1 - x3). In general, if the event E includes the set of k fundamental events F1, F2, ..., Fk, then

P(E) = P(F1) + P(F2) + ... + P(Fk).
The notation E1 ∩ E2 denotes the event such that both the events E1 and E2 occur. The event E1 ∩ E2 is the set of fundamental events that belong to the intersection of the fundamental events in E1 and in E2. Subject to our assumed scenario that fundamental events are the joint states of statistically independent binary variables, the probability of the event E1 ∩ E2 can be obtained using our projective multiplication:

P(E1 ∩ E2) = P(E1) ⊗ P(E2). (27)

The correctness of rule (27) is demonstrated by expanding the events E1 and E2 into their fundamental events and noting that, according to rules (25) and (26), the only surviving cross-terms are those for which the fundamental event in E1 is the same as the fundamental event in E2. Thus, the surviving terms precisely give the sum of the probabilities of the fundamental events that belong to the intersection of the events E1 and E2.
A basic definition of probability theory is that two events E1 and E2 are statistically independent if the probability that they both occur is the multiplication of their probabilities:

P(E1 ∩ E2) = P(E1)P(E2). (28)
Rule (28) is only valid for statistically independent events, but rule (27), using a projective multiplication instead of an ordinary multiplication, is valid for any two events, subject to our scenario that events are constructed from fundamental events that are joint states of N statistically independent binary variables.
We denote by E1 ∪ E2 the event where either of the events E1 or E2, or both, occur. The event E1 ∪ E2 includes the fundamental events that belong to the union of the fundamental events in E1 and in E2. Subject again to our assumed scenario that fundamental events are the joint states of statistically independent binary variables, the probability of the event E1 ∪ E2 is

P(E1 ∪ E2) = P(E1) + P(E2) - P(E1) ⊗ P(E2). (29)
Rule (29) follows from rule (27) and the general laws of probability.
Together, rules (27) and (29) exactly determine the probabilities that two events will both occur, or that at least one of them will occur, even if the two events are statistically dependent. These rules are sufficient to exactly determine the performance of error-correcting codes decoded by iterative decoding methods in the BEC or BSC.
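Rules (27) and (29) can be checked numerically. The following sketch (our own construction, not from the patent) verifies them by brute-force enumeration for two statistically dependent events over N = 2 independent binary variables; the helper `event_prob` and the particular events chosen are assumptions for the example.

```python
from itertools import product

def event_prob(event, x):
    """Exact probability of an event by summing its fundamental events."""
    total = 0.0
    for y in product((0, 1), repeat=len(x)):
        if event(y):
            p = 1.0
            for yi, xi in zip(y, x):
                p *= xi if yi else 1 - xi
            total += p
    return total

x = (0.3, 0.7)
E1 = lambda y: y[0] == 1            # P(E1) = x1
E2 = lambda y: y[0] == y[1]         # P(E2) = x1x2 + (1-x1)(1-x2)

p_and = event_prob(lambda y: E1(y) and E2(y), x)
p_or = event_prob(lambda y: E1(y) or E2(y), x)

# Rule (27): the projective product x1 (x) (x1x2 + (1-x1)(1-x2)) collapses
# x1*x1x2 down to x1x2 and annihilates the x1*(1-x1) cross term.
proj_and = x[0] * x[1]
# Rule (29): P(E1 u E2) = P(E1) + P(E2) - P(E1) (x) P(E2).
proj_or = x[0] + (x[0] * x[1] + (1 - x[0]) * (1 - x[1])) - proj_and
```

Note that ordinary rule (28) would give 0.3 * 0.42 = 0.126 here, not the exact 0.21, because the two events are dependent.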
Projective Evaluation for Error-Correcting Codes Decoded by BP in the BEC
Our method 300 can be applied to any parity-check code, decoded by an iterative message-passing decoding method, with a finite number of discrete states for each message, in the memory-less BEC and memory-less BSC channels.
Because our method has an important practical application in belief propagation decoding of parity-check codes in the memory-less BEC, we now describe our method for that case. We index the N variable nodes of a code by the letter i, and label the erasure rate of the ith node by x_i. Note that our method distinguishes the erasure rate at each node, even when, as is normally the case, the erasure rates are actually equal.
We consider the average probability of failure for BP decoding over many blocks. The iterations are indexed by an integer t. As in the density evolution method, we use the variable p_ia(t) to represent the probability that the message m_ia is an erasure at iteration t.

However, in the prior art density evolution method, the probability p_ia(t) is a real number, whereas in our projective method, p_ia(t) is a projected polynomial as a function of the x_i. Similarly, we use a projected polynomial q_ai(t) to represent the probability that the message m_ai is an erasure at iteration t, and the projected polynomial b_i(t) to represent the probability that the ith node is decoded as an erasure at iteration t.
In contrast with the prior art, our evaluation method 300 is exact for any parity-check code when the probabilities in the rules 321 are represented by projected polynomials, rather than real numbers, and when ordinary multiplications are replaced by projective multiplications. Thus, to determine the values of the projected polynomials p_ia(t) from the projected polynomials q_bi(t), we use the analog of density evolution rule (3):

p_ia(t + 1) = x_i ⊗ ∏_{b ∈ N(i)\a} q_bi(t), (30)

where the multiplication is a projective multiplication, as defined above. Intuitively, rule (30) can be understood as follows. A variable node i sends an erasure message to a check node a when it is erased by the channel, which occurs with probability x_i, and all incoming messages from neighboring check nodes other than a are also erasures. Thus, for an erasure to occur, a number of different events must all occur, and therefore rule (27) calls for a projective multiplication. Rule (30) is correct even when the different incoming messages q_bi(t) are statistically dependent.
The other necessary rules are similarly transformed from the prior art density evolution versions:

q_ai(t) = 1 - ∏_{j ∈ N(a)\i} (1 - p_ja(t)), (31)

and

b_i(t) = x_i ⊗ ∏_{a ∈ N(i)} q_ai(t), (32)

where the multiplications are now understood to be projective multiplications. Rule (31) is correct because a message from a check node a to a variable node i is an erasure when any of the other incoming messages to node a is an erasure. Rule (32) is correct because a variable node is decoded incorrectly only if it is erased by the channel and all incoming messages are also erasures.
Projective Evaluation

To evaluate 330 the projective rules 321, we apply rules (30, 31, and 32) and specify a set of input parameters 331. The set of parameters of interest can include the number of iterations L, the set of variable bits of interest B, or the set of specific messages of interest M in the decoding method 303.
We initialize the projected polynomials p_ia(t = 1) = x_i. We then iterate the rules (31), (32), and then again (30), (31), and (32), in that order, to obtain new projected polynomials at every iteration.
To convert the projected polynomials into probabilities represented by real numbers, we assign the true value of the BEC erasure rate x to all the x_i, and evaluate the resulting polynomials. Note that in the prior art density evolution, the probabilities are calculated as real numbers at each iteration.
The evaluation 330 produces exact results 360 for the code 301. Thus, we can input a code 301, channel 302, and decoder 303, and obtain as output an exact prediction of the bit error rate at any selected nodes of the bipartite graph, i.e., bit positions.
Example Code
Figure 4 shows how the projective method 300 can correctly evaluate the performance of an error-correcting code that is evaluated incorrectly by the prior art density evolution method. We consider a very small example code defined by the parity-check matrix
H = | 1 1 |
    | 1 1 |. (33)
This parity-check code has two variable bits 401, and two constraining check bits 402, one of which is clearly redundant. The redundant constraint has been introduced to induce a loop in the bipartite graph 400 for the code. Larger practical codes normally have loops even when there are no redundant constraints.
Exact Results for the Example Code
We exactly evaluate the performance of BP decoding by explicitly averaging over all possible received messages for this code in the BEC, in order to compare the prior art density evolution method and the projective method according to the invention. Such an exact evaluation is of course only possible for small codes. However, the utility of our method also extends to larger error-correcting codes for which an exact evaluation by explicitly summing over all possible received messages is impractical.
If a "00" block is transmitted, four possible blocks can be received after transmission through the BEC: 00, 0?, ?0, and ??. The first bit is erased with probability x1, and the second bit is erased with probability x2. These probabilities of erasure may be, and normally are, equal, but we allow them to be different so that we can compare in detail with the results from the projective method.

The probability of receiving a "00" block is (1 - x1)(1 - x2); the probability of receiving a "0?" block is (1 - x1)x2; the probability of receiving a "?0" block is x1(1 - x2); and the probability of receiving a "??" block is x1x2.
If the "00" block is received, then no message sent by the BP decoding process can ever be an erasure message. If the "??" block is received, all messages sent by the BP decoding process are erasure messages, and the block cannot be decoded. If the "0?" block is received, the first variable node initially sends the two check nodes a '0' message, while the second variable node sends the two check nodes an erasure message. Then, at the first iteration, the check nodes send the first variable node an erasure message, and send the second variable node a '0' message. The block is successfully decoded after one iteration, because the first bit receives a '0' message over the channel, and the second bit receives a '0' message from both check nodes. No further erasure messages are sent. The analysis for the "?0" block is similar, except that the roles of the first and second bit are reversed.
Summing over the four possible received blocks, weighted by their probabilities, we see that the first bit initially sends an erasure to both check nodes with probability x1, but after one or more iterations, the probability that it sends an erasure is x1x2. In terms of our previously established notation,

p_1a(t = 1) = p_1b(t = 1) = x1, (34)

p_1a(t ≥ 2) = p_1b(t ≥ 2) = x1x2. (35)
Summing over the four possible received blocks, we also get, in terms of our previous notation,

p_2a(t = 1) = p_2b(t = 1) = x2, (36)

p_2a(t ≥ 2) = p_2b(t ≥ 2) = x1x2, (37)

q_a1(t = 1) = q_b1(t = 1) = x2, (38)

q_a2(t = 1) = q_b2(t = 1) = x1, (39)

q_a1(t ≥ 2) = q_b1(t ≥ 2) = q_a2(t ≥ 2) = q_b2(t ≥ 2) = x1x2, (40)

b_1(t ≥ 1) = b_2(t ≥ 1) = x1x2. (41)
Comparing Density Evolution and Projective Analysis Results for the Small Example Code
Now we confirm that our projective method reproduces the exact results for this example. We initialize:

p_1a(t = 1) = p_1b(t = 1) = x1, (42)

p_2a(t = 1) = p_2b(t = 1) = x2, (43)

which correspond to rules (34) and (36). Then, using the projective rule (31), we obtain

q_a1(t = 1) = q_b1(t = 1) = x2, (44)

q_a2(t = 1) = q_b2(t = 1) = x1, (45)

in agreement with rules (38) and (39). Using projective rule (32), we obtain

b_1(t = 1) = b_2(t = 1) = x1x2, (46)

in agreement with rule (41). Note that already at this point, our projective method corrects the erroneous result that would be given by the prior art density evolution method. The density evolution method incorrectly gives b_1(t = 1) = x1x2^2 and b_2(t = 1) = x1^2x2; note the powers of two in these polynomials. However, with our projective method, the offending powers of two are reduced back down to one.
The reason that the density evolution method gives the wrong answer is that it does not recognize that the probability that check node a sends an erasure message to variable node 1 is statistically dependent on the probability that check node b sends an erasure message to variable node 1. It incorrectly treats these events as if they were independent, which they are not.
Using projective rule (30), we obtain

p_1a(t = 2) = p_2a(t = 2) = p_1b(t = 2) = p_2b(t = 2) = x1x2, (47)

in agreement with rules (35) and (37). Upon further iterating our rules, we recover all the exact results listed above. In contrast, the results obtained by the density evolution method become progressively more inaccurate with each iteration, and in fact, for x1 = x2 = x < 1, the density evolution method predicts b_1(t) = b_2(t) = x^(2t+1), which approaches zero as t goes to infinity, in contrast with the correct result b_1(t) = b_2(t) = x^2. The correct result is simply a reflection of the fact that for decoding to fail, the "??" block must be received, which occurs with probability x^2.
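The divergence can be checked numerically. The sketch below (our own) iterates the scalar density evolution recursion (the prior-art rules (3), (4), and (5) specialized to this code, where each check has degree two) for x1 = x2 = x, and compares it with the exact failure probability x^2:

```python
x = 0.5
p = x                      # p(1) = x
de_predictions = []
for t in range(1, 6):
    q = p                  # rule (4): q = 1 - (1 - p) for a degree-2 check
    de_predictions.append(x * q * q)   # rule (5), ordinary multiplication
    p = x * q              # rule (3)

exact = x * x              # decoding fails only when the "??" block is received
```

Every density evolution prediction x^(2t+1) undershoots the exact value, and the error worsens with each iteration.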
The density evolution method result becomes progressively more incorrect because it multiplies probabilities of erasures as if they were independent, and therefore with each iteration they incorrectly appear to become less probable. A similar phenomenon occurs in the density evolution analysis of regular Gallager codes.
Recall that the density evolution method incorrectly predicts that for finite block-lengths, regular Gallager codes decode all blocks if the erasure rate is below some threshold. This incorrect and over-optimistic prediction results from ignoring the loops in the graph for the code, and the resulting statistical dependencies between messages.
The reason for the success of our projective method is that it correctly tracks the precise fundamental events that cause each message to send an erasure, and can therefore correctly account for statistical dependencies between the messages.
For example, when q_a1 = q_b1 = x1x2, that means that the check nodes a and b only send variable node 1 an erasure upon the fundamental event that the "??" block was received.
The Stopping Set Representation
When the projective method is applied to BP decoding on the BEC, the projected polynomials can be stored in a more efficient form, which we call the "stopping set representation," and a corresponding change is made in the projective operations.
A general projected polynomial that arises in the projective method for BP decoding on the BEC can be expanded into a sum of terms, each of which is a product of the x_i, with integer coefficients. For example, a typical projected polynomial is m(x1,x2,x3) = x1x3 + x2 - x1x2x3, which expands into three terms. We say that a given term depends on a set of nodes if it contains x_i factors corresponding to those nodes. For example, the term x1x3 depends on nodes 1 and 3.
A "minimal term" in a projected polynomial is one that depends on a set of nodes that does not have any sub-set of nodes such that the sub-set is the set of nodes depended upon by another term in the projected polynomial. For example, in the projected polynomial m(x1,x2,x3) = x1x3 + x2 - x1x2x3, the terms x1x3 and x2 are minimal terms, but the term x1x2x3 is not a minimal term, because it depends on a set of nodes that has sub-sets (either the set of node 2, or the set of nodes 1 and 3) that are the sets of nodes depended upon by other terms in the projected polynomial. The nodes that a minimal term depends on are called a "stopping set" for the projected polynomial.
For all the projected polynomials that arise in the projective method for evaluating BP decoding in the BEC, the full projected polynomial can be reconstructed from a list of the minimal terms.
To reconstruct the full projected polynomial from a list of the minimal terms, we sum each minimal term in the list, then subtract the projective multiplication of every pair of minimal terms, then add the projective multiplication of every triplet of minimal terms, then subtract the projective multiplication of every quadruplet of minimal terms, etc., until we have exhausted all possible combinations of minimal terms in the list. For example, if we have a minimal term list {x1x2, x2x3, x4}, then it should be expanded to the full projected polynomial m(x1,x2,x3,x4) = x1x2 + x2x3 + x4 - x1x2x3 - x1x2x4 - x2x3x4 + x1x2x3x4.
The representation of a projected polynomial in terms of its list of minimal terms, which we call the "stopping set" representation, is clearly much more compact than the ordinary representation. The representation is also a good intuitive one, for stopping sets represent minimal patterns of node erasures that cause a decoding failure for BP decoding on the BEC.
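The reconstruction described above is inclusion-exclusion over the minimal terms. As a sketch (our own illustration, with each stopping set encoded as a frozen set of node indices and each projective product reduced to a union of index sets):

```python
from itertools import combinations

def expand(minimal_terms):
    """Full projected polynomial {frozenset(nodes): coeff} via inclusion-exclusion."""
    poly = {}
    for k in range(1, len(minimal_terms) + 1):
        sign = (-1) ** (k + 1)                 # +, -, +, - over combination sizes
        for combo in combinations(minimal_terms, k):
            s = frozenset().union(*combo)      # projective product of the combo
            poly[s] = poly.get(s, 0) + sign
    return {s: c for s, c in poly.items() if c}

terms = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4})]
full = expand(terms)
```

Applied to the minimal term list {x1x2, x2x3, x4} from the text, this reproduces the seven-term expansion given above.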
When projected polynomials are represented by their stopping set representations, the operations involved in the projective update rules are simplified. In particular, the probability of events that depend on the union of other events can be determined in the stopping set representation by directly taking a union of the minimal term lists representing the other events, and then deleting from the resulting list any terms that are no longer minimal terms. Thus, for rule (31), we determine the expression q_ai(t) = 1 - ∏(1 - p_ja(t)) by taking a union of all the p_ja(t) minimal term lists, and then deleting from the list any terms that are no longer minimal terms. For example, if

q = 1 - (1 - p1)(1 - p2), (48)

and

p1 = {x1x2, x2x3}, (49)

while

p2 = {x1x2x3, x3x4}, (50)

then we find that

q = {x1x2, x2x3, x3x4}, (51)
without needing to bother to expand out the projected polynomials to their full representations and then apply projective multiplication. We often need to take projective multiplications of two projected polynomials in their stopping set representations, as in rules (30) and (32). We simplify the operation as follows. We form a list of the projective multiplications of all pairs of minimal terms from the two projected polynomials, and then delete from the list any terms that are no longer minimal terms. For example, to compute p1 ⊗ p2, with p1 and p2 as given by rules (49) and (50), we generate the list of four terms {x1x2 ⊗ x1x2x3, x1x2 ⊗ x3x4, x2x3 ⊗ x1x2x3, x2x3 ⊗ x3x4}, which simplifies to the list {x1x2x3, x1x2x3x4, x1x2x3, x2x3x4}, and then, by deleting the non-minimal terms, we obtain

p1 ⊗ p2 = {x1x2x3, x2x3x4}. (52)
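Both stopping-set operations reduce to simple set manipulations. As a sketch (ours, with stopping sets as frozen sets of node indices): the union rule merges the minimal term lists and drops non-minimal terms, while the projective product pairs terms, unions their node sets, and again drops non-minimal terms.

```python
def prune(terms):
    """Keep only minimal terms: drop any term with a proper subset in the list."""
    return {t for t in terms if not any(u < t for u in terms)}

def ss_union(*lists):
    """Stopping-set form of a union of events (used for rule (31))."""
    return prune(set().union(*lists))

def ss_mul(l1, l2):
    """Stopping-set form of projective multiplication (rules (30) and (32))."""
    return prune({t1 | t2 for t1 in l1 for t2 in l2})

p1 = {frozenset({1, 2}), frozenset({2, 3})}        # rule (49)
p2 = {frozenset({1, 2, 3}), frozenset({3, 4})}     # rule (50)
```

On the example of rules (48)-(52), `ss_union(p1, p2)` reproduces (51) and `ss_mul(p1, p2)` reproduces (52).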
Obtaining Lower and Upper Bounds Efficiently
The projective method, as we have described it so far, is an exact evaluation method. Unfortunately, the amount of memory required to store all the projected polynomials grows rapidly with the block-length of the code being analyzed. Beyond a certain block-length, which depends on the amount of memory available, it becomes impractical to store the full projected polynomials. Thus, it is useful to approximate the projected polynomials in a way that requires less memory. In general, a variety of such approximations can be made. One might also want to obtain upper and lower bounds on the projected polynomials while reducing the number of terms in each projected polynomial. By upper and lower bounds, we mean a maximum and minimum of a projected polynomial. Lower bounds can be obtained by removing terms that are guaranteed to be positive. Upper bounds can be obtained by replacing pairs of terms with single terms that are guaranteed to have greater magnitude than the pair.
In order to obtain consistent lower (and respectively upper) bounds, one needs to guarantee that when one starts with a lower (and respectively upper) bound for all the projected polynomials, one still has a lower (and respectively upper) bound on the result after applying a projective analysis rule. For some cases, like the case of BP decoding on the BEC which we discuss below, that guarantee is straightforward, but otherwise, one might need to modify the projective analysis rules to ensure that lower and upper bounds are maintained consistently.
As an example, we obtain upper and lower bounds for the case of the projective method for BP decoding on the BEC. In that case, the stopping set representation of projected polynomials can be exploited. To obtain a lower bound on a projected polynomial, we remove some of the minimal terms in its stopping set representation. For example, if m(x1,x2,x3,x4) = {x1x2, x2x3, x1x3x4}, then we lower-bound the projected polynomial by removing any of the three terms. Recall that all the probabilities x_i ultimately have values that are less than or equal to one. Normally, one would therefore choose to remove the terms that depend on a larger number of nodes, for these must be of smaller magnitude and the resulting approximation will be better. Thus, we normally choose to remove the x1x3x4 term from the projected polynomial given above.
Many other systematic lower bound approximation schemes are possible. For example, we can limit each projected polynomial to a fixed number of terms, keeping the terms that have the fewest nodes. Alternatively, we keep all terms that have fewer than some maximum number of nodes. A third possibility is to relate the maximum number of permissible nodes in a term to the number of nodes in the term with the fewest nodes. A practitioner skilled in the art can devise other possibilities.
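Two of these lower-bound schemes can be sketched directly on minimal-term lists (our own illustration, stopping sets as frozen sets of node indices); dropping minimal terms discards erasure patterns, so the evaluated failure probability can only decrease.

```python
def keep_small_sets(terms, max_nodes):
    """Keep only terms whose stopping set has at most max_nodes nodes."""
    return {t for t in terms if len(t) <= max_nodes}

def keep_k_terms(terms, k):
    """Keep the k terms with the fewest nodes."""
    return set(sorted(terms, key=len)[:k])

m = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3, 4})}
```

Applied to the example {x1x2, x2x3, x1x3x4} above, `keep_small_sets(m, 2)` drops exactly the x1x3x4 term, as recommended in the text.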
To obtain an upper bound on a projected polynomial, using fewer terms than in the original projected polynomial, we replace pairs of minimal terms in the stopping set representation with another term that depends only on the nodes in the intersection of the two stopping sets. For example, to obtain an upper bound on the projected polynomial m(x1,x2,x3,x4) = {x1x2, x2x3, x1x3x4}, we remove the two terms x1x2 and x2x3, and replace these terms with the term x2, to obtain the upper-bound projected polynomial {x2, x1x3x4}. The upper bound procedure works in such cases because the event represented by the replacement term contains the events represented by both removed terms, so that its probability is at least as large for 0 ≤ x_i ≤ 1.
Again, many systematic upper bound approximation schemes are possible. For example, we can replace pairs of terms with single terms until no more than some maximum number of terms is left. Alternatively, we can stop replacing pairs of terms when there is no replacement that does not reduce the minimal number of nodes that any term has. Again, many other specific schemes are possible.
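A single replacement step of the upper-bound scheme can be sketched as follows (our own illustration on minimal-term lists as frozen sets of node indices):

```python
def replace_pair(terms, t1, t2):
    """Upper bound: replace minimal terms t1, t2 by their intersection, re-prune."""
    new = (set(terms) - {t1, t2}) | {t1 & t2}
    # Drop any term that is now a superset of another (no longer minimal).
    return {t for t in new if not any(u < t for u in new)}

m = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3, 4})}
ub = replace_pair(m, frozenset({1, 2}), frozenset({2, 3}))
```

On the example above, replacing x1x2 and x2x3 by their intersection x2 yields the upper-bound representation {x2, x1x3x4} from the text.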
If we use lower (and respectively upper) bounds exploiting the stopping set representation for every projected polynomial obtained from every computation in the projective analysis method applied to BP decoding for the BEC, then all resulting probabilities of interest as determined by using the rules are also guaranteed to be lower (and respectively upper) bounds.
This can be understood as follows. The lower (and respectively upper) bounds, using the stopping set representation, always approximate the probability of an event by the probability of another event with strictly fewer (and respectively more) fundamental events. The projective analysis of BP decoding on the BEC only involves probabilities of unions or intersections of events. The union or intersection of events containing fewer (and respectively more) fundamental events must also contain fewer (and respectively more) fundamental events than the union or intersection of the original events. Therefore, it too is a lower (and respectively upper) bound.

Optimizing Better Error-Correcting Codes
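How unions and intersections act on the stopping-set representation can be made explicit. The sketch below is our own illustration (the helper names are hypothetical): a polynomial is a set of minimal terms, each a frozenset of node indices; the union of two unions of events is the merged term list, and the intersection distributes term-by-term, each pairwise intersection of erasure events being the term on the union of the two node sets. Both operations are monotone in the underlying event sets, which is why term-wise lower and upper bounds survive them.

```python
def minimal(terms):
    """Keep only minimal terms (no term's node set contains another's)."""
    return {t for t in terms if not any(s < t for s in terms)}

def event_union(p, q):
    """Union of two unions of erasure events: merge the term lists."""
    return minimal(p | q)

def event_intersection(p, q):
    """(U a_i) & (U b_j) = U_{i,j} (a_i & b_j); the event 'a_i and b_j
    both fully erased' is the term on the union of their node sets."""
    return minimal({a | b for a in p for b in q})

p = {frozenset({1}), frozenset({2})}
q = {frozenset({1, 2}), frozenset({3})}
u = event_union(p, q)         # {x1, x2, x3}: the x1x2 term is absorbed
i = event_intersection(p, q)  # {x1x2, x1x3, x2x3}
```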
Given that the density evolution method has been used in the past as a guide to generate the best-known practical error-correcting codes, we can generate even better codes with our projective method. With the projective method according to our invention, we can, given a channel and decoding method, input an error-correcting code defined by an arbitrary generalized parity check matrix, and obtain as output a prediction of the bit error rate at each node.
We can use this output in an objective function for a guided search through the space of possible improved codes. For example, we can try to find an N = 100 blocklength, transmission rate 1/2 code with no hidden states that achieves a bit error rate of less than 10^- at the highest possible erasure rate when decoded by BP for the BEC channel. We do this by iteratively evaluating codes of the correct blocklength and rate, using our projective method, and using any known search technique, e.g., greedy descent, simulated annealing, a genetic process, etc., to search through the space of valid parity check matrices.
Because we directly focus on the correct measure of merit, i.e., the bit error rate itself, rather than the threshold in the infinite block-length limit, the search according to the invention improves on the results obtained using the prior art density evolution process. We can guide the search because we have information about the bit error rate at every node. For example, it might make sense to "strengthen" a weak variable node with a high bit error rate by adding additional parity check nodes, or to "weaken" strong nodes with a low bit error rate by turning those nodes into hidden nodes, thus increasing the transmission rate.
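The guided search can be sketched generically. This is a hedged illustration, not the patent's procedure: `evaluate_code` stands in for the projective-analysis evaluator returning the figure of merit (e.g., the worst per-node bit error rate), and `neighbour` is any blocklength- and rate-preserving move on the parity check matrix; both names are ours. Greedy descent is shown, but simulated annealing or a genetic process could be substituted.

```python
import random

def greedy_code_search(evaluate_code, neighbour, initial, iters=1000, seed=0):
    """Greedy descent: keep a candidate parity check matrix, propose a
    neighbouring matrix, and accept it whenever the evaluator improves."""
    rng = random.Random(seed)
    best, best_score = initial, evaluate_code(initial)
    for _ in range(iters):
        cand = neighbour(best, rng)
        score = evaluate_code(cand)
        if score < best_score:  # lower bit error rate is better
            best, best_score = cand, score
    return best, best_score

# Toy stand-ins that exercise the search loop: a "matrix" is a bit
# tuple, the evaluator just counts ones, and a move flips a random bit.
def toy_evaluate(h):
    return sum(h)

def flip_one(h, rng):
    i = rng.randrange(len(h))
    return h[:i] + (1 - h[i],) + h[i + 1:]

best, score = greedy_code_search(toy_evaluate, flip_one, (1, 1, 1, 1))
```

In actual use, `toy_evaluate` would be replaced by a full projective-analysis evaluation of the candidate code, and `flip_one` by a move that edits the generalized parity check matrix while preserving validity.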
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims

1. A method for evaluating and optimizing an error-correcting code to be transmitted through a noisy channel and to be decoded by an iterative message-passing decoder, comprising: representing the error-correcting code by a parity check matrix; modeling the parity check matrix as a bipartite graph having a plurality of variable nodes and check nodes; providing a set of message passing rules for the decoder; analyzing the decoder to obtain a set of density evolution rules including operators and operands; transforming the operators to projective operators and the operands to projected operands to generate a set of projective message passing rules; iteratively applying the projective message passing rules to the error-correcting code modeled by the bipartite graph until a termination condition is reached; and determining error rates of selected bits of the error-correcting code by evaluating the corresponding operands.
2. The method of claim 1 further comprising: passing the error rates to an optimizer to optimize the error-correcting code.
3. The method of claim 1 wherein the bipartite graph includes at least one loop.
4. The method of claim 1 wherein the projected operand is in the form of a projected polynomial.
5. The method of claim 4 wherein each term of a result after a projective operation has all exponents not greater than one.
6. The method of claim 1 wherein the channel is a binary symmetric channel.
7. The method of claim 1 wherein the channel is a binary erasure channel.
8. The method of claim 1 further comprising: determining lower bounds for the projective polynomials.
9. The method of claim 1 further comprising: determining upper bounds for the projective polynomials.
10. The method of claim 1 further comprising: representing a particular projected polynomial as a list of minimal terms, wherein each minimal term of the projected polynomial depends on a set of variable nodes of the bipartite graph that does not have any sub-set of variable nodes such that the sub-set of variable nodes is the set of variable nodes depended upon by another term in the projected polynomial.
11. The method of claim 10 further comprising: determining a lower bound for the projective polynomials.
12. The method of claim 11 further comprising: approximating a particular projected polynomial by eliminating selected minimal terms in the list of minimal terms.
13. The method of claim 10 further comprising: determining an upper bound for the projective polynomials.
14. The method of claim 13 further comprising: approximating a particular projected polynomial by replacing pairs of minimal terms in the list of minimal terms by another minimal term of greater magnitude.
15. The method of claim 1 wherein the termination condition is a predetermined number of iterations.

16. The method of claim 1 further comprising: determining probabilities that messages will be in certain states.
PCT/JP2002/010169 2001-10-01 2002-09-30 Evaluating and optimizing error-correcting codes using projective analysis WO2003032498A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2003535338A JP4031432B2 (en) 2001-10-01 2002-09-30 Evaluation and optimization of error-correcting codes using projection analysis.
EP02775267A EP1433261A1 (en) 2001-10-01 2002-09-30 Evaluating and optimizing error-correcting codes using projective analysis

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/968,182 US6842872B2 (en) 2001-10-01 2001-10-01 Evaluating and optimizing error-correcting codes using projective analysis
US09/968,182 2001-10-01

Publications (1)

Publication Number Publication Date
WO2003032498A1 true WO2003032498A1 (en) 2003-04-17

Family

ID=25513867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2002/010169 WO2003032498A1 (en) 2001-10-01 2002-09-30 Evaluating and optimizing error-correcting codes using projective analysis

Country Status (5)

Country Link
US (1) US6842872B2 (en)
EP (1) EP1433261A1 (en)
JP (1) JP4031432B2 (en)
CN (1) CN1312846C (en)
WO (1) WO2003032498A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111164914A (en) * 2017-08-28 2020-05-15 弗劳恩霍夫应用研究促进协会 Hybrid decoder for slotted ALOHA coding

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7673223B2 (en) * 2001-06-15 2010-03-02 Qualcomm Incorporated Node processors for use in parity check decoders
US6633856B2 (en) * 2001-06-15 2003-10-14 Flarion Technologies, Inc. Methods and apparatus for decoding LDPC codes
US6938196B2 (en) * 2001-06-15 2005-08-30 Flarion Technologies, Inc. Node processors for use in parity check decoders
AU2003249708A1 (en) * 2002-07-03 2004-01-23 Hughes Electronics Corporation Method and system for memory management in low density parity check (ldpc) decoders
US7577207B2 (en) 2002-07-03 2009-08-18 Dtvg Licensing, Inc. Bit labeling for amplitude phase shift constellation used with low density parity check (LDPC) codes
US7020829B2 (en) * 2002-07-03 2006-03-28 Hughes Electronics Corporation Method and system for decoding low density parity check (LDPC) codes
US20040019845A1 (en) * 2002-07-26 2004-01-29 Hughes Electronics Method and system for generating low density parity check codes
US7864869B2 (en) * 2002-07-26 2011-01-04 Dtvg Licensing, Inc. Satellite communication system utilizing low density parity check codes
KR20050039843A (en) * 2002-08-14 2005-04-29 텔레폰악티에볼라겟엘엠에릭슨(펍) Receiver and method for decoding of truncated data
US7178080B2 (en) * 2002-08-15 2007-02-13 Texas Instruments Incorporated Hardware-efficient low density parity check code for digital communications
US7162684B2 (en) 2003-01-27 2007-01-09 Texas Instruments Incorporated Efficient encoder for low-density-parity-check codes
US6957375B2 (en) * 2003-02-26 2005-10-18 Flarion Technologies, Inc. Method and apparatus for performing low-density parity-check (LDPC) code operations using a multi-level permutation
US7139959B2 (en) * 2003-03-24 2006-11-21 Texas Instruments Incorporated Layered low density parity check decoding for digital communications
US6771197B1 (en) * 2003-09-26 2004-08-03 Mitsubishi Electric Research Laboratories, Inc. Quantizing signals using sparse generator factor graph codes
CN1954501B (en) * 2003-10-06 2010-06-16 数字方敦股份有限公司 Method for receving data transmitted from a source by communication channel
US7191376B2 (en) * 2003-12-04 2007-03-13 Mitsubishi Electric Research Laboratories, Inc. Decoding Reed-Solomon codes and related codes represented by graphs
US7200603B1 (en) * 2004-01-08 2007-04-03 Network Appliance, Inc. In a data storage server, for each subsets which does not contain compressed data after the compression, a predetermined value is stored in the corresponding entry of the corresponding compression group to indicate that corresponding data is compressed
US20050193320A1 (en) * 2004-02-09 2005-09-01 President And Fellows Of Harvard College Methods and apparatus for improving performance of information coding schemes
KR100762619B1 (en) * 2004-05-21 2007-10-01 삼성전자주식회사 Apparatus and Method for decoding symbol with Low Density Parity Check Code
US7127659B2 (en) * 2004-08-02 2006-10-24 Qualcomm Incorporated Memory efficient LDPC decoding methods and apparatus
EP1626521B1 (en) * 2004-08-10 2012-01-25 Rohde & Schwarz GmbH & Co. KG Method to perform a statistical test in which the experiment is multinomial
US7506238B2 (en) * 2004-08-13 2009-03-17 Texas Instruments Incorporated Simplified LDPC encoding for digital communications
KR100684168B1 (en) * 2004-12-09 2007-02-20 한국전자통신연구원 Design of Rate-compatible LDPC Codes Using Optimal Extending
US7581159B2 (en) * 2004-11-23 2009-08-25 Texas Instruments Incorporated Simplified decoding using structured and punctured LDPC codes
US20060117133A1 (en) * 2004-11-30 2006-06-01 Crowdsystems Corp Processing system
US7562282B1 (en) 2005-05-23 2009-07-14 Western Digital Technologies, Inc. Disk drive employing error threshold counters to generate an ECC error distribution
US7499490B2 (en) * 2005-06-24 2009-03-03 California Institute Of Technology Encoders for block-circulant LDPC codes
US7783961B2 (en) * 2005-07-01 2010-08-24 Nec Laboratories America, Inc. Rate-compatible low density parity check coding for hybrid ARQ
US7707479B2 (en) 2005-12-13 2010-04-27 Samsung Electronics Co., Ltd. Method of generating structured irregular low density parity checkcodes for wireless systems
US8117523B2 (en) 2007-05-23 2012-02-14 California Institute Of Technology Rate-compatible protograph LDPC code families with linear minimum distance
JP4982898B2 (en) * 2007-11-15 2012-07-25 独立行政法人産業技術総合研究所 Method and system for generating low density parity check code
WO2009023298A1 (en) * 2008-04-10 2009-02-19 Phybit Pte. Ltd. Method and system for factor graph soft-decision decoding of error correcting codes
US8560917B2 (en) * 2009-01-27 2013-10-15 International Business Machines Corporation Systems and methods for efficient low density parity check (LDPC) decoding
US8510639B2 (en) * 2010-07-01 2013-08-13 Densbits Technologies Ltd. System and method for multi-dimensional encoding and decoding
US9294132B1 (en) * 2011-11-21 2016-03-22 Proton Digital Systems, Inc. Dual-stage data decoding for non-volatile memories
US8996971B2 (en) * 2012-09-04 2015-03-31 Lsi Corporation LDPC decoder trapping set identification
US9553608B2 (en) * 2013-12-20 2017-01-24 Sandisk Technologies Llc Data storage device decoder and method of operation
US10073738B2 (en) 2015-08-14 2018-09-11 Samsung Electronics Co., Ltd. XF erasure code for distributed storage systems
DE102018202095A1 (en) * 2018-02-12 2019-08-14 Robert Bosch Gmbh Method and apparatus for checking neuron function in a neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4295218A (en) * 1979-06-25 1981-10-13 Regents Of The University Of California Error-correcting coding system
US6195777B1 (en) 1997-11-06 2001-02-27 Compaq Computer Corporation Loss resilient code with double heavy tailed series of redundant layers
US6073250A (en) 1997-11-06 2000-06-06 Luby; Michael G. Loss resilient decoding technique
US6651213B2 (en) * 2001-03-19 2003-11-18 International Business Machines Corporation Programmable multi-level track layout method and system for optimizing ECC redundancy in data storage devices

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANGYAN DI ET AL: "Finite-length analysis of low-density parity-check codes on the binary erasure channel", IEEE TRANSACTIONS ON INFORMATION THEORY, JUNE 2002, IEEE, USA, vol. 48, no. 6, pages 1570 - 1579, XP002228427, ISSN: 0018-9448 *
N. WIBERG: "Codes and Decoding on General Graphs", PHD THESIS, 1996, Linköping, pages 1 - 94, XP002228425, Retrieved from the Internet <URL:http://www.essrl.wustl.edu/~jao/itrg/wiberg.pdf> [retrieved on 20030122] *
RICHARDSON T J ET AL: "The capacity of low-density parity-check codes under message-passing decoding", IEEE TRANSACTIONS ON INFORMATION THEORY, FEB. 2001, IEEE, USA, vol. 47, no. 2, pages 599 - 618, XP002228426, ISSN: 0018-9448 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111164914A (en) * 2017-08-28 2020-05-15 弗劳恩霍夫应用研究促进协会 Hybrid decoder for slotted ALOHA coding
CN111164914B (en) * 2017-08-28 2023-03-31 弗劳恩霍夫应用研究促进协会 Hybrid decoder for slotted ALOHA coding

Also Published As

Publication number Publication date
US20030065989A1 (en) 2003-04-03
JP4031432B2 (en) 2008-01-09
US6842872B2 (en) 2005-01-11
CN1476674A (en) 2004-02-18
CN1312846C (en) 2007-04-25
JP2005505981A (en) 2005-02-24
EP1433261A1 (en) 2004-06-30

Similar Documents

Publication Publication Date Title
US6842872B2 (en) Evaluating and optimizing error-correcting codes using projective analysis
EP1258999B1 (en) Evaluating and optimizing error-correcting codes using a renormalization group transformation
Pishro-Nik et al. Results on punctured low-density parity-check codes and improved iterative decoding techniques
US7103825B2 (en) Decoding error-correcting codes based on finite geometries
EP1460766A1 (en) Ldpc code inspection matrix generation method
US20100192040A1 (en) Multi-Stage Decoder for Error-Correcting Codes
CN100589357C (en) LDPC code vector decode translator and method based on unit array and its circulation shift array
Hanna et al. Guess & check codes for deletions, insertions, and synchronization
US7191376B2 (en) Decoding Reed-Solomon codes and related codes represented by graphs
EP1526647B1 (en) Generation of a check matrix for irregular low-density parity-check (LDPC) codes
US8806289B1 (en) Decoder and decoding method for a communication system
Chilappagari et al. Instanton-based techniques for analysis and reduction of error floors of LDPC codes
Yang et al. Can a noisy encoder be used to communicate reliably?
Lentmaier et al. Exact erasure channel density evolution for protograph-based generalized LDPC codes
Islam LDPC Codes Incorporating Source, Noise, and Channel Memory
Liu et al. On the performance of direct shaping codes
Ashikhmin et al. Decoding of expander codes at rates close to capacity
Blake Essays on Coding Theory
Ellero et al. Computational complexity analysis of hamming codes polynomial co-decoding
Lulu Construction Of Ldpc Codes Using Randomly Permutated Copies Of Parity Check Matrix
He et al. Outer Channel of DNA-Based Data Storage: Capacity and Efficient Coding Schemes
Seong-Joon Double-Masked Error Correction Code Transformer and Coding Schemes for High-Density DNA Storage with Low Error Rates
Yedidia et al. Renormalization group approach to error-correcting codes
Han Ldpc coding for magnetic storage: low floor decoding algorithms, system design, and performance analysis
Chandar Iterative algorithms for lossy source coding

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FR GB GR IE IT LU MC NL PT SE SK TR

WWE Wipo information: entry into national phase

Ref document number: 2003535338

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 028031032

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002775267

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002775267

Country of ref document: EP