US8745468B1 - Iterative decoding systems using noise-biasing - Google Patents
- Publication number
- US8745468B1 (application US12/357,200)
- Authority
- US
- United States
- Prior art keywords
- samples
- noise
- iterative
- channel
- iterative decoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1004—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's to protect a block of data words, e.g. CRC or checksum
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
- H03M13/1105—Decoding
- H03M13/1111—Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2948—Iterative decoding
- H03M13/2951—Iterative decoding using iteration stopping criteria
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/3723—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 using means or methods for the initialisation of the decoder
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/63—Joint error correction and other techniques
- H03M13/6331—Error control coding in combination with equalisation
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/65—Purpose and implementation aspects
- H03M13/6508—Flexibility, adaptability, parametrability and configurability of the implementation
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/65—Purpose and implementation aspects
- H03M13/6577—Representation or format of variables, register sizes or word-lengths and quantization
- H03M13/658—Scaling by multiplication or division
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
- H03M13/1105—Decoding
- H03M13/1111—Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
- H03M13/1117—Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using approximations for check node processing, e.g. an outgoing message is depending on the signs and the minimum over the magnitudes of all incoming messages according to the min-sum rule
Definitions
- Error correcting codes may be employed by communication and/or data storage systems to detect and correct errors in received or recovered data. Error correcting codes may be implemented by using an encoding process at a transmitter, in which redundancy is added to data, and an iterative decoding process at a receiver, in which the added redundancy is exploited through a series of iterative decoding steps, to correct errors.
- The effectiveness of an error correcting code may be characterized by the number of errors per encoded data block that the error correcting code is capable of correcting. For example, an error correcting code may be able to correct up to t symbol errors. Error correcting codes are often able to correct a large number of the errors that may be present in received data, and this may improve end-to-end reliability.
- Error correcting codes may be decoded using an iterative message passing process implemented by an iterative decoder. For example, a min-sum or sum-product decoding algorithm may be used to decode an LDPC code. Such algorithms may decode received samples using a process in which each iteration includes two update steps. In the first update step, messages may be passed from check nodes to bit nodes, and message updates may be performed by the bit nodes. In the second update step, the updated messages may be passed from bit nodes back to the check nodes, and message updates may be performed by the check nodes.
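The two update steps above can be sketched in code. The following is a minimal illustrative min-sum iteration, not the patent's implementation; `min_sum_iteration` and its message layout are assumptions for this sketch, and every check row is assumed to have degree at least two:

```python
import numpy as np

def min_sum_iteration(H, channel_llr, b2c):
    """One min-sum iteration: check-node update, then bit-node update.

    H: (r, n) binary parity-check matrix (every row assumed degree >= 2);
    channel_llr: (n,) channel LLRs; b2c: (r, n) bit-to-check messages
    (zero wherever H is zero). Returns updated b2c and posterior LLRs.
    """
    r, n = H.shape
    c2b = np.zeros_like(b2c, dtype=float)
    for i in range(r):                      # check-node update
        cols = np.flatnonzero(H[i])
        for j in cols:
            others = [b2c[i, k] for k in cols if k != j]
            # Sign product and minimum magnitude of the other incoming messages.
            c2b[i, j] = np.prod(np.sign(others)) * min(abs(m) for m in others)
    new_b2c = np.zeros_like(b2c, dtype=float)
    for j in range(n):                      # bit-node update
        rows = np.flatnonzero(H[:, j])
        for i in rows:
            new_b2c[i, j] = channel_llr[j] + sum(c2b[k, j] for k in rows if k != i)
    posterior = channel_llr + c2b.sum(axis=0)
    return new_b2c, posterior
```

The posterior LLRs are then sliced to hard decisions, and the loop repeats until the parity checks are satisfied or an iteration limit is reached.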
- The iterative message passing algorithm used by the iterative decoder may occasionally fail to converge or may converge to an incorrect state, leading to bit-errors or sector-errors that generally degrade application performance. Such errors may occur when the iterative message passing algorithm incorrectly converges to an errant vector (sometimes known as a near-codeword error) and/or when the algorithm fails to converge to any stable decoding state. Often, iterative decoding algorithms suffer from an error floor, i.e., a fundamental system limit on the error-rate that cannot be improved simply by increasing the operational signal-to-noise ratio.
- Iterative decoding systems, techniques, and processes are disclosed for lowering the error-floors in iteratively decoded communications systems and/or receivers.
- Systems, techniques, and processes are disclosed in which an iterative decoding algorithm is prevented from converging to a near-codeword and/or is driven to converge to a correct codeword (rather than oscillating through unstable decoding states).
- The lowered error-floor provided by such an iterative decoding architecture may lead to improved application performance, fewer interruptions in service, and/or larger data transmission rates.
- One aspect of the invention relates to a method for decoding a codeword using an iterative decoder.
- Communications hardware receives channel samples and processes the channel samples using an iterative decoder. The output of the iterative decoder may then be used to determine whether a decoding failure has occurred. If it is determined that a decoding failure has occurred, the channel samples may be combined with a set of noise samples to obtain biased channel samples. The iterative decoder may then be re-run using the biased channel samples. This iterative decoding process may continue until either the iterative decoder converges to a correct codeword or the decoding process has run a predetermined number of times.
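The retry flow just described can be sketched as follows. This is an illustrative outline, not the patent's implementation; `decoder` and `make_noise` are hypothetical callables standing in for the iterative decoder and the noise-sample generator:

```python
def decode_with_noise_biasing(channel_samples, decoder, make_noise, max_retries=8):
    """Run the decoder; on failure, retry with noise-biased channel samples.

    decoder(samples) -> (codeword_estimate, converged);
    make_noise(attempt) -> list of noise samples for that retry.
    """
    codeword, converged = decoder(channel_samples)
    attempt = 0
    while not converged and attempt < max_retries:
        # Combine the original channel samples with a fresh set of noise samples.
        biased = [s + e for s, e in zip(channel_samples, make_noise(attempt))]
        codeword, converged = decoder(biased)
        attempt += 1
    return codeword, converged
```

Each retry perturbs the decoder's starting point, which is what allows it to escape a near-codeword that trapped the unbiased run.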
- noise samples may be generated by configuring values of a noise scaling factor parameter and a noise offset parameter, and these values may be configured based on the received channel samples.
- Modified noise samples may be generated by permuting the noise samples, and biased channel samples may be generated by combining the received channel samples with the modified noise samples. In one embodiment, the combining may be done by adding the modified noise samples to the received channel samples. The biased channel samples may then be provided to an iterative decoder for further processing.
- receiver circuitry may be used to receive channel samples corresponding to a codeword transmitted over a channel, and detection circuitry may be used to generate a set of messages corresponding to the channel samples. Decoding circuitry may then decode the set of messages using an iterative decoder. In one embodiment, if a decoding failure (or non-convergence of the iterative decoder) is detected, noise-biasing circuitry may generate noise samples that can be used for noise biasing.
- noise-biasing circuitry may generate a scaling factor parameter and an offset parameter based on an output of the iterative decoder, and generate the set of noise samples based on the scaling factor parameter and the offset parameter. Noise-biasing circuitry may then combine the set of noise samples with the received channel samples to produce modified channel samples, and these modified channel samples may then be provided to the iterative decoder for further processing.
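The scaling, offsetting, and permutation steps above might be combined as in this sketch (the function name and the use of a seeded NumPy permutation are assumptions; the patent's permutation control may differ):

```python
import numpy as np

def bias_samples(channel_samples, noise, scale, offset, seed=0):
    """Combine channel samples with permuted, scaled, and offset noise samples."""
    noise = np.asarray(noise, dtype=float)
    rng = np.random.default_rng(seed)
    permuted = noise[rng.permutation(noise.size)]  # permute the noise samples
    modified = scale * permuted + offset           # apply scaling factor and offset
    return np.asarray(channel_samples, dtype=float) + modified
```

A fixed seed makes each retry reproducible while still decorrelating the added noise from the original channel noise.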
- FIGS. 1A and 1B show illustrative communications systems employing iterative decoding.
- FIG. 2 shows an illustrative example of the properties of a parity check matrix in accordance with some embodiments.
- FIG. 3 shows an illustrative example of a parity check matrix in accordance with some embodiments.
- FIGS. 4A and 4B show an illustrative parity check matrix of an LDPC code and a graphical representation of a decoding process that may be used by an iterative decoder to produce a message estimate, respectively.
- FIG. 5 shows structures and processes used by an iterative decoder in some embodiments to produce a message estimate from received samples.
- FIG. 6 depicts an illustrative error-rate curve for a code, such as an LDPC code, decoded using an iterative decoder.
- FIG. 7 illustrates noise-biasing in the systems and processes of an iterative decoder in accordance with some embodiments.
- FIG. 8 illustrates interleaving systems and processes used to operate a permutation control according to some embodiments.
- FIG. 9 illustrates a hardware and firmware interface that may be used by a scaling factor module to determine a scaling factor in accordance with some embodiments.
- FIG. 10 shows an illustrative process that may be used to determine a scaling factor in an iterative decoder in accordance with some embodiments.
- FIG. 11 shows another illustrative process that may be used to determine a scaling factor in an iterative decoder in accordance with some embodiments.
- FIG. 1A shows an illustrative communications system 100 employing iterative decoding in accordance with some embodiments.
- Communications system 100 may be particularly useful for decoding received information over memoryless channels, while communications system 150 ( FIG. 1B ) may be particularly useful for decoding received information over channels containing memory.
- Communications system 100 may be used to transmit information from transmitting user or application 102 to receiving user or application 118 .
- Transmitting user or application 102 may be any object or entity that produces information.
- transmitting user or application 102 may correspond to a software program in a computer system or to a component of a wireless communications transmitter in a radio system.
- Transmitting user or application 102 may produce information in the form of a data stream, and such a data stream may be represented by a sequence of symbol values that have been pre-processed by, for example, a source encoder (not shown in FIG. 1A ).
- the information produced by transmitting user or application 102 may correspond to voice information, video information, financial information, or any other type of information that may be represented in digital or analog form, and the data stream produced by transmitting user or application 102 may be a digital data stream.
- Transmitting user or application 102 may segment or otherwise divide the data stream into blocks of a certain fixed length, and each fixed block may be referred to as a message.
- the message may be of length k, meaning that the message includes k symbols, where each symbol may be binary data, ternary data, quaternary data, any other suitable type of data, or any suitable combination thereof.
- Iterative code encoder 104 may be used to encode the message to produce a codeword. Iterative code encoder 104 may be an LDPC encoder, a turbo encoder, or any other suitable encoder.
- the codeword may be of length n, where n > k.
- Iterative code encoder 104 may use a generator matrix to produce the codeword. For example, iterative code encoder 104 may perform one or more operations, including one or more matrix operations, to convert the message into the codeword.
- the generator matrix may be referred to as G, and the codeword may be referred to as c .
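As a concrete illustration of encoding with a generator matrix, the sketch below uses a small systematic G = [I | P] over GF(2); the particular matrices are invented for the example and are not from the patent:

```python
import numpy as np

def encode(message, G):
    """Encode a length-k message into a length-n codeword: c = m . G (mod 2)."""
    return (np.asarray(message) @ G) % 2

# Example systematic generator matrix G = [I | P] with k = 3, n = 5:
# the codeword is the message followed by two parity bits.
P = np.array([[1, 1], [1, 0], [0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])
```

With a systematic G, the first k codeword symbols reproduce the message, which simplifies recovering the message estimate after decoding.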
- the codeword may be modulated or otherwise transformed into a waveform suitable for transmission and/or storage on channel 106 .
- the waveform may correspond to an analog Binary Phase-Shift Keying (BPSK) signal, analog Phase-Shift Keying (PSK) signal, analog Frequency-Shift Keying (FSK) signal, analog Quadrature Amplitude Modulation (QAM) signal, or any other suitable analog or digital signal.
- Channel 106 may refer to the physical medium through which the transmitted waveform passes or is stored on before being recovered at demodulator 108 .
- channel 106 may be a storage channel that represents a magnetic recording medium in a computer system environment or a communications channel that represents the wireless propagation environment in a wireless communications environment.
- Various characteristics of channel 106 may corrupt data that is communicated or stored thereon.
- channel 106 may be a non-ideal memoryless channel or a channel with memory.
- the output of channel 106 may be demodulated and processed by demodulator 108 to produce received samples 110 .
- demodulator 108 may use frequency filters, multiplication and integration by periodic functions, and/or any other suitable demodulation technique to demodulate and/or process the output of channel 106 .
- Received samples 110 may contain information related to the codeword, and/or received samples 110 may correspond to a corrupted or otherwise altered version of the codeword or the information originally output by iterative code encoder 104 .
- received samples 110 may contain a preliminary estimate or noisy version of the codeword produced by iterative code encoder 104 , a probability distribution vector of possible values of the codeword produced by iterative code encoder 104 , or combinations of these, as well as other items.
- Channel log-likelihood ratio (LLR) generator 112 may be used to calculate channel LLRs 114 based on received samples 110 .
- Channel LLRs 114 may correspond to initial LLR values determined based on received samples 110 , a statistical description of channel 106 , and/or the probability of the output of iterative code encoder 104 .
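For a concrete example of initial LLR computation, assume BPSK over an AWGN channel with noise variance sigma^2 and the mapping bit 0 → +1, bit 1 → −1; then the channel LLR of a received sample r is 2r/sigma^2. This is one common channel model, not the only one the description allows:

```python
import numpy as np

def channel_llrs(received, sigma2):
    """LLR(r) = log P(bit=0 | r) / P(bit=1 | r) = 2r / sigma^2 (BPSK over AWGN)."""
    return 2.0 * np.asarray(received, dtype=float) / sigma2
```

A positive LLR favors bit 0, a negative LLR favors bit 1, and the magnitude expresses confidence, which is exactly what the iterative decoder refines.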
- Channel LLRs 114 may be formed based on the received samples, and iterations may be performed within iterative code decoder 116 . Iterative code decoder 116 may be used to iteratively correct and/or detect errors that may be present in received samples 110 , for example, due to transmission through channel 106 .
- iterative code decoder 116 may decode channel LLRs 114 using a parity check matrix to produce a message estimate.
- the message estimate may be referred to as m̂
- the parity check matrix may be referred to as H.
- Iterative code decoder 116 may use any of a number of possible decoding algorithms to produce the message estimate. For example, iterative code decoder 116 may use decoding algorithms known as belief propagation algorithms.
- FIG. 1B shows another illustrative communications system employing iterative decoding in accordance with some embodiments.
- Communications system 150 may correspond to a more detailed embodiment of communications system 100 , or it may correspond to a different embodiment.
- Communications system 150 may be particularly useful for decoding codewords or received samples, such as received samples 110 ( FIG. 1A ), when the channel contains memory (e.g., when it is an inter-symbol interference (ISI) channel).
- Communications system 150 may correspond to a turbo equalization system, in which decoding is done by iterating between a soft channel decoder (e.g., a SOVA or BCJR decoder) and a soft code decoder (e.g., an LDPC or turbo code decoder), as shown in FIG. 1B .
- Transmitting user or application 150 , iterative code encoder 152 , and/or channel 154 may operate and include features similar or identical to those of transmitting user or application 102 , iterative code encoder 104 , and/or channel 106 , of FIG. 1A , respectively.
- the output of channel 154 may be filtered by analog front-end 156 , and then digitized by analog-to-digital converter 158 .
- Samples from analog-to-digital converter 158 may be equalized by FIR filter 160 and passed to an iterative decoder, e.g., consisting of BCJR/SOVA channel detector 162 and LDPC decoder 164 .
- BCJR/SOVA channel detector 162 may obtain channel LLRs based on received samples, and pass channel LLRs to LDPC decoder 164 .
- LDPC decoder 164 may, in turn, update LLR estimates using code constraints signified by the parity check matrix, and provide new LLR values back to the BCJR/SOVA channel detector 162 . This iterative process may continue until a certain stopping criterion has been met.
- communications system 150 may iteratively decode the output of FIR filter 160 using BCJR/SOVA channel detector 162 and LDPC decoder 164 .
- The iterative decoder shown in FIG. 1B passes messages between the channel detector and the code decoder (known as an outer iteration).
- Each outer (global) iteration includes one channel decoding and some number of inner iterations (e.g., five iterations) of the LDPC code decoder process.
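The outer/inner iteration structure described above can be sketched as follows. The callables `detector`, `ldpc_step`, and `syndrome_ok` are placeholders for the channel detector, one LDPC inner iteration, and the stopping criterion:

```python
def turbo_equalize(samples, detector, ldpc_step, syndrome_ok,
                   max_outer=10, inner_per_outer=5):
    """Iterate between a channel detector and an LDPC decoder.

    Each outer iteration runs one channel decoding followed by a few
    inner LDPC iterations; decoding stops early when the syndrome check
    passes, otherwise extrinsic information is fed back to the detector.
    """
    extrinsic = None
    for _ in range(max_outer):
        llrs = detector(samples, extrinsic)    # one channel decoding
        for _ in range(inner_per_outer):       # inner LDPC iterations
            llrs = ldpc_step(llrs)
            if syndrome_ok(llrs):
                return llrs, True              # stopping criterion met
        extrinsic = llrs                       # feed back to the detector
    return llrs, False
```

The early return is what implements the "certain stopping criterion" mentioned above; without it every outer iteration would run its full budget of inner iterations.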
- communications system 150 may include noise-biasing as shown in one embodiment in communications system 500 ( FIG. 5 ).
- Communications system 500 may advantageously include noise-biasing 510 ( FIG. 5 ) to decrease the decoding error-rate of communications system 150 (or similarly, of communications system 100 ).
- Noise-biasing 510 may be used in a communications system such as communications system 100 or communications system 150 whether or not the channel (e.g., channel 106 or 154 ) has memory. In either case, noise-biasing 510 ( FIG. 5 ) may prevent the iterative decoding process shown in FIG. 1A , FIG. 1B , or FIG. 7 from incorrectly converging to a near-codeword or oscillating through unstable decoding states.
- A further embodiment of the iterative decoder shown in communications system 500 is shown in FIG. 7 .
- Iterative decoder 700 may permute noise samples at regular event intervals using permutation control 790 ( FIG. 7 ).
- Iterative decoder 700 may, for example, use one of the processes illustrated in FIG. 10 or FIG. 11 to generate values for the noise samples, and may use an interleaving scheme identical or similar to the one illustrated in FIG. 8 to control permutation control 790 ( FIG. 7 ).
- Noise-biasing may be performed by adding random noise to the set of received channel samples, for example, using a random noise source.
- Alternatively, channel noise already present in the system may be used.
- For example, the iterative detector output (which may be almost free of error) may be taken, the ideal channel output corresponding to this sequence calculated, and the resultant values subtracted from the received channel samples to create a noise-like sequence.
- This noise-like sequence may then be interleaved and scaled to create suitable samples for noise-biasing. Iterative decoding may then be repeated using these noise-biased channel samples.
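The noise-extraction procedure above might look like this in outline (the function name and the single-tap channel used in the example are assumptions; a real ISI channel would have multiple taps):

```python
import numpy as np

def noise_from_decisions(received, decisions, channel_taps, scale, perm):
    """Estimate a noise-like sequence and prepare it for noise-biasing.

    Re-modulate the (near-error-free) decisions through the channel model,
    subtract the ideal output from the received samples, then interleave
    and scale the residual.
    """
    ideal = np.convolve(decisions, channel_taps)[:len(received)]  # ideal channel output
    noise_like = np.asarray(received, dtype=float) - ideal        # residual noise estimate
    return scale * noise_like[np.asarray(perm)]                   # interleave and scale
```

Because the residual is derived from the system's own noise, its statistics automatically match the channel, which is the advantage over a synthetic AWGN generator.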
- the final decisions may be delivered to receiving user or application 166 (or 118 of FIG. 1A ) after being processed by the iterative decoding process, e.g., the iterative decoding process shown in FIG. 1A , FIG. 1B , FIG. 5 , and/or FIG. 7 .
- Receiving user or application 166 (or 118 of FIG. 1A ) may correspond to the same device or entity as transmitting user or application 150 (or 102 of FIG. 1A ), or it may correspond to a different device or entity. Further, receiving user or application 166 (or 118 of FIG. 1A ) may be either co-located or physically separated from transmitting user or application 150 (of 102 of FIG. 1A ).
- the message estimate delivered to receiving user or application 166 may be a logical replica of the message output by transmitting user or application 150 (or 102 ). Otherwise, the iterative decoder may declare an error, and the communication system may enter a retry mode to attempt to recover the data, or may send the erroneous data to the host if the iterative decoder cannot detect the error (this happens when the iterative decoder converges to a valid codeword other than the one that was transmitted). In the latter case, the message estimate may differ from the message.
- FIG. 2 shows an illustrative example of the properties of a parity check matrix in accordance with some embodiments.
- Parity check matrix 222 of equation 220 is a matrix of size [r × n], where r satisfies the inequality r ≥ n − k.
- When parity check matrix 222 is multiplied by codeword 212 , the result is zero-vector 226 , which is a vector of size [r × 1] where all elements equal zero.
- Parity check matrix 222 is any matrix that produces a null matrix, or a matrix of all zeros, of size [r × k] when multiplied by generator matrix 214 .
- Parity check matrix 222 is not unique, and may be chosen, for example, to minimize resources such as storage and/or processing time.
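The defining property H · c = 0 (mod 2) can be checked directly; the matrix and codewords in the example below are invented for illustration:

```python
import numpy as np

def is_codeword(H, c):
    """True iff every parity check is satisfied: H . c = 0 (mod 2)."""
    return not np.any((np.asarray(H) @ np.asarray(c)) % 2)
```

This same check doubles as the iterative decoder's stopping criterion: once the hard decisions satisfy all r parity checks, decoding can halt.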
- FIG. 3 shows an illustrative example of parity check matrix 304 in accordance with some embodiments.
- Parity check matrix 304 may be used in the decoding process, for example, of communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ).
- Parity check matrix 304 includes [p × q] submatrices labeled B 1,1 , B 1,2 , etc., as shown in FIG. 3 .
- Each labeled submatrix is a [b × b] cyclic matrix (that is, each submatrix has b rows and b columns), and for a given submatrix, each row is a shifted version of the first row.
- Submatrix 306 is an illustrative submatrix (representing the submatrix having row index i and column index j, without loss of generality).
- The first row of submatrix 306 is [0 0 1 0 0], and each subsequent row is a cyclically shifted version of the first row: the second row is the first row with every element cyclically shifted one position to the right, the third row is the first row with every element cyclically shifted two positions to the right, and so on.
- Submatrix 308 is another illustrative example of a suitable submatrix for parity check matrix 304 .
- Some of the submatrices of parity check matrix 304 may be all-zero matrices, where a zero matrix is a matrix for which each element is equal to zero (a matrix with at least one non-zero element will be referred to as a non-zero matrix). Parity check matrix 304 is known as a quasi-cyclic matrix because it may be represented by a number of square and non-overlapping submatrices, each of which is a cyclic matrix.
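Building one cyclic [b × b] submatrix from its first row, as described above, can be sketched as:

```python
import numpy as np

def cyclic_submatrix(first_row):
    """Build a [b x b] cyclic matrix whose i-th row is the first row shifted right by i."""
    b = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(b)])
```

Because every submatrix is determined by its first row alone, a quasi-cyclic parity check matrix can be stored compactly as one row (or one shift amount) per submatrix, which is one reason this structure saves storage.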
- FIG. 4A shows an illustrative parity check matrix 402 for an LDPC code.
- Bit nodes 404 are labeled v 1 , v 2 , . . . v 9 in FIG. 4A , and each bit node corresponds to a single column of parity check matrix 402 .
- the first bit node, v 1 corresponds to the first column in parity check matrix 402
- the second bit node, v 2 corresponds to the second column in parity check matrix 402 , and so on (the meaning of this correspondence will be further described in FIG. 4B ).
- the value of r denotes the number of check nodes used in the iterative decoder.
- Check nodes 406 are labeled s 1 , s 2 , . . . s 6 , and each check node corresponds to a single row of parity check matrix 402 .
- the first check node, s 1 corresponds to the first row in parity check matrix 402
- the second check node, s 2 corresponds to the second row in parity check matrix 402 , and so on (the meaning of this correspondence will be further described in FIG. 4B ).
- FIG. 4B graphically illustrates sample decoding process 450 , which may be used by an iterative decoder to decode received samples, for example, received samples 110 ( FIG. 1 ), to produce a message estimate.
- Tanner graph 452 is a graphical representation of the LDPC code defined by parity check matrix 402 .
- a line may be drawn connecting a given bit node to a given parity check node if and only if a “1” is present in the corresponding entry in parity check matrix 402 .
- a line is drawn connecting the second check node (S 2 ) to the fifth bit node (V 5 ).
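The column/row correspondence above can be read off programmatically: an edge connects check node s_i to bit node v_j wherever H[i, j] = 1 (the small H below is illustrative, not the matrix of FIG. 4A):

```python
import numpy as np

def tanner_edges(H):
    """List the (check node, bit node) pairs connected in the Tanner graph of H."""
    return [(f"s{i + 1}", f"v{j + 1}") for i, j in zip(*np.nonzero(np.asarray(H)))]
```

The number of edges equals the number of ones in H, so a low-density (sparse) parity check matrix yields a sparse Tanner graph and correspondingly cheap message passing.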
- An iterative decoder may use an iterative two-step decoding algorithm that includes a check nodes update step and a bit nodes update step, as depicted by process 462 , to decode received samples, such as received samples 110 ( FIG. 1 ).
- In the check nodes update step, messages are sent from bit nodes 454 to check nodes 456 for every pair of connected nodes (this “flow” of information is denoted by the right-pointing arrow in FIG. 4B ). Check nodes 456 then perform computations based on the messages that are received.
- In the bit nodes update step, messages are sent from check nodes 456 back to bit nodes 454 for every pair of connected nodes (this is denoted by the left-pointing arrow in FIG. 4B ). Bit nodes 454 then update their messages based on the messages that are received.
- Process 462 is repeated until either the codeword has been decoded or until a threshold number of rounds has been reached. Process 462 may then produce a message estimate.
- the content of messages sent during each step of each iteration of process 462 depends on the decoding algorithm used in the iterative decoder. The content may correspond to LLR values, probabilities, hard decisions, or any other suitable type of content that is compatible with the decoding algorithm.
- FIG. 5 shows structures and processes that may be used by an iterative decoder, such as the iterative decoder of communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ), in some embodiments to produce a message estimate from received samples (such as received samples 110 of FIG. 1A ).
- Iterative code encoder 500 , channel 502 , analog front-end 504 , analog-to-digital converter 506 , FIR filter 508 , BCJR/SOVA channel detector 512 , and LDPC decoder 514 may function similarly or identically to iterative code encoder 152 , channel 154 , analog front-end 156 , analog-to-digital converter 158 , FIR filter 160 , BCJR/SOVA channel detector 162 , and LDPC decoder 164 of FIG. 1B , respectively.
- Communications system 500 may be advantageous in improving (i.e., lowering) the error-rate performance of a communications system, such as communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ).
- BCJR/SOVA channel detector 512 may use a soft-output Viterbi algorithm (also known as the SOVA algorithm) to decode the output of FIR filter 508 .
- Alternatively, BCJR/SOVA channel detector 512 may use a maximum a posteriori algorithm, for example, the BCJR algorithm, to decode the output of FIR filter 508 .
- the output of BCJR/SOVA channel detector 512 may be in the form of LLRs.
- the output of BCJR/SOVA channel detector 512 may be input to LDPC decoder 514 .
- LDPC decoder 514 may produce a preliminary estimate of a message using processes similar or identical to the processes described in FIGS. 4A and 4B .
- LDPC decoder 514 may use an iterative decoding algorithm including the min-sum or sum-product algorithm, or any suitable approximation or variation thereof, to produce output.
- Noise-biasing 510 may be added to the output of FIR filter 508 to produce input to BCJR/SOVA channel detector 512 .
- Noise-biasing 510 may be useful at least to prevent the iterative decoder from converging to incorrect values or from failing to converge.
- Noise-biasing 510 may use noise samples that are produced in a pseudo-random manner. For example, samples for noise-biasing 510 may be generated using a random sample generator such as an additive white Gaussian noise (AWGN) sample generator (or approximations thereof), or by using noise-like features already present in a communications system, such as communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ).
- Estimation error or the effects of thermal noise may be processed or used directly as samples for noise-biasing 510 .
- noise-biasing is only used in a retry mode. That is, iterative decoding is first performed without noise-biasing. Then, if a decoding failure is detected, noise-biasing is applied to move the decoder away from a suboptimal point.
- noise-biasing 510 may be used.
- samples for noise-biasing 510 may not be directly added to the output of FIR filter 508 , but rather, may be multiplied, linearly weighted, combined, or otherwise transformed in a general manner before being combined with received channel samples.
- noise-biasing 510 may include generating any number of noise samples, including a number of noise samples that is equal to the number of received samples, for example, the number of samples in received samples 110 .
- a further embodiment of noise-biasing 510 will be described in relation to FIG. 7 .
- Communications system 500 may continue iterative processing between BCJR/SOVA channel detector 512 and LDPC decoder 514 for a certain number of iterations before terminating the iterative process and outputting a message estimate. For example, communications system 500 may run for a fixed and predetermined number of iterations, until convergence of LLR values, and/or until an output having zero syndrome weight is determined. If the iterative decoder still fails, another set of noise samples may be generated by permuting the samples or changing the noise scaling factor, and the decoder may be re-run.
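The retry behavior described above might be sketched as follows. This is an illustrative Python sketch: `decode`, the Gaussian noise source, the scale value, and the permutation-based refresh are all assumptions standing in for the detector/decoder pipeline of FIG. 5:

```python
import numpy as np

def decode_with_noise_biasing(decode, samples, max_retries=4, scale=0.1, seed=0):
    """Retry-mode noise-biasing sketch: ordinary decoding runs first, and
    only on failure are scaled pseudo-random noise samples added to the
    channel samples before each re-run.  `decode` is a hypothetical callable
    returning (message_estimate, success)."""
    est, ok = decode(samples)                    # normal mode, no biasing
    if ok:
        return est, ok
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(samples))    # one noise sample per received sample
    for _ in range(max_retries):
        biased = samples + scale * noise         # biased channel samples
        est, ok = decode(biased)
        if ok:
            break
        noise = rng.permutation(noise)           # new effective noise set
    return est, ok
```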
- FIG. 6 depicts an illustrative error-rate curve for a code, such as an LDPC code, decoded using an iterative decoder such as the iterative decoder of communications system 500 ( FIG. 5 ).
- FIG. 6 may be generated by measuring the performance of a communications system, for example, communications system 500 ( FIG. 5 ).
- Plot 600 depicts the signal-to-noise ratio (SNR), measured in decibels, for example, measured at the output of a demodulator such as demodulator 108 ( FIG. 1A ), on the horizontal axis, and the error-rate, for example, measured by comparing a message estimate to an originally transmitted message, on the vertical axis.
- the error-rate may be any suitable error-rate, including the bit-error-rate or the symbol error-rate.
- Plot 600 illustrates the general shape of the error-rate curve for a communications system. The exact shape of the error-rate curve may differ slightly from plot 600 , as the shape depends on many factors, including the codebook used by an encoder such as iterative code encoder 104 ( FIG. 1A ), the characteristics of the channel (e.g., channel 106 of FIG. 1A ), and the algorithms used by a demodulator, e.g., demodulator 108 ( FIG. 1A ), to produce received samples (e.g., received samples 110 of FIG. 1A ).
- Waterfall region 602 includes a range of (low) SNR values for which the error-rate decreases relatively rapidly in SNR, while error-floor region 604 depicts a range of (high) SNR values for which the error-rate decreases slowly in SNR. Additionally, the error-rate is approximately invariant to an increase in the SNR in the region 606 , which is included within region 604 .
- Non-near codeword errors contribute largely to the shape of plot 600 in waterfall region 602 .
- Non-near codeword errors may be characterized as errors that result when the output of the iterative decoder produces an incorrect candidate codeword that differs in many positions from the true codeword.
- Non-near codeword errors may also be characterized by a relatively large number of symbol errors and a relatively large syndrome weight.
- non-near codewords may produce a large number of ones when multiplied by a parity check matrix used, for example, by iterative code encoder 104 ( FIG. 1A ).
- a non-near codeword error may indicate that the iterative decoder (e.g., the iterative decoder shown in FIG. 5 ) failed to converge.
- error-rate performance in waterfall region 602 may be readily increased (i.e., the error-rate may be lowered) by increasing the SNR.
- near codeword errors contribute largely to the shape of plot 600 in error-floor region 604 .
- Near codeword errors may be characterized as errors that result when the output of the iterative decoder produces an incorrect candidate codeword that differs in a small number of positions from the true codeword.
- Near codeword errors may also be characterized by a small syndrome weight, i.e., a near-codeword produces a relatively small number of non-zero values when multiplied by a parity check matrix.
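Syndrome weight, used above to distinguish near from non-near codeword errors, can be computed directly; the classification threshold below is purely illustrative and not a value given in the text:

```python
import numpy as np

def syndrome_weight(H, word):
    """Number of unsatisfied parity checks (non-zero syndrome entries)."""
    return int(np.sum(H.dot(word) % 2))

def classify_error(H, decoded, near_threshold=4):
    """Hedged heuristic: a small syndrome weight suggests a near-codeword
    error, a large one a non-near codeword error.  near_threshold is an
    assumed illustrative value."""
    w = syndrome_weight(H, decoded)
    if w == 0:
        return "codeword"
    return "near" if w <= near_threshold else "non-near"
```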
- error-rate performance cannot be significantly improved in error-floor region 604 by increasing the SNR.
- error-rate performance in region 606 is nearly invariant to increases in the SNR.
- the error-floor 607 may impose a limit on the error-rate performance of a communications system such as communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ), particularly in high-SNR applications such as magnetic storage and/or recording applications (error-floor 607 refers to the lowest error-rate that may be achieved by a communications system at some maximum SNR value).
- error-floor 607 may not be sufficiently low to ensure desirable performance. For example, in magnetic recording applications, an extremely low sector-error-rate may be required (e.g., an error-rate of 10⁻¹⁰ or 10⁻¹⁵ may be required).
- Error-floor 607 may indicate that a corresponding iterative decoder, such as the iterative decoder of FIG. 5 , is "trapped" in an incorrect state.
- the message values passed in the iterative decoder may have converged to incorrect values.
- an iterative decoder may repeatedly output the same (incorrect) hard decisions during successive iterations of the iterative decoder operation, resulting in poor error-rate performance.
- Noise-biasing may prevent an iterative decoder (e.g., iterative decoder 500 of FIG. 5 ) from becoming trapped in an incorrect state by continually adding pseudo-random noise to create biased channel samples, thereby ensuring that the inputs to the iterative decoder differ from the original set; this may help the iterative decoder avoid being absorbed in a sub-optimal point.
- noise-biasing 510 may drive an iterative decoder (e.g., the iterative decoder 500 of FIG. 5 ) to a different state.
- noise-biasing may drive an iterative decoder (e.g., iterative decoder 500 of FIG. 5 ) to convergence to a stable state. If noise-biasing (e.g., noise-biasing 510 of FIG. 5 ) is properly designed, this stable state may result in an iterative decoder producing a message estimate equal to a message, as desired.
- FIG. 7 illustrates the integration of noise-biasing in the system and processes of an iterative decoder.
- Iterative decoder 700 may be a more detailed embodiment of the iterative decoding process depicted in FIG. 5 . Iterative decoder 700 may obtain received samples from the output of FIR filter 710 for a channel with memory, as described in FIG. 1B , or the received samples may correspond to received channel samples as shown in FIG. 1A for the case of a channel without memory.
- FIR filter 710 may provide filtered or non-filtered received samples obtained from a channel such as channel 106 of FIG. 1A or channel 154 of FIG. 1B (e.g., FIR filter 710 may provide received samples obtained by a hard drive read-head after amplification and analog-to-digital processing in a magnetic storage system).
- the output of FIR filter 710 may then be stored in FIR memory storage module 720 .
- Samples for noise-biasing may also be stored in FIR memory storage module 720 .
- Samples for noise-biasing (e.g., samples for noise-biasing 510 ( FIG. 5 )) may be obtained as described below, except possibly for the initial iteration of the iterative decoder.
- Samples for noise-biasing may be scaled by a factor determined in scaling factor module 735 to obtain scaled samples for noise-biasing 737 .
- scaling factor module 735 may use gain device 730 to suitably increase or decrease the energy values of the stored samples for noise-biasing so that the scaled samples for noise-biasing 737 have energy-levels at a desired value relative to the energy-levels of the output of FIR filter 710 .
- Storing samples for noise-biasing in FIR memory module 720 is advantageous at least to reduce the memory usage of an iterative decoder, such as iterative decoder 700 , when used in a communications system such as communications system 100 ( FIG. 1A ) or communications system 150 ( FIG. 1B ).
- Adder 740 may add scaled samples for noise-biasing 737 to the output of FIR filter 710 to produce biased samples 745 .
- Biased samples 745 may then be input to iterative decoder 750 .
- iterative decoder 750 may be an LDPC decoder that may be operating according to communications system 100 ( FIG. 1A ) or 150 ( FIG. 1B ).
- the output of iterative decoder 750 may contain information, for example, LLR values corresponding to received samples such as received samples 110 . This information may be converted to hard decision estimates of the codeword and stored in memory by hard decision memory storage 760 .
- reconstruction filter 765 may estimate the output of FIR filter 710 by applying a filter to the values stored in hard decision memory storage 760 .
- Adder 770 may then determine the difference between the output of reconstruction filter 765 and the output of FIR filter 710 to obtain a pseudo-random noise sequence 780 .
- Pseudo-random noise sequence 780 may then be stored in FIR memory storage module 720 , possibly by overwriting or replacing the initial memory values stored in FIR memory storage module 720 .
- noise-biasing is run in a retry mode, i.e., after regular iterative decoder fails.
- hard decision memory storage 760 contains estimates for the transmitted bits. These estimates contain errors (as regular iterative decoding has failed). However, most of the bits may be correct and only a fraction of the bits may be in error. Thus, based on the estimates in hard decision memory storage 760 , the corresponding reconstructed filter output may be obtained from reconstruction filter 765 (these are ideal, noise-free FIR samples corresponding to the bits in hard decision memory storage 760 transmitted over the channel). These samples may then be written back to FIR memory storage module 720 for later use as samples for noise-biasing.
- the iterative decoder may then be run again, this time generating channel samples by adding scaled samples for noise-biasing to the original channel samples. The described process thus includes three main steps: first, the iterative decoder is run; second, samples for noise-biasing are obtained; and third, the iterative decoder is rerun with samples that have been augmented by the samples for noise-biasing.
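The second of the three steps above, deriving samples for noise-biasing from a failed decoding attempt, can be sketched as follows. The bit-to-symbol mapping and the partial-response taps standing in for reconstruction filter 765 are assumptions made for illustration:

```python
import numpy as np

def noise_samples_from_failure(fir_out, hard_bits, channel_taps):
    """Reconstruct ideal (noise-free) FIR samples from the hard-decision
    estimates by re-applying a model of the channel/FIR response, then take
    the difference from the actual FIR output as a pseudo-random noise
    sequence.  channel_taps is an assumed partial-response target,
    e.g. [1, 2, 1]."""
    symbols = 1.0 - 2.0 * np.asarray(hard_bits, dtype=float)  # bits -> +/-1
    ideal = np.convolve(symbols, channel_taps, mode="same")   # reconstruction filter
    return np.asarray(fir_out, dtype=float) - ideal           # estimated noise
```

The returned sequence plays the role of pseudo-random noise sequence 780 and could be written back to FIR memory storage module 720.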
- Iterative decoder 750 may continue to process the messages in the manner described. However, permutation control 790 may be used to permute pseudo-random noise sequence 780 and/or samples for noise-biasing (e.g., samples for noise-biasing 510 of FIG. 5 ) stored in FIR memory storage before starting each outer iteration of the iterative decoder. Permutation control 790 may generate different (effective) sets of samples for noise-biasing at each iteration. When the iterative decoding process fails, the iterative decoder, e.g., iterative decoder 750 , enters a noise-biasing mode.
- permutation control 790 may be used to change the samples for noise-biasing and the inner iterative decoding process may be rerun with the new samples for noise-biasing.
- the iterative decoder returns to the normal decoding mode.
- Permuting the stored samples for noise-biasing may be advantageous as it may reduce or eliminate the correlation in closely spaced samples.
- a deterministic or random interleaver may be used by permutation control 790 to permute the stored samples for noise-biasing.
- FIG. 8 illustrates an interleaving system and technique that may be used to operate a permutation control such as permutation control 790 ( FIG. 7 ) according to some embodiments.
- Interleaver 800 may be used to permute a set of V samples for noise-biasing (e.g., samples for noise-biasing 510 of FIG. 5 ) that are stored, for example, in FIR memory storage module 720 of FIG. 7 (V may be any suitable positive integer), in order to remove or reduce the correlation among closely spaced samples for noise-biasing.
- Interleaver 800 may write the samples for noise-biasing into memory slots in row-order, as depicted in FIG. 8 .
- For example, interleaver 800 may write samples X 1 , . . . , X d into the first row, samples X d+1 , . . . , X 2d into the second row, and so on, until all pd samples have been written.
- interleaver 800 may read samples for noise-biasing in column-order. For example, interleaver 800 may read samples X 1 , X d+1 , . . . , X (p−1)d+1 from first column 810 , samples X 2 , X d+2 , . . . , X (p−1)d+2 from second column 812 , and so on.
- the correlation between closely spaced samples for noise-biasing may be reduced compared to an iterative decoder that does not include a permutation control such as permutation control 790 ( FIG. 7 ).
- Interleaver 800 may be particularly useful for removing correlation among samples for noise-biasing if the values of the noise samples (e.g., noise samples 123 of FIG. 1A ) stored, for example, in FIR memory storage module 720 ( FIG. 7 ) are never changed, or if they are changed only infrequently.
- empty (or ‘NULL’) memory samples for noise-biasing may be created as placeholder values so that suitable values of p and d may be chosen.
- Interleaver 800 may also choose the values of p, d, and/or the sequence X 1 , X 2 , . . . , X pd so that they are different at different instants of time. For example, if a decoding failure is repeatedly encountered, interleaver 800 may change one or more of these values upon each consecutive iteration until either a codeword is decoded correctly or until a specified number of decoding iterations have been executed.
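The row-write/column-read behavior of interleaver 800 can be sketched as a simple reshape; p and d are as described above, and the padding requirement mirrors the NULL placeholder samples mentioned earlier:

```python
import numpy as np

def block_interleave(samples, p, d):
    """Row-write / column-read interleaver sketch: p rows of d samples are
    written in row-order and read back in column-order, so samples that were
    originally adjacent end up roughly d positions apart."""
    assert len(samples) == p * d, "pad with NULL placeholders if needed"
    return np.asarray(samples).reshape(p, d).T.reshape(-1)
```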
- interleaver 800 may use non-overlapping samples, for example, chosen from a common time-invariant set of samples, for noise-biasing during different outer iterations of the iterative decoder.
- Non-overlapping samples may be chosen according to an offset position P and/or an incremental step size, referred to as P step .
- samples X 1 , X 2 , . . . , X pd may be sequentially input into interleaver 800 . After a decoding failure, a shifted version of these samples may be used.
- the samples may be used in the following order: X 12 , X 13 , . . . , X pd , X 1 , . . . , X 11 .
- the ordering of the samples for noise-biasing may be changed or modified.
- the value of P can be, for example, increased by an amount P step at the start of each new outer iteration of the iterative decoder.
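The offset-based selection described above can be sketched as a cyclic shift; the function names are illustrative, not taken from the patent:

```python
import numpy as np

def select_biasing_samples(candidates, P):
    """Read the stored candidate noise samples starting at offset P and
    wrapping around, so successive outer iterations see shifted versions of
    a common time-invariant sample set (e.g., P = 11 yields the order
    X12, X13, ..., Xpd, X1, ..., X11)."""
    return np.roll(np.asarray(candidates), -P)

def next_offset(P, P_step, total):
    """Advance the offset by P_step at the start of each new outer iteration."""
    return (P + P_step) % total
```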
- FIG. 9 illustrates a hardware and firmware interface that may be used by a scaling factor module such as scaling factor module 735 ( FIG. 7 ) to determine a scaling factor in accordance with some embodiments.
- the scaling factor determined by interface 900 may be used, for example, with a gain device such as gain device 730 ( FIG. 7 ), and may be included in an iterative decoder, such as iterative decoder 700 ( FIG. 7 ).
- Interface 900 may calculate a scaling factor based on information provided by hardware 902 and/or computations performed by firmware 904 .
- the desired value of the scaling factor may depend on many parameters, including the target magnitude (or average energy-level) of the samples for noise-biasing (e.g., samples for noise-biasing 510 of FIG. 5 ).
- firmware 904 may not have direct access to some or all of these quantities in which case firmware 904 may determine or estimate the unknown or partially known quantities.
- firmware 904 may set the desired (or target) magnitude of the biasing noise to be a relatively large value so that the iterative decoder (for example, iterative decoder 700 of FIG. 7 ) is prevented from re-converging to a non-near codeword.
- firmware 904 may set the desired magnitude of the biasing noise to be a relatively small value so that the iterative decoder may converge to the correct codeword.
- hardware 902 may report to firmware 904 the energy-levels of error samples, including, for example, the error samples produced by pseudo-random noise such as pseudo-random noise sequence 780 of FIG. 7 .
- Firmware 904 may then use any suitable averaging or weighting algorithm to determine the mean magnitude of the noise source. For example, firmware 904 may use an arithmetic average, a least-square average, or a general weighted average to determine the mean magnitude of the noise source.
- hardware 902 may directly provide firmware 904 with an estimate of the average magnitude of the error samples.
- Interface 900 may include an error recovery mode during which hardware 902 sends information to firmware 904 and in which firmware 904 computes a scaling factor.
- hardware 902 may send a control signal using control line 906 to firmware 904 indicating that decoding of a codeword has completed.
- Hardware 902 may also send information on the success or failure of the decoding through data line 908 , including information on the type of failure, if a failure occurred.
- hardware 902 may indicate that a near or non-near codeword error occurred and/or may send the syndrome weight of the decoded codeword using data line 908 .
- Firmware 904 may then determine a scaling factor using, for example, any combination of the methods described above.
- Firmware 904 may then determine and send scaling factor information back to hardware 902 using data line 910 and/or control line 912 .
- Hardware 902 may use this scaling factor information to adjust the amplitudes (or energy-levels) of samples for noise-biasing (e.g., samples for noise-biasing 510 of FIG. 5 ).
- hardware 902 may use the scaling factor with a gain device such as gain device 730 of FIG. 7 to adjust the amplitudes of the samples for noise-biasing.
- FIG. 10 shows an illustrative process that may be used to determine the scaling factor in an iterative decoder, such as iterative decoder 700 ( FIG. 7 ).
- Process 1000 may be performed in firmware 904 ( FIG. 9 ) and/or may be included in scaling factor module 735 ( FIG. 7 ).
- Process 1000 may begin at step 1010 .
- initial values for the scaling factor, denoted by the parameter scaling_factor, and the offset position P may be determined.
- the value P represents an offset used to select samples for noise-biasing from a possibly larger set of candidate noise samples.
- Initial values for scaling_factor and P may be set based on look-up tables, minimum or maximum possible values for these quantities, and/or based on any other suitable technique.
- scaling_factor and/or P may be provided to hardware, for example, hardware 902 ( FIG. 9 ). Values for scaling_factor and/or P may be delivered using data line 910 ( FIG. 9 ) and/or control line 912 ( FIG. 9 ). The hardware may use the value of P to select samples for noise-biasing from, for example, FIR memory storage module 720 ( FIG. 7 ). The hardware may use the value of scaling_factor to produce or otherwise obtain scaled samples for noise-biasing 737 ( FIG. 7 ).
- hardware 902 may rerun an iterative decoder with the scaled samples for noise-biasing 737 ( FIG. 7 ), for example, hardware 902 ( FIG. 9 ) may rerun iterative decoder 750 ( FIG. 7 ).
- the presence or absence of a decoding failure may be determined based on the output of the iterative decoder used in step 1030 .
- the communications system may output the decoded codeword as a message estimate, where the message estimate may be a replica of the originally transmitted message. If it is determined at step 1040 that a decoding error did occur, then process 1000 continues at step 1060 .
- the value of a counter cont_failure is increased by one integer.
- the value of cont_failure indicates the number of consecutive iterations of process 1000 for which the scale factor has remained constant (i.e., cont_failure denotes the number of consecutive iterations for which the same target noise level has been used).
- P step new noise samples may be selected to replace P step “old” samples (i.e., samples used in the previous iteration of process 1000 ).
- P step new samples may be selected from, e.g., FIR memory storage module 720 ( FIG. 7 ) or they may be generated using, e.g., a random noise generator and/or various noise-like characteristics present in the communications system (e.g., communications system 100 of FIG. 1A or communications system 150 of FIG. 1B ).
- process 1000 may return to step 1020 and continue a next iteration of process 1000 as described above.
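The control flow of process 1000 might be sketched as follows. Parameter names such as fail_limit and scaling_step, the reset of cont_failure after a scale change, and `run_decoder` itself are assumptions used only for illustration:

```python
def scaling_retry_loop(run_decoder, num_candidates, scaling_factor=1.0,
                       P=0, P_step=8, fail_limit=3, max_rounds=12,
                       scaling_step=0.25):
    """Sketch of process 1000's flow: rerun the decoder with the current
    scaling_factor and offset P; on each failure advance P by P_step, count
    consecutive failures at the same noise level, and enlarge the scaling
    factor once cont_failure reaches fail_limit.  run_decoder(sf, P) is a
    hypothetical callable returning True on successful decoding."""
    cont_failure = 0
    for _ in range(max_rounds):
        if run_decoder(scaling_factor, P):
            return True, scaling_factor
        cont_failure += 1
        if cont_failure >= fail_limit:      # same noise level kept failing
            scaling_factor += scaling_step  # try a new target noise level
            cont_failure = 0
        P = (P + P_step) % num_candidates   # select P_step new samples
    return False, scaling_factor
```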
- FIG. 11 shows another illustrative process that may be used to determine the scaling factor in an iterative decoder, such as iterative decoder 700 ( FIG. 7 ).
- Process 1100 may be performed in firmware 904 ( FIG. 9 ) and/or may be included in scaling factor module 735 ( FIG. 7 ).
- Process 1100 may begin at step 1110 , where initial values for scaling_factor and P may be determined.
- the values of scaling_factor and P may be sent to hardware such as hardware 902 ( FIG. 9 ), and at step 1120 , an iterative decoder may be rerun using these values of scaling_factor and P.
- process 1100 may determine whether a decoding failure occurred at step 1120 .
- process 1100 may continue to step 1130 and a next codeword may be decoded. If a decoding failure is determined to have occurred at step 1125 , process 1100 may proceed to step 1135 . Steps 1110 , 1115 , 1120 , 1125 , and 1130 may be executed similarly or identically to steps 1010 , 1020 , 1030 , 1040 , and 1050 , respectively, of process 1000 ( FIG. 10 ).
- process 1100 may determine the syndrome weight of the current errant codeword and compare this syndrome weight to the syndrome weight of the previous errant codeword (i.e., in the previous iteration of process 1100 ). If the magnitude of the difference of these syndrome weights, denoted by syndrome_weight_difference, is greater than or equal to a lower threshold THRESH_LOW and less than or equal to an upper threshold THRESH_HIGH, then process 1100 may continue to step 1150 .
- step 1160 the value of curr_noise_type is set to zero (indicating that the magnitude of scaling factor may be too small to correctly decode the current codeword).
- step 1180 the value of curr_noise_type is set to one (indicating that the magnitude of scaling factor may be too large to correctly decode the current codeword).
- the value of the parameter cont_sim_failure may be increased by one, e.g., to indicate that the scaling factor has not been correctly determined in the current iteration, and process 1100 may continue to step 1190 .
- cont_sim_failure may be compared to a threshold value simFailuresThresh. If cont_sim_failure is greater than or equal to simFailuresThresh, then process 1100 may continue to step 1195 , where the scaling factor is changed by changing the value of the parameter scaling_factor.
- scaling_factor may be increased or decreased according to a fixed step size, or may be changed based on any of the parameters, counters, or variables present in process 1100 .
- Process 1100 may then continue to step 1150 . If cont_sim_failure is less than simFailuresThresh at step 1190 , then the value of the scaling factor may be chosen properly for decoding the current codeword, and process 1100 may continue to step 1197 .
- the parameter cont_sim_failure may be used to track the number of consecutive decoding failures of the current codeword with the current value of the scaling factor and cont_sim_failure may be set to the value zero (e.g., signifying that the current value of the scaling factor may be appropriately set by process 1100 ).
- Process 1100 may then continue to step 1197 .
- process 1100 may increase the value of P by an amount P step .
- P step new noise samples may be selected to replace P step samples used in the previous iteration of process 1100 . These P step new samples may be selected similarly or identically to the manner described in step 1090 of process 1000 ( FIG. 10 ).
- process 1100 may return to step 1115 and continue a next iteration of process 1100 , as described above.
- Step 1010 of process 1000 ( FIG. 10 ) and step 1110 of process 1100 both include the step of initializing the scaling factor, denoted by parameter scaling_factor, by some technique.
- the scaling factor may be set based on the syndrome weight of the errantly decoded codeword. In this scheme, a relatively large value for the scaling factor may be chosen when the syndrome weight is small (as this may indicate a near-codeword type error), and a relatively small value for the scaling factor may be chosen when the syndrome weight is large (as this may indicate a non-near codeword type error).
- the scaling factor may be chosen to be a certain minimum (or, with minor modifications to process 1000 of FIG. 10 , maximum) value.
- a set of candidate scaling factors will then be chosen by process 1000 ( FIG. 10 ), in monotonically-increasing order, until either a codeword is correctly decoded or until the iterative algorithm terminates. However, process 1000 ( FIG. 10 ) may potentially skip over a desirable value of the scaling factor.
- candidate values of the scaling factor may be chosen in non-monotonic order, but the process may then enter a loop in which only a small set of scaling factor values are tested. Alternatively, the scaling factor may be chosen based on experience and/or pre-existing data.
- scaling_factor may be adjusted, for example, by an incremental value denoted by scaling_step.
- the value of scaling_step may be adjusted dynamically in either process 1000 ( FIG. 10 ) or process 1100 , and doing so may provide faster and/or more accurate decoding of a codeword.
- scaling_step may be set based on the syndrome weight of the errantly decoded codeword.
- a relatively large value for scaling_step may be chosen when the syndrome weight is small (as this may indicate a near-codeword type error), and a relatively small value for scaling_step may be chosen when the syndrome weight is large (as this may indicate a non-near codeword type error).
- Decoding accuracy and speed when adapting the value of scaling_step may be dependent on the initial value chosen for scaling_step and either process 1000 ( FIG. 10 ) or process 1100 could miss one or several desirable values of scaling_factor.
- scaling_step may be chosen and/or designed so that all desired values of scaling_factor are chosen during successive iterations of process 1000 ( FIG. 10 ) or process 1100 .
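The syndrome-weight heuristic above might be sketched as follows; every threshold and step value here is illustrative, not prescribed by the text:

```python
def choose_scaling_step(syndrome_weight, near_thresh=16, far_thresh=64,
                        near_step=0.5, far_step=0.05, default_step=0.2):
    """A small syndrome weight (suggesting a near-codeword error) gets a
    relatively large scaling_step; a large syndrome weight (suggesting a
    non-near codeword error) gets a relatively small one.  All threshold
    and step values are assumed for illustration."""
    if syndrome_weight <= near_thresh:
        return near_step       # near-codeword: larger step
    if syndrome_weight >= far_thresh:
        return far_step        # non-near codeword: smaller step
    return default_step
```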
Priority and Related Applications
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US2244008P (provisional) | 2008-01-21 | 2008-01-21 | |
US12/357,200 | 2008-01-21 | 2009-01-21 | Iterative decoding systems using noise-biasing |
US14/279,217 (continuation of US12/357,200) | 2008-01-21 | 2014-05-15 | Iterative decoding systems using noise-biasing |
Publications (1)
Publication Number | Publication Date |
---|---|
US8745468B1 | 2014-06-03 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1545082A3 (en) * | 2003-12-17 | 2005-08-03 | Kabushiki Kaisha Toshiba | Signal decoding methods and apparatus |
- 2009-01-21: US US12/357,200, patent US8745468B1 (en), status Active
- 2014-05-15: US US14/279,217, patent US9256492B1 (en), status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6438180B1 (en) * | 1997-05-09 | 2002-08-20 | Carnegie Mellon University | Soft and hard sequence detection in ISI memory channels |
US6518892B2 (en) * | 2000-11-06 | 2003-02-11 | Broadcom Corporation | Stopping criteria for iterative decoding |
US20030023919A1 (en) * | 2001-07-12 | 2003-01-30 | Yuan Warm Shaw | Stop iteration criterion for turbo decoding |
US7739558B1 (en) * | 2005-06-22 | 2010-06-15 | Aquantia Corporation | Method and apparatus for rectifying errors in the presence of known trapping sets in iterative decoders and expedited bit error rate testing |
US20100146371A1 (en) * | 2005-09-30 | 2010-06-10 | Andrey Gennadievich Efimov | Modified Turbo-Decoding Message-Passing Algorithm for Low-Density Parity Check Codes |
US20080301517A1 (en) * | 2007-06-01 | 2008-12-04 | Agere Systems Inc. | Systems and methods for ldpc decoding with post processing |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130235835A1 (en) * | 2011-02-28 | 2013-09-12 | Nec (China) Co., Ltd. | Method and apparatus of performing outer loop link adaptation operation |
US9252851B2 (en) * | 2011-02-28 | 2016-02-02 | Nec (China) Co., Ltd. | Method and apparatus of performing outer loop link adaptation operation |
US20170041027A1 (en) * | 2013-10-28 | 2017-02-09 | Topcon Positioning Systems, Inc. | Method and device for measuring the current signal-to-noise ratio when decoding ldpc codes |
US9793928B2 (en) * | 2013-10-28 | 2017-10-17 | Topcon Positioning Systems, Inc. | Method and device for measuring the current signal-to-noise ratio when decoding LDPC codes |
US20160285475A1 (en) * | 2015-03-25 | 2016-09-29 | Topcon Positioning Systems, Inc. | Method and apparatus for identification and compensation for inversion of input bit stream in ldpc decoding |
US9621189B2 (en) * | 2015-03-25 | 2017-04-11 | Topcon Positioning Systems, Inc. | Method and apparatus for identification and compensation for inversion of input bit stream in Ldpc decoding |
US20190036548A1 (en) * | 2015-11-05 | 2019-01-31 | Shenzhen Epostar Electronics Limited Co. | Permutation network designing method, and permutation circuit of qc-ldpc decoder |
US10700708B2 (en) * | 2015-11-05 | 2020-06-30 | Shenzhen Epostar Electronics Limited Co. | Permutation network designing method, and permutation circuit of QC-LDPC decoder |
US10707902B2 (en) * | 2015-11-05 | 2020-07-07 | Shenzhen Epostar Electronics Limited Co. | Permutation network designing method, and permutation circuit of QC-LDPC decoder |
TWI707231B (en) * | 2018-09-28 | 2020-10-11 | Shenzhen Epostar Electronics Co., Ltd. (China) | Decoder design method and storage controller |
US10998920B1 (en) | 2020-02-26 | 2021-05-04 | Apple Inc. | Overcoming saturated syndrome condition in estimating number of readout errors |
US11502703B2 (en) * | 2020-05-20 | 2022-11-15 | SK Hynix Inc. | Descrambler for memory systems and method thereof |
Also Published As
Publication number | Publication date |
---|---|
US9256492B1 (en) | 2016-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9256492B1 (en) | Iterative decoding systems using noise-biasing | |
US8700973B1 (en) | Post-processing decoder of LDPC codes for improved error floors | |
US9438276B2 (en) | Method and apparatus for improved performance of iterative decoders on channels with memory | |
US8291292B1 (en) | Optimizing error floor performance of finite-precision layered decoders of low-density parity-check (LDPC) codes | |
US8601328B1 (en) | Systems and methods for near-codeword detection and correction on the fly | |
US8977941B2 (en) | Iterative decoder systems and methods | |
US8862970B1 (en) | Low power LDPC decoding under defects/erasures/puncturing | |
US8938660B1 (en) | Systems and methods for detection and correction of error floor events in iterative systems | |
US8499226B2 (en) | Multi-mode layered decoding | |
KR101337736B1 (en) | Error correction capability adjustment of ldpc codes for storage device testing | |
US10547328B1 (en) | Implementation of LLR biasing method in non-binary iterative decoding | |
US9397698B1 (en) | Methods and apparatus for error recovery in memory systems employing iterative codes | |
US9385753B2 (en) | Systems and methods for bit flipping decoding with reliability inputs | |
KR101021465B1 (en) | Apparatus and method for receiving signal in a communication system using a low density parity check code | |
US8341506B2 (en) | Techniques for correcting errors using iterative decoding | |
US9130589B2 (en) | Low density parity check decoder with dynamic scaling | |
US7949932B2 (en) | Strengthening parity check bit protection for array-like LDPC codes | |
WO2007027054A1 (en) | Soft decoding method and apparatus, error correction method and apparatus, and soft output method and apparatus | |
US8806289B1 (en) | Decoder and decoding method for a communication system | |
US8868999B1 (en) | Systems and methods for erasure correction of iterative codes | |
US8812929B1 (en) | Detecting insertion/deletion using LDPC code | |
US9020052B2 (en) | MIMO communication method and devices | |
Sparrer et al. | Communication over impulsive noise channels: Channel coding vs. compressed sensing |
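Judging from the title and the indexed keywords (samples, noise, iterative, channel), the shared idea across this family and the LLR-biasing documents above is to retry a failed iterative decode on a noise-biased copy of the channel samples, hoping to nudge the decoder out of a trapping set. The sketch below is a toy illustration of that retry loop only, not the patented method: the (7,4) Hamming parity-check matrix, the bit-flipping decoder, and the parameters (`sigma`, `retries`) are all illustrative assumptions.

```python
import random

# Toy illustration of a noise-biasing retry loop (NOT the patented
# implementation). H is the parity-check matrix of the (7,4) Hamming code,
# chosen only to keep the example tiny.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    """Parity of each check; all-zero means 'bits' is a valid codeword."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def bit_flip_decode(llrs, max_iters=10):
    """Hard-decision bit-flipping decoder: repeatedly flip the bit that
    participates in the most unsatisfied checks."""
    bits = [1 if l < 0 else 0 for l in llrs]  # sign of LLR -> hard decision
    for _ in range(max_iters):
        s = syndrome(bits)
        if not any(s):
            return bits, True  # converged to a codeword
        # Count unsatisfied checks touching each bit position.
        counts = [sum(H[i][j] for i in range(len(H)) if s[i])
                  for j in range(len(bits))]
        bits[counts.index(max(counts))] ^= 1
    return bits, False  # failed to converge (e.g. stuck in a trapping set)

def decode_with_noise_biasing(llrs, retries=20, sigma=0.8, seed=0):
    """If plain decoding fails, retry on noise-biased copies of the
    channel samples; returns (bits, attempt) with attempt -1 on failure."""
    bits, ok = bit_flip_decode(llrs)
    if ok:
        return bits, 0
    rng = random.Random(seed)
    for attempt in range(1, retries + 1):
        biased = [l + rng.gauss(0.0, sigma) for l in llrs]  # add noise bias
        bits, ok = bit_flip_decode(biased)
        if ok:
            return bits, attempt
    return bits, -1
```

The biasing step only perturbs the decoder's *input*; each retry is an independent decode, so a successful retry still yields a codeword that satisfies every parity check.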
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MARVELL INTERNATIONAL LTD., BERMUDA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MARVELL SEMICONDUCTOR, INC.;REEL/FRAME:022573/0219
Effective date: 20090120
Owner name: MARVELL SEMICONDUCTOR, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, YIFEI;VARNICA, NEDELJKO;BURD, GREGORY;REEL/FRAME:022573/0540
Effective date: 20090120
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)
Year of fee payment: 4
|
AS | Assignment |
Owner name: CAVIUM INTERNATIONAL, CAYMAN ISLANDS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MARVELL INTERNATIONAL LTD.;REEL/FRAME:052918/0001
Effective date: 20191231
|
AS | Assignment |
Owner name: MARVELL ASIA PTE, LTD., SINGAPORE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAVIUM INTERNATIONAL;REEL/FRAME:053475/0001
Effective date: 20191231
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 8