US20110131462A1 - Matrix-vector multiplication for error-correction encoding and the like - Google Patents
- Legal status: Granted
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/61—Aspects and characteristics of methods and arrangements for error correction or error detection, not provided for otherwise
- H03M13/615—Use of computational or mathematical techniques
- H03M13/616—Matrix operations, especially for generator matrices or check matrices, e.g. column or row permutations
Definitions
- the present invention relates to signal processing, and, in particular, to error-correction encoding and decoding techniques such as low-density parity-check (LDPC) encoding and decoding.
- Low-density parity-check (LDPC) encoding is an error-correction encoding scheme that has attracted significant interest in recent years due in part to its ability to operate near the Shannon limit and its relatively low implementation complexity.
- LDPC codes are characterized by parity-check matrices, wherein, in each parity-check matrix, the number of elements in the matrix that have a value of one is relatively small in comparison to the number of elements that have a value of zero.
- various methods of performing LDPC encoding have been developed. For example, according to one relatively straightforward method, LDPC encoding may be performed by multiplying a generator matrix, derived from the parity-check matrix, by user data to generate LDPC codewords.
- the present invention is an apparatus comprising a matrix-vector multiplication (MVM) component that generates a product vector based on (i) an input matrix and (ii) an input vector.
- the MVM component comprises a permuter, memory, and an XOR gate array.
- For each input sub-vector of the input vector, the permuter permutates the input sub-vector based on a set of permutation coefficients to generate a set of permuted input sub-vectors.
- the set of permutation coefficients correspond to a current block column of the input matrix, and each permutation coefficient in the set corresponds to a different permutation of a sub-matrix in the current block column.
- the memory stores a set of intermediate product sub-vectors corresponding to the product vector.
- For each input sub-vector, the XOR gate array performs exclusive disjunction on (i) the set of permuted input sub-vectors and (ii) the set of intermediate product sub-vectors to update the set of intermediate product sub-vectors.
- the XOR gate array updates all of the intermediate product sub-vectors in the set based on a current input sub-vector before updating any of the intermediate product sub-vectors in the set based on a subsequent input sub-vector.
- a set of intermediate product sub-vectors corresponding to the product vector is stored in memory, and, for each input sub-vector, exclusive disjunction is performed on (i) the set of permuted input sub-vectors and (ii) the set of intermediate product sub-vectors to update the set of intermediate product sub-vectors. All of the intermediate product sub-vectors in the set are updated based on a current input sub-vector before updating any of the intermediate product sub-vectors in the set based on a subsequent input sub-vector.
- FIG. 1 shows one implementation of a parity-check matrix (aka H-matrix) that may be used to implement a low-density parity-check (LDPC) code;
- FIG. 2 shows a simplified block diagram of one implementation of a signal processing device that may be used to encode data using an H-matrix such as the H-matrix of FIG. 1 ;
- FIG. 3 shows a simplified representation of an exemplary H-matrix in coefficient-matrix form
- FIG. 4 shows a simplified block diagram of a sparse-matrix-vector (SMV) component according to one embodiment of the present invention
- FIG. 5 shows a simplified representation of an H-matrix having a parity-bit sub-matrix in approximately lower triangular (ALT) form
- FIG. 6 shows a simplified block diagram of a signal processing device according to one embodiment of the present invention.
- FIG. 7 shows a simplified block diagram of a first parity-bit sub-vector component according to one embodiment of the present invention that may be used to implement the first parity-bit sub-vector component in FIG. 6 ;
- FIG. 8 shows a simplified block diagram of a forward substitution component according to one embodiment of the present invention.
- FIG. 9 shows a simplified block diagram of a matrix-vector multiplication component according to one embodiment of the present invention.
- FIG. 10 shows a simplified block diagram of a second parity-bit sub-vector component according to one embodiment of the present invention that may be used to implement the second parity-bit sub-vector component in FIG. 6 .
- FIG. 1 shows one implementation of a parity-check matrix 100 that may be used to implement a low-density parity-check (LDPC) code.
- each sub-matrix may be a zero matrix, an identity matrix, a circulant that is obtained by cyclically shifting an identity matrix, or a matrix in which the rows and columns are arranged in a more-random manner than an identity matrix or circulant.
- H-matrix 100 may be a regular H-matrix or an irregular H-matrix.
- A regular H-matrix is arranged such that all rows of the H-matrix have the same row Hamming weight w_r and all columns of the H-matrix have the same column Hamming weight w_c.
- A row's Hamming weight refers to the number of elements in the row having a value of 1.
- A column's Hamming weight refers to the number of elements in the column having a value of 1.
- An irregular H-matrix is arranged such that the row Hamming weight w_r of one or more rows differs from that of one or more other rows and/or the column Hamming weight w_c of one or more columns differs from that of one or more other columns.
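By way of illustration, the row and column weights can be computed directly; the toy matrix below is hypothetical (real LDPC H-matrices are far larger and sparser):

```python
# Hypothetical toy parity-check matrix; real LDPC H-matrices are far larger and sparser.
H = [
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
]

row_weights = [sum(row) for row in H]        # row Hamming weights w_r
col_weights = [sum(col) for col in zip(*H)]  # column Hamming weights w_c

# Regular: every row shares one w_r and every column shares one w_c.
is_regular = len(set(row_weights)) == 1 and len(set(col_weights)) == 1
```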
- An H-matrix may also be arranged in non-systematic form or systematic form.
- In non-systematic form, the elements of the H-matrix that correspond to user data are interspersed with the elements that correspond to parity bits.
- In systematic form, the H-matrix is arranged such that all elements of the matrix corresponding to user data are separated from all elements of the matrix corresponding to parity bits.
- H-matrix 100 is an example of an H-matrix in systematic form.
- H-matrix 100 has (i) an m × (n-m) sub-matrix H_u (to the left of the dashed line) corresponding to user data, and (ii) an m × m sub-matrix H_p (to the right of the dashed line) corresponding to parity bits.
- FIG. 2 shows a simplified block diagram of one implementation of a signal processing device 200 , which may be used to encode data using an H-matrix such as H-matrix 100 of FIG. 1 .
- Signal processing device 200 may be implemented in a communications transmission system, a hard-disk drive (HDD) system, or any other suitable application.
- Upstream processing 202 of signal processing device 200 receives an input data stream from, for example, a user application, and generates a user-data vector u for low-density parity-check (LDPC) encoding.
- the processing performed by upstream processing 202 may vary from one application to the next and may include processing such as error-detection encoding, run-length encoding, or other suitable processing.
- LDPC encoder 204 generates a parity-bit vector p based on the user-data vector u and a parity-check matrix (i.e., H-matrix) and outputs the parity-bit vector p to multiplexer 206.
- Multiplexer 206 receives the user-data vector u and inserts the parity bits of parity-bit vector p among the data bits of user-data vector u to generate a codeword vector c.
- For example, one nibble (four bits) of parity data from parity-bit vector p may be output after every ten nibbles (40 bits) of user data from user-data vector u.
- The codeword vector c is then processed by downstream processing 208, which performs processing such as digital-to-analog conversion, pre-amplification, and possibly other suitable processing depending on the application.
- The operation of LDPC encoder 204 may be derived beginning with the premise that the modulo-2 product of the H-matrix and the codeword vector c is equal to zero, as shown in Equation (1):
  H · c = 0 (mod 2)  (1)
- Equation (1) may be rewritten as shown in Equation (2):
  H_u · u + H_p · p = 0 (mod 2)  (2)
where:
- H_u is an m × (n-m) sub-matrix of H corresponding to user data
- H_p is an m × m sub-matrix of H corresponding to parity-check bits
- u is an (n-m) × 1 user-data vector
- p is an m × 1 parity-bit vector.
- Equation (2) may be rewritten as shown in Equation (3), noting that, over GF(2), subtraction is the same as addition:
  H_p · p = H_u · u (mod 2)  (3)
- Equation (3) may be solved for parity-bit vector p as shown in Equation (4):
  p = [H_p]^-1 · H_u · u  (4)
- Defining vector x = H_u · u, Equation (4) may be rewritten as shown in Equation (5):
  p = [H_p]^-1 · x  (5)
- Parity-bit vector p may be generated by (i) multiplying sub-matrix H_u by user-data vector u to generate vector x, (ii) determining the inverse [H_p]^-1 of sub-matrix H_p, and (iii) multiplying vector x by [H_p]^-1.
- Vector x may be generated by permutating sub-vectors u_n of user-data vector u and applying the permutated sub-vectors u_n to XOR logic.
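A minimal sketch of this encoding path over GF(2); the matrices are hypothetical toys, and `gf2_inv`/`gf2_matvec` are illustrative helpers, not part of the patent:

```python
def gf2_matvec(M, v):
    """Modulo-2 matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

def gf2_inv(M):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    # Augment M with the identity matrix.
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col])
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]  # row XOR = mod-2 subtract
    return [row[n:] for row in A]

# Hypothetical tiny systematic H = [H_u | H_p]; real codes are far larger and sparse.
H_u = [[1, 0, 1],
       [0, 1, 1]]
H_p = [[1, 1],
       [0, 1]]
u = [1, 0, 1]

x = gf2_matvec(H_u, u)           # (i)   x = H_u * u
p = gf2_matvec(gf2_inv(H_p), x)  # (ii)+(iii)  p = [H_p]^-1 * x
c = u + p                        # systematic codeword

H = [ru + rp for ru, rp in zip(H_u, H_p)]  # H = [H_u | H_p]
```

By construction, H · c = 0 (mod 2), which is the premise of Equation (1).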
- H-matrix 300 of FIG. 3 is depicted in coefficient-matrix (CM) form, where each element P_j,k of H-matrix 300 corresponds to a block (i.e., a sub-matrix).
- H-matrix 300 is also arranged in systematic form, having an 8 × 16 user-data sub-matrix H_u and an 8 × 8 parity-bit sub-matrix H_p.
- Each element P_j,k of H-matrix 300, herein referred to as a permutation coefficient P_j,k, that has a positive value or a value of zero indicates that the block is a z × z weight-one matrix that is permutated by that value (or not permutated in the case of a zero).
- A weight-one matrix is a matrix in which each row and each column has a Hamming weight of one.
- Such matrices include identity matrices and matrices in which the ones are arranged in a more random manner than an identity matrix.
- Each permutation coefficient P_j,k that has a value of negative one (-1) indicates that the block is a z × z zero matrix.
- The permutation coefficient P_j,k in the first block row and first block column indicates that the corresponding block is a z × z weight-one matrix that is permutated by 3.
- Each weight-one matrix may be permutated using, for example, cyclic shifting or permutations that are more random, such as those obtained using an Omega network or a Benes network.
- In the case of cyclic shifting, cyclic shifting of the weight-one matrices may be selected by the designer of the coefficient matrix to be right, left, up, or down cyclic shifting.
- An Omega network, which is well known to those of ordinary skill in the art, is a network that receives z inputs and has multiple interconnected stages of switches.
- Each switch, which receives two inputs and presents two outputs, can be set based on a bit value to (i) pass the two inputs directly to the two outputs in the order they were received (e.g., top input is provided to top output and bottom input is provided to bottom output) or (ii) swap the two inputs (e.g., such that the top input is provided to the bottom output, and vice versa).
- The outputs of each stage are connected to the inputs of each subsequent stage using a perfect shuffle connection system.
- The connections at each stage are equivalent to dividing the z inputs into two equal sets of z/2 inputs and then shuffling the two sets together, with each input from one set alternating with the corresponding input from the other set.
- An Omega network is thus capable of 2^((z/2)·log2(z)) different switch settings, and each permutation coefficient P_j,k is represented by (z/2)·log2(z) bits, each bit corresponding to one switch.
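The perfect-shuffle-plus-switches structure described above can be sketched as follows (function names are illustrative; a hardware Omega network realizes this with wiring and 2×2 switches rather than software):

```python
def perfect_shuffle(v):
    """Interleave the two halves of the inputs, alternating one from each half."""
    half = len(v) // 2
    out = []
    for a, b in zip(v[:half], v[half:]):
        out += [a, b]
    return out

def omega_network(inputs, switch_bits):
    """Apply log2(z) stages; each stage is a perfect shuffle followed by z/2
    two-input switches. switch_bits is one list of z/2 bits per stage
    (1 = swap the switch's two inputs, 0 = pass them straight through)."""
    v = list(inputs)
    for stage in switch_bits:
        v = perfect_shuffle(v)
        out = []
        for i, bit in enumerate(stage):
            a, b = v[2 * i], v[2 * i + 1]
            out += [b, a] if bit else [a, b]
        v = out
    return v
```

With all switch bits cleared, the log2(z) shuffles of a z = 8 network compose to the identity; setting bits selects other permutations.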
- A Benes network, which is also well known to those of ordinary skill in the art, is a network that receives z inputs and has 2·log2(z) - 1 stages of interconnected switches. Each stage has z/2 2×2 crossbar switches, so the Benes network has a total of z·log2(z) - z/2 2×2 crossbar switches.
- As in the Omega network, each switch receives two inputs and presents two outputs, and can be set based on a bit value either to pass the two inputs directly to the two outputs in the order they were received or to swap the two inputs.
- A Benes network is capable of realizing any permutation of its z inputs, and each permutation coefficient P_j,k is represented by z·log2(z) - z/2 bits, where each bit corresponds to one switch.
- Each sub-vector x_j of vector x may be calculated by (i) permutating each of the sixteen user-data sub-vectors u_1, . . . , u_16 according to the permutation coefficients P_j,k in the corresponding block row of H-matrix 300, and (ii) adding the permutated user-data sub-vectors to one another.
- The first sub-vector x_1 may be computed by permutating user-data sub-vectors u_1, . . . , u_16 by the permutation coefficients P_j,k of the first (i.e., top) row of H-matrix 300 and summing the results, as shown in Equation (7) below:
  x_1 = [u_1]^3 + [u_2]^0 + [u_3]^-1 + [u_4]^-1 + [u_5]^2 + [u_6]^0 + [u_7]^-1 + [u_8]^3 + [u_9]^7 + [u_10]^-1 + [u_11]^1 + [u_12]^1 + [u_13]^-1 + [u_14]^-1 + [u_15]^-1 + [u_16]^-1  (7)
- where each superscripted number represents a permutation coefficient P_j,k.
- User-data sub-vectors u_1 and u_8 are each permutated by a factor of 3;
- user-data sub-vectors u_2 and u_6 are each permutated by a factor of 0 (i.e., not permutated);
- user-data sub-vector u_5 is permutated by a factor of 2;
- user-data sub-vector u_9 is permutated by a factor of 7; and
- user-data sub-vectors u_11 and u_12 are each permutated by a factor of 1.
- User-data sub-vectors u_3, u_4, u_7, u_10, u_13, u_14, u_15, and u_16 each have a permutation coefficient of -1, representing that these user-data sub-vectors are multiplied by zero matrices and thus contribute nothing to x_1.
- Sub-vectors x_2, . . . , x_8 may be generated in a similar manner based on the permutation coefficients P_j,k of rows two through eight of user-data sub-matrix H_u of H-matrix 300, respectively.
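Assuming cyclic-shift permutations, the computation of one sub-vector x_j can be sketched as below; the block size and coefficients are hypothetical, smaller than those of Equation (7):

```python
def cyclic_shift(v, s):
    """Right cyclic shift of list v by s positions (one choice of permutation)."""
    n = len(v)
    s %= n
    return v[n - s:] + v[:n - s]

z = 4                    # hypothetical block size
coeffs = [3, 0, -1, 2]   # hypothetical coefficients for one block row; -1 = zero block
u_subs = [[1, 0, 0, 1],
          [0, 1, 1, 0],
          [1, 1, 1, 1],
          [0, 0, 1, 0]]

x1 = [0] * z
for P, u_k in zip(coeffs, u_subs):
    if P < 0:
        continue  # zero sub-matrix contributes nothing
    # Permute, then add modulo 2 (XOR).
    x1 = [a ^ b for a, b in zip(x1, cyclic_shift(u_k, P))]
```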
- FIG. 4 shows a simplified block diagram of a sparse-matrix-vector multiplication (SMVM) component 400 according to one embodiment of the present invention.
- Sparse-matrix-vector multiplication component 400 may be configured to operate with an H-matrix other than H-matrix 300 of FIG. 3, such that sparse-matrix-vector multiplication component 400 receives the same or a different number of user-data sub-vectors u_k and outputs the same or a different number of sub-vectors x_j.
- Sparse-matrix-vector multiplication (SMVM) component 400 updates the eight sub-vectors x_1, . . . , x_8 as the user-data sub-vectors are received. For example, suppose that sparse-matrix-vector multiplication component 400 receives user-data sub-vector u_1, corresponding to the first (i.e., left-most) block column of H-matrix 300.
- Each of the permutation coefficients P_j,k in the second, third, fourth, sixth, and eighth block rows of the first block column has a value of -1, indicating that each such permutation coefficient P_j,k corresponds to a block that is a zero matrix.
- Sub-vectors x_2, x_3, x_4, x_6, and x_8, which correspond to the second, third, fourth, sixth, and eighth block rows, respectively, are updated; however, since each of these permutation coefficients P_j,k has a value of -1, the value of each sub-vector x_2, x_3, x_4, x_6, and x_8 is unchanged.
- Upon receiving user-data sub-vector u_1, permuter 402 permutates user-data sub-vector u_1 by a permutation coefficient P_j,k of 3 (i.e., the permutation coefficient P_j,k in the first block column and first block row of H-matrix 300), which is received from coefficient-matrix (CM) memory 404, which may be implemented, for example, as read-only memory (ROM).
- Permuter 402 may implement cyclic shifting, or permutations that are more random, such as those obtained using an Omega network or a Benes network described above, depending on the implementation of H-matrix 300.
- The permuted user-data sub-vector [u_1]^3 is provided to XOR array 406, which comprises z XOR gates, such that each XOR gate receives a different one of the z elements of the permuted user-data sub-vector [u_1]^3.
- Vector x_1, which is initialized to zero, is also provided to XOR array 406, such that each XOR gate receives a different one of the z elements of vector x_1.
- Each XOR gate of XOR array 406 performs exclusive disjunction (i.e., the XOR logic operation) on the permuted user-data sub-vector [u_1]^3 element and the vector x_1 element that it receives, and XOR array 406 outputs updated vector x_1' to memory 408, where the updated vector x_1' is subsequently stored.
- Similarly, permuter 402 permutates user-data sub-vector u_1 by a permutation coefficient P_j,k of 20 (i.e., the permutation coefficient P_j,k in the first block column and the fifth block row of H-matrix 300), which is received from coefficient-matrix memory 404.
- The permuted user-data sub-vector [u_1]^20 is provided to XOR array 406, such that each XOR gate receives a different one of the z elements of the permuted user-data sub-vector [u_1]^20.
- Vector x_7, which is initialized to zero, is also provided to XOR array 406, such that each XOR gate receives a different one of the z elements of vector x_7.
- Each XOR gate of XOR array 406 performs exclusive disjunction on the permuted user-data sub-vector [u_1]^20 element and the vector x_7 element that it receives, and XOR array 406 outputs updated vector x_7' to memory 408, where the updated vector x_7' is subsequently stored.
- Alternatively, a sparse-matrix-vector multiplication component may comprise a buffer for storing all sixteen user-data sub-vectors u_1, . . . , u_16 and may update the eight sub-vectors x_1, . . . , x_8 using one XOR array that sequentially updates the eight sub-vectors in a time-multiplexed manner.
- However, a sparse-matrix-vector multiplication component that updates the eight sub-vectors x_1, . . . , x_8 in this manner may have a higher latency than sparse-matrix-vector multiplication component 400.
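The column-wise update schedule of SMVM component 400 can be sketched as follows, again assuming cyclic-shift permutations (the function name and toy sizes are illustrative):

```python
def cyclic_shift(v, s):
    """Right cyclic shift of list v by s positions."""
    n = len(v)
    s %= n
    return v[n - s:] + v[:n - s]

def smvm_stream(coeff_matrix, u_stream, z):
    """Column-wise update schedule: every intermediate sub-vector x_j is
    updated for the current u_k before the next u_k is consumed.
    Cyclic shifts stand in for the permutations; -1 marks a zero block."""
    x = [[0] * z for _ in coeff_matrix]
    for k, u_k in enumerate(u_stream):
        for j, row in enumerate(coeff_matrix):
            P = row[k]
            if P >= 0:
                # Permute the incoming sub-vector and XOR-accumulate into x_j.
                x[j] = [a ^ b for a, b in zip(x[j], cyclic_shift(u_k, P))]
    return x

# Toy example: two block rows, two block columns, z = 2.
x = smvm_stream([[0, -1],
                 [1,  0]],
                [[1, 0], [0, 1]], 2)
```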
- The inverse [H_p]^-1 of parity-bit sub-matrix H_p may be stored in memory.
- However, the inverse [H_p]^-1 of parity-bit sub-matrix H_p typically will not be sparse, and, as a result, a relatively large amount of memory is needed to store it.
- The user-data sub-matrix H_u is divided into an (m-g) × (n-m) sub-matrix A and a g × (n-m) sub-matrix C.
- The parity-bit sub-matrix H_p is divided into an (m-g) × g sub-matrix B, a g × g sub-matrix D, an (m-g) × (m-g) sub-matrix T, and a g × (m-g) sub-matrix E.
- Sub-matrix T is arranged in lower triangular form, where all elements of the sub-matrix positioned above the diagonal have a value of zero.
- H-matrix 500 is referred to as approximately lower triangular because lower triangular sub-matrix T is above sub-matrix E, which is not in lower triangular form.
- Based on the structure of H-matrix 500, and by dividing parity-bit vector p into a first sub-vector p_1 having length g and a second sub-vector p_2 having length m-g, Equation (2) can be rewritten as shown in Equation (8):
  A · u + B · p_1 + T · p_2 = 0
  C · u + D · p_1 + E · p_2 = 0  (8)
- Multiplying Equation (8) on the left by the matrix of Equation (9),
  [[I, 0], [-E · T^-1, I]],  (9)
eliminates sub-matrix E from the lower right-hand corner of parity-bit sub-matrix H_p and results in Equation (10) below:
  A · u + B · p_1 + T · p_2 = 0
  (-E · T^-1 · A + C) · u + (-E · T^-1 · B + D) · p_1 = 0  (10)
- Defining F = -E · T^-1 · B + D, the second row of Equation (10) may be solved for the first parity-bit sub-vector p_1:
  p_1 = F^-1 · (-E · T^-1 · A · u + C · u)  (11)
- FIG. 6 shows a simplified block diagram of a signal processing device 600 according to one embodiment of the present invention.
- Signal processing device 600 has upstream processing 602, multiplexer 606, and downstream processing 608, which may perform processing similar to that of the analogous components of signal processing device 200 of FIG. 2.
- In addition, signal processing device 600 has LDPC encoder 604, which generates parity-bit vector p based on Equations (11) and (12) above.
- LDPC encoder 604 has first parity-bit sub-vector component 610, which receives user-data vector u and generates a first parity-bit sub-vector p_1 using Equation (11).
- FIG. 7 shows a simplified block diagram of a first parity-bit sub-vector component 700 according to one embodiment of the present invention that may be used to implement first parity-bit sub-vector component 610 in FIG. 6 .
- Parity-bit sub-vector component 700 receives user-data vector u from, for example, upstream processing such as upstream processing 602 of FIG. 6, and generates the first parity-bit sub-vector p_1 shown in Equation (11).
- Sparse-matrix-vector multiplication components 702 and 706 each generate a sub-vector of vector x.
- Sparse-matrix-vector multiplication component 702 receives permutation coefficients corresponding to sub-matrix A of H-matrix 500 of FIG. 5 from coefficient-matrix memory 704, which may be implemented as ROM, and generates sub-vector x_A shown in Equation (13) below:
  x_A = A · u  (13)
- Sub-vector x_A is then provided to forward substitution component 710.
- Sparse-matrix-vector multiplication component 706 receives permutation coefficients corresponding to sub-matrix C of H-matrix 500 from coefficient-matrix memory 712, which may also be implemented as ROM, and generates sub-vector x_C shown in Equation (14) below:
  x_C = C · u  (14)
- Sub-vector x_C is then provided to XOR array 718, which is discussed further below.
- FIG. 8 shows a simplified block diagram of forward substitution component 710 of FIG. 7 according to one embodiment of the present invention.
- Forward substitution component 710 uses a forward substitution technique to generate vector w shown in Equation (15) below:
  w = T^-1 · x_A  (15)
- Sub-matrix T, which is lower triangular, has five block columns and five block rows, and is in coefficient-matrix format, where (i) each element T(j,k) is a permutation coefficient of a z × z weight-one matrix and (ii) each negative element (i.e., -1) corresponds to a z × z zero matrix.
- Each weight one matrix may be permutated using, for example, cyclic shifting or permutations that are more random, such as those obtained using an Omega network or a Benes network. In the case of cyclic shifting, cyclic shifting of the weight one matrices may be selected by the designer of the coefficient matrix to be right, left, up, or down cyclic shifting.
- Directly computing T^-1 · x_A may be computationally intensive and involves the storing of all of the elements of sub-matrix T.
- To avoid this, a forward substitution technique may be used as described below.
- The forward substitution technique may be combined with a permutation scheme that allows for the storing of only the 25 permutation coefficients, rather than all z × z × 25 elements of sub-matrix T.
- Each sub-vector w_j may be generated as shown in Equation (17):
  w_j = [x_A,j - ([w_1]^T(j,1) + . . . + [w_(j-1)]^T(j,j-1))]^(-T(j,j))  (17)
- That is, each sub-vector w_j of vector w may be calculated by permutating the previously generated sub-vectors w_i according to the permutation coefficients of sub-matrix T, combining them with x_A,j, and reverse-permutating the result by the diagonal coefficient T(j,j).
- Based on Equation (17) and the permutation coefficients of exemplary sub-matrix T of Equation (16), sub-vectors w_1, . . . , w_5 may be represented by Equations (18) through (22):
  w_1 = [x_A,1]^(-T(1,1))  (18)
  w_2 = [x_A,2 - [w_1]^T(2,1)]^(-T(2,2))  (19)
  w_3 = [x_A,3 - [[w_1]^T(3,1) + [w_2]^T(3,2)]]^(-T(3,3))  (20)
  w_4 = [x_A,4 - [[w_1]^T(4,1) + [w_2]^T(4,2) + [w_3]^T(4,3)]]^(-T(4,4))  (21)
  w_5 = [x_A,5 - [[w_1]^T(5,1) + [w_2]^T(5,2) + [w_3]^T(5,3) + [w_4]^T(5,4)]]^(-T(5,5))  (22)
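Equations (18) through (22) can be sketched in software as follows, assuming cyclic-shift permutations; the coefficient values and sizes below are hypothetical:

```python
def cyclic_shift(v, s):
    """Right cyclic shift of list v by s positions; a negative s shifts left."""
    n = len(v)
    s %= n
    return v[n - s:] + v[:n - s]

def forward_substitute(T, xA):
    """Block forward substitution per Equations (18)-(22): each w_j is x_A,j
    XORed with the already-computed w_i permuted by T(j,i), then
    reverse-permuted (negative shift) by the diagonal coefficient T(j,j).
    Cyclic shifts stand in for the permutations; -1 marks a zero block."""
    w = []
    for j, acc in enumerate([list(v) for v in xA]):
        for i, w_i in enumerate(w):
            if T[j][i] >= 0:
                shifted = cyclic_shift(w_i, T[j][i])
                acc = [a ^ b for a, b in zip(acc, shifted)]  # minus = plus over GF(2)
        w.append(cyclic_shift(acc, -T[j][j]))  # undo the diagonal permutation
    return w

# Toy example: two block rows of a lower-triangular T, z = 4.
w = forward_substitute([[1, -1],
                        [0,  2]],
                       [[1, 0, 0, 0], [0, 1, 1, 0]])
```

Over GF(2) the subtraction in Equation (17) is the same XOR as the addition, which is why a single XOR array suffices in hardware.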
- Forward substitution component 710 is shown as receiving five sub-vectors x_A,1, . . . , x_A,5 and outputting five sub-vectors w_1, . . . , w_5.
- Forward substitution component 710 may be configured to operate with a sub-matrix T other than the sub-matrix T illustrated in Equation (16), such that forward substitution component 710 receives the same or a different number of sub-vectors x_A,j and outputs the same or a different number of sub-vectors w_j.
- Upon receiving sub-vector x_A,1, XOR array 804 provides sub-vector x_A,1 to reverse permuter 806. XOR array 804 may output sub-vector x_A,1 without performing any processing, or XOR array 804 may apply exclusive disjunction to (i) sub-vector x_A,1 and (ii) an initialized vector having a value of zero, resulting in no change to sub-vector x_A,1.
- Sub-vector x_A,1 is then permutated according to the negative of permutation coefficient T(1,1), received from coefficient-matrix memory 712, as shown in Equation (18).
- Permuter 802 and reverse permuter 806 may implement cyclic shifting, or permutations that are more random, such as those obtained using an Omega network or a Benes network described above, depending on the implementation of sub-matrix T in Equation (16).
- In the case of cyclic shifting, to obtain negative shifts (i.e., -T(1,1)), reverse permuter 806 performs cyclic shifting in the opposite direction of permuter 802.
- For example, if permuter 802 performs right cyclic shifting, then reverse permuter 806 performs left cyclic shifting.
- The permuted sub-vector x_A,1 is then stored in memory 808 as sub-vector w_1.
- memory 808 To generate sub-vector ⁇ right arrow over (w) ⁇ 2 , memory 808 provides sub-vector ⁇ right arrow over (w) ⁇ 1 to permuter 802 , which permutates sub-vector ⁇ right arrow over (w) ⁇ 1 by permutation coefficient T(2,1) received from coefficient-matrix memory 712 as shown in Equation (19).
- XOR array 804 applies exclusive disjunction to (i) sub-vector ⁇ right arrow over (x) ⁇ A,2 and (ii) the permuted sub-vector ⁇ right arrow over (w) ⁇ 1 T(2,1) , and the output of XOR array 804 is permutated by the negative of permutation coefficient T(2,2) received from coefficient-matrix memory 712 as shown in Equation (19).
- the output of reverse permuter 806 is then stored in memory 808 as sub-vector ⁇ right arrow over (w) ⁇ 2 .
- memory 808 To generate sub-vector ⁇ right arrow over (w) ⁇ 3 , memory 808 provides sub-vectors ⁇ right arrow over (w) ⁇ 1 and ⁇ right arrow over (w) ⁇ 2 to permuter 802 , which permutates the vectors by permutation coefficients T(3,1) and T(3,2), respectively as shown in Equation (20).
- XOR array 804 applies exclusive disjunction to (i) permuted sub-vector {right arrow over (w)}1 T(3,1), (ii) permuted sub-vector {right arrow over (w)}2 T(3,2), and (iii) sub-vector {right arrow over (x)}A,3.
- The output of XOR array 804 is permutated by the negative of permutation coefficient T(3,3) received from coefficient-matrix memory 712, as shown in Equation (20).
- The output of reverse permuter 806 is then stored in memory 808 as sub-vector {right arrow over (w)}3.
- This process is continued using sub-vectors {right arrow over (w)}1, {right arrow over (w)}2, and {right arrow over (w)}3 to generate sub-vector {right arrow over (w)}4 and using sub-vectors {right arrow over (w)}1, {right arrow over (w)}2, {right arrow over (w)}3, and {right arrow over (w)}4 to generate sub-vector {right arrow over (w)}5.
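The forward-substitution recurrence of Equations (18)-(20) can be sketched as follows. This is a minimal model, assuming the permuters implement right cyclic shifting; the shift coefficients and the function and variable names are illustrative, not taken from the patent.

```python
def cyc(v, s):
    # right cyclic shift of list v by s positions; a negative s shifts in the
    # opposite direction, modeling reverse permuter 806
    s %= len(v)
    return v[-s:] + v[:-s]

def xor(a, b):
    # element-wise exclusive disjunction, modeling XOR array 804
    return [i ^ j for i, j in zip(a, b)]

def forward_substitution(T, x):
    # T[j][l] holds the shift coefficient T(j+1, l+1) of a lower-triangular block
    # matrix (-1 would mark a zero block). Each new sub-vector is
    #   w_j = cyc(x_j XOR sum_{l<j} cyc(w_l, T[j][l]), -T[j][j])
    w = []
    for j, xj in enumerate(x):
        acc = xj[:]
        for l in range(j):
            if T[j][l] >= 0:
                acc = xor(acc, cyc(w[l], T[j][l]))
        w.append(cyc(acc, -T[j][j]))
    return w
```

Each sub-vector w_j is produced only from already-stored sub-vectors, mirroring how memory 808 feeds previously computed results back through permuter 802.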
- The present invention may also be applied to backward substitution for upper-triangular matrices. Such embodiments may solve the equations at the bottom of the matrix and substitute the results into rows above (i.e., backward substitution). Suppose, for example, that the architecture of FIG. 8 is used for backward substitution.
- Sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5 may be determined beginning with sub-vector {right arrow over (w)}5 and ending with sub-vector {right arrow over (w)}1.
- Sub-vector {right arrow over (w)}5 may be determined based on (i) permutation coefficients from the fifth row of an upper-triangular sub-matrix T (not shown) and (ii) fifth input sub-vector {right arrow over (x)}A,5.
- Sub-vector {right arrow over (w)}4 may be determined based on (i) permutation coefficients from the fourth row of the upper-triangular sub-matrix T, (ii) sub-vector {right arrow over (w)}5, and (iii) fourth input sub-vector {right arrow over (x)}A,4, and so forth.
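The bottom-up counterpart to forward substitution can be sketched in the same style. As before, this assumes cyclic-shift permutations with hypothetical coefficients; names are illustrative.

```python
def cyc(v, s):
    # right cyclic shift by s positions; negative s shifts in the opposite direction
    s %= len(v)
    return v[-s:] + v[:-s]

def xor(a, b):
    return [i ^ j for i, j in zip(a, b)]

def backward_substitution(T, x):
    # T[j][l] (l >= j) holds shift coefficients of an upper-triangular block matrix;
    # -1 marks a zero block. Rows are solved from the bottom up, so w_n is
    # determined first and w_1 last.
    n = len(x)
    w = [None] * n
    for j in range(n - 1, -1, -1):
        acc = x[j][:]
        for l in range(j + 1, n):
            if T[j][l] >= 0:
                acc = xor(acc, cyc(w[l], T[j][l]))
        w[j] = cyc(acc, -T[j][j])
    return w
```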
- Forward substitution component 710 outputs vector {right arrow over (w)}, comprising sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5, to sparse-matrix-vector multiplication component 714.
- Sparse-matrix-vector multiplication component 714 receives permutation coefficients corresponding to sub-matrix E of H-matrix 500 of FIG. 5 from memory 716, which may be implemented as ROM, and generates vector {right arrow over (q)} as shown in Equation (23) below:
- Sparse-matrix-vector multiplication component 714 may be implemented in a manner similar to that described above in relation to sparse-matrix-vector multiplication component 400 of FIG. 4 or in an alternative manner such as those described above in relation to sparse-matrix-vector multiplication component 400. However, rather than receiving the user-data vector {right arrow over (u)} and generating vector {right arrow over (x)} like sparse-matrix-vector multiplication component 400, sparse-matrix-vector multiplication component 714 receives vector {right arrow over (w)} and generates vector {right arrow over (q)}.
- Vector {right arrow over (q)} is provided to XOR array 718 along with vector {right arrow over (x)}C, and XOR array 718 performs exclusive disjunction on vectors {right arrow over (q)} and {right arrow over (x)}C to generate vector {right arrow over (s)} as shown in Equation (24) below:
- Vector {right arrow over (s)} is then output to matrix-vector multiplication (MVM) component 720.
- Matrix-vector multiplication (MVM) component 720 receives elements of matrix −F−1 and performs matrix-vector multiplication to generate first parity-bit sub-vector {right arrow over (p)}1 as shown in Equation (25):
- Matrix −F−1 may be pre-computed and stored in memory 722, which may be implemented as ROM. Note that, unlike coefficient-matrix memories 704, 708, 712, and 716, which store only permutation coefficients, memory 722 stores all of the elements of matrix −F−1.
- FIG. 9 shows a simplified block diagram of matrix-vector multiplication component 720 according to one embodiment of the present invention.
- Matrix-vector multiplication component 720 has AND gate array 902, which applies logical conjunction (i.e., the AND logic operation) to (i) vector {right arrow over (s)}, received from, for example, XOR array 718 of FIG. 7, and (ii) the elements of matrix −F−1, received from memory 722.
- The outputs of AND gate array 902 are then applied to XOR array 904, which performs exclusive disjunction on the outputs to generate the elements of first parity-bit sub-vector {right arrow over (p)}1.
- To further understand the operation of matrix-vector multiplication component 720, consider the following simplified example. Suppose that matrix −F−1 and vector {right arrow over (s)} have the values shown in Equations (26) and (27), respectively, below:
- In this example, matrix-vector multiplication component 720 generates first parity-bit sub-vector {right arrow over (p)}1=[1,0].
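The AND/XOR datapath of FIG. 9 can be sketched as follows. The matrix and vector values used here are hypothetical stand-ins (the actual values of Equations (26) and (27) are not reproduced above), and the function name is illustrative.

```python
def gf2_matvec(F_inv, s):
    # AND gate array: each matrix element ANDed with the matching vector element;
    # XOR array: each output bit is the parity (XOR) of those products
    p = []
    for row in F_inv:
        bit = 0
        for f, sv in zip(row, s):
            bit ^= f & sv
        p.append(bit)
    return p
```

With the assumed values F_inv = [[1, 0, 1], [1, 1, 0]] and s = [1, 1, 0], the sketch produces [1, 0].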
- FIG. 10 shows a simplified block diagram of a second parity-bit sub-vector component 1000 according to one embodiment of the present invention that may be used to implement second parity-bit sub-vector component 612 in FIG. 6 .
- Parity-bit sub-vector component 1000 receives (i) first parity-bit sub-vector {right arrow over (p)}1 from, for example, parity-bit vector component 700 of FIG. 7, and (ii) sub-vector {right arrow over (x)}A, and generates second parity-bit sub-vector {right arrow over (p)}2 as shown in Equation (12).
- Sub-vector {right arrow over (x)}A may be received from, for example, sparse-matrix-vector multiplication (SMVM) component 702 in FIG. 7, or second parity-bit sub-vector component 1000 may generate sub-vector {right arrow over (x)}A using its own sparse-matrix-vector multiplication component (not shown) that is similar to sparse-matrix-vector multiplication (SMVM) component 702.
- First parity-bit sub-vector {right arrow over (p)}1 is processed by sparse-matrix-vector multiplication (SMVM) component 1002, which may be implemented in a manner similar to that of sparse-matrix-vector multiplication component 400 of FIG. 4 or in an alternative manner such as those described above in relation to sparse-matrix-vector multiplication component 400.
- Sparse-matrix-vector multiplication component 1002 receives permutation coefficients corresponding to sub-matrix B of H-matrix 500 of FIG. 5 from memory 1004, which may be implemented as ROM, and generates vector {right arrow over (v)} as shown in Equation (30) below:
- Vector {right arrow over (v)} is provided to XOR array 1006 along with vector {right arrow over (x)}A, and XOR array 1006 performs exclusive disjunction on vectors {right arrow over (v)} and {right arrow over (x)}A to generate vector {right arrow over (o)} as shown in Equation (31):
- Forward substitution component 1008 receives (i) permutation coefficients corresponding to sub-matrix T of H-matrix 500 of FIG. 5 from memory 1010, which may be implemented as ROM, and (ii) vector {right arrow over (o)}, and generates second parity-bit sub-vector {right arrow over (p)}2 as shown in Equation (32) below:
- Forward substitution component 1008 may be implemented in a manner similar to forward substitution component 710 of FIG. 8, albeit receiving vector {right arrow over (o)} rather than vector {right arrow over (x)}A and outputting second parity-bit sub-vector {right arrow over (p)}2 rather than vector {right arrow over (w)}.
- Although the present invention has been described relative to specific exemplary H-matrices (e.g., H-matrices 100 and 300), the present invention is not so limited. The present invention may be implemented for various H-matrices that are the same size as or a different size from these exemplary matrices.
- For example, the present invention may be implemented for H-matrices in which the numbers of columns, block columns, rows, block rows, and messages processed per clock cycle, the sizes of the sub-matrices, and the column and/or row hamming weights differ from those of H-matrices 100 and 300.
- Such H-matrices may be, for example, cyclic, quasi-cyclic, non-cyclic, regular, or irregular H-matrices.
- Although embodiments of the present invention have been described in the context of LDPC codes, the present invention is not so limited. Embodiments of the present invention could be implemented for any code, including error-correction codes, that can be defined by a graph, e.g., tornado codes and structured IRA codes, since graph-defined codes suffer from trapping sets.
- Although the present invention has been described in the context of hardware implementations as circuits, including possible implementation as a single integrated circuit, a multi-chip module, a single card, or a multi-card circuit pack, the present invention is not so limited. Various functions of circuit elements may also be implemented as processing blocks in a software program.
- Such software may be employed in, for example, a digital signal processor, micro-controller, or general purpose computer.
- the present invention can be embodied in the form of methods and apparatuses for practicing those methods.
- the present invention can also be embodied in the form of program code embodied in tangible media, such as magnetic recording media, optical recording media, solid state memory, floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
- the present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
- program code When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits.
- the present invention can also be embodied in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus of the present invention.
- Each numerical value and range should be interpreted as being approximate, as if the word “about” or “approximately” preceded the value or range.
- The use of figure numbers and/or figure reference labels in the claims is intended to identify one or more possible embodiments of the claimed subject matter in order to facilitate the interpretation of the claims. Such use is not to be construed as necessarily limiting the scope of those claims to the embodiments shown in the corresponding figures.
Description
- This application claims the benefit of the filing dates of U.S. provisional application No. 61/265,826, filed on Dec. 2, 2009 as attorney docket no. 08-0714US1PROV, and U.S. provisional application No. 61/265,836, filed on Dec. 2, 2009 as attorney docket no. 08-0714US2PROV, the teachings of both of which are incorporated herein by reference in their entirety.
- The subject matter of this application is related to:
- U.S. patent application Ser. No. 12/113,729 filed May 1, 2008,
- U.S. patent application Ser. No. 12/113,755 filed May 1, 2008,
- U.S. patent application Ser. No. 12/323,626 filed Nov. 26, 2008,
- U.S. patent application Ser. No. 12/401,116 filed Mar. 10, 2009,
- PCT patent application no. PCT/US08/86523 filed Dec. 12, 2008,
- PCT patent application no. PCT/US08/86537 filed Dec. 12, 2008,
- PCT patent application no. PCT/US09/39918 filed Apr. 8, 2009,
- PCT application no. PCT/US09/39279 filed on Apr. 2, 2009,
- U.S. patent application Ser. No. 12/420,535 filed Apr. 8, 2009,
- U.S. patent application Ser. No. 12/475,786 filed Jun. 1, 2009,
- U.S. patent application Ser. No. 12/260,608 filed on Oct. 29, 2008,
- PCT patent application no. PCT/US09/41215 filed on Apr. 21, 2009,
- U.S. patent application Ser. No. 12/427,786 filed on Apr. 22, 2009,
- U.S. patent application Ser. No. 12/492,328 filed on Jun. 26, 2009,
- U.S. patent application Ser. No. 12/492,346 filed on Jun. 26, 2009,
- U.S. patent application Ser. No. 12/492,357 filed on Jun. 26, 2009,
- U.S. patent application Ser. No. 12/492,374 filed on Jun. 26, 2009,
- U.S. patent application Ser. No. 12/538,915 filed on Aug. 11, 2009,
- U.S. patent application Ser. No. 12/540,078 filed on Aug. 12, 2009,
- U.S. patent application Ser. No. 12/540,035 filed on Aug. 12, 2009,
- U.S. patent application Ser. No. 12/540,002 filed on Aug. 12, 2009,
- U.S. patent application Ser. No. 12/510,639 filed on Jul. 28, 2009,
- U.S. patent application Ser. No. 12/524,418 filed on Jul. 24, 2009,
- U.S. patent application Ser. No. 12/510,722 filed on Jul. 28, 2009, and
- U.S. patent application Ser. No. 12/510,667 filed on Jul. 28, 2009,
the teachings of all of which are incorporated herein by reference in their entirety.
- 1. Field of the Invention
- The present invention relates to signal processing, and, in particular, to error-correction encoding and decoding techniques such as low-density parity-check (LDPC) encoding and decoding.
- 2. Description of the Related Art
- Low-density parity-check (LDPC) encoding is an error-correction encoding scheme that has attracted significant interest in recent years due in part to its ability to operate near the Shannon limit and its relatively low implementation complexity. LDPC codes are characterized by parity-check matrices, wherein, in each parity-check matrix, the number of elements in the matrix that have a value of one is relatively small in comparison to the number of elements that have a value of zero. Over the last few years, various methods of performing LDPC encoding have been developed. For example, according to one relatively straightforward method, LDPC encoding may be performed by multiplying a generator matrix, derived from the parity-check matrix, by user data to generate LDPC codewords. A discussion of this and other LDPC encoding methods may be found in Richardson, “Efficient Encoding of Low-Density Parity-Check Codes,” IEEE Transactions on Information Theory, Vol. 47, No. 2, pgs. 638-656, February 2001, and Zhong, “Block-LDPC: A Practical LDPC Coding System Design Approach,” IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 52, No. 4, pgs. 766-775, April 2005, the teachings of all of which are incorporated herein by reference in their entirety.
- In one embodiment, the present invention is an apparatus comprising a matrix-vector multiplication (MVM) component that generates a product vector based on (i) an input matrix and (ii) an input vector. The MVM component comprises a permuter, memory, and an XOR gate array. The permuter, for each input sub-vector of the input vector, permutates the input sub-vector based on a set of permutation coefficients to generate a set of permuted input sub-vectors. The set of permutation coefficients corresponds to a current block column of the input matrix, and each permutation coefficient in the set corresponds to a different permutation of a sub-matrix in the current block column. The memory stores a set of intermediate product sub-vectors corresponding to the product vector. The XOR gate array, for each input sub-vector, performs exclusive disjunction on (i) the set of permuted input sub-vectors and (ii) the set of intermediate product sub-vectors to update the set of intermediate product sub-vectors. The XOR gate array updates all of the intermediate product sub-vectors in the set based on a current input sub-vector before updating any of the intermediate product sub-vectors in the set based on a subsequent input sub-vector.
- In another embodiment, the present invention is an encoder-implemented method for generating a product vector based on (i) an input matrix and (ii) an input vector. The method comprises permutating, for each input sub-vector of the input vector, the input sub-vector based on a set of permutation coefficients to generate a set of permuted input sub-vectors. The set of permutation coefficients corresponds to a current block column of the input matrix, and each permutation coefficient in the set corresponds to a different permutation of a sub-matrix in the current block column. A set of intermediate product sub-vectors corresponding to the product vector is stored in memory, and, for each input sub-vector, exclusive disjunction is performed on (i) the set of permuted input sub-vectors and (ii) the set of intermediate product sub-vectors to update the set of intermediate product sub-vectors. All of the intermediate product sub-vectors in the set are updated based on a current input sub-vector before updating any of the intermediate product sub-vectors in the set based on a subsequent input sub-vector.
- Other aspects, features, and advantages of the present invention will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements.
-
FIG. 1 shows one implementation of a parity-check matrix (aka H-matrix) that may be used to implement a low-density parity-check (LDPC) code;
- FIG. 2 shows a simplified block diagram of one implementation of a signal processing device that may be used to encode data using an H-matrix such as the H-matrix of FIG. 1;
- FIG. 3 shows a simplified representation of an exemplary H-matrix in coefficient-matrix form;
- FIG. 4 shows a simplified block diagram of a sparse-matrix-vector (SMV) component according to one embodiment of the present invention;
- FIG. 5 shows a simplified representation of an H-matrix having a parity-bit sub-matrix in approximately lower triangular (ALT) form;
- FIG. 6 shows a simplified block diagram of a signal processing device according to one embodiment of the present invention;
- FIG. 7 shows a simplified block diagram of a first parity-bit sub-vector component according to one embodiment of the present invention that may be used to implement the first parity-bit sub-vector component in FIG. 6;
- FIG. 8 shows a simplified block diagram of a forward substitution component according to one embodiment of the present invention;
- FIG. 9 shows a simplified block diagram of a matrix-vector multiplication component according to one embodiment of the present invention; and
- FIG. 10 shows a simplified block diagram of a second parity-bit sub-vector component according to one embodiment of the present invention that may be used to implement the second parity-bit sub-vector component in FIG. 6.
- Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments. The same applies to the term “implementation.”
-
FIG. 1 shows one implementation of a parity-check matrix 100 that may be used to implement a low-density parity-check (LDPC) code. Parity-check matrix 100, commonly referred to as an H-matrix, comprises 72 sub-matrices (or blocks) Bj,k that are arranged in m=6 rows (i.e., block rows), where j=1, . . . , m, and n=12 columns (i.e., block columns), where k=1, . . . , n. Each sub-matrix Bj,k has a number z of rows and a number z of columns (i.e., each sub-matrix Bj,k is a z×z matrix), and therefore H-matrix 100 has M=m×z total rows and N=n×z total columns. In some relatively simple implementations, z=1 such that H-matrix 100 has M=6 total rows and N=12 total columns. In more complex implementations, z may be greater than 1. For more complex implementations, each sub-matrix may be a zero matrix, an identity matrix, a circulant that is obtained by cyclically shifting an identity matrix, or a matrix in which the rows and columns are arranged in a more-random manner than an identity matrix or circulant.
- H-matrix 100 may be a regular H-matrix or an irregular H-matrix. A regular H-matrix is arranged such that all rows of the H-matrix have the same row hamming weight wr and all columns of the H-matrix have the same column hamming weight wc. A row's hamming weight refers to the number of elements in the row having a value of 1. Similarly, a column's hamming weight refers to the number of elements in the column having a value of 1. An irregular H-matrix is arranged such that the row hamming weight wr of one or more rows differs from the row hamming weight wr of one or more other rows and/or the column hamming weight wc of one or more columns differs from the column hamming weight wc of one or more other columns.
- An H-matrix may also be arranged in non-systematic form or systematic form. In non-systematic form, the elements of the H-matrix that correspond to user data are interspersed with the elements of the H-matrix that correspond to parity bits. In systematic form, the H-matrix is arranged such that all elements of the matrix corresponding to user data are separated from all elements of the matrix corresponding to parity bits. H-matrix 100 is an example of an H-matrix in systematic form. As shown, H-matrix 100 has (i) an m×(n−m) sub-matrix Hu (to the left of the dashed line) corresponding to user data, and (ii) an m×m sub-matrix Hp (to the right of the dashed line) corresponding to parity bits. -
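The row- and column-hamming-weight definitions above can be sketched with a short check over a binary matrix; the matrix values below are hypothetical.

```python
def row_col_weights(H):
    # hamming weight of each row and each column (count of elements equal to 1)
    rows = [sum(r) for r in H]
    cols = [sum(c) for c in zip(*H)]
    return rows, cols

def is_regular(H):
    # regular: every row shares one row weight and every column shares one column weight
    rows, cols = row_col_weights(H)
    return len(set(rows)) == 1 and len(set(cols)) == 1
```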
FIG. 2 shows a simplified block diagram of one implementation of a signal processing device 200, which may be used to encode data using an H-matrix such as H-matrix 100 of FIG. 1. Signal processing device 200 may be implemented in a communications transmission system, a hard-disk drive (HDD) system, or any other suitable application. Upstream processing 202 of signal processing device 200 receives an input data stream from, for example, a user application, and generates a user-data vector {right arrow over (u)} for low-density parity-check (LDPC) encoding. The processing performed by upstream processing 202 may vary from one application to the next and may include processing such as error-detection encoding, run-length encoding, or other suitable processing. -
LDPC encoder 204 generates a parity-bit vector {right arrow over (p)} based on the user-data vector {right arrow over (u)} and a parity-check matrix (i.e., H-matrix) and outputs the parity-bit vector {right arrow over (p)} to multiplexer 206. Multiplexer 206 receives the user-data vector {right arrow over (u)} and inserts the parity bits of parity-bit vector {right arrow over (p)} among the data bits of user-data vector {right arrow over (u)} to generate a codeword vector {right arrow over (c)}. For example, according to one implementation, one nibble (four bits) of parity data from parity-bit vector {right arrow over (p)} may be output after every ten nibbles (40 bits) of user data from user-data vector {right arrow over (u)}. Generally, the length of codeword vector {right arrow over (c)} is the same as the number of columns of the parity-check matrix. For example, if LDPC encoder 204 performs encoding based on H-matrix 100 of FIG. 1, which has N=12×z total columns, then codeword vector {right arrow over (c)} will have N=12×z total elements. The codeword vector {right arrow over (c)} is then processed by downstream processing 208, which performs processing such as digital-to-analog conversion, pre-amplification, and possibly other suitable processing depending on the application. - The processing performed by
LDPC encoder 204 to generate parity-bit vector {right arrow over (p)} may be derived beginning with the premise that the modulo-2 product of the H-matrix and the codeword vector {right arrow over (c)} is equal to zero as shown in Equation (1): -
H{right arrow over (c)}=0 (1) - If the H-matrix of Equation (1) is in systematic form, then Equation (1) may be rewritten as shown in Equation (2):
-
- where Hu is an m×(n−m) sub-matrix of H corresponding to user data, Hp is an m×m sub-matrix of H corresponding to parity-check bits, {right arrow over (u)} is an (n−m)×1 user-data vector, and {right arrow over (p)} is an m×1 parity-bit vector.
- Equation (2) may be rewritten as shown in Equation (3):
-
Hp{right arrow over (p)}=Hu{right arrow over (u)}, (3) - and Equation (3) may be solved for parity-bit vector {right arrow over (p)} as shown in Equation (4):
-
{right arrow over (p)}=[H p]−1 [H u {right arrow over (u)}]. (4) - Substituting {right arrow over (x)}=Hu{right arrow over (u)} into Equation (4) yields Equation (5) as follows:
-
{right arrow over (p)}=[H p]−1 {right arrow over (x)}. (5) - Using Equation (5), parity-bit vector {right arrow over (p)} may be generated by (i) multiplying sub-matrix Hu by user-data vector {right arrow over (u)} to generate vector {right arrow over (x)}, (ii) determining the inverse [Hp]−1 of sub-matrix Hp, and (iii) multiplying vector {right arrow over (x)} by [Hp]−1.
- Sparse-Matrix-Vector Multiplication
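As context for this section, the brute-force (dense) evaluation of Equation (5) on a small example can be sketched as follows; the sparse-matrix-vector techniques below avoid forming such dense products when sub-matrix Hu is sparse. The matrix and vector values here are hypothetical, and a singular Hp would make the inversion fail.

```python
def gf2_matvec(M, v):
    # matrix-vector product over GF(2): AND then modulo-2 sum per row
    return [sum(m & x for m, x in zip(row, v)) % 2 for row in M]

def gf2_inv(M):
    # Gauss-Jordan elimination over GF(2); raises StopIteration if M is singular
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def parity_bits(Hu, Hp, u):
    # Equation (5): p = [Hp]^{-1} x, where x = Hu u (all arithmetic modulo 2)
    x = gf2_matvec(Hu, u)
    return gf2_matvec(gf2_inv(Hp), x)
```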
- Suppose that user-data sub-matrix Hu is sparse. Vector {right arrow over (x)} may be generated by permutating sub-vectors {right arrow over (u)}n of user-data vector {right arrow over (u)} and applying the permutated sub-vectors {right arrow over (u)}n to XOR logic. As an example, consider exemplary H-matrix 300 of FIG. 3. H-matrix 300 is depicted in coefficient-matrix (CM) form, where each element Pj,k of H-matrix 300 corresponds to a block (i.e., a sub-matrix). H-matrix 300 is also arranged in systematic form, having an 8×16 user-data sub-matrix Hu and an 8×8 parity-bit sub-matrix Hp. H-matrix 300 has 192 total blocks arranged in n=24 block columns (i.e., k=1, . . . , 24) and m=8 block rows (i.e., j=1, . . . , 8). Each element Pj,k of H-matrix 300, herein referred to as a permutation coefficient Pj,k, that has a positive value or a value of zero indicates that the block is a z×z weight one matrix that is permutated by that value (or not permutated in the case of a zero). A weight one matrix is a matrix in which each row and each column has a hamming weight of one. Such matrices include identity matrices and matrices in which the ones are arranged in a more random manner than an identity matrix. Each permutation coefficient Pj,k that has a value of negative one indicates that the block is a z×z zero matrix. Thus, for example, the permutation coefficient Pj,k in the first block row and first block column (i.e., the upper left-most element of H-matrix 300) indicates that the corresponding block is a z×z weight one matrix that is permutated by 3.
- Each weight one matrix may be permutated using, for example, cyclic shifting or permutations that are more random, such as those obtained using an Omega network or a Benes network. In the case of cyclic shifting, the cyclic shifting of the weight one matrices may be selected by the designer of the coefficient matrix to be right, left, up, or down cyclic shifting. An Omega network, which is well known to those of ordinary skill in the art, is a network that receives z inputs and has multiple interconnected stages of switches.
Each switch, which receives two inputs and presents two outputs, can be set based on a bit value to (i) pass the two inputs directly to the two outputs in the order they were received (e.g., top input is provided to top output and bottom input is provided to bottom output) or (ii) swap the two inputs (e.g., such that the top input is provided to the bottom output, and vice versa). The outputs of each stage are connected to the inputs of each subsequent stage using a perfect shuffle connection system. In other words, the connections at each stage are equivalent to dividing z inputs into two equal sets of z/2 inputs and then shuffling the two sets together, with each input from one set alternating with the corresponding input from the other set. For z inputs, an Omega network is capable of performing 2z different permutations, and each permutation coefficient Pj,k is represented by (z/2)log2(z) bits, each bit corresponding to one switch.
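A minimal sketch of an Omega-network permutation follows, assuming z is a power of two, log2(z) stages, and (z/2) switches per stage, each stage preceded by a perfect shuffle. This is an illustrative software model of the routing behavior described above, not the patent's hardware.

```python
def perfect_shuffle(v):
    # interleave the first half with the second half: [a0..a3, b0..b3] -> [a0, b0, a1, b1, ...]
    half = len(v) // 2
    out = []
    for a, b in zip(v[:half], v[half:]):
        out.extend([a, b])
    return out

def omega_permute(v, switch_bits):
    # switch_bits holds (z/2) * log2(z) bits, one per 2x2 switch:
    # 0 passes the pair straight through, 1 swaps the pair
    z = len(v)
    stages = z.bit_length() - 1  # log2(z) for z a power of two
    assert len(switch_bits) == (z // 2) * stages
    bits = iter(switch_bits)
    for _ in range(stages):
        v = perfect_shuffle(v)
        out = []
        for i in range(0, z, 2):
            a, b = v[i], v[i + 1]
            if next(bits):
                a, b = b, a
            out.extend([a, b])
        v = out
    return v
```

With all switch bits set to zero, the log2(z) perfect shuffles compose to the identity, so the input passes through unchanged.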
- A Benes network, which is also well known to those of ordinary skill in the art, is a network that receives z inputs and has 2 log2(z)−1 stages of interconnected switches. Each stage has a number (z/2) of 2×2 crossbar switches, and the Benes network has a total number z log2(z)−(z/2) of 2×2 crossbar switches. Each switch, which receives two inputs and presents two outputs, can be set based on a bit value to (i) pass the two inputs directly to the two outputs in the order they were received (e.g., top input is provided to top output and bottom input is provided to bottom output) or (ii) swap the two inputs (e.g., such that the top input is provided to the bottom output, and vice versa). For z inputs, a Benes network is capable of performing 2z different permutations, and each permutation coefficient Pj,k is represented by z log2(z) bits, where each bit corresponds to one switch.
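The coefficient-matrix convention described earlier (a non-negative coefficient denotes a shifted z×z weight one matrix, −1 denotes a zero block) can be sketched by expanding a coefficient matrix into its full binary H-matrix. The right-cyclic-shift convention used here is an assumption, since the shift direction is a designer choice.

```python
def expand_cm(P, z):
    # build the full binary H-matrix from coefficient matrix P: an entry p >= 0 becomes
    # a z x z identity cyclically shifted (right) by p; p == -1 becomes a z x z zero block
    def block(p):
        if p < 0:
            return [[0] * z for _ in range(z)]
        return [[1 if c == (r + p) % z else 0 for c in range(z)] for r in range(z)]
    H = []
    for prow in P:
        blocks = [block(p) for p in prow]
        for r in range(z):
            H.append([bit for b in blocks for bit in b[r]])
    return H
```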
- For H-matrix 300, vector {right arrow over (x)} may be represented as a set of sub-vectors {right arrow over (x)}j, each sub-vector {right arrow over (x)}j having z elements, and each sub-vector corresponding to one block row of H-matrix 300 (i.e., j=1, . . . , 8), as shown in Equation (6):
- Rather than multiplying the elements of user-data sub-matrix Hu of H-
matrix 300 by user-data vector {right arrow over (u)}, user-data vector {right arrow over (u)} may be divided into sub-vectors {right arrow over (u)}k, each user-data sub-vector {right arrow over (u)}k corresponding to one block column of the user-data sub-matrix Hu of H-matrix 300 (i.e., 16) and each having z elements. Then, each sub-vector of vector {right arrow over (x)} may be calculated by (i) permutating each of the sixteen user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 according to the permutation coefficients Pj,k in the corresponding block row of H-matrix 300, and (ii) adding the permutated user-data sub-vectors to one another. For example, the first sub-vector {right arrow over (x)}1 may be computed by (i) permutating user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 by permutation coefficients Pj,k of the first (i.e., top) row of H-matrix 300 as shown in Equation (7) below: -
{right arrow over (x)} 1 =[{right arrow over (u)} 1]3 +[{right arrow over (u)} 2]0 +[{right arrow over (u)} 3]−1 +[{right arrow over (u)} 4]−1 +[{right arrow over (u)} 5]2 +[{right arrow over (u)} 6]0 +[{right arrow over (u)} 7]−1 +[{right arrow over (u)} 8]3 +[{right arrow over (u)} 9]7 +[{right arrow over (u)} 10]−1 +[{right arrow over (u)} 11]1 +[{right arrow over (u)} 12]1 +[{right arrow over (u)} 13]−1 +[{right arrow over (u)} 14]−1 +[{right arrow over (u)} 15]−1 +[{right arrow over (u)} 16]−1, (7) - where each superscripted-number represents a permutation coefficient Pj,k.
- As shown, user-data sub-vectors {right arrow over (u)}1 and {right arrow over (u)}8 are each permutated by a factor of 3, user-data sub-vectors {right arrow over (u)}2 and {right arrow over (u)}6 are each permutated by a factor of 0 (i.e., are not permutated), user-data sub-vector {right arrow over (u)}5 is permutated by a factor of 2, user-data sub-vector {right arrow over (u)}9 is permutated by a factor of 7, and user-data sub-vectors {right arrow over (u)}11 and {right arrow over (u)}12 are each permutated by a factor of 1, corresponding to the permutation coefficients in the first row of H-matrix 300, respectively. -
FIG. 4 shows a simplified block diagram of a sparse-matrix-vector multiplication (SMVM) component 400 according to one embodiment of the present invention. To continue the example described above in relation to H-matrix 300 of FIG. 3, sparse-matrix-vector multiplication component 400 is shown as receiving sixteen user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 and outputting eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8. According to other embodiments, sparse-matrix-vector multiplication component 400 may be configured to operate with an H-matrix other than H-matrix 300 of FIG. 3, such that sparse-matrix-vector multiplication component 400 receives the same or a different number of user-data sub-vectors {right arrow over (u)}k and outputs the same or a different number of sub-vectors {right arrow over (x)}j. - Rather than waiting for all sixteen user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 to be received, sparse-matrix-vector multiplication (SMVM)
component 400 updates the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 as the user-data sub-vectors are received. For example, suppose that sparse-matrix-vector multiplication component 400 receives user-data sub-vector {right arrow over (u)}1 corresponding to the first (i.e., left-most) block column of H-matrix 300. In the first block column of H-matrix 300, each of the permutation coefficients Pj,k in the first, fifth, and seventh block rows corresponds to either zero or a positive number. Sparse-matrix-vector multiplication component 400 updates vectors {right arrow over (x)}1, {right arrow over (x)}5, and {right arrow over (x)}7, which correspond to the first, fifth, and seventh block rows, respectively, one at a time as described below. Further, each of the permutation coefficients Pj,k in the second, third, fourth, sixth, and eighth block rows of the first block column has a value of −1, indicating that each such permutation coefficient Pj,k corresponds to a block that is a zero matrix. Sub-vectors {right arrow over (x)}2, {right arrow over (x)}3, {right arrow over (x)}4, {right arrow over (x)}6, and {right arrow over (x)}8, which correspond to the second, third, fourth, sixth, and eighth block rows, respectively, are updated; however, since each permutation coefficient Pj,k has a value of −1, the value of each of sub-vectors {right arrow over (x)}2, {right arrow over (x)}3, {right arrow over (x)}4, {right arrow over (x)}6, and {right arrow over (x)}8 is unchanged.
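The block-column-at-a-time update described above can be modeled in software. The sketch below is illustrative only, not the patent's circuit: it assumes the permutation is a left cyclic rotation and that the coefficient matrix is given as a table of block rows P[j][k], with −1 marking a zero block; the function and variable names are hypothetical.

```python
def cyclic_shift(sub_vector, shift):
    """Permute a length-z sub-vector by cyclically rotating it left."""
    shift %= len(sub_vector)
    return sub_vector[shift:] + sub_vector[:shift]

def smvm(P, u_subvectors, z):
    """Update sub-vectors x_j as each user-data sub-vector u_k 'arrives'."""
    num_block_rows = len(P)
    x = [[0] * z for _ in range(num_block_rows)]      # each x_j initialized to zero
    for k, u_k in enumerate(u_subvectors):            # one block column at a time
        for j in range(num_block_rows):
            coeff = P[j][k]
            if coeff >= 0:                            # a -1 block leaves x_j unchanged
                shifted = cyclic_shift(u_k, coeff)    # permuter
                x[j] = [a ^ b for a, b in zip(x[j], shifted)]  # z-wide XOR array
    return x
```

For instance, with two block rows, coefficients P = [[1, -1], [0, 2]], and z = 4, smvm(P, [[1, 0, 0, 0], [1, 1, 0, 0]], 4) returns [[0, 0, 0, 1], [1, 0, 1, 1]]; the −1 coefficient leaves the first sub-vector untouched by the second block column, mirroring the zero-matrix blocks described above.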
Note that the term “updating” as used herein in relation to the sub-vectors {right arrow over (x)}j refers to the processing of permutation coefficients Pj,k that results in the sub-vectors {right arrow over (x)}j being changed, as well as the processing of permutation coefficients Pj,k that results in the sub-vectors {right arrow over (x)}j being unchanged, such as by adding an all-zero vector to the sub-vectors {right arrow over (x)}j or by not adding anything to the sub-vectors {right arrow over (x)}j based on the permutation coefficients Pj,k having a value of negative one. - Upon receiving user-data sub-vector {right arrow over (u)}1, permuter 402 permutates user-data sub-vector {right arrow over (u)}1 by a permutation coefficient Pj,k of 3 (i.e., the permutation coefficient Pj,k in the first block column and first block row of H-matrix 300), which is received from coefficient-matrix (CM)
memory 404, which may be implemented, for example, as read-only memory (ROM). Permuter 402 may implement cyclic shifting, or permutations that are more random, such as those obtained using an Omega network or a Benes network described above, depending on the implementation of H-matrix 300. The permuted user-data sub-vector [{right arrow over (u)}1]3 is provided to XOR array 406, which comprises z XOR gates, such that each XOR gate receives a different one of the z elements of the permuted user-data sub-vector [{right arrow over (u)}1]3. Vector {right arrow over (x)}1, which is initialized to zero, is also provided to XOR array 406, such that each XOR gate receives a different one of the z elements of vector {right arrow over (x)}1. Each XOR gate of XOR array 406 performs exclusive disjunction (i.e., the XOR logic operation) on the permuted user-data sub-vector [{right arrow over (u)}1]3 element and vector {right arrow over (x)}1 element that it receives, and XOR array 406 outputs updated vector {right arrow over (x)}1′ to memory 408, where the updated vector {right arrow over (x)}1′ is subsequently stored. - Next,
permuter 402 permutates user-data sub-vector {right arrow over (u)}1 by a permutation coefficient Pj,k of 20 (i.e., the permutation coefficient Pj,k in the first block column and the fifth block row of H-matrix 300), which is received from coefficient-matrix memory 404. The permuted user-data sub-vector [{right arrow over (u)}1]20 is provided to XOR array 406, such that each XOR gate receives a different one of the z elements of the permuted user-data sub-vector [{right arrow over (u)}1]20. Vector {right arrow over (x)}5, which is initialized to zero, is also provided to XOR array 406, such that each XOR gate receives a different one of the z elements of vector {right arrow over (x)}5. Each XOR gate of XOR array 406 performs exclusive disjunction on the permuted user-data sub-vector [{right arrow over (u)}1]20 element and vector {right arrow over (x)}5 element that it receives, and XOR array 406 outputs updated vector {right arrow over (x)}5′ to memory 408, where the updated vector {right arrow over (x)}5′ is subsequently stored. - Next,
permuter 402 permutates user-data sub-vector {right arrow over (u)}1 by a permutation coefficient Pj,k of 35 (i.e., the permutation coefficient Pj,k in the first block column and the seventh block row of H-matrix 300), which is received from coefficient-matrix memory 404. The permuted user-data sub-vector [{right arrow over (u)}1]35 is provided to XOR array 406, such that each XOR gate receives a different one of the z elements of the permuted user-data sub-vector [{right arrow over (u)}1]35. Vector {right arrow over (x)}7, which is initialized to zero, is also provided to XOR array 406, such that each XOR gate receives a different one of the z elements of vector {right arrow over (x)}7. Each XOR gate of XOR array 406 performs exclusive disjunction on the permuted user-data sub-vector [{right arrow over (u)}1]35 element and vector {right arrow over (x)}7 element that it receives, and XOR array 406 outputs updated vector {right arrow over (x)}7′ to memory 408, where the updated vector {right arrow over (x)}7′ is subsequently stored. This process is performed for user-data sub-vectors {right arrow over (u)}2, . . . , {right arrow over (u)}16. Note, however, that the particular vectors {right arrow over (x)}j updated for each user-data sub-vector {right arrow over (u)}k may vary from one user-data sub-vector {right arrow over (u)}k to the next based on the location of positive- and zero-valued permutation coefficients Pj,k in the user-data sub-matrix Hu of H-matrix 300. Once updating of sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 is complete, sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 are output to downstream processing. - Since the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 are updated by sparse-matrix-vector multiplication component 400 as the user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 are received, sparse-matrix-vector multiplication component 400 may be implemented such that none of the user-data sub-vectors is buffered before being provided to permuter 402. Alternatively, sparse-matrix-vector multiplication component 400 may be implemented such that one or more of the user-data sub-vectors are buffered before being provided to permuter 402. In either case, permuter 402 may begin processing one or more of the user-data sub-vectors before all sixteen user-data sub-vectors are received by sparse-matrix-vector multiplication component 400. - Other implementations of sparse-matrix-vector multiplication components are possible. For example, rather than updating the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 as the user-data sub-vectors {right arrow over (u)}k are received, a sparse-matrix-vector multiplication component may comprise a buffer for storing all sixteen user-data sub-vectors {right arrow over (u)}1, . . . , {right arrow over (u)}16 and may update the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 either at the same time or one at a time. To update the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 at the same time, the sparse-matrix-vector multiplication component may have eight XOR arrays that operate in parallel. A sparse-matrix-vector multiplication component that uses eight parallel XOR arrays may occupy a greater amount of chip area than sparse-matrix-vector multiplication component 400. To update the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 one at a time, the sparse-matrix-vector multiplication component may have one XOR array that is used to sequentially update the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 in a time-multiplexed manner. A sparse-matrix-vector multiplication component that updates the eight sub-vectors {right arrow over (x)}1, . . . , {right arrow over (x)}8 in this manner may have a higher latency than sparse-matrix-vector multiplication component 400. - Calculating the Parity-Bit Vector
- As described above in relation to Equation (5), parity-bit vector {right arrow over (p)} may be generated by (i) generating vector {right arrow over (x)}, (ii) determining the inverse [Hp]−1 of sub-matrix Hp, and (iii) multiplying vector {right arrow over (x)} by [Hp]−1. Vector {right arrow over (x)} may be generated as described in relation to
FIG. 4. Determining the inverse [Hp]−1 of parity-bit sub-matrix Hp may be performed using software. Once the inverse [Hp]−1 of parity-bit sub-matrix Hp is determined, it may be stored in memory. However, the inverse [Hp]−1 of parity-bit sub-matrix Hp typically will not be sparse, and as a result, a relatively large amount of memory is needed to store the inverse [Hp]−1 of parity-bit sub-matrix Hp. Further, the step of multiplying the inverse [Hp]−1 of parity-bit sub-matrix Hp by vector {right arrow over (x)} may be computationally intensive as a result of the inverse [Hp]−1 of parity-bit sub-matrix Hp not being sparse. To minimize the complexity and memory requirements of steps (ii) and (iii) above, the H-matrix may be arranged into blocks as shown in FIG. 5, and parity-bit vector {right arrow over (p)} may be determined using a block-wise inversion. -
FIG. 5 shows a simplified representation of an H-matrix 500 having a parity-bit sub-matrix Hp in approximately lower triangular (ALT) form. H-matrix 500 may be obtained by (1) performing pre-processing steps, such as row and column permutations, on an arbitrarily arranged H-matrix, or (2) designing the H-matrix to have the form of H-matrix 500. As shown, H-matrix 500 has an m×(n−m) user-data sub-matrix Hu (to the left of the dashed line) and an m×m parity-bit sub-matrix Hp (to the right of the line). The user-data sub-matrix Hu is divided into an (m−g)×(n−m) sub-matrix A and a g×(n−m) sub-matrix C. The parity-bit sub-matrix Hp is divided into an (m−g)×g sub-matrix B, a g×g sub-matrix D, an (m−g)×(m−g) sub-matrix T, and a g×(m−g) sub-matrix E. Sub-matrix T is arranged in lower triangular form where all elements of the sub-matrix positioned above the diagonal have a value of zero. H-matrix 500 is referred to as approximately lower triangular because lower triangular sub-matrix T is above sub-matrix E, which is not in lower triangular form. - Based on the structure of H-
matrix 500, and by dividing parity-bit vector {right arrow over (p)} into a first sub-vector {right arrow over (p)}1 having length g and a second sub-vector {right arrow over (p)}2 having length m−g, Equation (2) can be rewritten as shown in Equation (8): -
A{right arrow over (u)}+B{right arrow over (p)} 1 +T{right arrow over (p)} 2 =0
C{right arrow over (u)}+D{right arrow over (p)} 1 +E{right arrow over (p)} 2 =0 (8)
- Multiplying both sides of Equation (8) from the left by the block matrix [I 0; −ET −1 I]
- as shown in Equation (9) eliminates sub-matrix E from the lower right hand corner of parity-bit sub-matrix Hp and results in Equation (10) below:
A{right arrow over (u)}+B{right arrow over (p)} 1 +T{right arrow over (p)} 2 =0
(−ET −1 A+C){right arrow over (u)}+(−ET −1 B+D){right arrow over (p)} 1 =0 (10)
- Substituting F=−ET−1B+D into Equation (10) and solving for first and second parity-bit sub-vectors {right arrow over (p)}1 and {right arrow over (p)}2 results in Equations (11) and (12) below:
-
{right arrow over (p)} 1 =−F −1(−ET −1 A{right arrow over (u)}+C{right arrow over (u)}) (11) -
{right arrow over (p)} 2=−T−1(A{right arrow over (u)}+B{right arrow over (p)}1) (12) -
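Over GF(2), −1 = 1, so the minus signs in Equations (11) and (12) drop out, and the two equations can be checked numerically on a toy ALT-form H-matrix. The sketch below uses hypothetical example sub-matrices (not taken from the patent), and a small Gauss-Jordan routine stands in for the pre-computed inverses:

```python
import numpy as np

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = M.shape[0]
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])  # assumes M invertible
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]                               # row elimination = XOR
    return A[:, n:]

# Toy ALT-form H = [[A, B, T], [C, D, E]] over GF(2) (example values only).
A_ = np.array([[1, 0, 1], [0, 1, 1]])
B_ = np.array([[1], [0]])
T_ = np.array([[1, 0], [1, 1]])          # lower triangular
C_ = np.array([[0, 1, 1]])
D_ = np.array([[0]])
E_ = np.array([[0, 1]])
u = np.array([1, 0, 1])                  # user-data vector

T_inv = gf2_inv(T_)
F = (E_ @ T_inv @ B_ + D_) % 2           # F = -E T^-1 B + D (signs vanish mod 2)
p1 = gf2_inv(F) @ ((E_ @ T_inv @ A_ @ u + C_ @ u) % 2) % 2   # Equation (11)
p2 = T_inv @ ((A_ @ u + B_ @ p1) % 2) % 2                    # Equation (12)

H = np.block([[A_, B_, T_], [C_, D_, E_]])
codeword = np.concatenate([u, p1, p2])
assert not np.any((H @ codeword) % 2)    # H * [u; p1; p2] = 0 over GF(2)
```

Because T is lower triangular with a nonzero diagonal and F is invertible, the computed codeword [ū; p̄1; p̄2] satisfies the parity-check equation, confirming Equations (11) and (12) for this example.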
FIG. 6 shows a simplified block diagram of a signal processing device 600 according to one embodiment of the present invention. Signal processing device 600 has upstream processing 602, multiplexer 606, and downstream processing 608, which may perform processing similar to that of the analogous components of signal processing device 200 of FIG. 2. Further, signal processing device 600 has LDPC encoder 604, which generates parity-bit vector {right arrow over (p)} based on Equations (11) and (12) above. In particular, LDPC encoder 604 has first parity-bit sub-vector component 610, which receives user-data vector {right arrow over (u)} and generates a first parity-bit sub-vector {right arrow over (p)}1 using Equation (11). First parity-bit sub-vector {right arrow over (p)}1 is (i) provided to second parity-bit sub-vector component 612 and (ii) stored in memory 614. Second parity-bit sub-vector component 612 generates a second parity-bit sub-vector {right arrow over (p)}2 using Equation (12) and provides the second parity-bit sub-vector {right arrow over (p)}2 to memory 614. Memory 614 then outputs parity-bit vector {right arrow over (p)} by appending second parity-bit sub-vector {right arrow over (p)}2 onto the end of first parity-bit sub-vector {right arrow over (p)}1. -
FIG. 7 shows a simplified block diagram of a first parity-bit sub-vector component 700 according to one embodiment of the present invention that may be used to implement first parity-bit sub-vector component 610 in FIG. 6. Parity-bit vector component 700 receives user-data vector {right arrow over (u)} from, for example, upstream processing such as upstream processing 602 of FIG. 6, and generates first parity-bit sub-vector {right arrow over (p)}1 shown in Equation (11). User-data vector {right arrow over (u)} is provided to sparse-matrix-vector multiplication (SMVM) components 702 and 706, which may be implemented in a manner similar to that of sparse-matrix-vector multiplication component 400 of FIG. 4 or in an alternative manner such as those described above in relation to sparse-matrix-vector multiplication component 400. Note, however, that unlike sparse-matrix-vector multiplication component 400, which calculates the entire vector {right arrow over (x)} by multiplying the entire user-data sub-matrix Hu by the user-data vector {right arrow over (u)} as shown in Equation (6), sparse-matrix-vector multiplication components 702 and 706 each multiply user-data vector {right arrow over (u)} by only a portion of an H-matrix. Sparse-matrix-vector multiplication component 702 receives permutation coefficients corresponding to sub-matrix A of H-matrix 500 of FIG. 5 from coefficient-matrix memory 704, which may be implemented as ROM, and generates sub-vector {right arrow over (x)}A shown in Equation (13) below: -
{right arrow over (x)}A=A{right arrow over (u)} (13) - Sub-vector {right arrow over (x)}A is then provided to
forward substitution component 710. Sparse-matrix-vector multiplication component 706 receives permutation coefficients corresponding to sub-matrix C of H-matrix 500 from coefficient-matrix memory 712, which may also be implemented as ROM, and generates sub-vector {right arrow over (x)}C shown in Equation (14) below: -
{right arrow over (x)}C=C{right arrow over (u)} (14) - Sub-vector {right arrow over (x)}C is then provided to
XOR array 718, which is discussed further below. -
FIG. 8 shows a simplified block diagram of forward substitution component 710 of FIG. 7 according to one embodiment of the present invention. In general, forward substitution component 710 uses a forward substitution technique to generate vector {right arrow over (w)} shown in Equation (15) below: -
{right arrow over (w)}=T−1{right arrow over (x)}A=T−1A{right arrow over (u)} (15) - To further understand the forward substitution technique, consider the exemplary sub-matrix T, vector {right arrow over (x)}A, and vector {right arrow over (w)}, which are substituted into Equation (15) as shown in Equation (16) below:
-
- Sub-matrix T, which is lower triangular, has five block columns and five block rows, and is in coefficient-matrix format, where (i) each element T(j,k) is a permutation coefficient of a z×z weight one matrix and (ii) each negative element (i.e., −1) corresponds to a z×z zero matrix. Each weight one matrix may be permutated using, for example, cyclic shifting or permutations that are more random, such as those obtained using an Omega network or a Benes network. In the case of cyclic shifting, cyclic shifting of the weight one matrices may be selected by the designer of the coefficient matrix to be right, left, up, or down cyclic shifting. As shown in Equation (16), using a non-forward substitution method, the elements of the inverse T−1 of sub-matrix T (i.e., all z×z×25 matrix values, not just the 25 permutation coefficients) may be multiplied by vector {right arrow over (x)}A, which has five sub-vectors {right arrow over (x)}A,j, each comprising z elements, where j=1, . . . , 5, to generate vector {right arrow over (w)}, which has five sub-vectors {right arrow over (w)}j, each comprising z elements, where j=1, . . . , 5. However, this computation may be computationally intensive and involves the storing of all of the elements of sub-matrix T. To reduce computational complexity, a forward substitution technique may be used as described below. Further, to reduce memory requirements, the forward substitution technique may be combined with a permutation scheme that allows for the storing of only the 25 permutation coefficients, rather than all z×z×25 elements of sub-matrix T.
- Forward substitution is performed by computing sub-vector {right arrow over (w)}1, then substituting sub-vector {right arrow over (w)}1 forward into the next equation to solve for sub-vector {right arrow over (w)}2, substituting sub-vectors {right arrow over (w)}1 and {right arrow over (w)}2 forward into the next equation to solve for sub-vector {right arrow over (w)}3, and so forth. Using this forward substitution technique, each sub-vector {right arrow over (w)}j may be generated as follows in Equation (17):
{right arrow over (w)} j =[{right arrow over (x)} A,j ⊕[{right arrow over (w)} 1 T(j,1) + . . . +{right arrow over (w)} j−1 T(j,j−1)]] −T(j,j) (17)
- where the symbol ⊕ indicates an XOR operation.
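The recursion of Equation (17) can be modeled as follows. This is a sketch under assumed conventions, not the patent's hardware: sub-matrix T is given as a table of permutation coefficients T[j][k] with −1 marking a zero block, the permuter is a left cyclic rotation, the reverse permuter rotates by the negated diagonal coefficient, and every diagonal coefficient T(j,j) is assumed non-negative; the function names are hypothetical.

```python
def shift(v, s):
    """Cyclically rotate a length-z sub-vector left by s (negative s rotates right)."""
    s %= len(v)
    return v[s:] + v[:s]

def forward_substitution(T, x_A):
    """Solve T w = x_A block row by block row, without inverting T."""
    w = []
    for j, row in enumerate(T):
        acc = list(x_A[j])                      # start from x_A,j
        for k in range(j):                      # previously solved sub-vectors
            if row[k] >= 0:                     # skip zero blocks (-1)
                acc = [a ^ b for a, b in zip(acc, shift(w[k], row[k]))]
        w.append(shift(acc, -row[j]))           # reverse permuter: -T(j,j)
    return w
```

Running the same loop from the bottom block row up, and XORing in already-solved sub-vectors with indices k greater than j, gives the backward-substitution variant for upper-triangular matrices described later in this section.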
- By using forward substitution, the inverse T−1 of sub-matrix T does not need to be computed. Further, as shown in Equation (17), rather than multiplying sub-vectors {right arrow over (x)}A,j by the elements of the inverse T−1 of sub-matrix T to generate {right arrow over (w)}j, each sub-vector {right arrow over (w)}j of vector {right arrow over (w)} may be calculated by permutating the previously calculated sub-vectors according to the permutation coefficients of sub-matrix T. For example, based on Equation (17) and the permutation coefficients of exemplary sub-matrix T of Equation (16), sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5 may be represented by Equations (18) through (22):
-
{right arrow over (w)} 1 ={right arrow over (x)} A,1 −T(1,1) (18) -
{right arrow over (w)} 2 =[{right arrow over (x)} A,2 ⊕{right arrow over (w)} 1 T(2,1)] −T(2,2) (19)
{right arrow over (w)} 3 =[{right arrow over (x)} A,3 ⊕[{right arrow over (w)} 1 T(3,1)+{right arrow over (w)} 2 T(3,2)]]−T(3,3) (20) -
{right arrow over (w)} 4 =[{right arrow over (x)} A,4 ⊕[{right arrow over (w)} 1 T(4,1)+{right arrow over (w)} 2 T(4,2)+{right arrow over (w)} 3 T(4,3)]] −T(4,4) (21)
{right arrow over (w)} 5 =[{right arrow over (x)} A,5 ⊕[{right arrow over (w)} 1 T(5,1)+{right arrow over (w)} 2 T(5,2)+{right arrow over (w)} 3 T(5,3)+{right arrow over (w)} 4 T(5,4)]]−T(5,5) (22) - Returning to
FIG. 8 and continuing the example above, forward substitution component 710 is shown as receiving five sub-vectors {right arrow over (x)}A,1, . . . , {right arrow over (x)}A,5 and outputting five sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5. According to other embodiments, forward substitution component 710 may be configured to operate with a sub-matrix T other than the sub-matrix T illustrated in Equation (16), such that forward substitution component 710 receives the same or a different number of sub-vectors {right arrow over (x)}A,j and outputs the same or a different number of sub-vectors {right arrow over (w)}j. - Initially, upon receiving sub-vector {right arrow over (x)}A,1,
XOR array 804 provides sub-vector {right arrow over (x)}A,1 to reverse permuter 806. XOR array 804 may output sub-vector {right arrow over (x)}A,1 without performing any processing, or XOR array 804 may apply exclusive disjunction to (i) sub-vector {right arrow over (x)}A,1 and (ii) an initialized vector having a value of zero, resulting in no change to sub-vector {right arrow over (x)}A,1. Sub-vector {right arrow over (x)}A,1 is then permutated according to the negative of permutation coefficient T(1,1) received from coefficient-matrix memory 712 as shown in Equation (18). Note that, similar to permuter 402 of FIG. 4, permuter 802 and reverse permuter 806 may implement cyclic shifting, or permutations that are more random, such as those obtained using an Omega network or a Benes network described above, depending on the implementation of sub-matrix T in Equation (16). In the case of cyclic shifting, to obtain negative shifts (i.e., −T(1,1)), reverse permuter 806 performs cyclic shifting in the opposite direction of permuter 802. For example, if permuter 802 performs right cyclic shifting, then reverse permuter 806 performs left cyclic shifting. The permuted sub-vector {right arrow over (x)}A,1 is then stored in memory 808 as sub-vector {right arrow over (w)}1. - To generate sub-vector {right arrow over (w)}2,
memory 808 provides sub-vector {right arrow over (w)}1 to permuter 802, which permutates sub-vector {right arrow over (w)}1 by permutation coefficient T(2,1) received from coefficient-matrix memory 712 as shown in Equation (19). XOR array 804 applies exclusive disjunction to (i) sub-vector {right arrow over (x)}A,2 and (ii) the permuted sub-vector {right arrow over (w)}1 T(2,1), and the output of XOR array 804 is permutated by the negative of permutation coefficient T(2,2) received from coefficient-matrix memory 712 as shown in Equation (19). The output of reverse permuter 806 is then stored in memory 808 as sub-vector {right arrow over (w)}2. To generate sub-vector {right arrow over (w)}3, memory 808 provides sub-vectors {right arrow over (w)}1 and {right arrow over (w)}2 to permuter 802, which permutates the vectors by permutation coefficients T(3,1) and T(3,2), respectively, as shown in Equation (20). XOR array 804 applies exclusive disjunction to (i) permuted sub-vector {right arrow over (w)}1 T(3,1), (ii) permuted sub-vector {right arrow over (w)}2 T(3,2), and (iii) sub-vector {right arrow over (x)}A,3. The output of XOR array 804 is permutated by the negative of permutation coefficient T(3,3) received from coefficient-matrix memory 712 as shown in Equation (20). The output of reverse permuter 806 is then stored in memory 808 as sub-vector {right arrow over (w)}3. This process is continued using sub-vectors {right arrow over (w)}1, {right arrow over (w)}2, and {right arrow over (w)}3 to generate sub-vector {right arrow over (w)}4 and using sub-vectors {right arrow over (w)}1, {right arrow over (w)}2, {right arrow over (w)}3, and {right arrow over (w)}4 to generate sub-vector {right arrow over (w)}5. - Note that, according to various embodiments, the present invention may also be applied to backward substitution for upper-triangular matrices.
In such embodiments, rather than solving equations (i.e., rows) at the top of the matrix and substituting the results into rows below (i.e., forward substitution), such embodiments may solve the equations at the bottom and substitute the results into rows above (i.e., backward substitution). For example, suppose that
FIG. 8 is used for backward substitution. Sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5 may be determined beginning with sub-vector {right arrow over (w)}5 and ending with sub-vector {right arrow over (w)}1. Sub-vector {right arrow over (w)}5 may be determined based on (i) permutation coefficients from the fifth row of an upper-triangular sub-matrix T (not shown) and (ii) fifth input sub-vector {right arrow over (x)}A,5. Sub-vector {right arrow over (w)}4 may be determined based on (i) permutation coefficients from the fourth row of an upper-triangular sub-matrix T, (ii) sub-vector {right arrow over (w)}5, and (iii) fourth input sub-vector {right arrow over (x)}A,4, and so forth. - Returning to
FIG. 7, forward substitution component 710 outputs vector {right arrow over (w)}, comprising sub-vectors {right arrow over (w)}1, . . . , {right arrow over (w)}5, to sparse-matrix-vector multiplication component 714. Sparse-matrix-vector multiplication component 714 receives permutation coefficients corresponding to sub-matrix E of H-matrix 500 of FIG. 5 from memory 716, which may be implemented as ROM, and generates vector {right arrow over (q)} as shown in Equation (23) below: -
{right arrow over (q)}=−E{right arrow over (w)}=−ET −1 {right arrow over (x)} A =−ET −1 A{right arrow over (u)} (23) - Sparse-matrix-vector multiplication component 714 may be implemented in a manner similar to that described above in relation to sparse-matrix-vector multiplication component 400 of FIG. 4 or in an alternative manner such as those described above in relation to sparse-matrix-vector multiplication component 400. However, rather than receiving the user-data vector {right arrow over (u)} and generating vector {right arrow over (x)} like sparse-matrix-vector multiplication component 400, sparse-matrix-vector multiplication component 714 receives vector {right arrow over (w)} and generates vector {right arrow over (q)}. - Vector {right arrow over (q)} is provided to
XOR array 718 along with vector {right arrow over (x)}C, and XOR array 718 performs exclusive disjunction on vectors {right arrow over (q)} and {right arrow over (x)}C to generate vector {right arrow over (s)} as shown in Equation (24) below: -
{right arrow over (s)}=−E{right arrow over (w)}+{right arrow over (x)} C =−ET −1 {right arrow over (x)} A +{right arrow over (x)} C =−ET −1 A{right arrow over (u)}+C{right arrow over (u)} (24) - Vector {right arrow over (s)} is then output to matrix-vector multiplication (MVM)
component 720. Matrix-vector multiplication (MVM) component 720 receives the elements of matrix −F−1 and performs matrix-vector multiplication to generate first parity-bit sub-vector {right arrow over (p)}1 shown in Equation (25): -
{right arrow over (p)} 1 =−F −1 {right arrow over (s)}=−F −1(−ET −1 A{right arrow over (u)}+C{right arrow over (u)}) (25) - The elements of sub-matrix −F−1 may be pre-computed and stored in
memory 722, which may be implemented as ROM. Note that, unlike the coefficient-matrix memories described above, which store only permutation coefficients, memory 722 stores all of the elements of sub-matrix −F−1. -
FIG. 9 shows a simplified block diagram of matrix-vector multiplication component 720 according to one embodiment of the present invention. Matrix-vector multiplication component 720 has AND gate array 902, which applies logical conjunction (i.e., the AND logic operation) to (i) vector {right arrow over (s)}, received from, for example, XOR array 718 of FIG. 7, and (ii) the elements of matrix −F−1, received from memory 722. The outputs of AND gate array 902 are then applied to XOR array 904, which performs exclusive disjunction on the outputs to generate the elements of first parity-bit sub-vector {right arrow over (p)}1. -
vector multiplication component 720, consider the following simplified example. Suppose that matrix −F−1 and vector {right arrow over (s)} have the values shown in Equations (26) and (27), respectively, below: -
−F −1 =[1 0 0; 0 0 1] (26)
-
{right arrow over (s)}=[1 0 0] (27)
-
p1[1]=(1 AND 1) XOR (0 AND 0) XOR (0 AND 0)=1 (28)
-
p1[2]=(0 AND 1) XOR (0 AND 0) XOR (1 AND 0)=0 (29) - Thus, according to this simplified example, parity-bit sub-vector {right arrow over (p)}1=[1,0].
-
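The AND/XOR datapath of this example can be written directly. In the sketch below (illustrative only), the rows of matrix −F−1 and the elements of vector {right arrow over (s)} are inferred from the AND operands in Equations (28) and (29):

```python
def gf2_matvec(M, s):
    """Multiply a binary matrix by a binary vector using AND and XOR only."""
    out = []
    for row in M:
        bit = 0
        for m_elem, s_elem in zip(row, s):
            bit ^= m_elem & s_elem      # AND gate array feeding the XOR array
        out.append(bit)
    return out

F_inv_neg = [[1, 0, 0],                 # rows inferred from Equations (28)-(29)
             [0, 0, 1]]
s = [1, 0, 0]                           # vector s inferred from the same equations
```

Calling gf2_matvec(F_inv_neg, s) evaluates Equations (28) and (29) and returns [1, 0], matching parity-bit sub-vector {right arrow over (p)}1=[1,0] above.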
FIG. 10 shows a simplified block diagram of a second parity-bit sub-vector component 1000 according to one embodiment of the present invention that may be used to implement second parity-bit sub-vector component 612 in FIG. 6. Parity-bit sub-vector component 1000 receives (i) first parity-bit sub-vector {right arrow over (p)}1 from, for example, parity-bit vector component 700 of FIG. 7, and (ii) sub-vector {right arrow over (x)}A, and generates second parity-bit sub-vector {right arrow over (p)}2 shown in Equation (12). Sub-vector {right arrow over (x)}A may be received from, for example, sparse-matrix-vector multiplication (SMVM) component 702 in FIG. 7, or second parity-bit sub-vector component 1000 may generate sub-vector {right arrow over (x)}A using its own sparse-matrix-vector multiplication component (not shown) that is similar to sparse-matrix-vector multiplication (SMVM) component 702. - First parity-bit sub-vector {right arrow over (p)}1 is processed by sparse-matrix-vector multiplication (SMVM)
component 1002, which may be implemented in a manner similar to that of sparse-matrix-vector multiplication component 400 of FIG. 4 or in an alternative manner such as those described above in relation to sparse-matrix-vector multiplication component 400. In so doing, sparse-matrix-vector multiplication component 1002 receives permutation coefficients corresponding to sub-matrix B of H-matrix 500 of FIG. 5 from memory 1004, which may be implemented as ROM, and generates vector {right arrow over (v)} shown in Equation (30) below: -
{right arrow over (v)}=B{right arrow over (p)}1 (30) - Vector {right arrow over (v)} is provided to
XOR array 1006 along with vector {right arrow over (x)}A, and XOR array 1006 performs exclusive disjunction on vectors {right arrow over (v)} and {right arrow over (x)}A to generate vector {right arrow over (o)} as shown in Equation (31): -
{right arrow over (o)}={right arrow over (v)}⊕{right arrow over (x)}A=A{right arrow over (u)}+B{right arrow over (p)}1 (31) -
Forward substitution component 1008 receives (i) permutation coefficients corresponding to sub-matrix T of H-matrix 500 of FIG. 5 from memory 1010, which may be implemented as ROM, and (ii) vector {right arrow over (o)}, and generates second parity-bit sub-vector {right arrow over (p)}2 shown in Equation (32) below: -
{right arrow over (p)} 2 =−T −1 {right arrow over (o)}=−T −1(A{right arrow over (u)}+B{right arrow over (p)} 1) (32) -
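The pipeline of Equations (30) through (32) can be sketched end to end with small dense GF(2) matrices. The values below are hypothetical examples, not the patent's; forward substitution replaces explicit inversion of T, which is assumed lower triangular with ones on its diagonal:

```python
import numpy as np

def solve_lower_gf2(T, o):
    """Forward substitution over GF(2) for a lower-triangular binary T
    with ones on its diagonal; avoids forming T^-1 explicitly."""
    w = np.zeros(len(o), dtype=int)
    for j in range(len(o)):
        # In GF(2), subtraction equals addition, so the sign in -T^-1 drops out.
        w[j] = (o[j] + T[j, :j] @ w[:j]) % 2
    return w

T = np.array([[1, 0], [1, 1]])          # sub-matrix T (example values)
B = np.array([[1], [0]])                # sub-matrix B (example values)
p1 = np.array([1])                      # first parity-bit sub-vector (example)
x_A = np.array([1, 1])                  # x_A = A u (example value)

v = (B @ p1) % 2                        # Equation (30): v = B p1
o = v ^ x_A                             # Equation (31): o = v XOR x_A
p2 = solve_lower_gf2(T, o)              # Equation (32): p2 = T^-1 o over GF(2)
```

The result satisfies T·{right arrow over (p)}2 = {right arrow over (o)} (mod 2), so {right arrow over (p)}2 solves Equation (32) without T−1 ever being stored.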
Forward substitution component 1008 may be implemented in a manner similar to forward substitution component 710 of FIG. 8, albeit receiving vector o rather than vector x_A, and outputting second parity sub-vector p2 rather than vector w.
- Although the present invention was described relative to exemplary H-matrices (e.g., 100, 300), the present invention is not so limited. The present invention may be implemented for H-matrices that are the same size as or a different size from these exemplary matrices. For example, the present invention may be implemented for H-matrices in which the number of columns, block columns, rows, or block rows, the number of messages processed per clock cycle, the sizes of the sub-matrices, or the column and/or row Hamming weights differ from those of the exemplary H-matrices.
- It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated in order to explain the nature of this invention may be made by those skilled in the art without departing from the scope of the invention as expressed in the following claims.
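The overall computation of the second parity sub-vector in Equations (30)-(32) can be sketched in software as follows. Forward substitution requires T to be lower triangular, and with ones on its diagonal no division is needed; over GF(2), subtraction coincides with addition (XOR), so the leading minus sign in Equation (32) has no effect. This is a behavioral sketch under those assumptions (`encode_p2` is an assumed name), not the patent's hardware implementation:

```python
import numpy as np

def encode_p2(A, B, T, u, p1):
    """Compute p2 = -T^-1 (A u + B p1) over GF(2), per Equations (30)-(32).

    Assumes T is lower triangular with ones on its diagonal, so T^-1 o is
    obtained by forward substitution; over GF(2), subtraction equals
    addition (XOR), so the minus sign drops out.
    """
    x_A = A.dot(u) % 2            # x_A = A u, from the earlier encoding stage
    v = B.dot(p1) % 2             # Equation (30): v = B p1
    o = x_A ^ v                   # Equation (31): o = v XOR x_A
    p2 = np.zeros(len(o), dtype=int)
    for i in range(len(o)):       # Equation (32): solve T p2 = o
        acc = o[i]
        for j in range(i):
            acc ^= T[i, j] & p2[j]
        p2[i] = acc               # T[i, i] == 1, so no division is needed
    return p2
```

By construction, the result satisfies the parity constraint A u + B p1 + T p2 = 0 over GF(2).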
- Although embodiments of the present invention have been described in the context of LDPC codes, the present invention is not so limited. Embodiments of the present invention could be implemented for any code, including error-correction codes, that can be defined by a graph, e.g., tornado codes and structured IRA codes, since graph-defined codes suffer from trapping sets.
- While the exemplary embodiments of the present invention have been described with respect to processes of circuits, including possible implementation as a single integrated circuit, a multi-chip module, a single card, or a multi-card circuit pack, the present invention is not so limited. As would be apparent to one skilled in the art, various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, a digital signal processor, micro-controller, or general purpose computer.
- The present invention can be embodied in the form of methods and apparatuses for practicing those methods. The present invention can also be embodied in the form of program code embodied in tangible media, such as magnetic recording media, optical recording media, solid state memory, floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. The present invention can also be embodied in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus of the present invention.
- Unless explicitly stated otherwise, each numerical value and range should be interpreted as being approximate, as if the word “about” or “approximately” preceded the value or range.
- The use of figure numbers and/or figure reference labels in the claims is intended to identify one or more possible embodiments of the claimed subject matter in order to facilitate the interpretation of the claims. Such use is not to be construed as necessarily limiting the scope of those claims to the embodiments shown in the corresponding figures.
- It should be understood that the steps of the exemplary methods set forth herein are not necessarily required to be performed in the order described, and the order of the steps of such methods should be understood to be merely exemplary. Likewise, additional steps may be included in such methods, and certain steps may be omitted or combined, in methods consistent with various embodiments of the present invention.
- Although the elements in the following method claims, if any, are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/644,161 US8352847B2 (en) | 2009-12-02 | 2009-12-22 | Matrix vector multiplication for error-correction encoding and the like |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US26582609P | 2009-12-02 | 2009-12-02 | |
US26583609P | 2009-12-02 | 2009-12-02 | |
US12/644,161 US8352847B2 (en) | 2009-12-02 | 2009-12-22 | Matrix vector multiplication for error-correction encoding and the like |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110131462A1 true US20110131462A1 (en) | 2011-06-02 |
US8352847B2 US8352847B2 (en) | 2013-01-08 |
Family
ID=44069765
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/644,181 Expired - Fee Related US8359515B2 (en) | 2009-12-02 | 2009-12-22 | Forward substitution for error-correction encoding and the like |
US12/644,161 Expired - Fee Related US8352847B2 (en) | 2009-12-02 | 2009-12-22 | Matrix vector multiplication for error-correction encoding and the like |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/644,181 Expired - Fee Related US8359515B2 (en) | 2009-12-02 | 2009-12-22 | Forward substitution for error-correction encoding and the like |
Country Status (1)
Country | Link |
---|---|
US (2) | US8359515B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015127426A1 (en) * | 2014-02-24 | 2015-08-27 | Qatar Foundation For Education, Science And Community Development | Apparatus and method for secure communication on a compound channel |
US20160306699A1 (en) * | 2012-04-25 | 2016-10-20 | International Business Machines Corporation | Encrypting data for storage in a dispersed storage network |
WO2018132444A1 (en) * | 2017-01-11 | 2018-07-19 | Groq, Inc. | Error correction in computation |
EP3047575B1 (en) * | 2013-09-19 | 2020-01-01 | u-blox AG | Encoding of multiple different quasi-cyclic low-density parity check (qc-ldpc) codes sharing common hardware resources |
US10621044B2 (en) | 2012-04-25 | 2020-04-14 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
US10795766B2 (en) | 2012-04-25 | 2020-10-06 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
Families Citing this family (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8443257B1 (en) | 2010-02-01 | 2013-05-14 | Sk Hynix Memory Solutions Inc. | Rate-scalable, multistage quasi-cyclic LDPC coding |
US8572463B2 (en) * | 2010-02-01 | 2013-10-29 | Sk Hynix Memory Solutions Inc. | Quasi-cyclic LDPC encoding and decoding for non-integer multiples of circulant size |
US8448041B1 (en) * | 2010-02-01 | 2013-05-21 | Sk Hynix Memory Solutions Inc. | Multistage LDPC encoding |
US8504894B1 (en) | 2010-03-04 | 2013-08-06 | Sk Hynix Memory Solutions Inc. | Systematic encoding for non-full row rank, quasi-cyclic LDPC parity check matrices |
US9130638B2 (en) | 2011-05-26 | 2015-09-08 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US9444514B2 (en) | 2010-05-28 | 2016-09-13 | Cohere Technologies, Inc. | OTFS methods of data channel characterization and uses thereof |
US10681568B1 (en) | 2010-05-28 | 2020-06-09 | Cohere Technologies, Inc. | Methods of data channel characterization and uses thereof |
US9071286B2 (en) | 2011-05-26 | 2015-06-30 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US11943089B2 (en) | 2010-05-28 | 2024-03-26 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-shifting communications system |
US9071285B2 (en) * | 2011-05-26 | 2015-06-30 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US8976851B2 (en) | 2011-05-26 | 2015-03-10 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US10667148B1 (en) | 2010-05-28 | 2020-05-26 | Cohere Technologies, Inc. | Methods of operating and implementing wireless communications systems |
US8512383B2 (en) | 2010-06-18 | 2013-08-20 | Spine Wave, Inc. | Method of percutaneously fixing a connecting rod to a spine |
US9294315B2 (en) | 2011-05-26 | 2016-03-22 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US9590779B2 (en) * | 2011-05-26 | 2017-03-07 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US9031141B2 (en) * | 2011-05-26 | 2015-05-12 | Cohere Technologies, Inc. | Modulation and equalization in an orthonormal time-frequency shifting communications system |
US8898538B2 (en) * | 2011-08-24 | 2014-11-25 | Analogies Sa | Construction of multi rate low density parity check convolutional codes |
CN102412844B (en) * | 2011-11-02 | 2014-03-05 | 广州海格通信集团股份有限公司 | Decoding method and decoding device of IRA (irregular repeat-accumulate) series LDPC (low density parity check) codes |
US8683296B2 (en) * | 2011-12-30 | 2014-03-25 | Streamscale, Inc. | Accelerated erasure coding system and method |
US8914706B2 (en) * | 2011-12-30 | 2014-12-16 | Streamscale, Inc. | Using parity data for concurrent data authentication, correction, compression, and encryption |
US8762821B2 (en) * | 2012-03-30 | 2014-06-24 | Intel Corporation | Method of correcting adjacent errors by using BCH-based error correction coding |
US8972835B1 (en) * | 2012-06-06 | 2015-03-03 | Xilinx, Inc. | Encoding and decoding of information using a block code matrix |
US8972833B1 (en) * | 2012-06-06 | 2015-03-03 | Xilinx, Inc. | Encoding and decoding of information using a block code matrix |
US9912507B2 (en) | 2012-06-25 | 2018-03-06 | Cohere Technologies, Inc. | Orthogonal time frequency space communication system compatible with OFDM |
US10469215B2 (en) | 2012-06-25 | 2019-11-05 | Cohere Technologies, Inc. | Orthogonal time frequency space modulation system for the Internet of Things |
US9929783B2 (en) | 2012-06-25 | 2018-03-27 | Cohere Technologies, Inc. | Orthogonal time frequency space modulation system |
US10411843B2 (en) | 2012-06-25 | 2019-09-10 | Cohere Technologies, Inc. | Orthogonal time frequency space communication system compatible with OFDM |
US10090972B2 (en) | 2012-06-25 | 2018-10-02 | Cohere Technologies, Inc. | System and method for two-dimensional equalization in an orthogonal time frequency space communication system |
US9967758B2 (en) | 2012-06-25 | 2018-05-08 | Cohere Technologies, Inc. | Multiple access in an orthogonal time frequency space communication system |
US10003487B2 (en) | 2013-03-15 | 2018-06-19 | Cohere Technologies, Inc. | Symplectic orthogonal time frequency space modulation system |
US9705532B2 (en) * | 2013-03-15 | 2017-07-11 | Arris Enterprises Llc | Parallel low-density parity check (LDPC) accumulation |
CN103150229B (en) * | 2013-03-21 | 2016-05-18 | 上海第二工业大学 | A kind of non-rule low density parity check code Optimization Design rapidly and efficiently |
CN103236848B (en) * | 2013-04-08 | 2016-04-27 | 上海第二工业大学 | Specific code length non-rule low density parity check code optimal-design method |
KR102098202B1 (en) * | 2014-05-15 | 2020-04-07 | 삼성전자주식회사 | Encoding apparatus and encoding method thereof |
CN105322971B (en) * | 2014-07-23 | 2019-02-26 | 上海数字电视国家工程研究中心有限公司 | For the LDPC code word of next-generation radio broadcasting and coding method and codec |
CN105281784B (en) * | 2014-07-23 | 2018-12-18 | 上海数字电视国家工程研究中心有限公司 | For the LDPC code word of next-generation radio broadcasting and coding method and codec |
US9722632B2 (en) | 2014-09-22 | 2017-08-01 | Streamscale, Inc. | Sliding window list decoder for error correcting codes |
US9733870B2 (en) | 2015-05-06 | 2017-08-15 | International Business Machines Corporation | Error vector readout from a memory device |
US10158394B2 (en) | 2015-05-11 | 2018-12-18 | Cohere Technologies, Inc. | Systems and methods for symplectic orthogonal time frequency shifting modulation and transmission of data |
US10090973B2 (en) | 2015-05-11 | 2018-10-02 | Cohere Technologies, Inc. | Multiple access in an orthogonal time frequency space communication system |
US9866363B2 (en) | 2015-06-18 | 2018-01-09 | Cohere Technologies, Inc. | System and method for coordinated management of network access points |
US10574317B2 (en) | 2015-06-18 | 2020-02-25 | Cohere Technologies, Inc. | System and method for providing wireless communication services using configurable broadband infrastructure shared among multiple network operators |
CN114070701B (en) | 2015-06-27 | 2024-05-14 | 凝聚技术股份有限公司 | OFDM compatible orthogonal time-frequency space communication system |
US10892547B2 (en) | 2015-07-07 | 2021-01-12 | Cohere Technologies, Inc. | Inconspicuous multi-directional antenna system configured for multiple polarization modes |
US10693581B2 (en) | 2015-07-12 | 2020-06-23 | Cohere Technologies, Inc. | Orthogonal time frequency space modulation over a plurality of narrow band subcarriers |
CN108770382B (en) | 2015-09-07 | 2022-01-14 | 凝聚技术公司 | Multiple access method using orthogonal time frequency space modulation |
WO2017087706A1 (en) | 2015-11-18 | 2017-05-26 | Cohere Technologies | Orthogonal time frequency space modulation techniques |
KR102655272B1 (en) | 2015-12-09 | 2024-04-08 | 코히어 테크놀로지스, 아이엔씨. | Pilot packing using complex orthogonal functions |
CN115694764A (en) | 2016-02-25 | 2023-02-03 | 凝聚技术公司 | Reference signal encapsulation for wireless communication |
EP3433969B1 (en) | 2016-03-23 | 2021-11-03 | Cohere Technologies, Inc. | Receiver-side processing of orthogonal time frequency space modulated signals |
US9667307B1 (en) | 2016-03-31 | 2017-05-30 | Cohere Technologies | Wireless telecommunications system for high-mobility applications |
CN117097594A (en) | 2016-03-31 | 2023-11-21 | 凝聚技术公司 | Channel acquisition using orthogonal time-frequency space modulated pilot signals |
EP3437279B1 (en) | 2016-04-01 | 2021-03-03 | Cohere Technologies, Inc. | Iterative two dimensional equalization of orthogonal time frequency space modulated signals |
KR102250054B1 (en) | 2016-04-01 | 2021-05-07 | 코히어 테크널러지스, 아이엔씨. | TOMLINSON-HARASHIMA precoding in OTFS communication system |
WO2017201467A1 (en) | 2016-05-20 | 2017-11-23 | Cohere Technologies | Iterative channel estimation and equalization with superimposed reference signals |
WO2018032016A1 (en) | 2016-08-12 | 2018-02-15 | Cohere Technologies | Localized equalization for channels with intercarrier interference |
EP3497799A4 (en) | 2016-08-12 | 2020-04-15 | Cohere Technologies, Inc. | Iterative multi-level equalization and decoding |
EP4362590A3 (en) | 2016-08-12 | 2024-06-26 | Cohere Technologies, Inc. | Method for multi-user multiplexing of orthogonal time frequency space signals |
US11310000B2 (en) | 2016-09-29 | 2022-04-19 | Cohere Technologies, Inc. | Transport block segmentation for multi-level codes |
WO2018064605A1 (en) | 2016-09-30 | 2018-04-05 | Cohere Technologies | Uplink user resource allocation for orthogonal time frequency space modulation |
EP3549200B1 (en) | 2016-12-05 | 2022-06-29 | Cohere Technologies, Inc. | Fixed wireless access using orthogonal time frequency space modulation |
WO2018129554A1 (en) | 2017-01-09 | 2018-07-12 | Cohere Technologies | Pilot scrambling for channel estimation |
WO2018140837A1 (en) | 2017-01-27 | 2018-08-02 | Cohere Technologies | Variable beamwidth multiband antenna |
US10568143B2 (en) | 2017-03-28 | 2020-02-18 | Cohere Technologies, Inc. | Windowed sequence for random access method and apparatus |
EP3610582A4 (en) | 2017-04-11 | 2021-01-06 | Cohere Technologies, Inc. | Digital communication using dispersed orthogonal time frequency space modulated signals |
EP4109983A1 (en) | 2017-04-21 | 2022-12-28 | Cohere Technologies, Inc. | Communication techniques using quasi-static properties of wireless channels |
EP3616265A4 (en) | 2017-04-24 | 2021-01-13 | Cohere Technologies, Inc. | Multibeam antenna designs and operation |
EP3616341A4 (en) | 2017-04-24 | 2020-12-30 | Cohere Technologies, Inc. | Digital communication using lattice division multiplexing |
US10055383B1 (en) | 2017-04-28 | 2018-08-21 | Hewlett Packard Enterprise Development Lp | Matrix circuits |
KR102612426B1 (en) | 2017-07-12 | 2023-12-12 | 코히어 테크놀로지스, 아이엔씨. | Data modulation technique based on ZAK transformation |
US10545821B2 (en) | 2017-07-31 | 2020-01-28 | Hewlett Packard Enterprise Development Lp | Fault-tolerant dot product engine |
US11546068B2 (en) | 2017-08-11 | 2023-01-03 | Cohere Technologies, Inc. | Ray tracing technique for wireless channel measurements |
WO2019036492A1 (en) | 2017-08-14 | 2019-02-21 | Cohere Technologies | Transmission resource allocation by splitting physical resource blocks |
CN111279337B (en) | 2017-09-06 | 2023-09-26 | 凝聚技术公司 | Wireless communication method implemented by wireless communication receiver device |
US11283561B2 (en) | 2017-09-11 | 2022-03-22 | Cohere Technologies, Inc. | Wireless local area networks using orthogonal time frequency space modulation |
WO2019055861A1 (en) | 2017-09-15 | 2019-03-21 | Cohere Technologies, Inc. | Achieving synchronization in an orthogonal time frequency space signal receiver |
EP3685470A4 (en) | 2017-09-20 | 2021-06-23 | Cohere Technologies, Inc. | Low cost electromagnetic feed network |
US11152957B2 (en) | 2017-09-29 | 2021-10-19 | Cohere Technologies, Inc. | Forward error correction using non-binary low density parity check codes |
EP4362344A2 (en) | 2017-11-01 | 2024-05-01 | Cohere Technologies, Inc. | Precoding in wireless systems using orthogonal time frequency space multiplexing |
WO2019113046A1 (en) | 2017-12-04 | 2019-06-13 | Cohere Technologies, Inc. | Implementation of orthogonal time frequency space modulation for wireless communications |
CN111542826A (en) * | 2017-12-29 | 2020-08-14 | 斯佩罗设备公司 | Digital architecture supporting analog coprocessors |
US11632270B2 (en) | 2018-02-08 | 2023-04-18 | Cohere Technologies, Inc. | Aspects of channel estimation for orthogonal time frequency space modulation for wireless communications |
US11489559B2 (en) | 2018-03-08 | 2022-11-01 | Cohere Technologies, Inc. | Scheduling multi-user MIMO transmissions in fixed wireless access systems |
WO2019241589A1 (en) | 2018-06-13 | 2019-12-19 | Cohere Technologies, Inc. | Reciprocal calibration for channel estimation based on second-order statistics |
US11522600B1 (en) | 2018-08-01 | 2022-12-06 | Cohere Technologies, Inc. | Airborne RF-head system |
KR20200058048A (en) | 2018-11-19 | 2020-05-27 | 삼성전자주식회사 | Semiconductor memory device and memory system having the same |
US11316537B2 (en) | 2019-06-03 | 2022-04-26 | Hewlett Packard Enterprise Development Lp | Fault-tolerant analog computing |
KR20220120859A (en) | 2021-02-24 | 2022-08-31 | 에스케이하이닉스 주식회사 | Apparatus and method for using an error correction code in a memory system |
US11811416B2 (en) * | 2021-12-14 | 2023-11-07 | International Business Machines Corporation | Energy-efficient analog-to-digital conversion in mixed signal circuitry |
Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050193320A1 (en) * | 2004-02-09 | 2005-09-01 | President And Fellows Of Harvard College | Methods and apparatus for improving performance of information coding schemes |
US20050204255A1 (en) * | 2004-03-12 | 2005-09-15 | Nan-Hsiung Yeh | Cyclic redundancy check based message passing in Turbo Product Code decoding |
US6961888B2 (en) * | 2002-08-20 | 2005-11-01 | Flarion Technologies, Inc. | Methods and apparatus for encoding LDPC codes |
US20050283707A1 (en) * | 2004-06-22 | 2005-12-22 | Eran Sharon | LDPC decoder for decoding a low-density parity check (LDPC) codewords |
US20060107181A1 (en) * | 2004-10-13 | 2006-05-18 | Sameep Dave | Decoder architecture system and method |
US7139959B2 (en) * | 2003-03-24 | 2006-11-21 | Texas Instruments Incorporated | Layered low density parity check decoding for digital communications |
US20060285852A1 (en) * | 2005-06-21 | 2006-12-21 | Wenze Xi | Integrated maximum a posteriori (MAP) and turbo product coding for optical communications systems |
US7162684B2 (en) * | 2003-01-27 | 2007-01-09 | Texas Instruments Incorporated | Efficient encoder for low-density-parity-check codes |
US20070011569A1 (en) * | 2005-06-20 | 2007-01-11 | The Regents Of The University Of California | Variable-rate low-density parity check codes with constant blocklength |
US20070011573A1 (en) * | 2005-05-27 | 2007-01-11 | Ramin Farjadrad | Method and apparatus for extending decoding time in an iterative decoder using input codeword pipelining |
US20070011586A1 (en) * | 2004-03-31 | 2007-01-11 | Belogolovy Andrey V | Multi-threshold reliability decoding of low-density parity check codes |
US7181676B2 (en) * | 2004-07-19 | 2007-02-20 | Texas Instruments Incorporated | Layered decoding approach for low density parity check (LDPC) codes |
US20070044006A1 (en) * | 2005-08-05 | 2007-02-22 | Hitachi Global Technologies Netherlands, B.V. | Decoding techniques for correcting errors using soft information |
US20070071009A1 (en) * | 2005-09-28 | 2007-03-29 | Thadi Nagaraj | System for early detection of decoding errors |
US20070124652A1 (en) * | 2005-11-15 | 2007-05-31 | Ramot At Tel Aviv University Ltd. | Method and device for multi phase error-correction |
US7237171B2 (en) * | 2003-02-26 | 2007-06-26 | Qualcomm Incorporated | Method and apparatus for performing low-density parity-check (LDPC) code operations using a multi-level permutation |
US20070147481A1 (en) * | 2005-12-22 | 2007-06-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Linear turbo equalization using despread values |
US20070162788A1 (en) * | 2003-12-30 | 2007-07-12 | Dignus-Jan Moelker | Method and device for calculating bit error rate of received signal |
US20070234184A1 (en) * | 2003-12-22 | 2007-10-04 | Qualcomm Incorporated | Methods and apparatus for reducing error floors in message passing decoders |
US20070234178A1 (en) * | 2003-02-26 | 2007-10-04 | Qualcomm Incorporated | Soft information scaling for interactive decoding |
US7313752B2 (en) * | 2003-08-26 | 2007-12-25 | Samsung Electronics Co., Ltd. | Apparatus and method for coding/decoding block low density parity check code in a mobile communication system |
US20080082868A1 (en) * | 2006-10-02 | 2008-04-03 | Broadcom Corporation, A California Corporation | Overlapping sub-matrix based LDPC (low density parity check) decoder |
US20080104485A1 (en) * | 2005-01-19 | 2008-05-01 | Mikhail Yurievich Lyakh | Data Communications Methods and Apparatus |
US20080109701A1 (en) * | 2006-10-30 | 2008-05-08 | Motorola, Inc. | Turbo Interference Suppression in Communication Systems |
US20080126910A1 (en) * | 2006-06-30 | 2008-05-29 | Microsoft Corporation | Low dimensional spectral concentration codes and direct list decoding |
US20080134000A1 (en) * | 2006-11-30 | 2008-06-05 | Motorola, Inc. | Method and apparatus for indicating uncorrectable errors to a target |
US20080148129A1 (en) * | 2006-12-14 | 2008-06-19 | Regents Of The University Of Minnesota | Error detection and correction using error pattern correcting codes |
US20080163032A1 (en) * | 2007-01-02 | 2008-07-03 | International Business Machines Corporation | Systems and methods for error detection in a memory system |
US20080235561A1 (en) * | 2007-03-23 | 2008-09-25 | Quantum Corporation | Methodology and apparatus for soft-information detection and LDPC decoding on an ISI channel |
US20080276156A1 (en) * | 2007-05-01 | 2008-11-06 | Texas A&M University System | Low density parity check decoder for regular ldpc codes |
US7607075B2 (en) * | 2006-07-17 | 2009-10-20 | Motorola, Inc. | Method and apparatus for encoding and decoding data |
US20090273492A1 (en) * | 2008-05-02 | 2009-11-05 | Lsi Corporation | Systems and Methods for Queue Based Data Detection and Decoding |
US20100042890A1 (en) * | 2008-08-15 | 2010-02-18 | Lsi Corporation | Error-floor mitigation of ldpc codes using targeted bit adjustments |
US20100042806A1 (en) * | 2008-08-15 | 2010-02-18 | Lsi Corporation | Determining index values for bits of a binary vector |
US20100180176A1 (en) * | 2006-08-31 | 2010-07-15 | Panasonic Corporation | Encoding method, encoder, and transmitter |
US7856579B2 (en) * | 2006-04-28 | 2010-12-21 | Industrial Technology Research Institute | Network for permutation or de-permutation utilized by channel coding algorithm |
US7934139B2 (en) * | 2006-12-01 | 2011-04-26 | Lsi Corporation | Parallel LDPC decoder |
2009
- 2009-12-22 US US12/644,181 patent/US8359515B2/en not_active Expired - Fee Related
- 2009-12-22 US US12/644,161 patent/US8352847B2/en not_active Expired - Fee Related
Patent Citations (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6961888B2 (en) * | 2002-08-20 | 2005-11-01 | Flarion Technologies, Inc. | Methods and apparatus for encoding LDPC codes |
US7162684B2 (en) * | 2003-01-27 | 2007-01-09 | Texas Instruments Incorporated | Efficient encoder for low-density-parity-check codes |
US20070234178A1 (en) * | 2003-02-26 | 2007-10-04 | Qualcomm Incorporated | Soft information scaling for interactive decoding |
US7237171B2 (en) * | 2003-02-26 | 2007-06-26 | Qualcomm Incorporated | Method and apparatus for performing low-density parity-check (LDPC) code operations using a multi-level permutation |
US7139959B2 (en) * | 2003-03-24 | 2006-11-21 | Texas Instruments Incorporated | Layered low density parity check decoding for digital communications |
US7313752B2 (en) * | 2003-08-26 | 2007-12-25 | Samsung Electronics Co., Ltd. | Apparatus and method for coding/decoding block low density parity check code in a mobile communication system |
US20070234184A1 (en) * | 2003-12-22 | 2007-10-04 | Qualcomm Incorporated | Methods and apparatus for reducing error floors in message passing decoders |
US20070162788A1 (en) * | 2003-12-30 | 2007-07-12 | Dignus-Jan Moelker | Method and device for calculating bit error rate of received signal |
US20050193320A1 (en) * | 2004-02-09 | 2005-09-01 | President And Fellows Of Harvard College | Methods and apparatus for improving performance of information coding schemes |
US20050204255A1 (en) * | 2004-03-12 | 2005-09-15 | Nan-Hsiung Yeh | Cyclic redundancy check based message passing in Turbo Product Code decoding |
US20070011586A1 (en) * | 2004-03-31 | 2007-01-11 | Belogolovy Andrey V | Multi-threshold reliability decoding of low-density parity check codes |
US20050283707A1 (en) * | 2004-06-22 | 2005-12-22 | Eran Sharon | LDPC decoder for decoding a low-density parity check (LDPC) codewords |
US7181676B2 (en) * | 2004-07-19 | 2007-02-20 | Texas Instruments Incorporated | Layered decoding approach for low density parity check (LDPC) codes |
US20060107181A1 (en) * | 2004-10-13 | 2006-05-18 | Sameep Dave | Decoder architecture system and method |
US20080104485A1 (en) * | 2005-01-19 | 2008-05-01 | Mikhail Yurievich Lyakh | Data Communications Methods and Apparatus |
US20070011573A1 (en) * | 2005-05-27 | 2007-01-11 | Ramin Farjadrad | Method and apparatus for extending decoding time in an iterative decoder using input codeword pipelining |
US20070011569A1 (en) * | 2005-06-20 | 2007-01-11 | The Regents Of The University Of California | Variable-rate low-density parity check codes with constant blocklength |
US20060285852A1 (en) * | 2005-06-21 | 2006-12-21 | Wenze Xi | Integrated maximum a posteriori (MAP) and turbo product coding for optical communications systems |
US20070044006A1 (en) * | 2005-08-05 | 2007-02-22 | Hitachi Global Technologies Netherlands, B.V. | Decoding techniques for correcting errors using soft information |
US20070071009A1 (en) * | 2005-09-28 | 2007-03-29 | Thadi Nagaraj | System for early detection of decoding errors |
US20070124652A1 (en) * | 2005-11-15 | 2007-05-31 | Ramot At Tel Aviv University Ltd. | Method and device for multi phase error-correction |
US20070147481A1 (en) * | 2005-12-22 | 2007-06-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Linear turbo equalization using despread values |
US7856579B2 (en) * | 2006-04-28 | 2010-12-21 | Industrial Technology Research Institute | Network for permutation or de-permutation utilized by channel coding algorithm |
US20080126910A1 (en) * | 2006-06-30 | 2008-05-29 | Microsoft Corporation | Low dimensional spectral concentration codes and direct list decoding |
US7607075B2 (en) * | 2006-07-17 | 2009-10-20 | Motorola, Inc. | Method and apparatus for encoding and decoding data |
US20100180176A1 (en) * | 2006-08-31 | 2010-07-15 | Panasonic Corporation | Encoding method, encoder, and transmitter |
US20080082868A1 (en) * | 2006-10-02 | 2008-04-03 | Broadcom Corporation, A California Corporation | Overlapping sub-matrix based LDPC (low density parity check) decoder |
US20080109701A1 (en) * | 2006-10-30 | 2008-05-08 | Motorola, Inc. | Turbo Interference Suppression in Communication Systems |
US20080134000A1 (en) * | 2006-11-30 | 2008-06-05 | Motorola, Inc. | Method and apparatus for indicating uncorrectable errors to a target |
US7934139B2 (en) * | 2006-12-01 | 2011-04-26 | Lsi Corporation | Parallel LDPC decoder |
US20080148129A1 (en) * | 2006-12-14 | 2008-06-19 | Regents Of The University Of Minnesota | Error detection and correction using error pattern correcting codes |
US20080163032A1 (en) * | 2007-01-02 | 2008-07-03 | International Business Machines Corporation | Systems and methods for error detection in a memory system |
US20080235561A1 (en) * | 2007-03-23 | 2008-09-25 | Quantum Corporation | Methodology and apparatus for soft-information detection and LDPC decoding on an ISI channel |
US20080276156A1 (en) * | 2007-05-01 | 2008-11-06 | Texas A&M University System | Low density parity check decoder for regular ldpc codes |
US20080301521A1 (en) * | 2007-05-01 | 2008-12-04 | Texas A&M University System | Low density parity check decoder for irregular ldpc codes |
US20090273492A1 (en) * | 2008-05-02 | 2009-11-05 | Lsi Corporation | Systems and Methods for Queue Based Data Detection and Decoding |
US20100042890A1 (en) * | 2008-08-15 | 2010-02-18 | Lsi Corporation | Error-floor mitigation of ldpc codes using targeted bit adjustments |
US20100042806A1 (en) * | 2008-08-15 | 2010-02-18 | Lsi Corporation | Determining index values for bits of a binary vector |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160306699A1 (en) * | 2012-04-25 | 2016-10-20 | International Business Machines Corporation | Encrypting data for storage in a dispersed storage network |
US10042703B2 (en) * | 2012-04-25 | 2018-08-07 | International Business Machines Corporation | Encrypting data for storage in a dispersed storage network |
US10621044B2 (en) | 2012-04-25 | 2020-04-14 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
US10795766B2 (en) | 2012-04-25 | 2020-10-06 | Pure Storage, Inc. | Mapping slice groupings in a dispersed storage network |
US11669397B2 (en) | 2012-04-25 | 2023-06-06 | Pure Storage, Inc. | Partial task processing with data slice errors |
EP3047575B1 (en) * | 2013-09-19 | 2020-01-01 | u-blox AG | Encoding of multiple different quasi-cyclic low-density parity check (qc-ldpc) codes sharing common hardware resources |
WO2015127426A1 (en) * | 2014-02-24 | 2015-08-27 | Qatar Foundation For Education, Science And Community Development | Apparatus and method for secure communication on a compound channel |
US10015011B2 (en) | 2014-02-24 | 2018-07-03 | Qatar Foundation For Education, Science And Community Development | Apparatus and method for secure communication on a compound channel |
WO2018132444A1 (en) * | 2017-01-11 | 2018-07-19 | Groq, Inc. | Error correction in computation |
US11461433B2 (en) | 2017-01-11 | 2022-10-04 | Groq, Inc. | Error correction in computation |
Also Published As
Publication number | Publication date |
---|---|
US8352847B2 (en) | 2013-01-08 |
US8359515B2 (en) | 2013-01-22 |
US20110131463A1 (en) | 2011-06-02 |
Similar Documents
Publication | Title |
---|---|
US8352847B2 (en) | Matrix vector multiplication for error-correction encoding and the like |
US20230336189A1 (en) | Low density parity check decoder |
US10511326B2 (en) | Systems and methods for decoding error correcting codes |
KR101405962B1 (en) | Method of performing decoding using LDPC code |
US11115051B2 (en) | Systems and methods for decoding error correcting codes |
EP1829223B1 (en) | Parallel, layered decoding for Low-Density Parity-Check (LDPC) codes |
US7127659B2 (en) | Memory efficient LDPC decoding methods and apparatus |
US8468429B2 (en) | Reconfigurable cyclic shifter |
KR101742451B1 (en) | Encoding device, decoding device, encoding method and decoding method |
JP4320418B2 (en) | Decoding device and receiving device |
JP4519694B2 (en) | LDPC code detection apparatus and LDPC code detection method |
US8572463B2 (en) | Quasi-cyclic LDPC encoding and decoding for non-integer multiples of circulant size |
EP3110009A1 (en) | Encoding method, decoding method, encoding device and decoding device for structured LDPC |
US9356623B2 (en) | LDPC decoder variable node units having fewer adder stages |
US20110173510A1 (en) | Parallel LDPC Decoder |
JP2014099944A (en) | Methods and apparatus for low-density parity check decoding using hardware sharing and serial sum-product architecture |
JP2009152655A (en) | Decoding device, decoding method, receiver, and storage medium reproducing apparatus |
JP4832447B2 (en) | Decoding apparatus and method using channel code |
Xie et al. | Quantum synchronizable codes from quadratic residue codes and their supercodes |
EP3496277A1 (en) | Parallel encoding method and system for protograph-based LDPC codes with hierarchical lifting stages |
TWI523437B (en) | Encoding and syndrome computing co-design circuit for BCH code and method for deciding the same |
JP2010028408A (en) | Information processing apparatus, information processing method, and program |
JP2010041628A (en) | Encoder, encoding method, and encoding program |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: LSI CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GUNNAM, KIRAN;REEL/FRAME:023686/0661. Effective date: 20091221 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
AS | Assignment | Owner name: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT. Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:LSI CORPORATION;AGERE SYSTEMS LLC;REEL/FRAME:032856/0031. Effective date: 20140506 |
AS | Assignment | Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LSI CORPORATION;REEL/FRAME:035390/0388. Effective date: 20140814 |
AS | Assignment | Owner names: LSI CORPORATION, CALIFORNIA; AGERE SYSTEMS LLC, PENNSYLVANIA. Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037684/0039. Effective date: 20160201 |
AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA. Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001. Effective date: 20160201 |
FPAY | Fee payment | Year of fee payment: 4 |
AS | Assignment | Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE. Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001. Effective date: 20170119 |
AS | Assignment | Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED. Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047230/0133. Effective date: 20180509 |
AS | Assignment | Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 09/05/2018 PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0133. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047630/0456. Effective date: 20180905 |
FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment | Owner name: BROADCOM INTERNATIONAL PTE. LTD., SINGAPORE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED;REEL/FRAME:053771/0901. Effective date: 20200826 |
LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
FP | Lapsed due to failure to pay maintenance fee | Effective date: 20210108 |